Tutorial: Cloning data from Oracle Database to BigQuery

This guide will walk you through deploying a job that continuously clones data changes from an Oracle database to a BigQuery dataset using our Cloning Service.


Objectives

In this tutorial, you will:

  • Enable supplemental logging on your Oracle database (a sketch of the typical SQL follows this list).

  • Create and run a Cloning job.

  • View the results in BigQuery.
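
The Cloning Service relies on Oracle supplemental logging to capture row-level changes. The following is a minimal sketch of enabling it with the python-oracledb client; the connection details (host, service name, SYSDBA user) and the HR.EMPLOYEES table are placeholder assumptions for your environment, and your DBA may prefer to run the equivalent SQL directly in SQL*Plus.

```python
# Sketch: enable Oracle supplemental logging before creating the Cloning job.
# Assumes python-oracledb and a user with SYSDBA privileges; connection
# details and the table name are placeholders.
import oracledb

conn = oracledb.connect(
    user="sys",
    password="your-sys-password",        # placeholder
    dsn="oracle-host:1521/ORCLPDB1",     # placeholder host and service name
    mode=oracledb.AUTH_MODE_SYSDBA,
)
with conn.cursor() as cur:
    # Database-level minimal supplemental logging (needed for change capture).
    cur.execute("ALTER DATABASE ADD SUPPLEMENTAL LOG DATA")
    # Optionally log all columns for a specific table you plan to clone.
    cur.execute("ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS")
conn.close()
```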


Create and run a Cloning Service job

  1. Create the job: In the Algoreus UI, click the menu and navigate to the Clone page.

  2. On the Create new Cloning job page, specify a Cloning job Name. Optionally, provide a description.

  3. Click Next.

  4. Configure the source:

     • Select Oracle as the source.

     • For Connectivity Method, if your Oracle server allows incoming traffic from public IPs, choose IP allowlisting. Otherwise, choose Private connectivity (VPC peering) and enter the VPC peering name in Private Connection Name.

     • In the credentials section, enter your username and password for accessing the Oracle server. (To confirm these credentials outside the UI, see the optional pre-check sketch after these steps.)

     • Leave all other properties as is.

  5. Click Next. If the connection is successful, a list of tables in the sample schema appears. For this tutorial, select all of the tables and the events to clone (including Insert, Update, and Delete events).

  6. Click Next.
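
If you want to verify the source credentials and schema before walking through the wizard, a small pre-check such as the one below can help. It is not part of the Algoreus flow; the connection string, user, and HR schema owner are assumptions for your environment.

```python
# Optional pre-check (outside the Algoreus UI): confirm that the source
# credentials can connect and list the tables you plan to clone.
# Connection details and the schema owner are placeholders.
import oracledb

conn = oracledb.connect(user="clone_user", password="your-password",
                        dsn="oracle-host:1521/ORCLPDB1")
with conn.cursor() as cur:
    cur.execute(
        "SELECT table_name FROM all_tables WHERE owner = :owner ORDER BY table_name",
        owner="HR",
    )
    for (table_name,) in cur:
        print(table_name)
conn.close()
```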


Configure the target

  1. Select the BigQuery target.

  2. The Project ID and Service Account Key are automatically detected. Keep the default values as is. (To confirm outside the UI that this service account can reach BigQuery, see the optional sketch after these steps.)

  3. (Optional) In the Advanced section, you can configure the staging bucket's name and location, the load interval, the staging table prefix, and the behavior when tables or databases are dropped.

  4. Click Next.

  5. (Optional) Configure the advanced properties. For this tutorial, you can use the default settings.

  6. Click Next.

  7. On the Review assessment page, click View mappings next to any of the tables to get an assessment of schema issues, missing features, or connectivity issues that might occur during Cloning. Any issues must be resolved before you can proceed. For this tutorial, if any of the tables have issues, go back to the table selection step and select only tables and events (Inserts, Updates, or Deletes) without issues.

  8. Click Next.

  9. Review the Cloning job summary, and then click Deploy Cloning job.
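
If you want to confirm that the detected service account can actually reach the target project, a hedged sketch using the google-cloud-bigquery client is shown below. The key file path and project ID are placeholders.

```python
# Optional pre-check: confirm the service account used by the target step can
# list datasets in the BigQuery project. Key path and project ID are placeholders.
from google.cloud import bigquery
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file(
    "service-account-key.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
client = bigquery.Client(project="my-gcp-project", credentials=credentials)

for dataset in client.list_datasets():
    print(dataset.dataset_id)
```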


Start the job

On the Cloning job details page, click Start.

The Cloning job transitions from the Provisioning state to Starting and then Running. In the Running state, the Cloning job loads an initial snapshot of the table data that you selected into BigQuery; while this happens, each table's state is listed as Snapshotting. After the initial snapshot is loaded into BigQuery, any subsequent changes made to a table are cloned to BigQuery, and the table's state is listed as Cloning.


Monitor the job

You can start and stop the Cloning job, review its configuration and logs, and monitor its activities from the Cloning job details page. To open the monitoring view:

  1. From the Cloning page, click the name of the Cloning job.

  2. Click Monitoring.


View the results

The Cloning job creates a replicated dataset and table in BigQuery, with names inherited from the corresponding Oracle database and table names.

If your source database name does not comply with the BigQuery dataset naming conventions, or if you have naming conflicts when you Clone multiple datasets into a single BigQuery dataset, specify aliases in the BigQuery target properties.

  1. Open BigQuery in the console.

  2. In the left panel, click on your project name to expand a list of datasets.

  3. Select the dataset, and then select a table to view its contents. You can also query the replicated table directly, as shown in the sketch below.
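
Here is a hedged example of querying the replicated table; the project, dataset, and table names (my-gcp-project, ORCLPDB1, EMPLOYEES, mirroring the assumed Oracle names) are placeholders for whatever the job actually created.

```python
# Sketch: count the rows that the Cloning job has replicated into BigQuery.
# Project, dataset, and table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")
query = "SELECT COUNT(*) AS row_count FROM `my-gcp-project.ORCLPDB1.EMPLOYEES`"
for row in client.query(query).result():
    print(f"Replicated rows: {row.row_count}")
```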


Clean up

To avoid incurring charges to your Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
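
If you keep the project, a minimal sketch of dropping only the replicated BigQuery dataset is shown below; the project and dataset names are assumptions, and deletion is permanent. The Cloning job itself can be removed from the Clone page as described in Deleting a Cloning Job.

```python
# Sketch: delete only the replicated BigQuery dataset created by this tutorial.
# The project and dataset names are placeholders; deletion is permanent.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")
client.delete_dataset("ORCLPDB1", delete_contents=True, not_found_ok=True)
```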

