Data Warehouses

A comprehensive guide for integrating Tridens Monetization with your data warehouse, designed for technical users.

The Tridens Monetization Data Warehouse Integration feature allows you to seamlessly export your subscription, billing, usage, and revenue data from Tridens to your preferred analytics platforms. This guide provides a clear roadmap for setting up and managing this integration, focusing on scalability for high-volume usage-based billing in industries like telecom, energy, and electric vehicles.


Topics in this Document

  1. Introduction & Overview
  2. Architecture & Components
  3. Supported Data Sources & Targets
  4. Data Modeling & Schema Mapping
  5. Data Sync Modes & Strategies
  6. Security, Authentication & Credentials
  7. Permissions
  8. Usage / Configuration Guide
  9. Limitations, Caveats, and Best Practices

1. Introduction & Overview

1.1 Purpose and Use Cases

The Tridens Monetization Data Warehouse Integration automates the extraction, transformation, and loading (ETL) of operational data from Tridens Monetization into customer-owned data warehouses. This enables advanced analytics, BI reporting, and machine learning workflows outside the native Tridens analytics module.

Key Use Cases:

  • Revenue Analytics: Sync subscription metrics (e.g., MRR, churn rates) for cohort analysis in SaaS or media companies.
  • Usage Pattern Insights: Load real-time IoT meter data (e.g., energy consumption) into PostgreSQL for predictive billing in utilities.
  • Compliance Reporting: Export audited billing records to MS SQL for IFRS15/ASC 606 reconciliation.
  • Custom Integrations: Feed anonymized usage events into dashboards via generic JDBC.

Data warehouse integration reduces manual exports (such as CSV or API dumps), minimizes analytics latency, and supports schema evolution without downtime, which is critical for dynamic pricing models.


1.2 Supported Platforms

Tridens Monetization supports direct connectors to the following targets, leveraging its RESTful APIs and JDBC/ODBC standards:

Platform | Integration Type | Supported Deployments
Snowflake | Native connector via Snowpipe for staged loads | Account & warehouse-level
PostgreSQL | JDBC | On-prem, AWS RDS, GCP Cloud SQL
Microsoft SQL Server (MS SQL) | JDBC | Azure SQL, On-prem, Always Encrypted
MySQL | JDBC | AWS Aurora, GCP Cloud SQL, Self-hosted
Generic JDBC | Custom driver | Any JDBC-compliant data warehouse (e.g., BigQuery, Redshift)

1.3 Key Features

  • Incremental Syncs: Low-latency updates.
  • Schema Evolution: Handles field additions/removals automatically.
  • Scalability: Serverless execution, 1M+ events/day.

2. Architecture & Components

2.1 High-Level Architecture

The Data Warehouse Integration follows a modular ETL pipeline design:

Source (Tridens Monetization) → Integration Engine → Target Data Warehouse

This architecture ensures scalability, resilience, and low-latency data movement across Tridens services and supported data warehouses.


Architecture Overview

  • Source: Tridens Monetization microservices expose REST APIs and event logs for data extraction.
  • Integration Engine: The ETL process runs on AWS Lambda or Kubernetes Pods to extract, transform, and load data efficiently.
  • Target: Data warehouses like Snowflake, PostgreSQL, or MS SQL store structured analytics-ready data.

Architecture Diagram

[Diagram: Source (Tridens Monetization) → Integration Engine → Target Data Warehouse]

2.2 Data Flow

  • Extract: Pull data from Tridens APIs.
  • Transform: Apply in-memory mappings to target data warehouse objects.
  • Load: Bulk upsert to target data warehouse with transaction boundaries.

2.3 Components/Modules Involved

  • Extractor: Consumes Tridens Monetization notifications and REST API data.
  • Loader: Java-based JDBC handler using bulk operations (a sketch follows this list).
  • Transformation Engine: Transforms DTOs into data warehouse-compatible objects.
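
To make the Loader's bulk-operation pattern concrete, here is a minimal sketch of a batched, transactional insert over JDBC. The usage_records table, its columns, and the connection details are illustrative placeholders rather than the integration's actual internals:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;

public class BulkLoaderSketch {

    // Minimal stand-in for a transformed usage-record DTO.
    record UsageRow(String eventId, String accountId, double quantity) {}

    static void load(Connection conn, List<UsageRow> rows) throws Exception {
        conn.setAutoCommit(false); // one transaction per batch defines the transaction boundary
        String sql = "INSERT INTO usage_records (event_id, account_id, quantity) VALUES (?, ?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (UsageRow row : rows) {
                ps.setString(1, row.eventId());
                ps.setString(2, row.accountId());
                ps.setDouble(3, row.quantity());
                ps.addBatch();              // accumulate rows client-side
            }
            ps.executeBatch();              // send the whole batch in one round trip
            conn.commit();
        } catch (Exception e) {
            conn.rollback();                // keep the target consistent if the batch fails
            throw e;
        }
    }

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://warehouse.example.com:5432/analytics_db",
                "tridens_integration", System.getenv("WAREHOUSE_PASSWORD"))) {
            load(conn, List.of(new UsageRow("EVT-1", "ACC-42", 12.5)));
        }
    }
}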

2.4 Data Flow / Pipeline Steps

  1. Initialization: Validate config, acquire locks.
  2. Extract: Fetch data as DTOs and prepare it for the data warehouse structure.
  3. Transform: Apply anonymization rules.
  4. Validate: Schema and row-level checks.
  5. Load: Transactional insert/update.
  6. Commit: Update metadata and emit metrics.

3. Supported Data Sources & Targets

3.1 Tridens Data Sources

The integration extracts data from core Tridens entities via REST APIs (an illustrative request is sketched below):

Entity | Description
Subscriptions | Plans, amendments, status
Usage Records | Metered events (e.g., calls, SMS, data usage, energy)
Billing/Invoices | Charges, payments
Customers/Accounts | Hierarchies, balances

API Reference: Tridens API Documentation
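
For readers who want to see what an extraction call looks like in code, the sketch below issues a single authenticated GET request. The host, path, query parameter, and token variable are hypothetical placeholders; consult the Tridens API Documentation for the actual resource names and authentication scheme:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UsageRecordExtractSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical host, path, and query parameter; the real endpoints are
        // defined in the Tridens API Documentation.
        String url = "https://your-tenant.example.com/api/usage-records?updatedSince=2024-01-01T00:00:00Z";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Authorization", "Bearer " + System.getenv("TRIDENS_API_TOKEN"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println("HTTP " + response.statusCode());
        System.out.println(response.body()); // JSON page of usage records, ready for transformation
    }
}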


3.2 Target Data Warehouses

Target | Min Version | Driver
Snowflake | N/A | Snowflake JDBC 3.13+
PostgreSQL | 12 | pgJDBC 42.5+
MS SQL | 2019 | MS JDBC 12.4+
MySQL | 8.0 | Connector/J 8.1+
Generic JDBC | N/A | Custom
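
A quick way to confirm that both the target server and the JDBC driver on your classpath meet the minimums above is to connect and read the driver metadata. The sketch below assumes an illustrative PostgreSQL target; host, database, and credentials are placeholders:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.util.Properties;

public class DriverCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder host, database, and service account.
        String url = "jdbc:postgresql://warehouse.example.com:5432/analytics_db";
        Properties props = new Properties();
        props.setProperty("user", "tridens_integration");
        props.setProperty("password", System.getenv("WAREHOUSE_PASSWORD"));
        props.setProperty("ssl", "true");
        props.setProperty("sslmode", "verify-full"); // require TLS and verify the server certificate

        try (Connection conn = DriverManager.getConnection(url, props)) {
            DatabaseMetaData meta = conn.getMetaData();
            // Confirm the server and driver meet the minimum versions listed above.
            System.out.println("Server: " + meta.getDatabaseProductVersion());
            System.out.println("Driver: " + meta.getDriverName() + " " + meta.getDriverVersion());
        }
    }
}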

4. Data Modeling & Schema Mapping

4.1 Definition of Objects / Tables Exposed

Tridens exposes over 17 core data tables.

Primary Tables (an illustrative target-table sketch follows this list):

  • Subscriptions
  • Balances
  • Events
  • Bills
  • Invoices
  • Payments
  • Customers
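
The exact column layout of each table depends on your Tridens configuration, so the following is only an illustrative sketch, with a hypothetical column set, of how a subscriptions table might land in a PostgreSQL target when created over JDBC; the integration itself manages the real DDL and schema evolution:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SubscriptionsTableSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; the column set below is hypothetical.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://warehouse.example.com:5432/analytics_db",
                "tridens_integration", System.getenv("WAREHOUSE_PASSWORD"));
             Statement stmt = conn.createStatement()) {
            stmt.execute("""
                CREATE TABLE IF NOT EXISTS subscriptions (
                    subscription_id  VARCHAR(64) PRIMARY KEY,
                    account_id       VARCHAR(64) NOT NULL,
                    plan_code        VARCHAR(128),
                    status           VARCHAR(32),
                    mrr              NUMERIC(18, 4),
                    updated_at       TIMESTAMP WITH TIME ZONE
                )""");
        }
    }
}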

5. Data Sync Modes & Strategies

5.1 Data Frequency

Tridens Monetization provides real-time data delivery, leveraging event-driven architecture and change data capture (CDC) for near-instant updates.

Mechanism:

  • Notifications ensure sub-second latency.
  • Ideal for use cases such as:
    • Real-time billing adjustments
    • Fraud detection
    • Instant customer insights

5.2 Handling Deletes, Updates, Inserts

Operation | Strategy
Inserts | New rows created with unique primary keys.
Updates | Performed via MERGE or UPSERT on primary key columns.
Deletes | Handled as soft deletes, depending on target warehouse support.
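
A minimal sketch of how the strategies above could be expressed over JDBC against a PostgreSQL target, which uses INSERT ... ON CONFLICT as its UPSERT form. The subscriptions table and its columns continue the hypothetical example from section 4:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class UpsertSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details, table, and column names.
        String url = "jdbc:postgresql://warehouse.example.com:5432/analytics_db";
        try (Connection conn = DriverManager.getConnection(url, "tridens_integration",
                System.getenv("WAREHOUSE_PASSWORD"))) {
            conn.setAutoCommit(false);

            // Upsert on the primary key: insert new rows, update existing ones.
            String upsert = """
                INSERT INTO subscriptions (subscription_id, account_id, status, updated_at)
                VALUES (?, ?, ?, now())
                ON CONFLICT (subscription_id)
                DO UPDATE SET account_id = EXCLUDED.account_id,
                              status     = EXCLUDED.status,
                              updated_at = EXCLUDED.updated_at""";
            try (PreparedStatement ps = conn.prepareStatement(upsert)) {
                ps.setString(1, "SUB-1001");
                ps.setString(2, "ACC-42");
                ps.setString(3, "ACTIVE");
                ps.executeUpdate();
            }

            // Soft delete: flag the row instead of physically removing it.
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE subscriptions SET status = 'DELETED', updated_at = now() WHERE subscription_id = ?")) {
                ps.setString(1, "SUB-1001");
                ps.executeUpdate();
            }

            conn.commit();
        }
    }
}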

6. Security, Authentication & Credentials

6.1 Credentials Storage, Rotation, and Encryption

Security Mechanism | Description
Storage | Credentials stored in Tridens Monetization (encrypted).
Encryption | All payloads encrypted with AES-256 in transit and at rest.

7. Permissions

Minimal Access Required:

  • CREATE TABLE on schema
  • INSERT, UPDATE, DELETE on target tables

Role-Based Example (Snowflake syntax; the analytics_db.tridens schema name is illustrative):

GRANT USAGE ON DATABASE analytics_db TO ROLE tridens_role;
GRANT CREATE SCHEMA ON DATABASE analytics_db TO ROLE tridens_role;
GRANT CREATE TABLE ON SCHEMA analytics_db.tridens TO ROLE tridens_role;
GRANT INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA analytics_db.tridens TO ROLE tridens_role;

8. Usage / Configuration Guide

8.1 Step-by-Step Setup for Data Warehouse

[Screenshot: Data Warehouses Integration page]

  1. Navigate to System Configuration → Data Warehouse Integration.
  2. Select your warehouse type (e.g., Snowflake).
  3. Enter your JDBC/ODBC credentials and test the connection (a scripted equivalent of the connection test is sketched after these steps).
  4. Save configuration.
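
If you prefer to verify the credentials from step 3 outside the UI first, a minimal sketch using the Snowflake JDBC driver is shown below; the account identifier, database, warehouse, and role names are placeholders for your own values:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class SnowflakeConnectionTest {
    public static void main(String[] args) throws Exception {
        // Placeholder account identifier; replace with your own Snowflake account URL.
        String url = "jdbc:snowflake://myaccount.snowflakecomputing.com/";
        Properties props = new Properties();
        props.setProperty("user", "TRIDENS_INTEGRATION");
        props.setProperty("password", System.getenv("SNOWFLAKE_PASSWORD"));
        props.setProperty("db", "ANALYTICS_DB");        // placeholder database
        props.setProperty("warehouse", "TRIDENS_WH");   // placeholder virtual warehouse
        props.setProperty("role", "TRIDENS_ROLE");      // placeholder role

        // Equivalent to the "test the connection" step in the UI.
        try (Connection conn = DriverManager.getConnection(url, props)) {
            System.out.println("Connection OK: " + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}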