Data Warehouses
The Tridens Monetization Data Warehouse Integration feature allows you to seamlessly export your subscription, billing, usage, and revenue data from Tridens to your preferred analytics platforms. This guide provides a clear roadmap for setting up and managing this integration, focusing on scalability for high-volume usage-based billing in industries like telecom, energy, and electric vehicles.
Topics on this Document
- Introduction & Overview
- Architecture & Components
- Supported Data Sources & Targets
- Data Modeling & Schema Mapping
- Data Sync Modes & Strategies
- Security, Authentication & Credentials
- Permissions & Encryption
- Usage / Configuration Guide
- Limitations, Caveats, and Best Practices
1. Introduction & Overview
1.1 Purpose and Use Cases
The Tridens Monetization Data Warehouse Integration automates the extraction, transformation, and loading (ETL) of operational data from Tridens Monetization into customer-owned data warehouses. This enables advanced analytics, BI reporting, and machine learning workflows outside the native Tridens analytics module.
Key Use Cases:
- Revenue Analytics: Sync subscription metrics (e.g., MRR, churn rates) for cohort analysis in SaaS or media companies.
- Usage Pattern Insights: Load real-time IoT meter data (e.g., energy consumption) into PostgreSQL for predictive billing in utilities.
- Compliance Reporting: Export audited billing records to MS SQL for IFRS15/ASC 606 reconciliation.
- Custom Integrations: Feed anonymized usage events into dashboards via generic JDBC.
Data warehouse integration reduces manual exports (like CSV/API dumps), minimizes analytics latency, and supports schema evolution without downtime—critical for dynamic pricing models.
1.2 Supported Platforms
Tridens Monetization provides direct connectors to the following targets, leveraging its RESTful APIs and JDBC/ODBC standards:
| Platform | Integration Type | Supported Deployments |
|---|---|---|
| Snowflake | Native connector via Snowpipe for staged loads | Account & warehouse-level |
| PostgreSQL | JDBC | On-prem, AWS RDS, GCP SQL |
| Microsoft SQL Server (MS SQL) | JDBC | Azure SQL, On-prem, Always Encrypted |
| MySQL | JDBC | AWS Aurora, GCP SQL, Self-hosted |
| Generic JDBC | Custom driver | Any JDBC-compliant data warehouse (e.g., BigQuery, Redshift) |
1.3 Key Features
- Incremental Syncs: Low-latency updates.
- Schema Evolution: Handles field additions/removals automatically (see the sketch after this list).
- Scalability: Serverless execution scales to 1M+ events/day.
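To make the schema-evolution feature concrete, the sketch below shows one common way an integration can pick up newly appearing fields: compare the incoming field names against the target table's existing columns and issue `ALTER TABLE ... ADD COLUMN` for any that are missing. The table name, the field-to-type map, and the exact ALTER syntax are assumptions for illustration, not the internal Tridens implementation.

```java
// Minimal sketch of schema evolution for added fields (illustrative only).
// incomingFields maps field name -> SQL type and is assumed to come from the
// transformation layer; ALTER syntax and identifier casing vary by warehouse.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class SchemaEvolution {
    static void addMissingColumns(Connection target, String table,
                                  Map<String, String> incomingFields) throws SQLException {
        // Collect the columns that already exist on the target table.
        Set<String> existing = new HashSet<>();
        try (ResultSet cols = target.getMetaData().getColumns(null, null, table, null)) {
            while (cols.next()) {
                existing.add(cols.getString("COLUMN_NAME").toLowerCase());
            }
        }
        // Add any field that the source now sends but the table does not yet have.
        for (Map.Entry<String, String> field : incomingFields.entrySet()) {
            if (!existing.contains(field.getKey().toLowerCase())) {
                try (Statement stmt = target.createStatement()) {
                    stmt.executeUpdate("ALTER TABLE " + table + " ADD COLUMN "
                            + field.getKey() + " " + field.getValue());
                }
            }
        }
    }
}
```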
2. Architecture & Components
2.1 High-Level Architecture
The Data Warehouse Integration follows a modular ETL pipeline design:
Source (Tridens Monetization) → Integration Engine → Target Data Warehouse
This architecture ensures scalability, resilience, and low-latency data movement across Tridens services and supported data warehouses.
Architecture Overview
- Source: Tridens Monetization microservices expose REST APIs and event logs for data extraction.
- Integration Engine: The ETL process runs on AWS Lambda or Kubernetes Pods to extract, transform, and load data efficiently.
- Target: Data warehouses like Snowflake, PostgreSQL, or MS SQL store structured analytics-ready data.
Architecture Diagram

2.2 Data Flow
- Extract: Pull data from Tridens APIs.
- Transform: Apply in-memory mappings from Tridens entities to target data warehouse objects.
- Load: Bulk upsert to target data warehouse with transaction boundaries.
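As a concrete picture of the load step, the sketch below performs a batched upsert inside an explicit transaction using plain JDBC. The table, columns, and MERGE syntax are illustrative (written in a form accepted by Snowflake and MS SQL); they are not the exact statements the integration generates.

```java
// Illustrative transactional bulk upsert over JDBC. Table and column names
// (usage_records, event_id, ...) are placeholders, not the actual target schema.
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import java.util.Map;

public class BulkLoader {
    private static final String MERGE_SQL =
        "MERGE INTO usage_records t "
      + "USING (SELECT ? AS event_id, ? AS account_id, ? AS quantity) s "
      + "ON t.event_id = s.event_id "
      + "WHEN MATCHED THEN UPDATE SET t.account_id = s.account_id, t.quantity = s.quantity "
      + "WHEN NOT MATCHED THEN INSERT (event_id, account_id, quantity) "
      + "VALUES (s.event_id, s.account_id, s.quantity)";

    static void load(Connection target, List<Map<String, Object>> rows) throws SQLException {
        target.setAutoCommit(false);                       // open the transaction boundary
        try (PreparedStatement ps = target.prepareStatement(MERGE_SQL)) {
            for (Map<String, Object> row : rows) {
                ps.setString(1, (String) row.get("event_id"));
                ps.setString(2, (String) row.get("account_id"));
                ps.setBigDecimal(3, (BigDecimal) row.get("quantity"));
                ps.addBatch();                             // bulk operation instead of row-by-row round trips
            }
            ps.executeBatch();
            target.commit();                               // commit only when every row has loaded
        } catch (SQLException e) {
            target.rollback();                             // keep the warehouse consistent on failure
            throw e;
        }
    }
}
```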
2.3 Components/Modules Involved
- Extractor: Listens for Tridens Monetization notifications and pulls the corresponding records via the REST APIs.
- Loader: Java-based JDBC handler using bulk operations.
- Transformation Engine: Transforms DTOs into data-warehouse-compatible objects.
2.4 Data Flow / Pipeline Steps
- Initialization: Validate config, acquire locks.
- Extract: Fetch source DTOs and prepare them for the target data warehouse structure.
- Transform: Apply anonymization rules.
- Validate: Schema and row-level checks.
- Load: Transactional insert/update.
- Commit: Update metadata and emit metrics.
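The step sequence above can be read as a small orchestration skeleton. Every method name below is a placeholder standing in for the behavior each step describes; the actual classes are internal to Tridens Monetization.

```java
// Illustrative skeleton of the pipeline steps listed above. All method names
// are placeholders, not the actual Tridens classes.
import java.util.List;
import java.util.Map;

public abstract class SyncPipeline {
    public final void run() throws Exception {
        validateConfig();                                     // Initialization: validate config
        acquireLock();                                        // Initialization: acquire lock
        try {
            List<Map<String, Object>> dtos = extract();       // Extract: fetch source DTOs
            List<Map<String, Object>> rows = transform(dtos); // Transform: mapping + anonymization rules
            validate(rows);                                   // Validate: schema and row-level checks
            load(rows);                                       // Load: transactional insert/update
            commitMetadata();                                 // Commit: update metadata, emit metrics
        } finally {
            releaseLock();
        }
    }

    protected abstract void validateConfig() throws Exception;
    protected abstract void acquireLock() throws Exception;
    protected abstract void releaseLock() throws Exception;
    protected abstract List<Map<String, Object>> extract() throws Exception;
    protected abstract List<Map<String, Object>> transform(List<Map<String, Object>> dtos) throws Exception;
    protected abstract void validate(List<Map<String, Object>> rows) throws Exception;
    protected abstract void load(List<Map<String, Object>> rows) throws Exception;
    protected abstract void commitMetadata() throws Exception;
}
```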
3. Supported Data Sources & Targets
3.1 Tridens Data Sources
Extracts from core Tridens entities via REST APIs:
| Entity | Description |
|---|---|
| Subscriptions | Plans, amendments, status |
| Usage Records | Metered events (e.g., calls, SMS, data usage, energy) |
| Billing/Invoices | Charges, payments |
| Customers/Accounts | Hierarchies, balances |
API Reference: Tridens API Documentation
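As a hypothetical illustration of extraction, the snippet below pages through a usage-records resource over HTTPS. The base URL, endpoint path, query parameters, and token handling are assumptions made for the example; refer to the Tridens API Documentation for the actual resource names and authentication scheme.

```java
// Hypothetical extraction of usage records from the Tridens REST API.
// Endpoint path, paging parameters, and the bearer-token header are assumptions.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UsageRecordExtractor {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String baseUrl = "https://example-tenant.tridens.example/api";     // placeholder URL
        for (int page = 0; page < 10; page++) {                            // a real extractor pages until empty
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/usage-records?page=" + page + "&size=500"))
                .header("Authorization", "Bearer " + System.getenv("TRIDENS_API_TOKEN"))
                .GET()
                .build();
            String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
            System.out.println("Fetched page " + page + " (" + body.length() + " bytes)");
            // hand the JSON page to the transformation engine here
        }
    }
}
```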
3.2 Target Data Warehouses
| Target | Min Version | Driver |
|---|---|---|
| Snowflake | N/A | Snowflake JDBC 3.13+ |
| PostgreSQL | 12 | pgJDBC 42.5+ |
| MS SQL | 2019 | MS JDBC 12.4+ |
| MySQL | 8.0 | Connector/J 8.1+ |
| Generic JDBC | N/A | Custom |
4. Data Modeling & Schema Mapping
4.1 Definition of Objects / Tables Exposed
Tridens exposes over 17 core data tables.
Primary Tables:
- Subscriptions
- Balances
- Events
- Bills
- Invoices
- Payments
- Customers
- …
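As an illustration of what a schema mapping looks like, the snippet below pairs a handful of Subscription DTO fields with target column names. The field and column names are hypothetical; the real mapping is defined by the integration's transformation engine.

```java
// Hypothetical field-to-column mapping for the Subscriptions table, used by the
// loader to build column lists. Names are illustrative, not the Tridens schema.
import java.util.Map;

public final class SubscriptionMapping {
    private SubscriptionMapping() {}

    // source DTO field -> target warehouse column
    public static final Map<String, String> COLUMNS = Map.of(
        "subscriptionId", "subscription_id",
        "accountId",      "account_id",
        "planCode",       "plan_code",
        "status",         "status",
        "startDate",      "start_date",
        "updatedAt",      "updated_at"
    );
}
```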
5. Data Sync Modes & Strategies
5.1 Data Frequency
Tridens Monetization provides real-time data delivery, leveraging event-driven architecture and change data capture (CDC) for near-instant updates.
Mechanism:
- Notifications ensure sub-second latency.
- Ideal for use cases such as:
  - Real-time billing adjustments
  - Fraud detection
  - Instant customer insights
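To picture the notification-driven mechanism, the handler below sketches how an incoming change event could trigger the corresponding warehouse operation. The payload fields (entityType, entityId, operation) and the WarehouseLoader interface are assumptions for the example, not the actual Tridens notification format.

```java
// Hypothetical handler for a change notification from Tridens Monetization.
// The payload shape and operation names are assumptions made for illustration.
public class ChangeNotificationHandler {

    /** Minimal representation of a change event for this sketch. */
    public record ChangeNotification(String entityType, String entityId, String operation) {}

    /** Placeholder for the JDBC loader described in section 2.3. */
    public interface WarehouseLoader {
        void upsert(String entityType, String entityId);
        void softDelete(String entityType, String entityId);
    }

    private final WarehouseLoader loader;

    public ChangeNotificationHandler(WarehouseLoader loader) {
        this.loader = loader;
    }

    public void onNotification(ChangeNotification event) {
        switch (event.operation()) {
            case "CREATED", "UPDATED" -> loader.upsert(event.entityType(), event.entityId());
            case "DELETED"            -> loader.softDelete(event.entityType(), event.entityId());
            default -> throw new IllegalArgumentException("Unknown operation: " + event.operation());
        }
    }
}
```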
5.2 Handling Deletes, Updates, Inserts
| Operation | Strategy |
|---|---|
| Inserts | New rows created with unique primary keys. |
| Updates | Performed via MERGE or UPSERT on primary key columns. |
| Deletes | Handled as soft deletes, depending on target warehouse support. |
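An upsert with MERGE is sketched in section 2.2; the fragment below illustrates the soft-delete strategy, where a deleted source record is flagged in the warehouse rather than physically removed. The table and the is_deleted / deleted_at columns are illustrative, and boolean handling differs between target warehouses.

```java
// Illustrative soft delete: mark the row instead of removing it.
// Table and column names are placeholders, not the actual Tridens target schema.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class SoftDeleteHandler {
    static void softDelete(Connection target, String eventId) throws SQLException {
        String sql = "UPDATE usage_records "
                   + "SET is_deleted = TRUE, deleted_at = CURRENT_TIMESTAMP "
                   + "WHERE event_id = ?";
        try (PreparedStatement ps = target.prepareStatement(sql)) {
            ps.setString(1, eventId);
            ps.executeUpdate();
        }
    }
}
```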
6. Security, Authentication & Credentials
6.1 Credentials Storage, Rotation, and Encryption
| Security Mechanism | Description |
|---|---|
| Storage | Credentials are stored encrypted within Tridens Monetization. |
| Encryption | All payloads are encrypted with AES-256 in transit and at rest. |
7. Permissions
Minimal Access Required:
- CREATE TABLE on schema
- INSERT, UPDATE, DELETE on target tables
Role-Based Example:
GRANT USAGE ON DATABASE analytics_db TO ROLE tridens_role;
GRANT CREATE SCHEMA ON DATABASE analytics_db TO ROLE tridens_role;
8. Usage / Configuration Guide
8.1 Step-by-Step Setup for Data Warehouse

- Navigate to System Configuration → Data Warehouse Integration.
- Select your warehouse type (e.g., Snowflake).
- Enter your JDBC/ODBC credentials and test the connection.
- Save configuration.
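The connection test from step 3 can also be reproduced outside the UI with a few lines of JDBC, which helps when diagnosing credential or network issues. The URL below is a placeholder; substitute the JDBC URL format of your chosen warehouse and make sure the matching driver is on the classpath.

```java
// Standalone JDBC connection test for verifying credentials and network
// reachability before configuring the integration. The URL is a placeholder.
import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectionTest {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://analytics.example.com:5432/analytics_db"; // placeholder
        String user = System.getenv("DW_USER");
        String password = System.getenv("DW_PASSWORD");

        try (Connection conn = DriverManager.getConnection(url, user, password)) {
            // isValid() performs a lightweight round trip with a 5-second timeout
            System.out.println("Connection valid: " + conn.isValid(5));
        }
    }
}
```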