ETL & Data Warehouse Testing Software

Test in Dev & Monitor in Production


Data is the backbone of any business. But defective data is worse than no data: decisions based on erroneous data trust anomalies that can wreak havoc. iCEDQ is a data testing and monitoring platform for databases and files of any size. It automates ETL Testing and helps maintain the integrity of your data by verifying that everything is valid.

What is iCEDQ?

iCEDQ Engine

iCEDQ is an ETL Testing platform designed to identify data issues in and across structured and semi-structured data. This ability makes it suitable for automating ETL Testing, Data Warehouse Testing, Data Migration Testing, Business Intelligence Report Testing, Big Data Testing, and Production Data Monitoring.

Its unique in-memory engine, with support for SQL, Apache Groovy, Java, and APIs, allows organizations to implement end-to-end automation for Data Testing and Monitoring.

How iCEDQ Works


The engine connects to the configured data sources, then reads the data from the source and target into memory.


The engine then identifies matching and missing records based on a defined business key or natural key.


For each matching record, the engine then validates the attributes for transformation, conversion, and calculation failures based on the defined Groovy expression.


The engine captures all the missing records and expression failures in an exception report. This report is available for users to analyze the data issues and take actions based on the results.
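The steps above can be sketched in Python. This is a simplified, hypothetical stand-in for what the in-memory engine does (iCEDQ itself uses Groovy expressions; a Python lambda stands in here):

```python
# Minimal sketch of a source-vs-target reconciliation (not iCEDQ's actual code).
# Records are keyed by a business key; attribute dicts hold the column values.
source = {1: {"amount": 100.0}, 2: {"amount": 250.0}, 3: {"amount": 75.0}}
target = {1: {"amount": 100.0}, 2: {"amount": 999.0}}

exceptions = []

# Step 2: match records on the business key; anything unmatched is "missing".
for key in source.keys() - target.keys():
    exceptions.append({"key": key, "issue": "missing in target"})
for key in target.keys() - source.keys():
    exceptions.append({"key": key, "issue": "missing in source"})

# Step 3: for matched records, validate attributes with an expression
# (a Python lambda stands in for a Groovy expression).
check = lambda s, t: s["amount"] == t["amount"]
for key in source.keys() & target.keys():
    if not check(source[key], target[key]):
        exceptions.append({"key": key, "issue": "expression failed: amount"})

# Step 4: the exception report lists every missing record and expression failure.
for e in sorted(exceptions, key=lambda e: e["key"]):
    print(e)
```

The exception report here would flag key 3 as missing in the target and key 2 as an expression failure, which mirrors the row-level detail described above.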


What are the different types of Rules?

The iCEDQ platform supports different types of checks through four Rule templates. These templates are highly configurable and support any data source.

Checksum Rule

This rule checks a summary of the data between the source and target. It lets the user compare two aggregated values between source and target. Below are a few use cases of the Checksum Rule.

  • Compare counts between tables and files
  • Ensure a reference table has a static number of records
  • Identify a sudden spike or dip in the number of records
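The idea behind a checksum-style comparison can be sketched in Python. The data, tolerance, and baseline below are hypothetical; iCEDQ would compute the aggregates on each data source rather than in application code:

```python
# Sketch of a checksum-style comparison: aggregate each side, compare results.
source_rows = [("A", 10), ("B", 20), ("C", 30)]
target_rows = [("A", 10), ("B", 20), ("C", 30)]

# Compare record counts between the two sides (e.g. a table and a file).
assert len(source_rows) == len(target_rows), "row counts differ"

# Compare an aggregated value (here, a sum) between source and target.
source_sum = sum(v for _, v in source_rows)
target_sum = sum(v for _, v in target_rows)
assert source_sum == target_sum, "checksums differ"

# Detect a sudden spike or dip against a prior baseline count
# (baseline and tolerance are illustrative values).
baseline_count, tolerance = 3, 0.5
assert abs(len(target_rows) - baseline_count) <= baseline_count * tolerance
```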

Recon Rule

The Recon Rule allows a user to compare the full volume of data between the source and target. It compares data row by row and column by column, making it easy for the user to identify the exact row and column with a data issue. Below are some use cases of the Recon Rule.

  • Identify missing records in the source and/or target
  • Validate data transformations (ETL or business rules)
  • Compare data in a file with a database
  • Compare database schemas across environments
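The row-by-row, column-by-column comparison can be sketched as follows. The tables and keys are hypothetical, and the real engine streams full data volumes through memory rather than using plain dicts:

```python
# Sketch of a recon-style full-volume compare keyed on a business key.
columns = ["name", "amount"]
source = {101: ("Alice", 50), 102: ("Bob", 75)}
target = {101: ("Alice", 50), 102: ("Bob", 80), 103: ("Carol", 10)}

issues = []
for key in source.keys() | target.keys():
    if key not in target:
        issues.append((key, None, "missing in target"))
    elif key not in source:
        issues.append((key, None, "missing in source"))
    else:
        # Column-by-column compare pinpoints the exact failing column.
        for col, s_val, t_val in zip(columns, source[key], target[key]):
            if s_val != t_val:
                issues.append((key, col, f"{s_val!r} != {t_val!r}"))

print(issues)  # each issue names the exact row (key) and column
```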

Validation Rule

The Validation Rule enables the user to verify a single data source, which might be a database or a file on either the source or the target side. Multiple columns or rows can be tested at once using this rule. It can also be used for pushdown validations. Below are a few use cases of the Validation Rule.

  • Format and null value checks
  • Type II dimension testing
  • Duplicate data checks
  • Feed file validation
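Single-source checks like these can be illustrated in Python. The rows, the email format, and the duplicate-id rule below are all hypothetical examples, not iCEDQ rule syntax:

```python
import re
from collections import Counter

# Sketch of single-source validation checks on hypothetical feed-file rows.
rows = [
    {"id": "1", "email": "a@example.com"},
    {"id": "2", "email": None},             # null value
    {"id": "3", "email": "not-an-email"},   # format failure
    {"id": "1", "email": "a@example.com"},  # duplicate id
]

failures = []

# Null and format checks on each row.
email_pattern = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
for i, row in enumerate(rows):
    if row["email"] is None:
        failures.append((i, "null email"))
    elif not email_pattern.match(row["email"]):
        failures.append((i, "bad email format"))

# Duplicate check on the id column.
for dup_id, count in Counter(r["id"] for r in rows).items():
    if count > 1:
        failures.append((dup_id, "duplicate id"))

print(failures)
```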

Script Rule

The Script Rule allows users to write a custom Groovy script in iCEDQ. It is used as a pre-processing or post-processing Rule. Users can even consume custom Java libraries in a Script Rule, so the possibilities are endless. Below are some use cases of the Script Rule.

  • Create table and populate with test data
  • Read a parameter value from a database and execute another Rule
  • Take a backup or snapshot of a table
  • Trigger a PowerShell, shell, or batch script
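iCEDQ Script Rules are written in Groovy; the same pre-processing idea (create a table and populate it with test data) is sketched here in Python with SQLite purely for illustration:

```python
import sqlite3

# Sketch of a pre-processing step: create a table and seed it with test data.
# (An iCEDQ Script Rule would do this in Groovy; Python/SQLite stand in here.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany(
    "INSERT INTO orders (id, amount) VALUES (?, ?)",
    [(1, 10.0), (2, 20.0), (3, 30.0)],
)
conn.commit()

# A follow-on Rule could now run its checks against the seeded table.
count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)
conn.close()
```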


iCEDQ is an enterprise ETL Testing platform, with features that satisfy the requirements of most organizations. Some of its key features are listed below.



Testing and monitoring data spans different teams in any organization: developers do unit testing, the QA team does integration testing, analysts do UAT, and Ops or business users monitor data in production. iCEDQ enables this collaboration with a centralized database repository and a web-based graphical user interface. Users can quickly create, share, and reuse Rules across teams, projects, and releases.


In-Memory Engine

The core feature of iCEDQ is its engine, which compares and validates the full volume of the data in memory. This allows iCEDQ to work across data sources and compare data efficiently. Customers can run their processes on our Java-based engine (Akka) and/or our Apache Spark-based engine, depending on their requirements.



iCEDQ captures all the metadata about the Rules and their run-instance summaries in a database repository. This enables users to create custom reports and dashboards using our internal reporting tool or external tools like Tableau and Power BI. You can find the complete feature list of our ETL Testing platform here.



iCEDQ enables complete ETL test automation by exposing a REST API and a command-line interface for executing Rules. Organizations can trigger Rules from their data pipelines and make decisions based on the results. Customers have integrated iCEDQ with scheduling tools (Autosys, Control-M, Tidal), CI/CD tools (Jenkins, Bamboo), and ETL tools (DataStage), among others.
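A pipeline step can gate on a Rule result over HTTP. The endpoint path, request payload, and response fields below are purely hypothetical placeholders, not iCEDQ's documented API; consult the product documentation for the real names:

```python
import json
import urllib.request

# Hypothetical gate for a CI/CD or scheduling step: trigger a Rule over REST,
# then fail the pipeline unless the Rule passed.

def rule_passed(result: dict) -> bool:
    """Decide whether the pipeline may proceed, given a rule-run result.

    The "status" and "failedRecords" fields are illustrative placeholders.
    """
    return result.get("status") == "PASSED" and result.get("failedRecords", 0) == 0

def run_rule(base_url: str, rule_id: str) -> dict:
    """POST to a (hypothetical) rule-run endpoint and return the JSON result."""
    req = urllib.request.Request(
        f"{base_url}/api/rules/{rule_id}/run",  # placeholder path
        data=json.dumps({"waitForCompletion": True}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# A Jenkins or Control-M step would call run_rule(...) and then:
#   if not rule_passed(result): raise SystemExit(1)
```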


We offer three editions of iCEDQ. All are in-memory engines, but they offer different levels of performance.

iCEDQ Standard Edition


It is the most widely deployed edition of iCEDQ. In this edition, each Rule occupies a single core on the server and uses it to process all of that Rule's data.

iCEDQ HT Edition


The HT Edition gives a 4-5x performance improvement over the Standard Edition, as each Rule uses multiple cores for processing the data.

iCEDQ Spark Edition


This is our big data edition engine which uses Apache Spark cluster to do all the processing. You can scale the performance based on the size of your cluster.

Getting started is easy

Be up and running in minutes.