Datagaps - Automated Testing Tools for ETL, BI & BigData Testing

DataOps Data Quality

Define data rules using an easy-to-use web interface and share the results with your business users.

Validate the quality of your data assets before making business decisions based on them.

Key Features & Benefits

Automates the testing of data at rest and in motion, empowering business users, Data Stewards, and Data Owners.

DQ Dimensions

Categorizes data checks into data quality dimensions such as Accuracy, Consistency, Uniqueness, Conformity, and Completeness.

Data Quality Score

Computes Data Quality score for data assets and displays trend reports on a Data Quality Dashboard.
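As a minimal sketch of this idea (the product's actual scoring formula is not documented here), a data quality score can be computed as the pass rate of rule checks, aggregated per DQ dimension. The function name and the equal-weight averaging are illustrative assumptions:

```python
# Hypothetical sketch: score = percentage of passing rule checks,
# grouped by data quality dimension, then averaged overall.
from collections import defaultdict

def quality_score(check_results):
    """check_results: list of (dimension, passed) tuples."""
    per_dim = defaultdict(lambda: [0, 0])  # dimension -> [passed, total]
    for dimension, passed in check_results:
        per_dim[dimension][1] += 1
        if passed:
            per_dim[dimension][0] += 1
    scores = {d: round(100 * p / t, 1) for d, (p, t) in per_dim.items()}
    overall = round(sum(scores.values()) / len(scores), 1)
    return overall, scores

overall, by_dim = quality_score([
    ("Completeness", True), ("Completeness", True),
    ("Accuracy", True), ("Accuracy", False),
])
```

A dashboard would then plot `overall` over time and let users drill down into `by_dim`.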

Data Observability

Profiles data assets and compiles historical statistics, then predicts expected values and detects deviations ahead of time.

Data Catalog

Crawls data sources for metadata information about Tables, Columns, and changes to them over time.

Business Data Rules

Data rules can be defined centrally by business users and applied automatically to data elements in multiple data sources.

Semantic Data Types

AI-enabled detection of Semantic Data Types classifies data (PII, PHI) and applies data type-specific quality rules.
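The product uses AI for this classification; a much simpler rule-based sketch conveys the idea of tagging a column with a semantic type by testing sample values against known patterns. The pattern set, function name, and 80% match threshold below are all illustrative assumptions:

```python
import re

# Illustrative patterns only; a real detector would use ML classifiers
# and a far larger catalog of semantic types.
SEMANTIC_PATTERNS = {
    "EMAIL": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "US_SSN": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
}

def detect_semantic_type(sample_values, threshold=0.8):
    """Return the semantic type matched by enough sample values, or None."""
    for name, pattern in SEMANTIC_PATTERNS.items():
        hits = sum(1 for v in sample_values if pattern.match(v))
        if hits / len(sample_values) >= threshold:
            return name
    return None
```

Once a column is tagged (e.g. as `US_SSN`), type-specific quality rules and PII handling policies can be applied to it automatically.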

Common Data Model

A dictionary of enterprise data elements and their definitions that can be mapped to physical tables and datasets.

Reference Data Checks

Validate the list of values in categorical data elements with the reference data to ensure that the values are as expected.
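The core of a reference data check is a set-membership test: flag any value in the categorical column that does not appear in the reference list. A minimal sketch (function name and return shape are assumptions):

```python
def reference_check(values, reference):
    """Flag column values not present in the reference data.

    Returns (passed, unexpected_values).
    """
    allowed = set(reference)
    unexpected = sorted({v for v in values if v not in allowed})
    return len(unexpected) == 0, unexpected

ok, bad = reference_check(["US", "DE", "XX"], ["US", "DE", "FR"])
```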

Data Reconciliation

Check for data integrity by matching data from various sources to ensure that data is consistent across systems.
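Conceptually, reconciliation keys both datasets on an identifier and reports rows missing from either side plus rows whose contents differ. A simplified in-memory sketch (real reconciliation would run against the source systems; the names here are assumptions):

```python
def reconcile(source_rows, target_rows, key):
    """Compare two datasets (lists of dicts) by a key column."""
    src = {r[key]: r for r in source_rows}
    tgt = {r[key]: r for r in target_rows}
    return {
        "missing_in_target": sorted(src.keys() - tgt.keys()),
        "missing_in_source": sorted(tgt.keys() - src.keys()),
        "mismatched": sorted(k for k in src.keys() & tgt.keys()
                             if src[k] != tgt[k]),
    }

report = reconcile(
    [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}],
    [{"id": 1, "amt": 10}, {"id": 3, "amt": 30}],
    key="id",
)
```

An empty report on all three lists means the systems are consistent for the compared columns.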

Data Quality Dashboard

Displays a trend of Data Quality Scores for the enterprise. Users can drill down to review the scores at Data Model, Table, Data Element, and Data Quality Dimension levels. The Data Quality Score is automatically computed from the Data Quality Rules for data at rest as well as for data in motion (ETL).

Data Observability for Data Pipeline

By automating the testing of data being ingested into your Data Lake and Data Integration projects, DataOps Data Quality enables Continuous Integration.

  • Data Profiling: Profiles the datasets being ingested and maintains a repository of historical values.
  • Anomaly Detection: Uses time-series algorithms to predict expected values and identify anomalies.
  • Grouped metrics: Identifies deviations to business metrics and notifies users.
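The profiling-then-detection loop above can be sketched very simply: keep a history of a profiled metric (row count, null rate, a business KPI) and flag a new observation that falls outside an expected band. The three-sigma band below is a deliberately simplified stand-in for the time-series models the product uses:

```python
import statistics

def detect_anomaly(history, new_value, sigmas=3.0):
    """Flag new_value if it falls outside mean +/- sigmas * stdev
    of the historical observations. Returns (is_anomaly, (low, high))."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    low, high = mean - sigmas * stdev, mean + sigmas * stdev
    return not (low <= new_value <= high), (low, high)

# Daily row counts for an ingested dataset (illustrative numbers).
history = [100, 102, 98, 101, 99]
anomalous, band = detect_anomaly(history, 250)
```

When `anomalous` is true, the pipeline run can be failed or a notification sent before bad data reaches downstream consumers.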

Connects to all Popular Data Sources

Whatever form your data takes – relational, NoSQL, cloud, or file-based – DataOps Data Quality connects to most popular data sources.

Data Quality Dimensions

Supports data quality checks for the following DQ dimensions:

  • Completeness
  • Conformity
  • Validity
  • Accuracy
  • Uniqueness
  • Consistency
  • Timeliness

See DataOps Data Quality in action

Add value to your Data Analytics projects and save money.