The Evolution from Data Quality to Data Reliability Engineering for AI

In this video, Sandesh Gawande, CEO of iceDQ, tackles a pressing question: If data teams have been tracking quality for over 20 years, why do AI projects still fail? Drawing on his engineering background and factory floor analogies, he reveals why measuring data quality too late undermines AI success.

He argues that the real problem isn't a lack of data quality measurement, but measuring it too late. By the time issues like poor accuracy, missing data, or inconsistencies surface, the damage is already done. Just as factories need good raw materials and smooth processes, AI models need reliable data and stable pipelines; if either fails, the whole system can break down.
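To make the "too late" point concrete, here is a minimal sketch of a shift-left style check that rejects a defective batch at ingestion instead of discovering the defects downstream. The file name, column names, and rules are illustrative assumptions for this sketch, not iceDQ's product or API.

```python
# A minimal, hypothetical shift-left check: validate a batch at ingestion
# and fail fast, rather than measuring quality after the load is done.
import csv

def is_negative(value):
    """True if value parses as a number below zero; tolerant of bad input."""
    try:
        return float(value) < 0
    except (TypeError, ValueError):
        return False

def validate_batch(rows):
    """Return a list of (row_index, reason) defects for the incoming batch."""
    defects = []
    for i, row in enumerate(rows):
        if not row.get("customer_id"):        # completeness rule
            defects.append((i, "missing customer_id"))
        if is_negative(row.get("amount")):    # validity rule
            defects.append((i, "negative amount"))
    return defects

# "incoming_orders.csv" is a made-up feed name for illustration.
with open("incoming_orders.csv", newline="") as f:
    rows = list(csv.DictReader(f))

defects = validate_batch(rows)
if defects:
    # Block the load: the defect is caught before it ever reaches the AI model.
    raise ValueError(f"batch rejected: {len(defects)} defect(s), first: {defects[0]}")
# ...only a clean batch proceeds into the pipeline
```

The design point is where the check runs, not what it checks: the same completeness and validity rules applied after loading would only report damage that has already propagated.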

Watch the video below to see these insights in action.

Key Topics Covered:

1. What is iceDQ?
2. Why Data Quality Matters in the Age of AI
3. Introducing the Concept of a Data Factory
4. The Persistent Problem with Data Quality
5. Data Quality Metrics: Are They Enough?
6. Real-World Case
7. Data Quality vs Data Reliability
8. What Is Data Reliability?
9. Quality vs Reliability: Knife and Sword Analogy
10. The 3 Pillars: People, Process, and Tools
11. The Data Factory Model Explained
12. Data Testing: What Needs to Be in Place?
13. Embedding White-Box Monitoring
14. Data Monitoring: Are You Doing It Right?
15. Reconciliation and Business Rule Checks
16. Observability and Final Product Checks
17. Final Thought: Reliable vs Quality Sword

Key Takeaways:

1. The shift from traditional data quality to data reliability
2. Building proactive systems that detect and prevent issues before they reach production
3. Core practices: migration testing, pipeline certification, anomaly detection, defect prediction, and compliance tracking (see the reconciliation sketch after this list)
4. Why engineering quality at the source is essential for AI success
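As one concrete example of the practices in takeaway 3, below is a hedged sketch of a source-to-target reconciliation check (topic 15 in the video): it compares row counts and an amount checksum between the two sides and flags the run when they diverge. It uses an in-memory SQLite database with made-up table names; it illustrates the idea and is not iceDQ's implementation.

```python
# Hypothetical source-to-target reconciliation: compare row counts and an
# amount checksum so a broken pipeline run is flagged before consumers see it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (id INTEGER, amount REAL);
    CREATE TABLE tgt_orders (id INTEGER, amount REAL);
    INSERT INTO src_orders VALUES (1, 10.0), (2, 25.5), (3, 4.5);
    INSERT INTO tgt_orders VALUES (1, 10.0), (2, 25.5);  -- row 3 was dropped
""")

def profile(table):
    """Row count and amount checksum for one side of the pipeline."""
    count, total = conn.execute(
        f"SELECT COUNT(*), COALESCE(SUM(amount), 0) FROM {table}"
    ).fetchone()
    return count, total

src, tgt = profile("src_orders"), profile("tgt_orders")
if src != tgt:
    # A failed reconciliation stops the run instead of publishing bad data.
    print(f"RECONCILIATION FAILED: source={src}, target={tgt}")
else:
    print("Reconciliation passed")
```

Running this prints a failure because the target is missing a row, which is exactly the class of silent pipeline defect that quality metrics computed only on the final table would attribute to "bad data" rather than to the process that moved it.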

Explore the #1 Data Testing Tool and boost your productivity now.
