The State of Data Quality: Collaboration, Transparency, & Trust 

Reflections from “Solving Data Quality Challenges” on the Data Gurus Podcast  

As AI tools, programmatic sampling, and synthetic audiences reshape how research is conducted, one truth is becoming clear: data quality cannot be a single-stage problem. It’s a full-lifecycle responsibility.  

In a recent episode of Data Gurus, host Sima Vasa brought together industry leaders Dyna Boen (Escalent), Jon Kay (Intuit), Bob Fawson (Data Quality Co-op), and Steven Snell (Rep Data) for a candid discussion on the evolving data-quality landscape. Several key themes stood out to us at Paradigm Sample.  

1. Defining “Bad Data” Is Getting More Complex  

Two persistent challenges remain: fraudulent respondents and inattentive participants. While the industry has made progress in detecting obvious fraud, Snell cautioned about the rise of "good-looking fraud": responses that appear legitimate but are generated by sophisticated bots or AI tools.

The issue now extends beyond simply removing bad actors. It’s about recognizing when data that looks “clean” may not be authentic. Tools like ChatGPT have introduced new gray areas, prompting fresh questions about what qualifies as an acceptable, human-validated response.  

2. Bad Data Is a Credibility Risk  

Once flawed data reaches the reporting stage, the damage is done. As Jon Kay of Intuit noted, sometimes the numbers simply don't pass the "smell test." Many client-side researchers rely heavily on external panels and agencies, often for as much as 95% of their studies, yet conversations about data quality frequently happen only after problems surface.

As Dyna Boen emphasized, “We’re living in an era where incentives for fraud are high and tactics are increasingly sophisticated.” Solving for data quality, therefore, demands a multi-layered, continuous approach rather than a one-time fix.  

Our Final Takeaways 

Data quality is a shared responsibility that depends on transparency, trust, and continuous collaboration across the ecosystem. It's essential that we keep learning from one another and hold ourselves accountable to universal standards, such as the GDQ pledge, which champions data quality excellence.

At Paradigm, our Managed Research Services approach is intentionally multi-layered, evolving alongside the research ecosystem. We believe in a human-in-the-loop mindset: beyond automated data checks, thoughtful human oversight guides quality at every stage, including additional checks inserted throughout the project lifecycle.