AI Trust Crisis: Companies Question Their Own Models

A significant trust gap is emerging in enterprise artificial intelligence, with 42% of organizations expressing doubt about their AI and machine learning model outputs despite widespread adoption of data monitoring systems, according to a new report.

The study, conducted by the US Business Application Research Center (BARC) and commissioned by data management company Ataccama, surveyed more than 220 data and analytics leaders across North America and Europe. The findings reveal a troubling paradox: while 58% of organizations have implemented data observability programs designed to monitor and resolve data quality issues in real time, trust in AI remains stubbornly low.

The trust disparity is stark when compared to traditional business intelligence tools. While 85% of organizations express confidence in their BI dashboards, only 58% say they trust their AI model outputs - a 27-percentage-point gap that highlights the unique challenges posed by modern AI systems.

The Unstructured Data Challenge

The trust deficit appears linked to AI's increasing reliance on unstructured data sources like PDFs, images, and documents - inputs that traditional data quality tools weren't designed to handle. The report found that fewer than one-third of organizations currently feed unstructured data into AI models, and only a small fraction apply automated quality checks to these inputs.

"Data observability has become a business-critical discipline, but too many organizations are stuck in pilot purgatory," said Jay Limburn, Chief Product Officer at Ataccama. "They've invested in tools, but they haven't operationalized trust. That means embedding observability into the full data lifecycle, from ingestion and pipeline execution to AI-driven consumption, so issues can surface and be resolved before they reach production."

The consequences of failing to do so can be severe. Limburn cited the example of a global manufacturer that used data observability to identify and eliminate false sensor alerts that had been unnecessarily shutting down production lines - the kind of upstream problem resolution, he said, that is "where trust becomes real."

Skills and Governance Gaps

The report identified several barriers preventing organizations from achieving observability maturity. Skills gaps topped the list at 51%, followed by budget constraints and lack of cross-functional alignment. Many companies have implemented observability as a reactive, fragmented monitoring layer rather than embedding it throughout their data lifecycle.

Kevin Petrie, Vice President at BARC, noted a fundamental change in how leading organizations approach the challenge. "We're seeing a shift: leading enterprises aren't just monitoring data; they're addressing the full lifecycle of AI/ML inputs," he said.

The most successful organizations are integrating observability directly into their data engineering and governance frameworks, according to the report. Rather than treating it as a standalone monitoring system, these companies embed observability into DataOps automation, master data management systems, and data catalogs to apply automated quality checks at every stage.
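To make the idea concrete: an automated quality check at the ingestion stage might validate a batch of sensor readings before it reaches a pipeline or model, echoing the manufacturer example above. The following is a hypothetical minimal sketch, not code from the report or from Ataccama's product; the rules and thresholds are illustrative assumptions.

```python
from statistics import stdev

def check_sensor_batch(readings, expected_range=(0.0, 150.0), max_null_ratio=0.05):
    """Return a list of quality issues for a batch of sensor readings.

    An empty list means the batch passes. The rules here (null ratio,
    value range, flatline detection) are illustrative assumptions.
    """
    issues = []

    # Rule 1: too many missing values suggests a faulty or offline sensor.
    nulls = sum(1 for r in readings if r is None)
    if nulls / len(readings) > max_null_ratio:
        issues.append("too many missing values")

    values = [r for r in readings if r is not None]

    # Rule 2: readings outside the physically plausible range.
    out_of_range = [v for v in values if not expected_range[0] <= v <= expected_range[1]]
    if out_of_range:
        issues.append(f"{len(out_of_range)} out-of-range readings")

    # Rule 3: zero variance across the batch often means a stuck sensor.
    if len(values) > 1 and stdev(values) == 0:
        issues.append("flatlined sensor (zero variance)")

    return issues

# A batch with one gap and one implausible spike fails two checks.
print(check_sensor_batch([21.5, 22.1, None, 999.0, 21.8]))
```

A pipeline would run such checks on every batch and quarantine failing data upstream, rather than letting a spurious spike trigger a downstream alert or shutdown.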

This comprehensive approach is becoming increasingly critical as generative AI and retrieval-augmented generation (RAG) systems gain adoption, introducing new forms of risk through their use of dynamic, unstructured data sources.

The report positions trustworthy data as an emerging competitive differentiator, with observability evolving from a niche technical practice into a mainstream requirement for responsible AI deployment.