The Superglue Pizza Problem (and What It Teaches Us About AI Reliability)

November 13, 2025

In 2019, Barr Moses had to convince companies that data reliability was a problem worth solving. In 2025, no one needs convincing; builders are living that problem daily.

When Barr founded Monte Carlo in 2019, 'data observability' wasn’t a category yet. Data teams knew their pipelines were breaking, but they didn’t have a name for the problem, let alone a solution. Coming from consulting at Bain and a background in math and statistics, Barr saw the same story play out across Fortune 500 companies: executives making major decisions on unverified, incomplete data. The result was costly mistakes and lost trust.

Today, Monte Carlo is the world’s leading data reliability company, working with enterprises like Cisco, Intuit, and the New York Times. In this episode of בול בדאטה (Hebrew for "spot-on in data"), the data/AI podcast hosted by Hetz partner Guy Fighel, Barr shared what founders can learn from building a new category — and why every AI startup will soon face the same challenge.

If your data breaks, your product breaks.

For AI companies, data quality isn’t a backend problem; it’s a product problem.

Barr points out that most AI systems are only as good as the data they rely on. In industries like healthcare, finance, or retail, a single wrong answer can damage a brand or trigger regulatory risk. She gives a memorable example: when a user asked Google’s AI what to do when cheese slips off their pizza, the model confidently replied, “use organic superglue.” Funny if you’re Google; devastating if you’re not.

“Most companies can’t afford to ‘superglue’ their reputation back together after one bad AI response.”

Category creation starts with pain, not product.

When Barr launched Monte Carlo, 'data observability' wasn’t a budget line item. There was no existing slot in the enterprise tech stack. Her advice for founders: don’t start a new category unless you have to. And if you do, lead with the problem’s impact, not your product’s features. “People feel the pain before they understand the solution,” she says. “Once they connect business risk to data reliability, the budget follows.”

“You don’t replace BI or your Data Warehouse — you’re adding something new. So you have to convince people why it matters.”

AI observability and data observability are inseparable.

Many companies try to treat AI reliability as something distinct from data quality, but Barr argues it’s the same foundation. Most AI “failures” trace back to one of four root causes:

  1. Bad or missing data
  2. Broken code
  3. Infrastructure issues (e.g., Airflow, dbt, LangChain)
  4. Model output not fit for purpose

“You can’t separate AI observability from data observability — because AI breaks for the same reasons data does.”

Hope is not a strategy.

One of Barr’s strongest messages: founders can’t rely on “it’ll probably work” when building data-driven products.

“It doesn’t make sense to live in a world where we build Data & AI products and just hope they’ll be fine.”

Companies that invest in observability aren’t just protecting against errors; they’re freeing their teams to move faster. When 80% of data issues are caught before customers notice, teams can focus on building, not firefighting.

The conviction to keep building

Barr closes with the same conviction that fueled Monte Carlo’s early days: the belief that every company will one day need reliable, observable data and AI systems, just like software needed monitoring a decade ago.

“At the end of the day, we’re here because we believe data and AI deserve the same trust as any other mission-critical system.”

Subscribe to Hetz Ventures on YouTube to get updated when new episodes release, or follow along on Geektime.