PODCAST | Is your AI hallucinating?

December 11, 2025

What’s the true cost of chasing the latest AI hype? More often than not, it’s a solution in search of a problem.

In this sharp, pragmatic new episode of Tech Unboxed, BBD software engineers Riselle Rawthee and Hyla Fourie pull back the curtain on the tension between flashy AI solutions and the right tool for the job. They challenge the pervasive belief that Large Language Models (LLMs), Retrieval Augmented Generation (RAG), and agentic systems are always the answer, urging teams to start with the simplest, most viable path – which might be a clear prompt, a smaller model, or even a non-AI approach.

 

The “garbage in, garbage out” reality

The conversation drills down into the most critical factor for reliable AI: data realism.

The engineers argue that better data is better than more data. RAG, while powerful, doesn’t repeal the “garbage in, garbage out” law; it only sharpens it. Poorly structured, outdated, or noisy data will simply yield wrong answers with extra confidence.
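To make the point concrete, here is a toy retrieval sketch (the documents, names, and scoring rule are all illustrative, not from the episode): a retriever faithfully surfaces whatever the corpus contains, so a stale policy document becomes a confidently wrong answer once a model is grounded on it.

```python
# Toy keyword retriever: it returns the best-matching document,
# whether or not that document is still correct. Corpus is illustrative.

def tokens(text):
    """Lowercase terms with surrounding punctuation stripped."""
    return {w.strip("?.,!()").lower() for w in text.split()}

docs = {
    "policy-2021": "VPN access requires a hardware token (outdated policy)",
    "lunch-menu": "Friday lunch menu is soup and sandwiches",
}

def retrieve(query, corpus):
    """Return the document sharing the most terms with the query."""
    q = tokens(query)
    best = max(corpus, key=lambda d: len(q & tokens(corpus[d])))
    return corpus[best]

context = retrieve("how do I get VPN access?", docs)
# A model grounded on this context will repeat the outdated policy
# with full confidence -- retrieval sharpened the error, not fixed it.
```

The fix is upstream data hygiene, not a bigger model: curate and date-stamp the corpus before wiring it into a pipeline.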

 

The speed vs. stability trade-off

The episode also tackles the evolving role of the developer in the age of generative AI. Assistive coding tools accelerate learning and compress research cycles – a practice the guests call “vibe coding”. While this speeds onboarding onto new stacks, it poses a risk to long-term maintainability when engineers don’t understand the code that was generated for them.

The future favours adaptable problem solvers who can own end-to-end systems. This means:

  • Knowing how to constrain a model with retrieval
  • Understanding when to escalate from a prompt to a full pipeline
  • Possessing the discernment to say no to AI entirely when a simpler method (like a database or search index) suffices
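As a sketch of that last point, a plain inverted index with no model at all often suffices for lookup-style tasks. The class and sample data below are illustrative, assuming a small FAQ-search use case:

```python
# Minimal keyword search index -- a non-AI baseline worth trying
# before reaching for an LLM or a RAG pipeline.
from collections import defaultdict

class KeywordIndex:
    """Inverted index mapping lowercase terms to document ids."""

    def __init__(self):
        self.index = defaultdict(set)
        self.docs = {}

    def add(self, doc_id, text):
        self.docs[doc_id] = text
        for term in text.lower().split():
            self.index[term].add(doc_id)

    def search(self, query):
        """Return doc ids ranked by how many query terms they match."""
        scores = defaultdict(int)
        for term in query.lower().split():
            for doc_id in self.index.get(term, ()):
                scores[doc_id] += 1
        return sorted(scores, key=scores.get, reverse=True)

idx = KeywordIndex()
idx.add("faq-1", "how to reset your password in the portal")
idx.add("faq-2", "billing and invoice questions")
hits = idx.search("reset password")
```

If this answers the question, escalating to a prompt, then to retrieval, then to a full pipeline can wait until the simple version measurably falls short.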

 

Agentic AI: Autonomy with a warning label

The episode concludes with a sober look at agentic AI. These systems promise smarter reasoning and autonomy by coordinating specialised agents (e.g., routing math tasks to a calculator), but they come with a high risk of overengineering and operational cost.
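The calculator example can be sketched as a tool router. This is a hypothetical illustration, not a real agent framework's API: math-like requests go to a deterministic calculator, and everything else would fall through to a model (stubbed out here).

```python
# Hypothetical agent-style router: arithmetic goes to a safe calculator
# built on the AST; the "llm" branch is a placeholder, not a real call.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expr):
    """Evaluate basic arithmetic without eval(), via the parsed AST."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def route(task):
    """Hand math tasks to the calculator; stub the model path."""
    if task and all(c in "0123456789+-*/(). " for c in task):
        return ("calculator", calculator(task))
    return ("llm", None)  # placeholder: a language model would answer here

tool, answer = route("12 * (3 + 4)")
```

Even this tiny router illustrates the episode's test for agentic systems: the handoff is explicit, the tool's behaviour is measurable, and the deterministic path costs nothing to run.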

The guidance is clear: Judge agentic systems by measurable outcomes, operational costs, and the clarity of tool handoffs, not by their marketing allure. The shared takeaway is optimistic but grounded: Engineers must stay in the driver’s seat. AI is a powerful accelerant for good engineering practice, but we must resist the urge to treat it like magic. The path to reliable AI starts with clarity on the objective, data hygiene, and a commitment to scaling complexity only as justified by value.

 

Interested in more insights?

Watch the full episode of Tech Unboxed – Is your AI hallucinating? From RAG to vibe coding, and everything in between – with BBD’s Riselle Rawthee and Hyla Fourie.
