From 6 months to 6 weeks: Inside Haleon’s first AI-driven insights project

Jan 30, 2026 | Publications


WARC’s Rica Facundo sits down with Shawn Roy, Haleon APAC Insights and Innovation Lead, and Akshay Mathur, Partner at Quantum, to unpack the process of AI-accelerated insight generation and how speed and depth don’t have to be a trade-off.

WARC: To begin, how has the traditional model of research typically worked, and what were the limitations – especially in a healthcare context?

Akshay Mathur, Quantum: When we talk about “traditional research,” we’re primarily referring to qualitative methods – focus groups, interviews, ethnographies. There’s nothing inherently wrong with them; they still have a place. But increasingly, they’ve been repositioned as slow and expensive.

Cycles are getting shorter, budgets are tighter, and businesses need impact to land quarterly. So, the industry is asking for solutions that are faster, more cost-effective, and still sufficiently deep.

Historically, when clients asked for “better, cheaper, faster,” we’d say we could only give two of the three. With new AI applications, we’re starting to reimagine that triangle. If AI can help deliver “fast and cheap,” then the researcher’s role – judgement, creativity, originality – becomes even more crucial in ensuring the “better.”

Because organisations need quicker impact, the challenge is: how do we maintain depth while reducing time and cost? That’s where AI becomes part of the toolkit.

Shawn Roy, Haleon: From the client side, I see a similar pattern. There’s a strong impulse to “learn more” – run more focus groups, more visits, more interviews. But the truth is, we already sit on a vast amount of tacit institutional knowledge, often scattered across markets and functions. It’s humanly impossible to connect all those dots quickly.

In this project, AI synthesis enabled us to surface opportunities from existing knowledge quickly, without commissioning new consumer research. We consolidated more than 70 internal documents and unlocked insights we hadn’t been able to connect before.

Healthcare brings additional constraints, as there are regulatory hurdles to talking to specific consumer groups, such as children or pregnant women. This approach allowed us to understand patterns and emotions in ways that would have taken much longer, or been impossible to explore fully, through traditional fieldwork.

WARC: So, all the inputs were existing internal documents – not new consumer interviews?

Akshay Mathur, Quantum: Correct. We intentionally resisted the instinct to “go out and talk to more people.” Instead, we focused on the internal insight stack – 70 documents across multiple markets, supplemented by stakeholder interviews.

There’s a great deal of latent knowledge in organisations, but it’s under-leveraged because it lives in conversations, decks, research archives, and people’s heads. AI tools helped us access that scale and speed without losing the nuance.

Shawn Roy, Haleon: And that brought to light things we hadn’t been thinking about. For instance, our category often defaults to thinking about mobility in terms of “macro health issues” like walking or running. But this AI approach helped us uncover micro health issues across multiple body areas – things that affect everyday life, identity, and even cultural practices.

It shifted us from clinical problem-statements to life-centred moments. That was powerful.

WARC: Shawn, you mentioned this was the first time Haleon used AI on internal data. The healthcare industry is typically cautious. What reservations did you have, and what helped overcome them?

Shawn Roy, Haleon: A lot! We’d never done anything like this before. But my leadership team challenged me to deliver insights in 1.5 months, not six. They also believed we had enough internal knowledge – we weren’t connecting it fast enough.

Security was a significant concern. We didn’t use open public tools; we used a ring-fenced GPT, specifically because it promised a high level of encryption and guaranteed that uploaded data wouldn’t be used for model training. Internally, it went through heavy scrutiny. 

This was also a leap of faith. But if we kept saying “not yet,” we’d never get started. So we treated it as an experiment – with transparency and constant checks.


And it wasn’t AI alone. It was an AI-human symphony. We brought in marketing, R&D, category leaders – anyone with relevant tacit business knowledge – to interpret, validate, and refine what the model surfaced.

WARC: Akshay, can you paint a picture of how that “symphony” worked? What did the process look like?

Akshay Mathur, Quantum: We designed the project to be iterative. Traditionally, agencies disappear into a black box and return with a report. Here, we deliberately broke that model.

Every week, we ran working sessions:

  • Show work-in-progress
  • Validate themes
  • Refine prompts
  • Get business inputs
  • Re-run the model
  • Assess implications for innovation, activation, communication, and portfolio

We handed over part of the process to the AI, balancing it with human interpretation and business context to ensure the outputs had practical landing spots.

WARC: Verification and rigour are fundamental in research. How did you prevent hallucinations and maintain evidential grounding?

Akshay Mathur, Quantum: Three things:

  1. Evidence-based databases – we fed the AI with consolidated, factual Haleon documents – not the open internet.
  2. Traceability – for every insight, we checked: Which document did this come from? Which line? Which slide? The output had to be grounded in an evidence trail.
  3. Human review loops – the Haleon team assessed each iteration across categories, R&D, and marketing.

WARC: Another dimension of qualitative work is cultural nuance. Can AI surface that, or does the human role become even more important?

Akshay Mathur, Quantum: AI can find patterns, but humans bring the meaning.

Some cultural nuances were documented internally so that the AI could extract them. Where cultural insights weren’t documented, we used our 30 years of experience.

The strategic leap, the cultural leap – the model doesn’t do that yet. That’s still a very human value.

Shawn Roy, Haleon: We also triangulated with external cultural trend reports to address internal gaps. For example, we realised we had a bias towards thinking that specific health issues only affect older adults. External cultural cues showed us it’s relevant much earlier in life.

AI alone wouldn’t have caught that.

WARC: Akshay, let’s talk about prompting. What does a “discussion guide” look like when your participant is an AI?

Akshay Mathur, Quantum: We created a prompt guide, similar to a discussion guide. The underlying principles are familiar:

  • Ask foundational questions
  • Probe
  • Check what’s present and what’s missing
  • Validate
  • Iterate

The most significant shift is that it’s much more dynamic. Each iteration changes what questions you ask next.

A useful mental model is:

  1. Five Whys – understand root causes
  2. What is it not? – uncover blind spots
  3. So what? – find meaning
  4. What’s next? – land implications for brand, business, portfolio

Shawn Roy, Haleon: The guide has to evolve constantly – almost like moving through levels in a game. You’re continuously feeding new questions from R&D, marketing, and strategy. AI isn’t thinking from a business perspective; you must feed that perspective into it.

WARC: You’ve talked about an “AI + human symphony.” Which teams needed to be part of that symphony for it to work, and why?

Shawn Roy, Haleon: This kind of work becomes truly powerful when it’s cross-functional, not just Insights + Agency. Some of the most meaningful breakthroughs came from involving R&D, which helped us articulate the functional needs behind life moments, and marketing, which helped validate viability.

AI becomes a unifier across functions.

WARC: How long did this project take, and what did AI meaningfully save in time or cost?

Shawn Roy, Haleon: We delivered everything – end-to-end – in six weeks.

A traditional multi-market qualitative study would have taken:

  • ~1.5 months for fieldwork
  • ~1 month for analysis
  • And sometimes up to six months, depending on complexity

This approach was roughly one-fifth the time and one-fifth the cost.

Akshay Mathur, Quantum: And the quality didn’t drop. It allowed more depth because we could interrogate a much broader evidence base.
