Moderation Isn’t Methodology
Why insights live in the deviations
There’s probably a version of this conversation happening in every insights team right now. AI-moderated interviews are faster and cheaper than traditional IDIs. Self-guided digital diaries eliminate the scheduling problem entirely. Automated platforms can synthesize themes across hundreds of responses in the time it takes to transcribe one. The case for efficiency is real, and for some research questions in specific contexts, these tools deliver genuine value.
But something important is getting lost in this conversation, and it’s important enough to name directly: moderation isn’t a mechanical function. Treating it as one — as a cost to be engineered out of the process — doesn’t streamline qual. It changes what qual is capable of uncovering.
“What [an automated system] can’t do is recognize that the most important thing a person said wasn’t the answer to the question.”
What a discussion guide can’t tell you
Qual 101: every qual study starts with a discussion guide. It represents the team’s best thinking about what to ask, in what (general) order, and with what framing. It’s also built on assumptions about what respondents will say and how the conversation will unfold.
Any experienced moderator treats the guide as a starting point: the loose structure for a conversation that unfolds in real time. When a person says something unexpected, the moderator follows the respondent. When an answer is technically complete but clearly an evasion or a cop-out, the moderator pushes for more. When the room shifts, when something lands differently than expected or a comment produces a reaction the guide didn’t anticipate, the moderator reads that and adjusts, always with the research objectives and overall context in mind. This isn’t improvisation for its own sake. It’s trained judgment applied in real time, and it’s where qual research earns its value.