
Dan Gardiner | 17 Apr 2026
"Surveys are a waste of time!" People don't usually say it out loud, but they think it. You've probably felt it yourself halfway through filling one out, when it becomes clear how hard it is to say anything that actually matters. Many quietly hold the view that feedback is a burden, both for the people giving it and for the teams trying to use it. But what if we've misdiagnosed the problem?
Across five projects over the past year, we have asked participants a simple question at the end of our AI-assisted interviews: "How would you rate the overall quality of this interview?"
The pattern is striking. From UNICEF frontline staff in Afghanistan to teachers in the Philippines, the vast majority of participants rate the experience as very good. Most of the remainder describe it as good, with almost no one rating it negatively: just one of the 975 participants who provided a rating.
And more interesting than the ratings is how people describe the interaction itself:
"The interview was so nice. I was thinking of this being a very difficult task, but when I have seen it, it's very simple and so straightforward." (Healthworker, Sierra Leone)
"You asked me so many questions that my heart was so happy I felt like I found God." (Water Security Champion, India)
"Praise be to God this interview is very very good for us to share the joys and sorrows when working in the field🙏" (Community Health Worker, Indonesia)
This is not what "survey fatigue" is supposed to sound like. One reason, we think, is that a good interview does more than extract information. It helps people think. Several participants described the interaction not simply as easy, but as useful in itself.
"It tickles the mind and improves thinking ability." (Teacher, Philippines)
"The AI-led conversation was clear and supportive, helping me think deeply about learner support and promotion decisions." (Teacher, Philippines)
"It helped guide me to give thoughtful answers step by step." (School leader, Philippines)
That matters because, in most monitoring systems, feedback is treated as a one-way transaction: collect answers now, analyse them later. But when the person responding finds the interaction clarifying or reflective, the quality of what comes back changes. In practice, this means clearer diagnosis of what is and isn't working: why services are not used, where delivery breaks down, and what frontline workers actually need.
It would be easy to attribute all this to AI, but that would be too simple. The more accurate explanation is design. Good interviews do not happen by accident. They depend on identifying the operational questions that actually matter, structuring the conversation around them, and keeping follow-up prompts focused on areas that will produce insight rather than noise. It also involves prototyping and piloting them before they go live. The AI then does the final part of the job: responding, probing, and adapting in real time, but within an interview that has been deliberately built and tested.
The experience itself matters too. Our interviews are short. They are easy to participate in (including via WhatsApp). They are anonymous. People can respond in their own language, and use voice notes rather than typing. This combination removes many of the frictions that make feedback feel awkward or difficult.
"With a personal interview, we would feel hesitant... but on WhatsApp we don't feel any hesitation." (Community volunteer, India)
"Praise be to God, interviewed via voice message. Thank you, thank you very very much." (Community Health Worker, Indonesia)
Taken together, our interviews are designed to feel like reflective conversations. That point is easy to underestimate. We spend a great deal of time thinking about what data to collect, how to analyse it, and how to present it. We spend much less time thinking about what it feels like to give that data in the first place. If feedback feels burdensome, performative, or pointless, people comply minimally. If it feels simple, safe, and thought-provoking, they tend to say more, and think more, as they do it.
The lesson here is not that every programme needs AI-assisted interviews. It is that people are often more willing to share useful feedback than we might assume. But to get that feedback, the interaction has to be designed for the person giving it, not just for the team analysing it later.