MORE HUMAN INSIGHT MORE OFTEN


The current potential for AI in the work Oxygen does.

Oxygen specialises in the analysis of brand problems and informed recommendations for how to take brands forward in positioning and marketing. Our work always starts with verifying brand and market status via quant data, but our core expertise is qualitative: why customer phenomena are happening and what to do about them. AI is both tremendously helpful and tremendously unhelpful in this context, depending on how you apply it.

Unhelpful stuff

First, we agree with many quant experts that ideas like synthetic respondents aren’t currently contributing much. Indeed, for us, the synthetic respondent is a bit of a questionable concept. One of the great things about paying to talk to real human beings is that they constantly surprise you! By talking to them really regularly, brands get ahead of the game: what has just happened, what’s about to happen, currency for your brand planning.

The only thing Oxygen can ever be sure of when we start a qualitative project is that we WILL find out something new about people, however long we’ve been in a market. In that context, clients too might consider the idea of predicting typical future influences on the brand from past ones both disrespectful of human beings and rather pointless.

Currently, synthetic respondents are being considered, for example, as an option in quant to replace survey weighting (basically plumping up a sub-sample you’ve not managed to interview in the time, e.g. DEs or 16-24s, to the right numbers by re-entering or doubling up data from existing relevant respondents in that cell). Unsurprisingly, reported recent experiments using synthetic respondents for this indicate, among other things, over-representation of middle-of-the-road opinion and poor representation of outlier views. It’s hard to see what advantage using AI here has over conventional weighting. And – secret squirrel – it is actually really cheap to talk to real people online, so if you want better value per respondent £, maybe just buy some more quota.
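For readers who like to see the mechanics, here is a minimal sketch of what conventional post-stratification weighting looks like in code, for contrast with duplicating respondents. The quota cells, targets and figures are purely illustrative, not taken from any real Oxygen study.

# A minimal sketch of conventional post-stratification weighting, shown for
# contrast with duplicating ("synthetic") respondents. All cells, targets and
# responses below are illustrative assumptions.
import pandas as pd

# Hypothetical achieved sample: each row is one real respondent, tagged by quota cell.
sample = pd.DataFrame({
    "respondent_id": range(1, 11),
    "cell": ["16-24", "16-24", "25-54", "25-54", "25-54",
             "25-54", "55+", "55+", "55+", "55+"],
    "brand_consideration": [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
})

# Known (assumed) population shares for each cell.
population_share = {"16-24": 0.30, "25-54": 0.45, "55+": 0.25}

# Weight = target share / achieved share, so under-recruited cells count for more.
achieved_share = sample["cell"].value_counts(normalize=True)
sample["weight"] = sample["cell"].map(
    lambda c: population_share[c] / achieved_share[c]
)

# Weighted estimate of brand consideration across the whole sample.
weighted_estimate = (
    (sample["brand_consideration"] * sample["weight"]).sum() / sample["weight"].sum()
)
print(f"Weighted brand consideration: {weighted_estimate:.2%}")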


Helpful stuff

Using ‘weak AI’ in basic research analysis is great.

Consultants like Oxygen talk to people to pick up trend clues, ‘place them’ in a study’s respondent base by importance and then relate all that insight back to knowledge of your brand or how marketing works. That is the valuable, and currently non-replicable, part of what human researchers and their clients do. In 10 qualitative interviews there might be one respondent who tells you something genuinely radical and novel. Only a good human listener is going to pick that up. And only a very good human listener currently knows enough to fuse that rare insight with a brand’s USPs, changing cultural context or specialist comms expertise to produce a breakthrough brand recommendation.

However, some more basic research analysis – perhaps 50% of what a typical client might want you to collect in a study – is time-consuming note-taking and accurate reporting. This might be multi-brand behaviour, usage, basic feedback on their experience of a show or product, likes and dislikes of the current offer, or what hardware they have in their home.

We actually think some of this analysis can be done better by AI than by the average human, in the time most studies allow.

Take, for example, the use of AI to analyse diary studies. A 7–10-day diary study looking at, say, media use, feedback and views on how brands compete gives you a lot of complex data. The human moderator definitely needs to read it properly at least once or twice to spot real insight and probe it further with the respondent. However, when it comes to summarising the more factual elements, AI is invaluable. A moderator might read through that diary several times, but it’s unlikely he or she can hold all the information in it in mind at one time. Writing a really good and fair summary of the feedback from each respondent to place alongside their other data is almost prohibitively time-consuming. An AI-generated summary for each respondent is brilliant, usually very accurate, and gives us extra analysis potential, e.g. allowing us to home in at the very end of the study on the really important respondents and drill down into what they have in common with each other.
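To make that concrete, here is a rough sketch of how per-respondent diary summaries might be generated with an LLM API. The prompt, model name, helper function and data structure are all illustrative assumptions, not a description of Oxygen’s actual pipeline.

# A rough sketch of generating one factual summary per respondent from diary
# entries, using the OpenAI Python client. Model choice, prompt wording and the
# summarise_respondent helper are hypothetical, for illustration only.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

def summarise_respondent(respondent_id: str, diary_entries: list[str]) -> str:
    """Produce a short, factual summary of one respondent's diary entries."""
    diary_text = "\n\n".join(
        f"Day {day}: {entry}" for day, entry in enumerate(diary_entries, start=1)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarise the factual content of this media diary: channels "
                    "and platforms used, brands mentioned, stated likes and "
                    "dislikes. Do not speculate beyond what is written."
                ),
            },
            {"role": "user", "content": f"Respondent {respondent_id}:\n{diary_text}"},
        ],
    )
    return response.choices[0].message.content

# Example usage, assuming a hypothetical dict mapping respondent IDs to entries:
# summaries = {rid: summarise_respondent(rid, entries) for rid, entries in diaries.items()}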

Also, if clients want a reliable research report, they need very accurate quotes from the right people. AI transcription breaks the back of that task. AI transcripts are still not perfect, but they do mean the moderator can listen back to all those interesting, varied, unexpected human respondents with a rough time-coded transcript to hand for correction, taking many hours out of the process.
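As a simple illustration of the kind of rough time-coded transcript we mean, the sketch below uses the open-source openai-whisper package; the file name and model size are placeholders, and this is not a statement of any particular tool Oxygen uses.

# A minimal sketch of producing a rough time-coded transcript for a moderator
# to correct while listening back. Uses the open-source openai-whisper package;
# the audio file name and model size are illustrative.
import whisper

model = whisper.load_model("base")  # smaller models are faster, larger ones more accurate
result = model.transcribe("respondent_03_interview.mp3")

# Each segment carries start/end times in seconds plus the transcribed text,
# which is enough to jump to a quote and check it against the recording by ear.
for segment in result["segments"]:
    start, end = segment["start"], segment["end"]
    print(f"[{start:7.1f}s - {end:7.1f}s] {segment['text'].strip()}")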

So, until we get AI that, for example, incorporates norms on how research respondents react to unfamiliar NPD ideas and how far your respondents’ responses diverge, or builds in really reliable behavioural psychology models, or distils marketing theory into a tool to generate a marketing plan, you’ll still need human analysts. And to get valuable insight in the first place, (we think) you’ll always need human respondents.

But at the moment we think the AI opportunity is that our clients should be able to afford to do more research, more frequently, with more respondents, and analyse it much more quickly. That can’t be bad.