Tuesday, March 18, 2025

Using GenAI in strategy, research and insight

 

This week’s provocation: Using GenAI in strategy, research and insight

This week I ran a virtual version of the Advanced Application of AI in Advertising course that I put together for the IPA. One of the key topic areas is how to use GenAI tools in the strategy and planning process, which I think is an endlessly shifting but fascinating topic. I genuinely think that integrating these tools into the planning workflow can be game-changing, particularly for research, so I thought I’d write up some techniques that I’ve found especially useful. Many of you seem to have found my recent post on using LLMs to challenge your thinking beneficial, so this is a build on that.

It’s important to say that this post is not about synthetic research, or data and performance analysis, or using the bespoke tools that have been created (shout out to Adaily, Springboards and Magnolia, who are all doing interesting things), but more about how LLMs can help you with everyday research. The context of this is the advertising planning process, but I think these tips are useful for any situation where you want to understand something better.

Academic Ethan Mollick has made the point that many people use LLM tools as they would a search engine, which doesn’t necessarily get the best out of the AI. But framing the answer to this as a need to learn complex prompt engineering isn’t that helpful either. To paraphrase Mollick, it works best to treat the AI like an infinitely patient research assistant who forgets everything you’ve told them at the start of each new conversation. There’s no one right way (so the best advice is to try stuff out, and keep trying), but these are some techniques that I’ve found consistently useful:

  • Diversity of insights: The first thing to recognise is that LLMs can support a diverse set of research contexts. I’ve always liked Julian Cole’s way of navigating a brand’s strategy and positioning by looking at key insights across the 4Cs: company, category, customers and culture:

    • Company: you can use LLMs for classic analysis like PESTLE, Porter’s 5 forces, and SWOT but also for market positioning, financial performance, leadership and customer perception insights.

    • Customer: they can do a pretty good job of supporting persona generation, journey mapping, and understanding consumer trends, behaviour and sentiment.

    • Category: alongside the usual analysis like market positioning, competitive scenarios and emerging trends, LLMs can be good for understanding risk and opportunity, value and supply chain issues, acquisitions and scenario planning.

    • Culture: they can help with cultural segmentation, anthropology and local nuances, as well as cultural symbols and artefacts, and cross-cultural analysis.

  • Breadth of application: Google’s prompting guide for strategists and creatives (PDF) sets out four main ways in which strategists can use LLMs, which is a good starting point:

    • Condense: Break down complex information into digestible insights.

    • Expand: Generate new ideas or dive deeper into topics.

    • Iterate: Refine existing ideas and content by exploring variations.

    • Finesse: Polish for clarity and impact.

  • Starting prompts: many people seem to begin with vague, unspecific prompts and are then unimpressed with the mediocre outputs they get. The quality of responses is generally directly proportional to the quality of inputs: more context means better outputs, and beginning with good context gets you off to a strong start. Personally I’m a fan of the RISEN framework (originated by Kyle Balmer), which stands for Role, Instructions, Steps, End goal, Narrowing (or novelty). This gives the AI a persona to follow, specific instructions and a goal, context on how the information will be used, and details on how you want it to be presented (there’s a minimal sketch of what a RISEN-style prompt can look like after this list).

  • Going deeper: in her deck on Strategy in the Era of AI, Zoe Scaman said that ‘AI is not a Q & A platform…it doesn’t simply retrieve information. It learns from data, making connections and recognising trends to offer insights and potential answers’. And she’s right. The real value comes after the initial prompt, when you’re going back and forth human-AI-human-AI-human to dive deeper, explore, and finesse. Treating the AI as a tool for exploration rather than as a search engine takes outputs to a different level.

  • Opening up new lines of thought: summarising and assimilating seem to be the main ways that people currently use LLMs, but there is so much more you can do to unlock different ways of thinking about a topic. As an example, this strategist used Notebook LM to capture key trends from this folder of over 100 trends decks for 2025. But she also went further, asking questions such as where unexpected industries are teaming up, or which overlooked insights could create an edge. When I tried this, the summary trends were pretty uninspiring and generic, but the follow-up questions revealed some interesting new thoughts. I also did a bit of norm switching, asking how trends from one industry could apply to another, and that was useful too (see my previous post on using LLMs to challenge your thinking for more on this).

  • Use a diverse set of tools: I inevitably have my favourite tools, but sometimes it can be useful to deliberately go beyond habit and use a wider range of agents. One of the ways in which AIs are quirky is that different tools give different answers, so just as human cognitive diversity helps to solve problems faster and better, diverse inputs and perspectives from different tools can help too. Alongside the main general-purpose assistants like ChatGPT, Gemini and Claude, here are a few tools that I use quite a lot:

    • Unstuck: great for summarising and drawing out key points from lengthy videos (like lectures or talks)

    • Google’s Notebook LM: wonderful for analysing/summarising multiple inputs such as research studies, reports, videos, and also making connections between topics

    • Globe Explorer: great at structuring a topic at a high level, particularly when you don’t know much about it

    • It’s also worth shouting out the Custom GPTs available in ChatGPT. There’s a good range in there that can support research - the one I probably use most is AskYourPDF Research Assistant. I’ve also created my own Custom GPT, trained on my 3 books and 16 years of blog posts, which is pretty good, though in truth I probably need to put a bit more work into finessing it.

  • Get the LLM to check its own homework: I will almost always ask the AI to finesse and improve its answers, and it often results in much better outputs. I read somewhere (I’ve forgotten where, so forgive the lack of a link) that asking the LLM to rate its answer on a scale of 1-10 against your criteria helps in the improvement process. If it comes back with an 8 out of 10 (which it often does), you can challenge it to improve it to a 10. There’s something about using a numerical grading that really works (I’ve sketched this loop in code after the list).

  • Using the LLM as a sparring partner: I liked this idea of asking the LLM to play the role of the audience that you are about to present to and using it to critique your pitch, presentation or idea (HT Shane O’Leary for that one). You can give it context like what’s important to them, how they respond to information or the kinds of questions they ask. It’s an excellent way to prepare, and the AI seems to be able to anticipate questions or challenges very well (there’s a sketch of this after the list too).
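
For anyone who wants to see the RISEN idea made concrete, here’s a minimal sketch of a RISEN-structured prompt sent through an API rather than a chat window. I’ve used the OpenAI Python SDK purely as an illustration (any chat-style LLM API works the same way), and the brief, category and model name are invented placeholders rather than recommendations.

```python
# A minimal sketch of a RISEN-structured prompt, using the OpenAI Python SDK
# purely as an illustrative client (any chat-style LLM API works the same way).
# The brief, category and model name below are invented placeholders.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

risen_prompt = """
Role: You are a senior brand strategist at an advertising agency.
Instructions: Analyse the UK premium coffee category for a challenger brand brief.
Steps:
1. Summarise the main competitive set and how each brand positions itself.
2. Identify three consumer tensions the category is not addressing.
3. Suggest two territories a challenger brand could credibly own.
End goal: A one-page summary I can use to kick off a positioning workshop.
Narrowing: UK market only, last two years, bullet points, no jargon.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: use whichever capable model you have access to
    messages=[{"role": "user", "content": risen_prompt}],
)
print(response.choices[0].message.content)
```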
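
In the same spirit, here’s a rough sketch of the ‘check its own homework’ loop: get an answer, ask the model to grade it from 1 to 10 against explicit criteria, then challenge it to close the gap. The question, criteria and model name are placeholders.

```python
# A rough sketch of the 'check its own homework' loop: get an answer, ask the
# model to grade it from 1 to 10 against explicit criteria, then push it to
# close the gap. The question, criteria and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set
MODEL = "gpt-4o"   # placeholder model name

criteria = "specific, evidence-based and directly usable in a creative brief"
messages = [{
    "role": "user",
    "content": "Summarise the key consumer trends shaping low-alcohol beer in 2025.",
}]

def ask(messages):
    """Send the running conversation, append the reply to it, and return the reply."""
    response = client.chat.completions.create(model=MODEL, messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

first_draft = ask(messages)

# Turn 2: numerical self-grading against your criteria
messages.append({
    "role": "user",
    "content": f"Rate your answer from 1 to 10 against these criteria: {criteria}. "
               "Explain briefly what is missing.",
})
self_grade = ask(messages)

# Turn 3: challenge it to close the gap
messages.append({
    "role": "user",
    "content": "Now rewrite the answer so it would score 10 out of 10 on those criteria.",
})
print(ask(messages))
```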
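
And finally, a sketch of the sparring-partner technique: the audience description goes in the system prompt, your pitch goes in the user message, and you ask the model to react as that audience would. The persona, the pitch file and the model name are all invented placeholders.

```python
# A sketch of the sparring-partner technique: describe the audience in the
# system prompt, paste in your pitch, and ask the model to respond as that
# audience would. The persona, pitch file and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

audience_persona = """
You are the CMO of a mid-sized retail brand. You are commercially minded,
sceptical of marketing jargon, under pressure to show short-term sales impact,
and you ask blunt questions about cost, measurement and risk.
"""

# Placeholder: a short written summary of the idea you are about to present
pitch = open("pitch_summary.txt").read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": audience_persona},
        {"role": "user", "content": (
            "Here is the idea I am about to present to you:\n\n" + pitch +
            "\n\nReact as this audience would: what lands, what does not, and "
            "what are the five hardest questions you would ask me?"
        )},
    ],
)
print(response.choices[0].message.content)
```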

A final point on this. As I’ve written about before, one of the reasons I don’t use the Custom GPT I’ve created for writing posts like this is that writing is how I think. Going through the writing process helps me to understand a topic, make connections, learn, and form an opinion. Getting an AI to do this for me means that I don’t get any of that, and I miss out on the real value of writing in the first place. It’s worth reflecting that, in a strategy or research process, LLMs are no substitute for human judgement or consideration, but they can help you to think faster and better. They can be so much more than summarisers. They can be thought partners - critiquing, improving and finessing, but also opening up entirely new lines of thinking.