This week’s provocation: Reframing problems with AI
A happy 2026 to you all (when is it too late to be wishing people a HNY?). For the first post of the new year I wanted to go deeper on a theme that has come up repeatedly in my work - how do we genuinely break out of the assumptions that can hold us back (including those we don't even know we have)? And ultimately, how can we ask better questions? I'm increasingly minded to believe that, with AI capability developing so fast (never mind the huge levels of general uncertainty we're all sitting with right now), we're really going to need to think bigger and colour outside the lines this year. As Marshall Goldsmith famously wrote, what got you here won't get you there.
One of my favourite examples of thinking differently is an experiment run by Dr. Tina Seelig at Stanford University. She split her large class of students into fourteen small teams and set them all a task: each team was given five dollars and a two-hour window, with the challenge of making the highest possible return. They were encouraged to be as entrepreneurial as possible and were allowed a few days to plan what they were going to do. Each team was then told they had three minutes in the next week's class to present their results.
A couple of teams bought lottery tickets to try and win big, which didn't work. Many teams took the obvious approach and bought cheap products to sell on for marginally higher prices, each making a few more dollars than the original stake. A few groups were more entrepreneurial and offered a service of some kind to other students. One group, for example, set up a stall on campus and charged students one dollar to pump up their bicycle tyres. They soon learned that asking for a donation rather than charging a fixed fee made them more money, and in their two hours the service teams made around $200.
But there was one team that made $650. All the other teams had been constrained in their thinking by the five dollars and the two hours they had been given. The winning team realised that the most valuable thing they had access to was not the five dollars, or the two-hour window, but the three minutes of presentation time in the next week's class. They sold their slot to a company that wanted to recruit Stanford students, and that company paid $650 for the opportunity to give a three-minute pitch in front of a full class of bright young things.
As Dr. Seelig describes, the five-dollars-and-two-hours framing was limiting - it confined the potential scope of the solutions. Reframing the challenge to focus on the fundamental task at hand, without making any assumptions about how best to solve it, enabled the winning team to open up a totally new possibility. A first principle is 'a basic proposition or assumption that cannot be deduced from any other proposition or assumption'. First-principles thinking is the practice of breaking a complex problem down to its most basic, foundational truths and building an original solution from the ground up, rather than relying on analogy or tradition. That's exactly what the winning team in Dr. Seelig's experiment did.
I genuinely think that reframing problems and challenges is one of the most important tools that a strategist has at their disposal. The biggest breakthroughs don’t come from dealing with the symptoms in front of you but from understanding root causes and creating a fresh approach towards solving the real problem. Reframing acts as a kind of cognitive circuit breaker that shifts the focus from how to solve a problem to what the problem actually is. And the way that we define a problem determines the range of solutions we can imagine.
The famous 'slow elevator' problem is a neat example: tenants in a building complain that the lift is too slow, so the owner commissions expensive mechanical upgrades. Reframe the complaint as 'the wait for the lift is annoying' and a much cheaper and more effective solution emerges: install mirrors in the lobby, so people have something to occupy them while they wait.
Reframing is about asking better questions, and finding better problems to solve. In the current era of complexity and unpredictability it's more important than ever that we don't waste time and resources climbing the wrong hill. We can easily solve the wrong problem at the speed of AI, but I also think that AI can be hugely valuable (and is hugely undervalued) in playing the role of the cognitive circuit breaker that helps us to look at a challenge in a different way. Here are a few techniques that I've found to be particularly useful:
Simple reframing: At the simplest level you can challenge your favoured AI engine to help you reframe a problem from multiple angles. For example, describe the problem and prompt it to start by asking any clarifying questions it needs, then ask it to generate five alternative framings of the problem (you can suggest angles, e.g. one focused on incentives, one on assumptions, one on system dynamics, one on user needs and one based on adjacent-category analogies). For each reframing, ask the LLM to show what it makes possible, what the original framing hides, and three practical moves to take it forwards.
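To make this tangible, here's a minimal sketch of what the prompt might look like in practice. I've used the OpenAI Python SDK purely for illustration, and the model name and example problem are placeholders of mine - any chat-capable model and client will do:

```python
# A minimal sketch of the "simple reframing" prompt. The OpenAI Python
# SDK is used for illustration only; any LLM client would work.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Hypothetical example problem - swap in your own.
PROBLEM = "Customer churn has doubled over the last two quarters."

REFRAME_PROMPT = f"""You are helping me reframe a problem, not solve it.

The problem as I currently see it: {PROBLEM}

Start by asking any clarifying questions you need (then proceed with
reasonable assumed answers). Then generate five alternative framings:
1. One focused on incentives
2. One focused on hidden assumptions
3. One focused on system dynamics
4. One focused on user needs
5. One based on an analogy from an adjacent category

For each framing, set out: (a) what it makes possible, (b) what the
original framing hides, and (c) three practical moves to take it forwards."""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder - use your preferred model
    messages=[{"role": "user", "content": REFRAME_PROMPT}],
)
print(response.choices[0].message.content)
```

The plumbing is incidental - it's the structure of the ask that does the work, so the same prompt pastes straight into a chat window.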
Perspective challenge: I love doing perspective triangulation using AI (contrasting clashing viewpoints to reveal tensions and different perspectives). One of my favourite ways of challenging my own thinking is to do norm switching (e.g. taking norms from one category and applying them to another). But for reframing it can be particularly interesting to ask the AI how an expert from a totally different field (a lawyer, an artist, a jazz musician, or even a named comedian) would look at or solve this problem. Your strike rate here is going to be lower, but there are typically one or two completely original thoughts that emerge. I also love using synthetic personas for this, particularly setting up two outlier or more extreme personas and having them debate a specific topic to reveal trade-offs or new perspectives.
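As a sketch, a two-persona debate prompt might look something like this (the personas and topic are invented placeholders - the point is to pick genuinely clashing norms):

```python
# Sketch of a two-persona debate prompt for perspective triangulation.
# The personas and example topic are illustrative placeholders.
PERSONA_DEBATE = """Stage a debate between two deliberately extreme personas.

Persona A: a risk-obsessed compliance lawyer who believes every new
idea is a liability waiting to happen.
Persona B: a jazz musician who improvises first and formalises later,
and thinks detailed plans kill good ideas.

Topic: {problem}

Run three rounds of debate. After each round, summarise the trade-offs
exposed and any genuinely new perspective on the problem that neither
persona started with."""

print(PERSONA_DEBATE.format(problem="Should we rebuild our onboarding from scratch?"))
```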
Starting from scratch: One of the best questions to ask when trying to open up new thinking is ‘what would this look like if we were to design it from scratch today?’. Challenging an LLM to rebuild something without using any of the existing assumptions often leads to looking at something in a totally different way.
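A rough template for this, assuming you supply your own description of the status quo (the example in the print call is hypothetical):

```python
# Sketch of a "from scratch" prompt: surface the inherited assumptions
# first, then rebuild without them. All wording is illustrative.
FROM_SCRATCH = """Here is how {thing} works today: {description}

Step 1: List every assumption baked into the current design, including
those that only made sense historically.
Step 2: Design {thing} from scratch for today's context, treating each
of those assumptions as optional. Flag where the clean-sheet design
diverges most sharply from what exists."""

print(FROM_SCRATCH.format(
    thing="our expense approval process",  # hypothetical example
    description="Every claim over £50 needs two sign-offs and takes a week.",
))
```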
The hostile critic: Ask the LLM to act as a strategic critic (or an ‘adversarial red teamer’) whose sole job is to invalidate your definition of the problem. This forces you to defend your assumptions or, more often, to realise where the gaps are.
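A sketch of how that role prompt might read - the four lines of attack are my own illustrative structure, not a fixed formula:

```python
# Sketch of a "hostile critic" / adversarial red teamer prompt.
HOSTILE_CRITIC = """You are an adversarial red teamer. Your sole job is to
invalidate my definition of the problem below. Do not propose solutions.

My problem definition: {problem}

Attack it on four fronts: (1) evidence I have not cited, (2) assumptions
I have smuggled in, (3) stakeholders whose view would contradict mine,
and (4) a plausible rival definition that fits the same facts. End with
the single weakest point in my framing."""
```

Where your client supports it, putting this in the system message (rather than folding it into your question) can help the model hold the critic stance over a longer exchange.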
Inversion and subtraction: In problem-solving, humans often default to an additive approach, but subtraction can be just as powerful in strategy. You can use an LLM to force a 'subtraction' frame, which may reveal that the problem is actually an excess of something else. An example would be to challenge the LLM to solve the problem with a significant constraint attached (e.g. no budget), or to ask it to list the key assumptions in your proposed solution or idea and then assume that the opposite of each is true.
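The following sketch combines both moves; the wording, example solution and example constraint are all mine:

```python
# Sketch combining the constraint and assumption-inversion moves.
# The example solution and constraint are illustrative placeholders.
INVERSION_SUBTRACTION = """Proposed solution: {solution}

Step 1: List the key assumptions this solution depends on.
Step 2: For each assumption, assume the exact opposite is true and
describe what the problem looks like then.
Step 3: Solve the original problem again under this hard constraint:
{constraint}. What does the constraint force us to subtract, rather
than add?"""

print(INVERSION_SUBTRACTION.format(
    solution="Hire five more support agents to cut response times.",
    constraint="zero additional budget",
))
```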
Future-back: Rather than extrapolating forwards from today, you can use the LLM to work back from a future where the problem has already been solved (or has caused total failure). It helps to give the LLM a specific time-frame to work back from; this often opens up a much more long-term perspective and helps you see how solutions implemented now can amplify into something much bigger.
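A minimal future-back template might look like this (the ten-year horizon and example problem are placeholders of mine - pick whatever time-frame suits):

```python
# Sketch of a future-back prompt. The horizon and example problem
# are illustrative placeholders.
FUTURE_BACK = """It is {year} and the problem below has been completely
solved: {problem}

Working backwards from that future, describe: (1) what the solved state
actually looks like, (2) the three most important decisions made along
the way, and (3) the earliest, smallest step taken this year. Then
repeat the exercise for a future in which the problem caused total
failure instead."""

print(FUTURE_BACK.format(
    year=2036,  # hypothetical ten-year horizon
    problem="Our supply chain cannot absorb demand spikes.",
))
```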
Invisible beneficiaries: A persistent problem in a business can often be a hidden ‘solution’ for someone else who is actually benefiting from the status quo. Ask the LLM to assume that the problem is not an accident, but a perfectly designed feature of the current system. Ask for an analysis of the situation, identifying three ‘hidden winners’ who actually benefit from this problem remaining unsolved. You can dive into how they gain (status, resources, revenue, reduced accountability) and the ways in which they may be resisting or sabotaging change. It’s particularly good for surfacing political or cultural barriers rather than just technical ones, and can reframe a persistent challenge from a failure of execution to a conflict of incentives.
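As a sketch, with the categories of gain lifted from the list above (the rest of the wording is illustrative):

```python
# Sketch of the "invisible beneficiaries" / hidden winners prompt.
HIDDEN_WINNERS = """Assume the problem below is not an accident but a
perfectly designed feature of the current system.

The problem: {problem}

Identify three 'hidden winners' who benefit from it remaining unsolved.
For each, explain what they gain (status, resources, revenue, reduced
accountability) and how they may be resisting or quietly sabotaging
change. Finish by restating the problem as a conflict of incentives
rather than a failure of execution."""
```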
The scale stress-test: Incremental thinking can dominate because a problem may be just the right size to be annoying but not fatal. You can prompt an LLM to shrink or amplify a problem to its logical extreme as a way to reveal the underlying structural flaws that are normally hidden by business as usual. As an example, you can run this as a two-part thought experiment. In part one, ask the LLM which specific part of the solution/system/infrastructure would be the first to cause a total collapse if the problem were 1000x bigger tomorrow. Then ask it to solve the problem for just one single customer with an infinite budget, to show how a perfect experience might look. Finally, use the LLM to identify the most fragile assumptions in the gap between the two.
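Here's one way to sketch the two-part experiment as a single prompt (the 1000x figure comes from the exercise above; the rest of the wording is illustrative):

```python
# Sketch of the two-part scale stress-test run as one prompt.
SCALE_STRESS_TEST = """The problem: {problem}

Part 1: The problem becomes 1000x bigger tomorrow. Which specific part
of our current solution, system or infrastructure collapses first, and
why?

Part 2: Now solve the problem for one single customer with an infinite
budget. Describe that perfect experience in detail.

Finally, compare the collapse point from Part 1 with the perfect
experience from Part 2, and identify the most fragile assumptions
sitting in the gap between them."""
```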
Many people look at AI solely as a route to doing what they did before, just faster or more efficiently. But this ignores the potential it has to help us genuinely think differently about a multitude of challenges and contexts. Reframing shifts the goal from finding the perfect solution to generating options. And if there's one thing we need in an increasingly complex and unpredictable world, it is options.
