How AI will reshape government analysis 

Jason Stolborg-Price

Two recent news stories caught my attention: one suggested US tariffs were designed by ChatGPT; another found a UK minister using AI for policy advice. Senior leaders worldwide are likely experimenting. For those of us working in government analysis, this rapid change prompts the question: what does this mean for us?

I lead the foresight team in the Cabinet Office Joint Data and Analysis Centre, and when my team faces uncertainty, we produce scenarios. So I’ve been thinking about the possibilities analysts might face over the next five years and sketched three scenarios. While I present them distinctly, reality will likely blend elements of all three:

  • Scenario 1: “Human Guardrails”. AI generates insights; analysts verify everything before it reaches decision-makers. We evolve into technical AI specialists and “sense-makers” who explain findings. Good judgment becomes more valuable than technical skills for advancement.
  • Scenario 2: “Self-Service”. Ministers use analytical systems directly, perhaps tools we design or AI interfaces they query themselves. We’d focus more on building and maintaining systems, writing fewer reports.
  • Scenario 3: “Prediction Engines”. Our focus shifts from explaining the past to forecasting the future. AI predicts policy outcomes; we help ministers interpret forecasts and understand the human factors AI might miss.

So, what do we need to do to prepare?

  • Sharpen critical thinking skills. We need to verify AI outputs, identify biases, and grasp the ethical implications of the tools we use.
  • Focus on finding the right questions. AI increasingly provides answers, but formulating insightful questions needs our human judgment and creativity.
  • Start experimenting responsibly. I’ve learned a lot by finding problems and trying AI solutions. Be ambitious. Today’s AI is the worst it will ever be, so if something isn’t possible now, try again in six months.
  • Prioritise transparency. Let’s normalise saying when and how we use AI, whether for data-cleaning, proofreading, or planning. This fosters trust, collective learning, and helps spot biases or errors early. We can’t improve together unless we’re open.
  • Champion diverse perspectives. We must advocate for voices potentially underrepresented in AI training data to keep our analysis balanced.

What I’m doing

Last summer at Analysis Week, I talked about using AI to make analysis more interactive. My theory was that analysts should move from static reports towards interactive tools tailored to actual decisions. I provided some examples of tools that often take less than a day to create with AI, with no coding experience necessary:

  • Interactive systems maps: Visualise complex policy relationships, helping identify leverage points and potential consequences.
  • Collaboration networks: Show who to work with across organisations, or who to influence internationally.
  • Fiscal modelling tools: Allow exploration of tax and spending trade-offs over time.
  • Theory of change simulators: Let users adjust variables and see potential outcomes.
  • Policy games: Offer “choose your own adventure” style explorations of options.

More recently, I built a global news monitoring system using RSS feeds (Really Simple Syndication) and Large Language Models (LLMs) to spot emerging issues and innovative international policy approaches for the UK. While AI handles the data processing, human judgment remains crucial to assess which approaches could work here. The output quality is good, but it definitely still needs expert review before reaching policymakers.
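To give a rough flavour of how such a pipeline fits together, here is a minimal sketch in Python. Everything in it is invented for illustration: the sample feed, the keyword list, and the `triage` function, which is a deterministic stand-in for the LLM relevance check (in practice, a prompted model query asking whether a story describes a policy approach worth examining for the UK).

```python
import xml.etree.ElementTree as ET

# A made-up RSS 2.0 feed standing in for a real news source.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example Policy Wire</title>
  <item><title>City pilots congestion charge</title>
        <description>Transport pricing trial expands.</description></item>
  <item><title>Celebrity gossip roundup</title>
        <description>Entertainment news.</description></item>
</channel></rss>"""

# Illustrative only; the real system's relevance test is an LLM prompt.
POLICY_KEYWORDS = {"policy", "regulation", "charge", "pricing", "trial", "pilots"}

def parse_items(feed_xml):
    """Extract (title, description) pairs from an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title", ""), item.findtext("description", ""))
            for item in root.iter("item")]

def triage(title, description):
    """Stand-in for the LLM call: flag items that look policy-relevant."""
    text = f"{title} {description}".lower()
    return any(keyword in text for keyword in POLICY_KEYWORDS)

def monitor(feed_xml):
    """Return titles of stories flagged as worth a human analyst's review."""
    return [title for title, desc in parse_items(feed_xml) if triage(title, desc)]

print(monitor(SAMPLE_FEED))  # only the congestion-charge story is flagged
```

The shape is the point: the machine does the high-volume filtering, and everything that survives still lands on an analyst's desk for expert review.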

With PolicyLab, we’ve also explored digital twins (virtual replicas of real-world systems) that test interventions virtually. By modelling UK infrastructure, we can optimise services through simulation first. Think of it as a policy sandbox, allowing experiments without real-world consequences.
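To give a flavour of the sandbox idea, here is a deliberately tiny illustration, not one of our actual models: a discrete-time queue standing in for a public service, used to compare a baseline against a capacity intervention before anyone touches the real system. All the numbers are made up.

```python
def simulate_backlog(arrivals_per_tick, capacity_per_tick, ticks=10):
    """Toy 'digital twin' of a service queue: each tick, new cases arrive
    and up to capacity_per_tick cases are cleared. Returns the backlog
    after each tick, so interventions can be compared virtually."""
    backlog, history = 0, []
    for _ in range(ticks):
        backlog = max(0, backlog + arrivals_per_tick - capacity_per_tick)
        history.append(backlog)
    return history

# Baseline vs a virtual intervention (extra processing capacity).
baseline = simulate_backlog(arrivals_per_tick=12, capacity_per_tick=10)
boosted = simulate_backlog(arrivals_per_tick=12, capacity_per_tick=13)
print(baseline[-1], boosted[-1])  # backlog grows in one world, clears in the other
```

Real digital twins are vastly richer than this, but the principle is the same: run the experiment in the replica, observe the consequences, and only then decide what to do in the world.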

You can see how these projects touch on all three scenarios I outlined. The analyst’s role is evolving, moving from primarily producing content towards designing systems, verifying outputs, and translating findings into actionable insights.

The risks we must manage

Of course, while embracing these opportunities, we have to be mindful of several key risks:

  • Hallucinations: when AI confidently presents fiction as fact.
  • Skill fade: from outsourcing our thinking too much for too long.
  • Security vulnerabilities: lurking in AI-generated code.
  • Oversimplification: boiling down complex problems too far.
  • Bias amplification: skewed data leading to skewed outputs.

Now, I can build tools quickly, but I don’t always fully grasp the underlying code. We have to communicate these limitations clearly, especially as our users adopt AI themselves. Senior analysts really need to speak about AI as a tool we use critically, not something we avoid out of fear.

Ultimately, our effectiveness hinges on understanding the decisions our customers need to make and how they make them. Even if AI lets us produce papers in seconds, our customers won’t magically get more reading time. This reality pushes us towards creating more direct, impactful insights and tools, rather than just more volume.

This leads to fundamental questions about our future role. Analysts might not always stand between the data and the decision-maker. It’s unclear how much agency we’ll have in shaping that transition. Developing the right skills and mindset now is crucial. By proactively experimenting, focusing on critical thinking, championing transparency, and ensuring diverse perspectives, we can help steer this evolution.

Our goal must be ensuring that whatever analytical capabilities emerge, whether human-led, AI-assisted, or AI-driven, continue to serve the public good and maintain the analytical rigour our work demands. By engaging actively, we can help shape how AI is integrated, not just react to it.

James Ancell
James Ancell is Head of Foresight in the Cabinet Office's Joint Data and Analysis Centre. Prior to that, he held a variety of analytical and policy roles in the Cabinet Office, the Army and the United Nations. His team's work covers long-term scenarios for the UK and annual horizon scans of the year ahead.