Making artificial intelligence work professionally in the Government Actuary’s Department (GAD)
When our Advice Innovation Team was asked to think about how the Government Actuary’s Department (GAD) might deploy artificial intelligence (AI) technology in complex consulting workflows, it was tempting for them to build a plethora of apps and tools. However, that’s not what they did.
Instead, they focused on something more fundamental: building robust, structured processes to help GAD’s actuaries and analysts use AI in a more professional and consistent way.
This might seem like an unglamorous approach, but it’s showing early signs of revolutionising how we can use AI to create better outcomes. David Spreckley and Becky Wodcke from our Advice Innovation Team explain how we evolved our approach.
When AI sounds right but gets it wrong
David explains: “A big challenge with AI tools (like ChatGPT or Copilot) is not that they’re obviously wrong – it’s that they’re convincing, but wrong. AI outputs are confident and well-written, making them seem credible even when they contain errors. We call this ‘credible incorrectness’ – what are often termed hallucinations.”
For an organisation providing professional services within government, credible incorrectness poses a serious risk. The polished, well-written presentation of AI-generated content can mask fundamental errors that human experts would catch.
Our solution: process before technology
Rather than treating Large Language Model (LLM) technology as a clever chatbot, our Advice Innovation Team has approached the technology more as a professional processing tool – like Word or Excel – that fits into structured workflows.
Their approach to development in more complex situations rests on a simple framework that considers 2 main factors:
- Stakes: How important is the decision or output?
- Complexity: How difficult is the judgement we’re asking AI to make?
Higher stakes and greater complexity demand tighter framing, stronger evidence, and more human oversight.
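To make the triage above concrete, the stakes and complexity dimensions could be scored and mapped to a level of human oversight. GAD has not published a scoring rule, so the thresholds and tier names below are purely illustrative assumptions:

```python
# Hypothetical sketch of the stakes/complexity triage described above.
# The scoring scale, thresholds and tier names are illustrative assumptions,
# not GAD's actual rules.

def oversight_tier(stakes: int, complexity: int) -> str:
    """Map stakes and complexity scores (1 = low, 3 = high) to a review tier."""
    score = max(stakes, complexity)  # the harder dimension drives the control level
    if score >= 3:
        return "full human review of every output"
    if score == 2:
        return "sample-based human review with audit trail"
    return "spot checks"

print(oversight_tier(stakes=3, complexity=1))  # high stakes forces full review
```

Taking the maximum of the two scores reflects the principle in the text: either high stakes or high complexity alone is enough to demand tighter oversight.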
David adds: “We’ve learned that how we frame a task for AI is just as important as the task itself. By breaking complex decisions into smaller, clearer steps, we make it easier for both AI and humans to spot potential problems.”
Our framework for professional AI use
To standardise our approach, our Advice Innovation Team has developed the Role, Instruct, Data and Audit (RIDA) framework – an easy-to-remember structure that makes AI deployment repeatable, scalable and auditable. It is made up of the following elements:
- Role: Clearly define what we want the AI to do (which may be quite specific – for example “content cleaner and compressor with expertise in retaining informational depth while reducing length.”)
- Instruct: Give specific, unambiguous directions
- Data: Provide relevant, structured information
- Audit: Create fully auditable records in a standardised way, allowing easy human review and verification
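The four RIDA elements could be assembled into a structured prompt plus a standardised audit record. The data structure and field contents below are a hypothetical sketch, not GAD's actual tooling:

```python
# Illustrative sketch of a RIDA-structured task: Role, Instruct and Data
# build the prompt; Audit captures a standardised record for human review.
# All names and field contents here are assumptions for illustration.
import datetime
import json
from dataclasses import asdict, dataclass


@dataclass
class RIDATask:
    role: str      # R: what we want the AI to act as
    instruct: str  # I: specific, unambiguous directions
    data: str      # D: relevant, structured input

    def prompt(self) -> str:
        return (f"ROLE: {self.role}\n"
                f"INSTRUCTIONS: {self.instruct}\n"
                f"DATA:\n{self.data}")

    def audit_record(self, output: str) -> str:
        # A: fully auditable record, serialised in a standardised way
        record = asdict(self) | {
            "output": output,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        return json.dumps(record, indent=2)


task = RIDATask(
    role=("content cleaner and compressor with expertise in retaining "
          "informational depth while reducing length"),
    instruct="Reduce the text below by roughly 30% without losing key facts.",
    data="...source text...",
)
print(task.prompt())
```

Keeping the prompt and the audit record derived from the same object means every AI call is reproducible: a reviewer can see exactly which role, instructions and data produced a given output.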
This framework works for simple tasks like document reviews, but it also scales up to complex processes that previously seemed impossible to automate. It allows us to meet our strong professional and internal requirements around “Do, Check, Review” and modelling more generally, while aligning with the Government’s 10 Principles for AI usage.
Becky, who has been instrumental in developing the processes, expands: “We have worked out a way to manage entire workflows for consulting use cases within a relatively simple spreadsheet tool. Our workflow means we do not necessarily need to use a chatbot.”
Setting realistic goals and early success
Through our work we have learnt that we need to be realistic with what can be achieved, particularly from an efficiency perspective. David explains: “When designing processes, we do not feel our starting point should be to take a 10-hour process and complete it in one hour. That is not realistic and could lead to issues.
“Initially, we think a more realistic expectation would be to turn a 10-hour process into an 8-hour one, but aim to make the outputs 20% better. That’s still a 40% net gain that is worth targeting and is very attainable with the current technology when applied to consulting work. There may, of course, be further efficiency and effectiveness benefits as our processes get embedded and we scale what we do.”
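The 40% figure in the quote comes from adding the two improvements together, which can be checked in a couple of lines:

```python
# Worked version of the arithmetic in the quote: a 10-hour process cut to
# 8 hours is a 20% time saving; outputs that are 20% better add a further
# 20%, giving the quoted 40% net gain when the two are summed.
time_saving = (10 - 8) / 10   # 0.20
quality_gain = 0.20
net_gain = time_saving + quality_gain
print(f"{net_gain:.0%}")      # 40%
```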
Some early successes include processes that perform an independent reasonableness check of pension scheme liabilities, prepare a comprehensive Technical Actuarial Standard (TAS) 100 review document for the reviewing actuary, and even provide guidance around tender submissions. As we upskill our people, more and more use cases are emerging.
Why process innovation matters
Designing workflow processes might not sound exciting, but it’s innovative in this context. By getting the fundamentals right, GAD is starting to unlock AI’s potential while managing its risks.
This approach means we can confidently use AI for increasingly sophisticated tasks, whilst ensuring that, by design, human expertise remains central to quality control. What’s more, as AI models improve, our structured processes allow us to take advantage of new capabilities quickly and without compromising professional standards. Becky adds: “As agentic AI capabilities increase, the investment in underlying processes will give us a strong base for utilising these developments.”
The result: More efficient, effective outcomes and, ultimately, better value for society.
Contact us
If you would like to see our workflow management tools in action, contact Analysis.Function@ons.gov.uk. Our Advice Innovation Team would be happy to discuss what we have learned.