The three levels of Respondent Centred Design (RCD)

Policy details

  • Publication date: 4 May 2023
  • Owner: Government Statistical Service (GSS) Harmonisation and Data Quality Hub
  • Who this is for: Anyone involved in survey design
  • Type: Guidance
  • Contact: Harmonisation@statistics.gov.uk or DQHub@ons.gov.uk

Survey commissioners and suppliers are under increasing pressure to design and produce quality surveys. These pressures can relate to time, resources, or money.

The increasing pressure to develop surveys quickly can lead to compromises on quality throughout the data lifecycle. For example, it can mean there is less time to explore survey design and what works best for respondents. These compromises come with varying risks that commissioners and suppliers need to think about during any decision-making processes.

This guidance originated from requests for support and advice on how to adapt research and design under these pressured working conditions.

How to use this guidance

You can use this guidance to help you make decisions about the amount of research and design needed for survey questions and materials. It will also tell you about the associated risks to the quality of your work if you reduce or stop certain activities.

We have created three levels to support people through these situations. The levels are called Bronze, Silver, and Gold.

The levels are based on the amount of Respondent Centred Design (RCD) completed at each level, and each carries its own associated risks to quality. By using this guidance, you will be able to identify and understand any risks, weigh them against the timeline for your work, and support any trade-off discussions that may take place.

This guidance recommends three test rounds per mode, which are called:

  • “test” — where you test the prototype
  • “fix” — where you fix issues based on the test findings
  • “validate” — where you check if your fixes have worked

A combination of all three yields the most confidence in the designs. We have provided guidance on resource levels and project timings for the different levels. Please note that these are guides to help you understand the level of investment and determine which approach is suitable for you.
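To illustrate how the three rounds fit together across modes, here is a minimal sketch in Python. The round names come from this guidance; the example mode names, and the idea of expressing the plan as a flat list of activities, are illustrative assumptions rather than a prescribed tool.

```python
# A minimal sketch of planning "test", "fix" and "validate" rounds per mode.
# The round names come from the guidance; the example modes and the flat
# activity list are illustrative assumptions.

ROUNDS = ("test", "fix", "validate")  # run in this order for each mode

def plan_rounds(modes):
    """Return the ordered list of (mode, round) activities.

    Every mode gets all three rounds. Dropping "validate" corresponds to
    the Silver level described later; dropping "fix" and "validate"
    corresponds to Bronze.
    """
    return [(mode, rnd) for mode in modes for rnd in ROUNDS]

if __name__ == "__main__":
    # Hypothetical multi-mode survey: online is designed first.
    for mode, rnd in plan_rounds(["online", "face-to-face", "telephone"]):
        print(f"{mode}: {rnd}")
```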

What we mean by Respondent Centred Design (RCD)

The term Respondent Centred Design (RCD) comes from the term User Centred Design (UCD), which was developed in the 1980s. Since then, UCD has driven the development of products, such as websites and apps, in the computer technology world. The main aim of UCD is to create something which has high “usability”, which may encourage a user to buy something, come back again, and spread the word about how great their experience was. Each of these actions contributes to the success of a product. Almost all those desired outcomes are similar to the goals of a survey.

In the context of survey design, RCD is an approach that brings together best practice from the Government Social Research Profession and user experience design from the computer technology world. RCD can be applied to all modes and simply means putting the respondent at the centre of the process when it comes to the design of your survey experience.

When designing for multiple modes, we always design the online mode first and then move on to develop the other modes. We do this because if a design can be easily used and understood in a self-complete mode, then it stands a high chance of being easily used and understood in an interviewer-led mode. We then take an optimode approach to design where we optimise and adapt to the mode being used.

“Optimode” means to design the respondent communications and the questionnaire optimally for each mode.

For respondent materials this means tailoring the content of letters depending on the mode of the interview. For questionnaires, this means optimising the design for each mode.

Having a design that works optimally in the mode that it is being administered will reduce respondent burden as it will be tailored to their needs.

RCD is defined as learning about the needs of those who will use your survey and designing it to meet them. Respondent needs can relate to the following:

  • who they are
  • what their circumstances are
  • what information they need before, during and after taking part in a survey
  • how and where they take part in a survey
  • what they are trying to do and what they want to be able to do
  • what their expectations are at each stage of the journey
  • issues that cause them difficulty in their journey

An RCD approach means that the respondent informs the design of the survey, rather than the data user. The data-user-centred approach to questionnaire design can result in questionnaires that are long, confusing, and sometimes repetitive. They can often feel irrelevant to the respondent, which leaves them feeling like they have not been able to represent themselves well enough or in the way they wanted to. This poor experience is a contributing factor to declining response rates.

Online surveys are increasingly common, which means good respondent experience needs to be a priority. The rise in online self-completion surveys means we can no longer rely on highly trained and dedicated interviewers to provide a good experience for the respondent and get a response from them.

Interviewers are an important part of the work to maintain response rates, especially with hard-to-reach groups. Interviewers use their skills to gather the right information for complicated concepts which are cognitively demanding. They quickly learn which questions are troublesome, but changes to question wording are kept to a minimum to manage time series data concerns. Instead, interviewers use their skills and experience to work around the challenges. We can learn about respondent needs from interviewer feedback and, in RCD, we use these insights to improve the design of our surveys.

By using RCD to develop products, it is more likely we will build the right thing: in this case, the right survey and its associated respondent materials. If we produce the right thing we will see:

  • a reduction in respondent burden
  • increased data quality
  • a reduction in costs

Respondent burden can occur at any point in the respondent journey, from the point of receiving communications through to completing the survey itself. It is possible to reduce burden at every touchpoint by developing the survey experience to be respondent centred. Wilson and Dickinson have brought together their user-centred design knowledge and approaches to create a “Respondent Centred Design Framework” (RCDF), which was published in 2021. The Framework consists of ten steps:

  1. Establish the data user need
  2. Research how respondents conceptualise topics — this is known as “mental model research”
  3. Understand user experience and needs
  4. Use data to design
  5. Create using appropriate tone, readability, and language
  6. Design without relying on help
  7. Design optimally for each mode — this is known as taking an “optimode” approach to design
  8. Use adaptive design — this means the interface will adapt to the screen size and display accordingly
  9. Combine usability and cognitive testing — this is known as “cogability” testing
  10. Design inclusively

The RCDF aims to avoid “errors of measurement” and “errors of representation”, and can be used alongside other relevant guidance.

RCD also aligns with Agile delivery principles. This means you do not need to strive for perfection before sharing or testing something. The aim is to “fail fast”. It is important to learn quickly what is working and what is not, so that you can change it and test the new version with the public. This development approach means that you do not commit to finishing a design before completing the supporting research. It reduces the risk of carrying on with something just because you have already made progress with it, or because it will take too much time and effort to change. The Government Digital Service principles support UCD and Agile.

Levels of RCD

Research can only be classed as RCD if it contains both a Discovery phase and a phase which includes research into respondent mental models. If research does not include these phases, it is classed as “unvalidated development”. This is based on the level of investment and methods used. RCD applies to all modes of surveys and products of the survey, including touchpoints for the respondent, such as invitation letters, reminder letters, or questionnaires.

There are three different levels of RCD.

Gold

This level of RCD carries minimal risk to the quality of a survey. It is fully aligned to the RCD Framework and Government Digital Service (GDS) Design Principles. The Gold level prescribes three rounds of testing for each mode used in the survey.

As a guide:

  • it will take approximately 65 to 89 weeks to develop content at this level using an RCD approach — this could be reduced to between 47 and 62 weeks if you had another Research Officer (RO) to run face-to-face testing at the same time as telephone mode testing (see the sketch after this list)
  • you will need 1 senior researcher, and 2 or 3 junior researchers to complete this work
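To show why running face-to-face and telephone testing in parallel shortens the timeline, here is a hedged sketch of the critical-path arithmetic. The per-phase durations below are hypothetical placeholders; only the overall ranges quoted above come from this guidance.

```python
# Illustrative timeline arithmetic for sequential versus parallel mode
# testing. All per-phase durations are hypothetical placeholders; only the
# overall ranges in the guidance (65 to 89 weeks sequentially, 47 to 62
# weeks with parallel testing) come from the text.

discovery_weeks = 12          # hypothetical
online_testing_weeks = 20     # hypothetical
f2f_testing_weeks = 18        # hypothetical
telephone_testing_weeks = 15  # hypothetical
beta_weeks = 10               # hypothetical

# One researcher: each mode is tested one after another.
sequential = (discovery_weeks + online_testing_weeks
              + f2f_testing_weeks + telephone_testing_weeks + beta_weeks)

# A second Research Officer runs face-to-face testing at the same time as
# telephone testing, so only the longer of the two is on the critical path.
parallel = (discovery_weeks + online_testing_weeks
            + max(f2f_testing_weeks, telephone_testing_weeks) + beta_weeks)

print(f"sequential: {sequential} weeks")  # 75 with these placeholders
print(f"parallel: {parallel} weeks")      # 60 with these placeholders
```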

Examples of Discovery activities

At this level Discovery activities could include:

  • gathering data user needs
  • reviewing interviewer feedback
  • making observations
  • exploring mental models — this is an important part of respondent centred design and transformation work, and should be completed for Bronze, Silver, and Gold
  • designing with data
  • desk-based research on other surveys
  • creating user stories
  • holding a redesign workshop with colleagues

Examples of Alpha activities

At this level Alpha activities could include:

  • prototyping — when the online mode is used with other modes, always design online first and then move to other modes, like face-to-face, and then telephone
  • no at desk design, or only a small amount of at desk design
  • running 3 rounds of testing for each mode of survey — or running tests until saturation
  • test 1 (“test”) which involves analysing, and redesigning or reiterating
  • retest 2 (“fix”) which involves analysing, and redesigning or reiterating
  • retest 3 (“validate”) which involves analysing, and redesigning or reiterating
  • updating documents that give details about user needs, user stories, and user journeys
  • returning to the Discovery phase, if needed

Examples of Beta activities

At this level Beta activities could include:

  • running quantitative test 1, which involves analysing, and redesigning or reiterating
  • running quantitative test 2
  • updating documents that give details about user needs, user stories, and user journeys
  • returning to the Alpha phase, if needed

RCDF components

The following RCDF components apply to research at this level:

  • establish the data user need
  • mental model research — this is an important part of respondent centred design and transformation work, and should be completed for Bronze, Silver, and Gold
  • understand user experience and needs
  • use data to design
  • create using appropriate tone, readability and language
  • design without relying on help
  • take an “optimode” approach to design
  • use adaptive design
  • conduct “cogability” testing
  • design inclusively

GDS components

The following GDS components apply to research at this level:

  • start with user needs
  • do less
  • design with data
  • do the hard work to make it simple
  • iterate, then iterate again
  • this is for everyone
  • understand context
  • build digital services, not websites
  • be consistent, not uniform
  • make things open: it makes things better

Research at this level can lead to:

  • increased need for ongoing research testing and resulting staff costs
  • increased length of time needed to develop a survey

Research at this level can reduce, but not fully avoid, the likelihood of:

  • decreased response rates — this could be as a result of respondent burden, disjointed respondent journey, poor respondent experience, or attrition
  • poor data quality — this could lead to a lack of insights to explain the differences in data and time series, and an increased risk that “bad” decisions could be made
  • minimal stakeholder engagement — if the process is rushed and questions need to be borrowed from other surveys there will be reduced insight into other data user needs
  • less inclusive surveys — with less transformation work there will be less exploration of how inclusive a survey is
  • gaps in data — if there are lots of items missing from the data we will need to use statistical methods, known as imputation, to fill in the missing information (see the sketch after this list)
  • departing from the Office for Statistics Regulation (OSR) Code of Practice — this could lead to a potential risk of losing accreditation, or a failure of in-house assessments
  • not meeting service standards set by GDS — minimal transformation makes it impossible to explain differences in the data
  • greater costs — these could come from the need to use interviewer-led modes or offer greater incentives for respondents if the quality of responses is poor, or from the work needed to clean, process or remedy poor data to produce analysis datasets
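The “gaps in data” risk refers to imputation: filling in missing answers using statistical methods. As a minimal illustration, here is a hedged sketch using pandas; the dataset, column names, and the choice of group-median imputation are assumptions for illustration, not methods prescribed by this guidance.

```python
# A minimal illustration of item imputation: filling missing survey answers
# with a simple statistic. The dataset, column names, and the choice of
# group-median imputation are hypothetical; production surveys use more
# sophisticated, quality-assured methods.

import pandas as pd

responses = pd.DataFrame({
    "age_group": ["16-24", "25-49", "25-49", "50-64", "16-24", "50-64"],
    "hours_worked": [38.0, None, 40.0, 35.0, None, 37.5],
})

# Impute missing hours with the median for the respondent's age group.
group_median = responses.groupby("age_group")["hours_worked"].transform("median")
responses["hours_worked"] = responses["hours_worked"].fillna(group_median)

# Fall back to the overall median if a whole group had no observed values.
responses["hours_worked"] = responses["hours_worked"].fillna(
    responses["hours_worked"].median()
)

print(responses)
```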

Silver

This level carries a medium level of risk to the quality of a survey because the amount of RCD work completed is reduced. At this level, alignment with the RCDF and GDS principles is reduced. The number of rounds of testing is also reduced for each mode. This means any further changes beyond test round 2 would rely on desk-based work, which is where risks to quality could arise. There would still be some optimode design, but this would be limited because of the reduced number of rounds. The main risk here relates to the lack of RCDF components and iterative rounds per mode to perfect the design, fix issues, and validate whether edits were effective.

As a guide:

  • it will take approximately 24 to 30 weeks to develop content at this level using an RCD approach — this could be reduced to between 22 and 28 weeks if you had another junior researcher
  • you will need 1 senior researcher and 2 junior researchers to complete this work

Examples of Discovery activities

At this level Discovery activities could include:

  • gathering data user needs
  • reviewing interviewer feedback
  • exploring mental models — this is an important part of respondent centred design and transformation work, and should be completed for Bronze, Silver, and Gold
  • desk-based research on other surveys
  • creating user stories
  • holding a redesign workshop with colleagues

Examples of Alpha activities

At this level Alpha activities could include:

  • prototyping — when the online mode is used with other modes, always design online first and then move to other modes, like face-to-face, and then telephone
  • a medium amount of at desk design
  • running 2 rounds of testing for each mode of survey, or running tests until saturation — the “validate” test has been removed at the Silver level, so you are unable to check if your fixes have worked
  • test 1 (“test”) which involves analysing, and redesigning or reiterating
  • retest 2 (“fix”) which involves analysing, and redesigning or reiterating
  • updating documents that give details about user needs, user stories, and user journeys
  • returning to the Discovery phase, if needed

Examples of Beta activities

At this level Beta activities could include:

  • running a quantitative test which involves analysing, and redesigning or reiterating
  • updating documents that give details about user needs, user stories, and user journeys

RCDF components

The following RCDF components apply to research at this level:

  • establish the data user need
  • mental model research — this is an important part of respondent centred design and transformation work, and should be completed for Bronze, Silver, and Gold
  • understand user experience and needs
  • create using appropriate tone, readability and language
  • take an “optimode” approach to design
  • use adaptive design
  • conduct “cogability” testing
  • design inclusively

GDS components

The following GDS components apply to research at this level:

  • start with user needs
  • do less
  • do the hard work to make it simple
  • this is for everyone
  • understand context
  • build digital services, not websites
  • be consistent, not uniform

Research at this level can lead to:

  • decreased response rates — this could be as a result of respondent burden, disjointed respondent journey, poor respondent experience, or attrition
  • poor data quality — this could lead to a lack of insights to explain the differences in data and time series, and an increased risk that “bad” decisions could be made
  • minimal stakeholder engagement — if the process is rushed and questions need to be borrowed from other surveys there will be reduced insight into other data user needs
  • less inclusive surveys — with less transformation work there will be less exploration of how inclusive a survey is
  • gaps in data — if there are lots of items missing from the data we will need to use statistical methods to fill in the missing information
  • greater costs — these could come from the need to use interviewer-led modes or offer greater incentives for respondents if the quality of responses is poor, or from the work needed to clean, process or remedy poor data to produce analysis datasets

Research at this level can reduce, but not fully avoid, the likelihood of:

  • departing from the Office for Statistics Regulation (OSR) Code of Practice — this could lead to a potential risk of losing accreditation, or a failure of in-house assessments
  • not meeting service standards set by GDS — minimal transformation makes it impossible to explain differences in the data
  • increased need for ongoing research testing and resulting staff costs
  • increased length of time needed to develop a survey

Bronze

This level involves the minimal amount of RCD work on survey questions and materials. It carries a high risk to the quality of a survey.

Alignment with the RCDF and GDS principles is further reduced at this level. The number of rounds of testing is reduced even further for each mode. This means any further changes beyond test round 1 would rely on desk-based work, which is where high risks to quality could arise. There would still be some optimode design, but this would be limited because of the reduced number of rounds.

The main risk here relates to the lack of RCDF components and iterative rounds per mode to perfect the design, fix issues, and validate if edits were effective or not.

As a guide:

  • it will take approximately 15 to 21 weeks to develop content at this level using an RCD approach
  • you will need 1 senior researcher, or 1 experienced junior researcher to complete this work

Examples of Discovery activities

At this level Discovery activities could include:

  • gathering data user needs
  • exploring mental models — this is an important part of respondent centred design and transformation work, and should be completed for Bronze, Silver, and Gold
  • desk-based research on other surveys
  • holding a redesign workshop with colleagues

Examples of Alpha activities

At this level Alpha activities could include:

  • prototyping — when the online mode is used with other modes, always design online first and then move to other modes, like face-to-face, and then telephone
  • a high amount of at desk design
  • running 1 round of testing for each mode of survey, or running tests until saturation — the “fix” and “validate” tests have been removed at the Bronze level so you are unable to test and check if your fixes have worked
  • test 1 (“test”), which involves analysing, and redesigning or reiterating, but no “fix” or “validate” test
  • updating documents that give details about user needs, user stories, and user journeys

Examples of Beta activities

There are no Beta activities at this level.

RCDF components

The following RCDF components apply to research at this level:

  • establish the data user need
  • mental model research — this is an important part of respondent centred design and transformation work, and should be completed for Bronze, Silver, and Gold
  • use adaptive design
  • conduct “cogability” testing

GDS components

The following GDS components apply to research at this level:

  • start with user needs
  • do the hard work to make it simple
  • understand context

Research at this level can lead to:

  • decreased response rates — this could be as a result of respondent burden, disjointed respondent journey, poor respondent experience, or attrition
  • poor data quality — this could lead to a lack of insights to explain the differences in data and time series, and an increased risk that “bad” decisions could be made
  • minimal stakeholder engagement — if the process is rushed and questions need to be borrowed from other surveys there will be reduced insight into other data user needs
  • less inclusive surveys — with less transformation work there will be less exploration of how inclusive a survey is
  • gaps in data — if there are lots of items missing from the data we will need to use statistical methods to fill in the missing information
  • greater costs — these could come from the need to use interviewer-led modes or offer greater incentives for respondents if the quality of responses is poor, or from the work needed to clean, process or remedy poor data to produce analysis datasets
  • departing from the Office for Statistics Regulation (OSR) Code of Practice — this could lead to a potential risk of losing accreditation, or a failure of in-house assessments
  • not meeting service standards set by GDS — minimal transformation makes it impossible to explain differences in the data

Research at this level can reduce, but not fully avoid, the likelihood of:

  • increased need for ongoing research testing and resulting staff costs
  • increased length of time needed to develop a survey

Unvalidated development approach

If the work that you are doing does not align to any of the three levels, then you may be following an unvalidated development approach. This approach includes none of the main RCD activities, such as mental model research. It includes no primary research, only secondary research.

This approach carries the maximum amount of risk to the quality of a survey. There is minimal alignment with the RCDF and GDS principles as most of the steps are not applied.

There are no testing rounds in an unvalidated development approach, so there is no pre-testing with respondents before the questions are used in a live survey. This means any changes to the questions would rely on desk-based work, which is where high risks to quality arise. There is no mental model research, so any changes are made on assumptions and not insights. There would be some optimode design, but this would be done at desk and based on assumptions rather than research insights.

The main risk here relates to the lack of RCDF components and iterative rounds per mode to perfect the design, fix issues, and validate if edits were effective or not.

As a guide, you will need 1 senior researcher, or 1 experienced junior researcher to complete this work. This work can be completed quickly as it is all conducted at desk.

Examples of Discovery activities

At this level Discovery activities could include:

  • gathering data user needs
  • desk-based research on other surveys
  • holding a redesign workshop with colleagues

This approach does not include any “test”, “fix”, or “validate” test rounds with the respondent.

Examples of Alpha activities

At this level Alpha activities could include prototyping. When the online mode is used with other modes, you should always design online first and then move to other modes, like face-to-face, and then telephone.

Most of the survey would be designed at desk and there would be no rounds of testing for each mode of the survey.

Examples of Beta activities

There are no Beta activities at this level.

RCDF components

The following RCDF components apply to research at this level:

  • establish the data user need
  • use adaptive design

GDS components

The only GDS component that applies at this level is “start with user needs”.

Research at this level can lead to:

  • decreased response rates — this could be as a result of respondent burden, disjointed respondent journey, poor respondent experience, or attrition
  • poor data quality — this could lead to a lack of insights to explain the differences in data and time series, and an increased risk that “bad” decisions could be made
  • minimal stakeholder engagement — if the process is rushed and questions need to be borrowed from other surveys there will be reduced insight into other data user needs
  • less inclusive surveys — with less transformation work there will be less exploration of how inclusive a survey is
  • gaps in data — if there are lots of items missing from the data we will need to use statistical methods to fill in the missing information
  • greater costs — these could come from the need to use interviewer-led modes or offer greater incentives for respondents if the quality of responses is poor, or from the work needed to clean, process or remedy poor data to produce analysis datasets
  • departing from the Office for Statistics Regulation (OSR) Code of Practice — this could lead to a potential risk of losing accreditation, or a failure of in-house assessments
  • not meeting service standards set by GDS — minimal transformation makes it impossible to explain differences in the data

Research at this level can reduce, but not fully avoid, the likelihood of:

  • increased need for ongoing research testing and resulting staff costs
  • increased length of time needed to develop a survey
