About this guidance
The terms “questionnaire”, “form” and “survey” are often used interchangeably, but they refer to different things. A questionnaire or form is a written set of questions created to collect data, while a survey is the method of collecting, aggregating, and analysing that data. A questionnaire or form may be used to collect data for a survey.
Data quality starts at the beginning of the data lifecycle. This means it is important to design comprehensive questionnaires to ensure the results you gather are true reflections of the target population.
This guidance will:
- provide a short and practical guide to the main considerations in question and questionnaire design
- share specific best practice on question design so it can be used by people developing questionnaires
Who this guidance is for
This guidance is for anyone who is involved in designing questionnaires, including:
- social researchers
- user researchers
- survey and form designers
- survey managers
- content designers
Learning outcomes
After you have read this guidance you should be familiar with:
- the main approaches to questionnaire design — these are Agile methodology and the Respondent Centred Design Framework (RCDF)
- the main considerations in questionnaire design — this includes ethics, respondent burden, and accessible and inclusive design
- using a data requirement template to efficiently gather customer requirements
- the main practical considerations in question design
- qualitative and quantitative testing for questionnaires
You can find more in-depth training on questionnaire design on the Social Research Association website.
Disclaimer
This guidance includes links to third parties. The Office for National Statistics (ONS) has not developed this third-party content, or vetted any content associated with these links. The ONS does not endorse or have a contract with the third parties for these products. The links are for recommended further reading only, and other sources are available.
Questionnaire design process
Ethical considerations
Ethical considerations are an important part of the questionnaire design process. Survey respondents have the right to:
- understand the purpose of the survey
- understand how long it will take to complete the survey
- choose whether they would like to participate, as long as participation is not required by law – for example, most ONS business surveys are conducted under Section 1 of the Statistics of Trade Act 1947, which means a business must provide the information requested
- be informed of any risks before they give informed consent
- withdraw their data at any point of the process
- have their confidentiality, anonymity and privacy maintained throughout the data lifecycle
No one involved in the research process should suffer any physical, social, or psychological harm as a result of taking part. This includes the respondents, researchers, and the general public.
You can make sure your questions and questionnaire comply with ethical standards through qualitative research, such as cognitive testing. Qualitative research is an essential step in the design and delivery of a survey and should not be skipped.
Useful guidelines, principles, and frameworks on research ethics that you should follow when developing a survey include:
- the Data Ethics Framework
- the Ethics Self-Assessment Tool from the UK Statistics Authority (UKSA)
- the UKSA publication on the ethical considerations associated with qualitative research
- GOV.UK guidance on data protection
- guidance from the Information Commissioner’s Office on the UK General Data Protection Regulation (UK GDPR)
Respondent burden
The Encyclopaedia of Survey Research Methods defines respondent burden as:
“The degree to which a survey respondent perceives participation in a survey research project as difficult, time consuming, or emotionally stressful.”
There are 4 factors that can create respondent burden:
- frequency of contact
- length of contact
- required respondent effort
- stress associated with disturbing questions
You can make sure your questions and questionnaire create minimal burden through qualitative research, such as cognitive testing. Qualitative research is an essential step in the design and delivery of a survey and should not be skipped.
Respondent burden can affect response quality and lead to non-response. This means it is important to find ways of minimising respondent burden throughout the research. Find out more about monitoring and reducing respondent burden.
Main approaches
The process of designing a questionnaire involves a series of steps. There are 2 main approaches that should be applied when designing questionnaires.
Agile methodology
Agile methodology is a way of managing work. It is used for both project management and product development.
You can apply Agile methodology to questionnaire design because it is flexible and cyclical in nature. This approach allows you to adapt the initial vision for your questionnaire as you gather insights and make changes through each phase of the project.
You can apply Agile principles to your project regardless of whether your team currently works in Agile. Find out more about Agile delivery.
There are 5 Agile phases:
- Discovery Phase — this involves gathering initial qualitative insights, and quantitative insights where possible, to inform your survey design
- Alpha Phase — in this phase you will start to qualitatively develop and test your draft survey
- Beta Phase — this involves quantitatively testing your revised survey with a limited sample of respondents
- Live Phase — in this phase you will formally open your survey to respondents while continuing to assess and make improvements
- Retirement Phase — this involves shutting down completed surveys that are not due to be repeated
Respondent Centred Design Framework (RCDF)
The Respondent Centred Design Framework (RCDF) helps structure how the research is done. It was developed by Laura Wilson and Emma Dickinson in 2021.
RCDF is an approach that brings together best practice from social research and user experience design. It aims to reduce respondent burden and increase data quality by developing surveys based on the needs of users. In this case, the users are the respondents, and the terms “users” and “respondents” are used interchangeably in this guidance.
There are 10 RCDF components:
- Gather the data user need.
- Understand mental models.
- Understand user experience and needs.
- Use data and insights to inform.
- Create using appropriate tone, readability, and language.
- Design without relying on help.
- Tailor your respondent materials and questionnaire for the mode that the survey is being administered in — this is known as an “optimode” approach.
- Use adaptive design.
- Conduct a combination of usability testing and cognitive testing known as “cogability” testing.
- Design inclusively.
Find out more about each part of the user-centred design approach to surveys.
This guidance will outline the activities that take place at each Agile phase and will refer to components of the RCDF throughout.
Questionnaire design in Agile
Discovery Phase
This is the first Agile phase. This is an exploratory phase where you begin to understand your customer and respondent needs.
The main outcomes of this phase are to:
- understand the research aims of the survey
- understand your intended respondents
- document the user needs
- gain an initial understanding of how users think about their responses
- develop draft user stories and user journeys
Gather data user needs
You will need to ensure you understand the research aims of the survey. Ask yourself:
- What are the analysts and research commissioners trying to understand, monitor, or measure with this survey?
- How do analysts and research commissioners intend to use the data?
- What are the analytical and policy aims of each data requirement?
You should avoid working with draft questions at this stage. Instead, ask stakeholders to complete a data requirements template. If you are working on a pre-existing survey, ask for previous publications and documents.
You will also need to understand who the intended respondents are and determine your sample.
Learn from other people
Do some desk research to find out if anyone else is working on a similar project. This could be within Government or academia.
Find out if there are any suitable questions that have already been developed and tested within the Civil Service. Find out more about harmonised questions and design patterns to make your data more comparable, consistent, and coherent.
Create a research grid
A research grid is an effective tool to define and direct the research aims and associated tasks. It can help you track your progress and can be adapted as you progress through the Agile phases.
When you are creating your research grid, you will:
- list the research aims and questions
- consider how each of your research aims and questions will be addressed in your survey
- identify any gaps in your research
- remove any questions or tasks that do not align with your data requirements
Find out more about how to develop a research grid.
Gain insights
If you are working on a pre-existing survey that is administered by an interviewer, you could hold in-depth interviews or focus groups with interviewers to get feedback on the current questionnaire and materials. This will allow you to gain insights about the:
- respondent and interviewer user needs — this includes any burden associated with the questions
- questionnaire flow and any flaws in the current design – this includes inappropriate question order or errors in the navigational path
- user mental models on questionnaire topics — mental models refer to the thought processes users go through to reach an understanding of questions
If you cannot speak to interviewers working on your survey there are other ways to gather insights. You could:
- ask interviewers that work on other surveys for feedback on the questionnaire and materials
- observe how respondents complete your survey live, if you are working on a pre-existing survey
- do some qualitative research with previous or new respondents to learn about their mental models – this could include semi-structured interviews, or pop-up research
These insights will inform the user stories and user journeys you will create. The insights will also contribute to the redesign of the materials and questionnaire.
A user story is a three-part statement which records the user needs, following the structure “As a [insert]… I need [insert]… so that [insert]”. For example, “As a respondent, I need to know how long the survey will take, so that I can complete it at a suitable time”.
A user journey records everything a user does when using your survey. This documentation will help to identify any barriers and will inform the research plan. Find out more about establishing user needs.
Design with data
If you are working on an existing survey, you can review the historic survey data. This data can show you whether there is a high amount of missing data related to a question, or if certain questions are taking respondents a long time to complete.
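As an illustration, a short script like the sketch below could flag questions with high item non-response or long completion times. The file names, column layout, and 10% threshold are all hypothetical.

```python
import pandas as pd

# Hypothetical extracts of historic survey data: one row per respondent,
# one column per question (answers in one file, timings in seconds in the other).
responses = pd.read_csv("survey_responses.csv")
timings = pd.read_csv("question_timings.csv")

# Proportion of missing answers for each question.
missing_rates = responses.isna().mean().sort_values(ascending=False)
print("Questions with more than 10% missing data:")
print(missing_rates[missing_rates > 0.10])

# Median time spent on each question; unusually long times may indicate burden.
median_times = timings.median().sort_values(ascending=False)
print("Slowest questions by median completion time (seconds):")
print(median_times.head())
```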
Collate, analyse, and create
After you have done your research and are reaching the end of the Discovery Phase you can collate and analyse your findings. You should review your research grid and ask yourself the following questions:
- Have you completed all necessary activities and collected the insights you need to proceed to the next phase?
- Are there any gaps in your insights?
You can repeat some of the previous steps if you have identified any gaps in your insights.
You should also create some supporting documentation, including:
- user needs
- user stories
- user journeys
- a research grid for the Alpha Phase
Alpha Phase
In this phase you will begin developing and testing new prototypes of your survey. The design of your survey will be informed by the insights gathered in the Discovery Phase. The prototypes are iteratively tested in rounds of qualitative research.
The main outcomes of this phase are to:
- create a respondent centred survey through cogability testing
- validate your understanding of who or what your survey intends to collect data from and their associated needs
- validate your understanding of respondent mental models, user stories, and user journeys
- refine and retest your survey if it is not meeting the needs of your respondents
If you are redeveloping a pre-existing questionnaire, you can make changes to it at this stage. This could include changing the order of the questions or designing new questions. But you should only make changes where you have evidence of a respondent need to do so.
Methods and modes
Methods and modes of data collection are an important part of the questionnaire process. These can dictate the design of the questions. Different methods and modes will also allow you to use different materials and tools.
There are 2 main methods of administering surveys: interviewer-administered and self-administered. Each method has modes, or specific techniques, used to collect the data. Interviewer-administered methods include:
- face-to-face techniques
- telephone interviewing
- video interviewing
Self-administered methods include:
- paper, web, and email questionnaires
- telephone methods, such as Telephone Data Entry and Interactive Voice Response – remember these are telephone systems and there is no interviewer asking the questions
The RCDF recommends taking an “optimode” approach to design. This means the respondent materials and questionnaire should be tailored for the mode that the survey is being administered in. This can be achieved through “cogability” testing, which is a combination of usability testing and cognitive testing. This will improve respondent experience and data quality.
It is important to follow certain principles to design respondent centred content. You can use what you learned in the Discovery Phase about your respondents.
Tone
Think about how you will come across to your audience when you are communicating with them. If your organisation has a predefined tone of voice, make sure you use it in your survey. This will help make your survey feel legitimate for your respondents.
Readability
Think about how easy or difficult it is for your respondents to understand the content of your survey. You should test your content to make sure it is suitable for the average reading age. Remember the average reading age in the UK is 9 years old. There are free online tools to help you test the readability of your content.
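As an illustration, the Flesch-Kincaid grade level can be approximated with a short script. This is a minimal sketch with a crude syllable counter, not a production readability checker; as a rough guide, a grade level of around 4 corresponds to a reading age of about 9.

```python
import re

def crude_syllables(word: str) -> int:
    # Very rough heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid grade level:
    # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(crude_syllables(word) for word in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

question = "How satisfied or dissatisfied are you with the content of the course?"
print(f"Approximate grade level: {flesch_kincaid_grade(question):.1f}")
```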
Language
Think about whether you are using appropriate language that is commonly used by your respondents. Avoid using formal language when possible so respondents can relate and connect to your survey.
Accessibility
Think about whether it is simple for every respondent to find, use and understand the content of your survey. Make sure they can access the content with appropriate browsers and devices. You should also make sure your survey is accessible to people who use assistive technologies.
To make your text accessible you should:
- use plain English and avoid colloquialisms
- explain all abbreviations, acronyms, and jargon
- make paragraphs no longer than two to three sentences and break up the text into short sections
- use headings and subheadings to break up sections
- use bullet points for lists
- avoid using italics, bold text, brackets, and underlined text
- avoid using too many capital letters
- avoid using contractions, such as ‘aren’t’
- avoid using ellipsis, or “…”
The physical design and layout of the survey should also be accessible. This will help respondents understand the questions and help interviewers record the responses. You should:
- use a logical sequence for your questions
- keep any explanatory notes short and present them as part of the question they refer to
- number pages consecutively
- use black text because it is easiest to read
- use light pastel colours for the background – colour contrast can help make text easier to read
Find out more about making your survey accessible and how to design for accessibility.
The Dyslexia Association’s style guide also has information on developing written materials that are easy to read.
Inclusivity
Think about how you can design your content to be inclusive for all respondents. Find out more about making your survey more inclusive.
Question design
The aim of good question design is to ensure that every potential respondent will:
- interpret the question in the same way
- interpret the question in the same way regardless of the mode of collection
- interpret the question in the way the researcher intended
- be able to respond accurately
- be willing to answer
Keep questions short, clear, and simple
You should:
- use the fewest words possible while ensuring the question is still clear
- avoid vague wording – for example in the question “Do you exercise regularly?”, the term “regularly” could be interpreted as daily, weekly, or something else, so it should be avoided
- avoid double negatives as this can make a question difficult to interpret and may mislead the respondent – for example instead of saying “Don’t you disagree that junk food is bad for you?”, say “Do you agree that junk food is bad for you?”
- avoid using negative words like “not” and think about rephrasing the question – for example, instead of saying “Do you think smoking should not be allowed?”, rephrase the question and ask “Do you think smoking should be banned?”
- avoid using the word “please” – for example, instead of saying “Please select all that apply” you can just say “Select all that apply”
- avoid question stems that do not provide any context – for example, instead of saying “Which of the following applies?” you should add the context into the question stem itself and ask “Which of the following social media platforms, if any, do you use daily?”
- avoid repetitive wording of questions — each question should be distinctive
Avoid leading questions
You should avoid questions that lead respondents towards a particular response, or that make assumptions about them. For example, you should avoid questions like:
- “Did everyone enjoy the course? We always get really good feedback about it” — you should not make assumptions or encourage respondents towards a positive response
- “Do you support the government policy on education, which many people are against?”— you should not encourage respondents towards a negative response
- “How many days did you work last week?” — some respondents will not be able to answer this question as it assumes they worked the previous week
Ask balanced questions
You should ask questions that reference the possibility of having a different opinion. To balance your questions, use phrases such as “if any”, “if at all”, “if anything”, or provide opposite alternatives.
Examples of good, balanced questions include:
- “What, if anything, did you find difficult about this activity?”
- “Are you for, or against, hybrid working?”
- “How useful, or not useful, did you find this presentation?”
Split “double-barrelled” questions
A double-barrelled question incorporates two questions into one. If you have a question that asks for two pieces of information, you should split it into two separate questions. For example, the question “How would you rate the course facilitator and the materials provided?” is double-barrelled because it asks the respondent to rate both the course facilitator and the materials. Split this into two questions:
- “On a scale of 0 to 10, 0 being poor and 10 being excellent, how would you rate the course facilitator?”
- “On a scale of 0 to 10, 0 being poor and 10 being excellent, how would you rate the materials provided?”
Take into consideration recall memory error
Respondents may find it difficult to accurately answer questions that involve thinking about an event in the past. This is known as “recall memory error”. The quality of the data collected is influenced by:
- how long ago the event took place
- how important or memorable the event was to the respondent
You can help reduce this bias by minimising the time between the event taking place and the respondent recalling the event. You can also increase the accuracy of the response by asking questions which allow the respondent to check any records they may have.
You should be specific about when a time period begins and ends. Rather than saying “last week”, give specific dates, for example “between Monday 23 January and Sunday 29 January”.
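For example, a reference week matching the dates above could be derived with a short script. This is a minimal sketch; how you define the reference period on a real survey will depend on its design.

```python
from datetime import date, timedelta

def last_full_week(today: date) -> tuple[date, date]:
    # Most recent complete Monday-to-Sunday week before the given date.
    last_sunday = today - timedelta(days=today.isoweekday())  # Mon=1 .. Sun=7
    return last_sunday - timedelta(days=6), last_sunday

start, end = last_full_week(date(2023, 1, 31))
print(f"between {start:%A %d %B} and {end:%A %d %B}")
# between Monday 23 January and Sunday 29 January
```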
Consider data quality issues when a respondent answers on behalf of someone else
The quality of data may be negatively affected if a respondent is answering the questions on behalf of someone else. For example, if a respondent is asked “How many hours did your partner work last week?”, they may be unable to give an accurate answer. But these types of questions are appropriate for certain surveys and need to be tested beforehand. Always ensure you have direct permission to collect information about a respondent from another person.
Be mindful of sensitive questions
Remember some respondents may find some topics embarrassing or sensitive. You should:
- only ask sensitive questions which are in line with the research objectives
- avoid including sensitive questions at the beginning of your survey — this can increase the risk that respondents will choose to stop answering questions, or “drop-out”
- explain why you are asking the question — for example, if you are asking the question “What is your ethnic group?”, you should explain you are asking the question to understand and reduce disparities within the organisation
- include a response option of “Prefer not to say”
Think about the order and flow of your questions
You should start your questionnaire with easy questions and work up to the more difficult ones. Respondents may be put off by difficult questions at the start of your questionnaire. This could mean that they choose to stop participating in the survey.
Use filtering where you can. For example, if you ask “Were you born overseas?” and a respondent answers “no”, you should not ask any follow-up questions designed for people who were born overseas.
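A minimal sketch of this kind of routing logic, using a hypothetical question name and answers:

```python
# Hypothetical routing rule: follow-up questions about country of birth
# are only asked when the respondent answers "yes" to the filter question.
def next_question(answers: dict) -> str:
    if answers.get("born_overseas") == "yes":
        return "Which country were you born in?"
    return "What is your current occupation?"  # skip the overseas follow-ups

print(next_question({"born_overseas": "no"}))  # What is your current occupation?
```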
Be aware of how the question order could influence responses. For example, if you first ask a respondent “Which of these four fruits is your favourite?”, and then ask, “How much fruit do you eat in a week?”, the respondent may concentrate on just those four types of fruits. This may mean they report a lower number than if they were to think of a longer list of fruits.
You should ask questions in a logical order and ask questions on a similar topic together. This will reduce the burden on respondents as they are already thinking about a particular topic.
Types of questions
The types of questions you ask in your survey will affect the quality of the responses you collect. Before writing your questions, consider how you want to use the responses. This will help you to identify the most appropriate question formats to use.
Open-ended questions
These types of questions allow respondents to answer in their own words. Open-ended questions are suitable for exploratory phases when there is not enough information to develop appropriate categories.
Examples of open-ended questions include:
- “Why did you leave your last paid job?”
- “How did that make you feel?”
Closed-ended questions
These types of questions provide respondents with a range of the most likely answers to select from. Closed-ended questions tend to be quicker and cheaper to administer and process.
There are 2 types of closed-ended questions. Limited choice questions allow respondents to choose only one response category. An example of this would be the question “Are you currently employed?”, with the response options “yes” and “no”.
Multiple choice questions allow respondents to choose more than one response category. An example of this would be the question “Where do you do your food shopping?” with the response options of “Supermarket”, “Local market”, “Online”, and “Other”.
Response categories for closed-ended questions
Scalar
This represents a set of response options that cover a range of opinions on a topic. Likert scales are widely used to measure attitudes and opinions that are more complex than a simple “yes” or “no” response. A 5-point Likert scale is the recommended minimum, as it is one of the most reliable ways to measure degrees of opinion, perception, and behaviour. A 7-point Likert scale gives even more detailed answers for analysis. The scale ranges from one extreme to the other and should always include a neutral option in the middle. A sketch showing how scale responses can be coded for analysis follows the examples below.
A question such as “How likely or not likely are you to recommend this course to a colleague?” may use a likelihood scale with the following response options:
- “Very unlikely”
- “Unlikely”
- “Neither likely nor not likely”
- “Likely”
- “Very likely”
A question such as “To what extent do you favour or oppose the building of benches in common green areas?” may use a support scale with the following response options:
- “Strongly favour”
- “Somewhat favour”
- “Neither favour nor oppose”
- “Somewhat oppose”
- “Strongly oppose”
Other scale examples include:
- agreement — with response options like “Strongly agree”, “Agree”, “Neither agree nor disagree”, “Disagree”, and “Strongly disagree”
- satisfaction — with response options like “Highly satisfied”, “Satisfied”, “Neither satisfied nor dissatisfied”, “Dissatisfied”, and “Highly dissatisfied”
- frequency — with response options like “Always”, “Often”, “Sometimes”, “Rarely”, and “Never”
- quality — with response options like “Excellent”, “Good”, “Fair”, “Poor”, and “Very poor”
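When analysing scalar responses, each option is typically coded to a number, keeping the direction of the scale consistent. A minimal sketch, assuming a 5-point agreement scale and made-up responses:

```python
# Map a 5-point agreement scale onto numeric codes for analysis.
AGREEMENT = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

responses = ["Agree", "Strongly agree", "Neither agree nor disagree", "Agree"]
codes = [AGREEMENT[response] for response in responses]
print(f"Mean score: {sum(codes) / len(codes):.2f}")  # Mean score: 4.00
```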
Non-scalar
These types of questions do not have a scale. An example of this is the Census 2021 question “Does your household own or rent (Census Address)?” with the response options:
- “Own outright”
- “Own with a mortgage or loan”
- “Part-own and part-rent”
- “Rent”
- “Live here rent-free”
General considerations when designing response categories
Response categories must fit the question. For example, a question like “How satisfied or dissatisfied are you with the content of the course?” could use a satisfaction scale for its response categories.
You should:
- avoid any overlap between categories — for example, for “What is your age group?” the response categories may include “16-25”, “26-44”, “45-64”, and “65 and over”; a quick way to check for overlaps is shown in the sketch after this list
- avoid repetitive response categories — for example, for “How do you travel to work?”, response options like “I ride a bike”, “I get a train”, and “I drive a car” all start with “I”, which adds reading burden
- find a balance in the number of categories — too many categories can cause respondent burden and too few may not accurately reflect the respondent’s position
- include a “Don’t know” or “Unsure” option at the end of all your response categories, where appropriate
- put response categories that are more socially desirable than others at the end of the list — this can help to reduce bias
- keep the scales in the same order throughout your questionnaire if you are using the same Likert scales for multiple questions
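As an illustration of the first point in this list, a quick check like the sketch below can confirm that numeric bands neither overlap nor leave gaps. The bands are the example age groups above.

```python
# Check that age bands neither overlap nor leave gaps.
# Each band is (lower, upper) inclusive; None marks an open-ended upper bound.
bands = [(16, 25), (26, 44), (45, 64), (65, None)]

for (_, upper), (lower, _) in zip(bands, bands[1:]):
    assert upper is not None and lower == upper + 1, \
        f"Gap or overlap between {upper} and {lower}"
print("Categories are contiguous and do not overlap")
```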
Pre-testing or qualitative testing
This involves one-to-one interviews between a respondent or participant and an interviewer, or researcher, to identify any flaws with the survey. Insights from this testing can lead to survey restructuring and can inform further rounds of pre-testing.
There are 3 types of testing you can do.
Cognitive testing
This is also known as cognitive interviewing. You can use this type of testing to evaluate the respondent’s understanding of the designed questions, and the mental processes by which respondents reach an answer.
Usability testing
You can use this type of testing to assess how respondents interact with the survey and how they manage to complete a survey. This includes assessing the navigation.
Cogability testing
This is a combination of both cognitive and usability testing. Cogability testing is the recommended approach for all modes, as it explores the cognition and usability of the survey during the same session. This is because one can affect the other. For example, using square boxes for response options usually suggests to respondents that they can choose more than one answer, while circles suggest respondents should choose only one response option. This might affect how they understand and respond to the question being asked.
Testing considerations
Where possible, you should test in the medium or mode in which the product will be administered when it goes live.
Ideally, you should test the whole questionnaire. If this is not possible, test a set of related questions together to assess how question order can affect responses. You should avoid testing single questions on their own, because without context and flow the results will be limited and cannot be applied more broadly.
When you are testing your questionnaire you should gather insights on what the respondents need to complete the questionnaire, rather than on their personal preferences.
It is not good practice to show question designs to stakeholders before the designs have been tested. This is because:
- you will end up collecting information about what stakeholders “want” rather than what they “need”
- they are not experts in questionnaire design
- co-design can be productive, but it can be difficult to manage when some ideas are not included in the final design
It is more useful to show stakeholders evidence of what works after testing. This will help you reassure them that your designs will be successful. Showing designs before testing can create a lot of extra work: sending out the questions, collating and managing the feedback, and then issuing “redesigned” versions with reasons for your decisions.
Showing stakeholders the evidence also helps to ensure decisions are made based on evidence and needs, rather than assumptions.
Beta Phase
This phase involves quantitatively testing your survey with a limited sample of respondents. There are 2 methods for doing this:
- pilot testing involves quantitatively testing a survey with a small sample of respondents — this type of testing is used to identify any potential issues with the format, filtering, and length of a survey
- dress rehearsals are final quantitative trial runs of the survey with a small sample that has been selected using the chosen sampling methodology — this type of testing is used to gather data about survey costs and to estimate population variances, as in the sketch after this list
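As an illustration of that variance estimate, the minimal sketch below feeds a sample variance from made-up pilot values into a standard sample size calculation. The data, the margin of error, and the confidence level are all assumptions.

```python
import statistics

# Hypothetical pilot data: hours worked last week, collected in a dress rehearsal.
pilot_hours = [35, 40, 37, 42, 20, 38, 45, 36, 40, 33]

# Sample variance (n - 1 denominator) as an estimate of the population variance.
s_squared = statistics.variance(pilot_hours)

# Sample size needed to estimate the mean to within +/- 1 hour at 95% confidence:
# n = z^2 * s^2 / e^2
z, e = 1.96, 1.0
n = (z ** 2 * s_squared) / e ** 2
print(f"Estimated variance: {s_squared:.1f}; required sample size: {n:.0f}")
```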
If you need more insights before you move on to the Live Phase you can return to the Discovery Phase or the Alpha Phase.
Live Phase
This is when you start using your survey to gather data. You can go live with your survey once you are satisfied with the findings from the Beta Phase. But you must make continuous improvements throughout the life of the service.
You can return to the previous phases if you notice that a feature or design is not working as well as it should.
Retirement Phase
This phase involves closing down the survey.
There are many reasons why you may close a survey. It may be that your questionnaire is no longer needed, or that it is not cost effective to keep running it. In this case, it would make sense to retire the questionnaire, or shut it down. You will still need to consider your user needs, and how these will be met after the survey is retired.
You may also have developed a new questionnaire alongside the original, meaning that the original questionnaire can be closed down.
If your survey will be repeated, you will need to concentrate on how it can be continuously improved. This involves using all the insights you have gathered throughout the research to identify whether it needs improving. If the survey no longer meets user needs, you can return to the Discovery Phase or the Alpha Phase to identify why this is and what improvements need to be made.