Aim of this guidance
This guidance focuses on monitoring and reducing respondent burden when carrying out statistical surveys. It does not cover how to implement statistical surveys.
Monitoring and reducing respondent burden is mentioned in the Code of Practice for Statistics. All government statisticians are governed by the code and statistical Heads of Profession (HoPs) are responsible for ensuring implementation of the code in their departments.
Practice V5.5 of the Code of Practice for Statistics states:
“Statistics producers should be transparent in their approach to monitoring and reducing the burden on those providing their information, and on those involved in collecting, recording and supplying data. The burden imposed should be proportionate to the benefits arising from the use of the statistics.”
What we mean by ‘statistical survey’
The Organisation for Economic Co-operation and Development states that:
“A survey is an investigation about the characteristics of a given population by means of collecting data from a sample of that population and estimating their characteristics through the systematic use of statistical methodology.”
Examples of statistical surveys:
- sample surveys
- the collection of data from administrative records – where respondents are ‘surveyed’ and asked to respond based on admin data
Data collections that are not included:
- the extraction of data from admin records by operating systems
- exercises where respondents clearly select themselves e.g. surveys on websites, readership surveys and some types of consultation exercises where there is an invitation to comment
The concept of respondent burden
What we mean by ‘respondent burden’
The Encyclopedia of Survey Research Methods defines respondent burden as:
“The degree to which a survey respondent perceives participation in a survey research project as difficult, time consuming, or emotionally stressful is known as respondent burden. Interview length, cognitive complexity of the task, required respondent effort, frequency of being interviewed, and the stress of psychologically invasive questions all can contribute to respondent burden in survey research.”
Concept of perceived respondent burden
The concept of response burden can be divided into actual and perceived burden.
There are four factors that constitute perceived respondent burden:
- frequency of contact
- length of contact
- required respondent effort
- stress of disturbing questions
Respondents’ own characteristics may influence the amount of burden they perceive.
Respondents can be grouped by:
- the access they have to relevant information
- their interest in the task given to them
- the competence they have to complete the survey task
For business surveys, the perceived burden may also be affected by the respondent’s position in the business and their prior exposure to the survey and to the survey organisation.
These factors, and the business culture the respondent operates in, may influence the value they place upon completing and returning the survey. A lack of understanding of the survey’s purpose can also be perceived as burdensome.
Why we need to address respondent burden
Response burden can affect response quality. It can lead to non-response, with implications for precision and potential bias. Non-response is a key indicator of survey quality. The main sources of non-response are non-contact and refusal.
Accumulated non-response (often referred to as attrition) affects longitudinal surveys. Reduction in sample size over time caused by attrition can threaten the statistical reliability of survey findings.
Developing a strategy and taking measures to reduce respondent burden can therefore improve both the quality and quantity of data.
Monitoring respondent burden
To monitor respondent burden, statistics producers should:
- collect data on the respondent burden associated with their statistical surveys
- estimate the costs of compliance (i.e. the cost of completing the survey) to give a measure of respondent burden
- compare the costs of compliance with previous years’ figures
- investigate any substantial changes and take appropriate action to try to reduce the burden
- explain to users any variations in compliance costs caused by changes in the nature of a survey, such as a sample increase
Calculating compliance costs
Guidance on how to calculate compliance costs (PDF, 171KB)
Use hourly pay excluding overtime (dataset 14.6a) to calculate hourly rates.
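For illustration, the basic arithmetic behind a compliance cost estimate can be sketched as follows. This is a simplified sketch, not the official method in the linked guidance, and the survey figures used are hypothetical.

```python
# Simplified sketch of the arithmetic behind a compliance cost estimate.
# Illustrative only, not the official method in the linked guidance:
#   cost = time per response (hours) x hourly pay excluding overtime x responses
# The survey figures below are hypothetical.

def compliance_cost(minutes_per_response: float,
                    hourly_rate: float,
                    number_of_responses: int) -> float:
    """Estimate the annual compliance cost of a survey in pounds."""
    hours = minutes_per_response / 60
    return hours * hourly_rate * number_of_responses

# Hypothetical survey: 20 minutes per response, £18.50 per hour, 4,000 responses a year
cost = compliance_cost(20, 18.50, 4_000)
print(f"Estimated compliance cost: £{cost:,.2f}")  # £24,666.67
```

Comparing this figure year on year, as recommended above, then highlights any substantial change in burden that needs investigating.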
Publication of compliance information
Part of monitoring respondent burden is publishing the compliance information. This is because the Code of Practice for Statistics states that statistics producers should be transparent in their approach to monitoring respondent burden.
Where should compliance information be published?
The compliance information should either be published in a Background Quality Report (BQR) or on a relevant webpage.
More information on BQRs can be found in our guidance on communicating quality, uncertainty and change.
The School Meals in Northern Ireland BQR is a good example of a BQR.
This compliance cost publication from HM Revenue and Customs (HMRC) is a good example of how to publish compliance information online.
How often compliance information should be published
Ideally each producer of official statistics should publish compliance costs each financial year. HMRC publishes annual statistics on its survey compliance costs to track the cost incurred by businesses and local authorities in complying with the statistical surveys it conducts.
The responsibility to publish compliance information
It is the responsibility of the department that owns the survey to estimate and publish the compliance costs.
This remains the case for surveys conducted jointly by two or more government departments and/or external contractors.
For example, the Family Resources Survey is owned by the Department for Work and Pensions (DWP), but the data are collected by both the Office for National Statistics (ONS) and NatCen. Therefore DWP are responsible for estimating and publishing the compliance costs.
Survey Control and Liaison Officers
Role of a Survey Control and Liaison Officer (SCLO)
- provide advice and guidance and monitor survey activity within their department
- work closely with survey managers
- maintain an up-to-date record of all statistical surveys conducted within their department along with their associated compliance costs
- organise the annual reporting of survey and compliance cost information for all statistical surveys – more information on our recommendations for how to do this is in the monitoring respondent burden section
- advise those responsible for surveys on the survey control procedures for their department
- assess new survey requests and changes to existing surveys according to departmental procedures
- monitor any discontinued or paused surveys
- support the survey review process within their department
Northern Ireland has a network of SCLOs managed by the Northern Ireland Statistics and Research Agency (Nisra) Survey Control Unit, which produces guidance on what is expected of SCLOs in the Northern Ireland context. For further information, contact email@example.com.
We encourage other government departments to nominate an SCLO who will be responsible for supporting their statistical Head of Profession in monitoring and reducing respondent burden.
Reducing respondent burden
A range of measures can be taken to ease the burden that statistical surveys place on respondents.
1) Consider the need for new surveys
Practice V5.3 of the Code of Practice for Statistics states:
“The suitability of existing data, including administrative, open and privately-held data, should be assessed before undertaking a new data collection.”
When commissioning new statistical surveys, ensure that the need for the survey outweighs the extra burden it creates. When deciding whether a survey is necessary, consider whether the required data are available elsewhere.
A thorough search of other data sources should be conducted, including:
- other government departments or agencies
- existing statistical surveys
- UK Data Service (UKDS)
- professional organisations
- literature search e.g. academic journals
- administrative data records
- local sources e.g. local authorities
If an existing data source is found but deemed unsuitable, there may still be a need for a new survey.
2) Consider alternative data sources
Consider alternative data sources as a way of reducing respondent burden.
Administrative data systems provide advantages such as:
- reduced costs
- no need for sampling
- reduced non-response errors
However, administrative data are not always the appropriate source: they can have limitations in coverage and data quality, and subtle differences in definitions may make comparisons with other statistical releases difficult.
Where an alternative data source can meet the need, replacing the existing survey with that source will reduce respondent burden.
It may also be possible to link respondent data from the statistical survey to other data sources to obtain information rather than adding new questions to the statistical survey (see practice V5.1 of the Code of Practice for Statistics). More information on data linking can be found in our data linking guidance.
In any case, it is important to make sure that any existing data comes from a reliable source.
3) Consider combining surveys
Where surveys collect similar information, or approach the same respondents, it may be possible to combine them into one survey.
This will depend on factors such as:
- survey length
- timing of the survey
- frequency of the survey
- population group surveyed
- sample size
- sampling approach
In some cases, a more detailed annual survey could be supplemented by a less detailed quarterly survey, or additional questions could be asked of a subset of the respondents rather than all respondents.
4) Establish processes to review existing statistical surveys
Guidelines for statistical producers:
- Review statistical surveys on a regular basis to ensure there is a continuing need for their existence
The Forestry Commission undertakes annual summary reviews to ensure there is a continuing need for their surveys. In these annual reviews they review the questionnaire and update respondents. Every five years, full reviews are undertaken.
Reviewing surveys in this way demonstrates a commitment to minimising respondent burden for those supplying data.
- Be transparent about the need for each question in a survey
The Forestry Commission publishes the need for information for each of their surveys. This ensures users and those supplying data understand why the information is required and how it will be used.
Their Removals Survey is a good example of this.
- Communicate to respondents what the data are used for
This helps to show the benefits arising from the burden that has been placed on respondents.
An example of this can be seen in the materials provided to respondents in the Labour Force Survey.
The information provided helps to illustrate the benefits that taking part in a Labour Force Survey interview will bring.
- Publish information on web pages about the processes in place to review existing surveys
5) Select appropriate sample size and sampling procedure
To reduce respondent burden, the number of respondents contacted to participate should be kept to the minimum necessary to provide robust information.
Careful consideration of the sampling procedure will also ensure respondent burden is minimised.
Here we present some things to consider when deciding on sample size and sampling procedure.
Use the Osmotherly rule to limit contact with small businesses
Contact with small businesses should be kept to a minimum where possible, as statistical surveys create a proportionately higher burden for them compared with large enterprises.
Small businesses (those with 0 to 9 employees) are covered by the Osmotherly rule (PDF, 923KB) which restricts the burden placed upon them.
This rule guarantees small businesses will:
- not be simultaneously selected for more than one survey
- remain in that survey for only a limited time span
- have a ‘survey holiday’ of at least three years from most surveys once the time in the survey has completed and while the business remains small
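The eligibility checks implied by the rule can be sketched as follows. This is an illustrative sketch only: the field names, argument layout and holiday handling are assumptions for illustration, not the official implementation of the Osmotherly rule.

```python
from datetime import date, timedelta
from typing import Optional

# Illustrative sketch of an Osmotherly-style eligibility check. The field
# names, argument layout and three-year holiday handling are assumptions,
# not the official implementation of the rule.

def eligible_for_selection(employees: int,
                           in_another_survey: bool,
                           last_rotated_out: Optional[date],
                           today: date,
                           holiday_years: int = 3) -> bool:
    """Return True if a business may be selected for a survey sample."""
    if employees > 9:
        return True  # the rule only covers small businesses (0 to 9 employees)
    if in_another_survey:
        return False  # no simultaneous selection for more than one survey
    if last_rotated_out is not None:
        holiday_ends = last_rotated_out + timedelta(days=365 * holiday_years)
        if today < holiday_ends:
            return False  # still on its 'survey holiday'
    return True

# A small business rotated out in January 2022 is still on holiday in mid-2023
print(eligible_for_selection(5, False, date(2022, 1, 1), date(2023, 6, 1)))  # False
```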
Use an accurate and up-to-date sampling frame
Using an accurate and up-to-date sampling frame will avoid imposing unnecessary burden on ineligible respondents and will also increase the overall response.
Using a common register as your sampling frame, such as the Inter-Departmental Business Register (IDBR), can help co-ordination across surveys.
However, this needs to be balanced against the diverse needs of the surveys. For example, the IDBR may not be the best source of telephone numbers, as the stored information is sometimes poor or some tele-matching may be needed.
Ensure appropriate longitudinal survey designs
Many surveys have longitudinal elements, which require the collection of data from the same units on repeated occasions.
Such surveys, therefore, impose burden on the same respondents over a prolonged period, whereas a one-off cross-sectional survey will impose burden on just a single occasion.
Things to consider in longitudinal surveys
Weigh up the extra respondent burden against the benefits of repeated measures
The extra respondent burden must be weighed against what the longitudinal elements offer in terms of greater scope for measuring change over time.
Consider implementing rotating panels
A panel is a survey sample in which the same units are surveyed on two or more occasions. Rotating panels will specify the overlap of the sample between periods. It will control and limit the number of occasions a unit is surveyed. There is more on sample rotation and overlap in the next two sections.
Permanent Random Numbers (or similar) may be used to select samples and control panel rotation. They may also be used to co-ordinate overlap between the samples selected for different surveys.
Units sampled for longitudinal surveys may experience less burden on later data collections as they may become accustomed to completing the survey and can prepare in advance and provide the required data more efficiently.
Consider the possibility of panel-conditioning which is sometimes observed in repeated surveys when a sample unit’s response is influenced by prior interviews or contacts. Panel conditioning can affect the resulting estimates by introducing what is sometimes called ‘time-in-sample bias’ or ‘rotation-group bias’.
Overhead or set-up costs
Be aware that some respondents may experience an overhead or set-up cost to be able to supply data on any number of subsequent occasions.
Use sample rotation to reduce overlap in longitudinal or repeated surveys
In longitudinal or repeated surveys overlap refers to that part of a sample that is common to the survey on two or more occasions.
Sample rotation is a way of controlling overlap on longitudinal and repeated surveys.
Sample rotation involves rotating some units out and rotating new units in. It can be used to share burden and refresh samples. The Labour Force Survey is a repeated survey which uses sample rotation; see page 13 of Volume 1 of the 2020 update to the Labour Force Survey User Guide for more information.
Example of sample rotation
In business surveys, Permanent Random Numbers (PRNs) are used for sample selection from the Inter-Departmental Business Register (IDBR) to control burden by limiting the time a business spends in a sample. Businesses are usually selected for a number of consecutive occasions before being ‘rotated out’.
Note: if using PRN sampling, the speed of sample rotation (or, equivalently, the expected number of occasions a business is consecutively selected) must be specified.
Benefits of this system:
- having a considerable part of the sample in common between two periods reduces the variance of estimates of change between the periods, and allows measurement of change across time at the individual-business level
- PRNs and rotation sampling share the burden fairly, in that all eligible businesses will ‘get their turn’ in a controlled way
- there is an argument that retaining a business in a sample for a longer run of consecutive periods reduces its overall burden, as it becomes accustomed to completing the questionnaire on each (frequent) occasion and more efficient at doing so, rather than having a break and having to become re-acquainted with the requirements of the questionnaire

Points to bear in mind:
- being selected for consecutive periods prolongs the burden placed at any given time
- although selecting for fewer occasions may appear less burdensome, businesses’ time-in-sample will come around again sooner, so the aggregated burden over time remains the same (all other things being equal)
- if sampling only a very small proportion of the population (a small sampling fraction), units will not be selected again for a very long time
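A much-simplified sketch of PRN rotation sampling is shown below, assuming each unit keeps a single random number for life and the selection band shifts each period. This is illustrative only, not the actual IDBR methodology, and all figures are invented.

```python
import random

# Much-simplified sketch of Permanent Random Number (PRN) rotation sampling,
# not the actual IDBR methodology. Each unit keeps one PRN for life; each
# period the selection band shifts slightly, so units are selected for a run
# of consecutive periods and then 'rotated out'. All figures are illustrative.

random.seed(42)
businesses = {f"biz{i:03d}": random.random() for i in range(100)}  # one PRN per unit

def select(prns: dict, start: float, fraction: float) -> set:
    """Select units whose PRN lies in [start, start + fraction), wrapping at 1."""
    end = (start + fraction) % 1.0
    if start <= end:
        return {u for u, p in prns.items() if start <= p < end}
    return {u for u, p in prns.items() if p >= start or p < end}  # band wraps past 1

fraction = 0.2        # sampling fraction: about 20% of units each period
rotation_step = 0.05  # band shifts by 5% each period -> roughly 4 selections in a row
samples = [select(businesses, (t * rotation_step) % 1.0, fraction) for t in range(4)]

# Consecutive samples overlap heavily (good for estimating change),
# while units near the trailing edge of the band gradually rotate out
print(len(samples[0] & samples[1]), "units in common between periods 0 and 1")
```

The rotation step fixes the expected number of consecutive selections (here fraction / step = 4), which is the quantity the note above says must be specified.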
Implement overlap controls for surveys running simultaneously
When it comes to sampling for surveys running simultaneously (that is, different surveys running at the same time), overlap refers to the part of a sample that is common to two or more of the surveys.
There are statistical and design reasons for having overlap between samples, but it can also have an effect on response and non-response, especially if respondents perceive their sample burden to be unfair.
Overlap controls can be used to prevent potential respondents being sampled for too many surveys simultaneously. However, care should be taken when implementing overlap controls as they can introduce bias.
Ways of controlling overlap for surveys running simultaneously
Ensure respondents get ‘time off’ from completing surveys
In social surveys, the Office for National Statistics (ONS) keeps a “Used address file” of addresses selected in survey samples. Addresses on that file cannot be selected for most other surveys until a given period of time has elapsed.
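A hypothetical sketch of how such a file might work is shown below. The data structure and the 24-month holiday period are assumptions for illustration; they are not the actual ONS rules, which are not set out in this guidance.

```python
from datetime import date, timedelta

# Hypothetical sketch of a 'used address file' check. The data structure and
# the 24-month holiday period are assumptions for illustration, not the
# actual ONS rules.

used_addresses = {}  # address -> date the address was last selected

def record_selection(address: str, today: date) -> None:
    used_addresses[address] = today

def can_select(address: str, today: date, holiday_months: int = 24) -> bool:
    """True if the address is not on 'holiday' from a previous selection."""
    last = used_addresses.get(address)
    if last is None:
        return True  # never selected before
    return today >= last + timedelta(days=30 * holiday_months)

record_selection("10 Example Street", date(2023, 1, 1))
print(can_select("10 Example Street", date(2023, 6, 1)))  # False: still on holiday
print(can_select("12 Example Street", date(2023, 6, 1)))  # True: never selected
```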
Cap sampling fractions so fewer respondents are sampled
This is not specifically aimed at controlling overlap for surveys running simultaneously, but anything that reduces the number of units sampled will reduce overlap.
In business surveys caps are sometimes used on the sampling fractions for small businesses in particular industries. In other words, a maximum threshold is sometimes applied for the sampling fraction when designing a sample.
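As a minimal illustration of such a threshold (the 10% cap and the stratum size are assumed figures, not taken from any actual survey design):

```python
# Tiny sketch of capping the sampling fraction for small businesses in a
# stratum. The 10% cap and the stratum sizes are assumed figures for
# illustration, not taken from any actual survey design.

def sample_size(population: int, desired_fraction: float, cap: float = 0.10) -> int:
    """Number of units to sample in the stratum after applying the cap."""
    return round(population * min(desired_fraction, cap))

print(sample_size(5_000, 0.25))  # cap binds: 500 units rather than 1,250
print(sample_size(5_000, 0.04))  # cap does not bind: 200 units
```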
Make ad-hoc adjustments to samples
In business surveys, businesses can be exempted from receiving questionnaires in specific and limited circumstances. This includes instances where the information collected in one survey can be used directly in another without the need to send another questionnaire.
6) Make use of modelling and estimation techniques
It may be possible to model or estimate a response value, instead of adding a question to a survey or running a separate survey. The model will be based on information received from the respondent and/or from alternative data sources.
However, to model a given variable you need its value to be known (so probably collected in a survey) for at least some units; otherwise there is no way to develop the model or determine its quality.
Statistical techniques such as regression, survival analysis and others can be used to predict outcomes based on several variables from the historical time-series or variables collected elsewhere.
An example of modelling and estimation can be seen in the ONS release: The probability of automation in England.
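A minimal sketch of the idea, assuming the target variable was collected for a subset of units and a simple linear model predicts it for the rest. All variable names and figures here are invented for illustration.

```python
# Minimal sketch of estimating a variable by regression rather than asking a
# new survey question. Assumes the target variable was collected for a subset
# of units; a simple linear model then predicts it for the remaining units
# from a variable already held. All variable names and figures are invented.

def fit_simple_regression(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Units where the variable WAS collected (turnover £k -> energy spend £k)
turnover = [100.0, 200.0, 300.0, 400.0]
energy_spend = [12.0, 21.0, 33.0, 40.0]

a, b = fit_simple_regression(turnover, energy_spend)

# Predict for a unit that was never asked the question
predicted = a + b * 250.0
print(f"Predicted energy spend: £{predicted:.1f}k")
```

In practice the model's quality would need assessing against the collected values before any predicted figures were used in outputs.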
7) Select the appropriate frequency for the survey
The frequency of surveys should be kept to a minimum. Surveys should only be repeated when there is good reason to do so, for example, if it is part of a regular monitoring exercise or if circumstances are likely to have changed over time.
Reducing frequency is a simple and effective way to cut respondent burden: for example, running a survey every two years instead of every year, or running it continuously with a smaller sample but reporting only every two years.
It is important to consider the end users of the data when making any changes to frequency. For example, users of an annual survey may not be happy if published results move to every other year.
8) Use standardised classifications and definitions
Practice V5.1 of the Code of Practice for Statistics states:
“Opportunities for data sharing, data linkage, cross-analysis of sources, and the reuse of data should be taken wherever feasible. Recognised standards, classifications, definitions, and methods should be applied to data wherever possible.”
Unnecessary variations between statistical surveys can increase respondent burden while harmonisation can help reduce burden and improve the value and quality of the data received.
For example, alignment of classifications can help to avoid duplication between surveys and more value can be extracted from the data if it can be related directly to another survey.
Definitions and concepts familiar to the respondent and consistent with other similar surveys should be used. Prior consultation with a sample of respondents will help to identify the most appropriate definitions and concepts.
9) Use less burdensome methods of data collection
Alternative methods of data collection using new technology or mixed-mode approaches offer potential cost savings.
As people increasingly use the web and personal technology, a traditional pen and paper survey approach may appear inefficient and may be more burdensome to respondents. In such circumstances, switching to newer technology or mixed modes may help to maintain response rates.
The use of electronic methods for collecting data, such as Electronic Data Interchange (EDI), can also reduce respondent burden. This is particularly useful when the information being collected is stored on the respondent’s business computer system.
Another consideration is the use or provision of computer software to interface with existing business packages used by companies. However, to be effective, the software needs to be openly shared.
A report for the GSS: The Application of Alternative Modes of Data Collection in UK Government Social Surveys states that cost savings may be achievable in government social surveys by maximising the use of cheaper modes of data collection (mail, internet and telephone) in mixed-mode data collection survey designs.
However, in some circumstances, face-to-face interviewing may need to be retained to ensure data quality.
More evidence is needed to assess the trade-off between reducing cost and maintaining quality.
10) Ensure efficient processing of data
An efficient data editing process for the collected survey data should be established to minimise the need for rechecking with respondents.
Electronic data collection instruments, whether online or interviewers’ laptops, provide the opportunity to query data inputs at the time of collection, and thus avoid the need for subsequent re-contact.
However, each check made imposes burden, and too many will likely lead to the respondent deciding not to complete the survey (especially when using self-completion). This risks non-response bias.
The most important thing to ensure is good questionnaire design. Asking questions clearly and asking only for information that respondents understand and have available is key.
11) Research survey design techniques
Good survey design will not only help to collect quality data but also reduce the burden placed on respondents.
Here we outline some techniques that can be applied to any survey type, but be aware they should be adjusted based on each survey’s needs.
- Choose the appropriate collection technique
For example, use telephone surveys only when the required information is likely to be readily available. This reduces re-contact burden and costs due to checking and correcting data.
- Ensure the survey is directed to the most relevant person
For example, somebody qualified to provide the necessary answers. This will avoid time being wasted by the respondent in finding others to complete the survey or gathering the information from many people within the organisation.
- Provide advance warning of surveys (especially when they are infrequent) so that respondents have time to collect the required records.
- Give a full explanation of the purpose of a survey to respondents.
Make it clear what is required, for whom and why. Distinguish clearly which surveys are compulsory (with reasons) and which are voluntary.
- Keep surveys as simple as possible – respondents should only be asked those questions that are necessary.
- Wherever possible, ask only for information which is likely to be readily available – information which requires some searching on the part of the respondent takes time and creates costs for business respondents.
- Ensure questionnaires are as clear and helpful as possible and guidance is straightforward and to the point.
- Adapt and ‘personalise’ questionnaires to individual businesses as far as possible.
Data should be extracted directly from company records where feasible. Respondents should also be offered alternative ways of supplying data which may place less burden on them.
- Make surveys an appropriate length – questionnaires should be kept as short as possible to minimise respondent burden.
A relatively short questionnaire is also more likely to be completed than a longer one. However, the length of the survey varies and is influenced by the mode of data collection.
- Optimise routing within surveys to make long surveys shorter by having respondents automatically skip pages that are not relevant to them.
- Making whole surveys voluntary or certain questions within surveys voluntary rather than statutory will help reduce respondent burden (see practice V5.4 of the Code of Practice for Statistics).
Those managing surveys should use their judgement, as response rates to voluntary surveys or questions will be lower.
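The routing point above can be sketched as a questionnaire in which each question carries an optional condition on earlier answers, so respondents automatically skip sections that do not apply. The question wording and rules here are hypothetical.

```python
# Illustrative sketch of questionnaire routing: each question carries an
# optional condition on earlier answers, so respondents automatically skip
# questions that do not apply. Question wording and rules are hypothetical.

QUESTIONS = [
    ("owns_vehicle", "Do you own a vehicle? (y/n)", None),
    ("vehicle_type", "What type of vehicle?", lambda a: a.get("owns_vehicle") == "y"),
    ("annual_mileage", "What is your annual mileage?", lambda a: a.get("owns_vehicle") == "y"),
    ("commute_mode", "How do you usually commute?", None),
]

def route(answers_so_far: dict) -> list:
    """Return the ids of questions this respondent should actually see."""
    return [qid for qid, _, cond in QUESTIONS
            if cond is None or cond(answers_so_far)]

# A respondent with no vehicle skips the two vehicle questions entirely
print(route({"owns_vehicle": "n"}))  # ['owns_vehicle', 'commute_mode']
```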
12) Apply a user-centred design approach
In the context of survey development, the respondent is the ‘user’ and ‘User-Centred Design’ (UCD) puts the respondent at the heart of the design process.
The GOV.UK guidance on establishing user needs states: “if you don’t understand who they are or what they need from your service, you can’t build the right thing”.
To find out more see our guidance on implementing a user-centred design approach to surveys.
Case study on reducing respondent burden
The National Travel Survey (NTS) team within the Department for Transport has implemented a number of solutions to better meet user needs while reducing the burden for participants and interviewers.
The NTS is used widely and the team receives many requests from users who ask for additional information.
Good practice used by the NTS team:
- Annual review of the NTS – this is achieved by identifying sections of the survey which take the longest to complete.
- Setting up an online panel to manage the large volume of requests for new questions. This allows the NTS team to create shorter, more targeted surveys for sub-groups where it would be disproportionate to ask everyone the full NTS, and also allows a much quicker turnaround of results for stakeholders.
- Conducting cognitive testing on any new questions before being approved to add to the NTS – this assesses how well participants understand each question and the level of burden it creates.
The NTS team is currently developing a digital travel diary and undergoing extensive research to establish how a digital diary could simultaneously reduce burden and improve data quality. This includes not just reducing burden for the participants but for the interviewers as well.
The results of the user feedback survey, cognitive testing and digital diary developments can be found on the NTS section of GOV.UK.