If none of our survey templates are suitable, you can create a custom survey that is made up entirely of your own questions. Qlearsite supports a number of question types – free text, yes/no, scales, select one, select many, slider and matrix – so you can create the survey you need.
Here are some best practices to consider, because small changes to the types of questions and answer options in your survey can seriously affect the quality and value of its results:
Avoid survey fatigue by keeping your question set short
Keep your surveys as short as possible by limiting the number of questions you ask. Simply put, long surveys are boring and can lead to “survey fatigue.” Once fatigue sets in, respondents either quit the survey or stop paying attention and select answers at random until it’s complete. Either way, your data is compromised. We recommend a question set of no more than 20 questions, including both open text and closed questions.
Use simple and direct language
Avoid big, complex words and words that could have multiple meanings. Your questions should be simple, short and clear. For example, instead of asking “While working from home, I have found new ways to engage and interact with my team which has enabled us to serve clients effectively”, you could dedicate a section to the topic of working from home and ask simpler questions such as:
1. I have found new ways to remain connected to my team
(5-point Likert scale – Strongly Agree to Strongly Disagree)
2. My team is able to serve clients effectively
(5-point Likert scale – Strongly Agree to Strongly Disagree)
Some concepts mean different things to different people. When asking questions, try to be as specific as possible. Instead of asking “Do you drive regularly?” you could ask “On average, how many hours per day do you drive?” This now gives you a more specific, measurable and objective answer.
Ask one thing per question
Each question should ask one thing only. It seems simple, but many of us fall into the “double-barreled” question trap.
E.g. “Do you eat meat and veg on a daily basis?” can be a tough question to answer: what if somebody eats just meat, or just veg? There’s no clear way for the respondent to answer. A better approach is to split it into two separate questions, one that focuses on meat consumption and one that focuses on veg consumption.
A quick tip to check your survey for double-barreled questions is to look for words such as “and” or “or” in your questions.
Break down large topics into multiple questions
Often large topics touch on more than one area of a product/service. The best way to gain the most value is by breaking them down into multiple, more tangible questions.
For example, “customer satisfaction” is a common topic that businesses want to explore and it’s a big question topic that can be split into smaller questions to gain greater insight. Instead of asking “How satisfied are you with the product?”, you could instead ask respondents to give their opinion on three separate statements:
I enjoy using this product. (5-point Likert scale – Strongly Agree to Strongly Disagree)
This product meets my needs. (5-point Likert scale – Strongly Agree to Strongly Disagree)
I would purchase from this company again. (5-point Likert scale – Strongly Agree to Strongly Disagree)
The three questions provide insight into different aspects of customer satisfaction, and the average of the scores gives you a general measure of satisfaction that you can track over time and work to maintain or improve. Together, these three simple questions give you a precise, actionable answer to the question of customer satisfaction.
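If you score each answer from 1 (Strongly Disagree) to 5 (Strongly Agree), combining the three statements into one satisfaction measure is a simple average. A minimal sketch in Python, using made-up responses for illustration:

```python
# Hypothetical 1-5 Likert scores from four respondents for each statement.
responses = {
    "I enjoy using this product": [4, 5, 3, 4],
    "This product meets my needs": [5, 4, 3, 4],
    "I would purchase from this company again": [4, 4, 2, 5],
}

# Average each statement across respondents, then average the
# statement means into a single overall satisfaction score.
statement_means = {q: sum(scores) / len(scores) for q, scores in responses.items()}
overall_satisfaction = sum(statement_means.values()) / len(statement_means)

print(statement_means)
print(round(overall_satisfaction, 2))
```

Tracking `overall_satisfaction` survey after survey gives you the trend line, while the per-statement means tell you which aspect to work on.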
Use more Likert questions
One simple way to take your survey insight from good to great is to use Likert questions. Respondents answer on a 1–5 or 1–7 scale; we recommend a 1–5 scale such as “Strongly Disagree, Disagree, Neither Agree nor Disagree, Agree, Strongly Agree.” Likert questions do a good job of capturing varied but accurate responses, which increases the level of analysis you can perform on your results.
Likert questions also allow us to understand how questions relate to one another, adding another valuable layer of insight. When you ask Likert questions, you open the door to checking correlations between questions, which lets you say “People who are more likely to ABC are less likely to think DEF.” At Qlearsite, we can also run a linear regression and say “Factors A, B and C have the biggest impact on D.” Put simply, we can draw conclusions such as “senior employees, on average, spend more time in the office than junior employees.”
Positive, negative and neutral options
When using Likert questions (recommended), remember to include a neutral option. This usually means giving respondents a “Neither Agree nor Disagree” option in the middle of the scale, and an “N/A” option at the end of it if the question does not apply to everyone.
In addition, respondents can find your survey frustrating if it forces them to answer questions in ways that aren’t entirely true.
E.g., if you ask “What’s your method of travel to and from work?” and only provide train, car and walking as options, people who take the bus or cycle don’t have a clear answer choice. An easy way to get around this is to provide an “Other” option.
Avoid leading questions
Our opinions can make their way into survey questions, subtly encouraging respondents to answer questions in a certain way and potentially compromising survey results.
For example, asking “Do you think the organisation should cut the social budget to pay for extra cleaning?” would lead to different answers than asking, “Should the organisation bring in extra cleaners to protect our employees?” despite both questions relating to the same topic.
To avoid leading questions, introduce a quality assurance (QA) step in the question set creation process whereby a colleague reviews the question set for neutrality. If your colleague can guess what kind of answer you’re looking for, that suggests some bias and you should consider rewriting the question.
Remove question bias
We touched on survey question bias in the previous point, but survey response bias is also important to consider when creating a question set. Asking for demographic information like gender, race or age at the beginning of a survey can influence how people respond to the rest of it. (This is also known as stereotype threat.) For example, studies have shown that when women identify their gender before a mathematics test, they tend to perform worse than women who aren’t asked to identify their gender.
The best way to prevent bias and stereotype threat is to ask sensitive questions, including those about demographics, at the end of surveys. Bias can also creep in unintentionally when creating question sets. Asking someone “How important do you think mental wellbeing is?” followed by “How much do you plan to spend on mental wellbeing next year?” could lead to bias: if someone says mental wellbeing is very important, they may inflate the amount they plan to spend in the next question. Careful phrasing, as discussed above, helps here, and you can also randomise the question order to prevent this type of bias.
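Randomising question order can be as simple as shuffling the question list per respondent. A sketch of one way to do it, with invented questions and a made-up `respondent_id` parameter; seeding by respondent keeps each person's order stable if they resume the survey:

```python
import random

# Hypothetical question set; shuffling per respondent means an earlier
# question is less likely to systematically bias a later one.
questions = [
    "How important do you think mental wellbeing is?",
    "How much do you plan to spend on mental wellbeing next year?",
    "How satisfied are you with your current workload?",
]

def ordered_for(respondent_id: int) -> list:
    """Return the question set in a random order, reproducible per respondent."""
    rng = random.Random(respondent_id)  # seed by respondent for a stable order
    shuffled = questions[:]             # copy so the master list is untouched
    rng.shuffle(shuffled)
    return shuffled

print(ordered_for(1))
print(ordered_for(2))
```

Every respondent still sees exactly the same questions; only the order varies between respondents.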
Bias can also occur when interpreting the survey. Without knowing it, you might treat the opinions of people differently simply because of their demographic answers. For example, if you are a department head, you may treat the opinion of those in your department with greater importance than those in another department. To avoid this, you might not want to gather any demographic data at all and go for a totally anonymous survey instead.
The wording used for survey questions is important and shapes the output from the survey. Instructions about why a survey is being conducted can impact the way respondents answer questions. For example, framing a customer follow-up survey as an evaluation of a team member may lead to different responses than if you framed the survey as a tool to improve your processes.
For example, “Did John solve your problem well?” may encourage different answers than “Did we solve your problem today?”, because respondents are being asked about a specific occurrence with one person. The latter is a more neutral way to phrase the question and gains feedback on your process as opposed to John alone.
People like to help each other and if you tell them that the survey has a specific objective, they may respond in a way that helps you achieve that goal instead of answering the questions honestly. To prevent this, try to be neutral when describing the survey.
Include a catch-all question
Surveys are often targeted at specific topics, so respondents may not have been able to express all of their views through the survey questions. A good way to ensure you don’t miss any key thoughts, actions or reactions is to include a final “catch-all” question at the end of the survey, e.g. “Are there any final thoughts you’d like to share?”
If this is a lot to start with – why not use one of our pre-built templates?