[ 8 minute read ]
In our previous post about best practices for survey design and data collection, we explored defining the goal of the survey and considerations for developing questions. We continue the discussion here by focusing on considerations for developing responses, tips on general survey design, deploying and testing the survey, communications for maximizing returns, and determining the ideal number of returns.
1. Considerations for Developing Responses
a. Matching Response Types to Question Types
The ways in which a respondent can answer depend on the type of question asked. An open-ended question such as "What did you think about our fundraising event?" is usually accompanied by a blank answer box so the respondent can provide a personalized long answer. Questions with a limited number of answer choices (age or annual donations to all charities, for example) are best approached using a multiple choice format with ranges provided. For "age", the ranges could be 13 and under, 14-18, 19-30, 31-40, 41-50, and 51+. When a survey has many multiple choice questions, each with many answer choices, consider using the "drop down box" format so the survey doesn't appear overly long, which could lead to higher exit rates.
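To make the idea of mutually exclusive, exhaustive ranges concrete, here is a minimal Python sketch (an illustration only, not part of any survey tool); the AGE_RANGES list and age_bucket function are hypothetical names invented for this example.

```python
# Hypothetical sketch: define age ranges so that every age falls into exactly one
# multiple choice answer (no gaps, no overlaps).
AGE_RANGES = [
    (0, 13, "13 and under"),
    (14, 18, "14-18"),
    (19, 30, "19-30"),
    (31, 40, "31-40"),
    (41, 50, "41-50"),
    (51, None, "51+"),   # None means no upper bound
]

def age_bucket(age: int) -> str:
    """Return the answer choice label that covers the given age."""
    for low, high, label in AGE_RANGES:
        if low <= age and (high is None or age <= high):
            return label
    raise ValueError(f"Age {age} is not covered by any range")

print(age_bucket(13))  # "13 and under"
print(age_bucket(65))  # "51+"
```

The same check, applied while drafting the survey, catches gaps such as an "under 13" choice followed by "14-18", which would leave 13-year-olds with no honest answer.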
b. Consider Using the Likert Scale for Perception or Future Behaviour Questions
- An example of a perception question is "how satisfied are you with our fundraising event?" This could also be worded in an open-ended manner ("what did you think of our fundraising event?"), but it would then be more challenging to compare one respondent's answer to another's. An example of a future behaviour question is "how likely are you to donate to our cause next year?"
- The Likert scale is an odd-numbered answer scale, usually consisting of five or seven points, ranging from one extreme to the other. For example, for the question “how satisfied are you with our member services?”, the answer scale would range from “very unsatisfied” to “very satisfied”, with “neutral” as the middle answer. Note that only the two extreme answers and the middle answer need to be labelled.
- The Likert scale is ideal because it offers more robust measurement. If the question were worded “are you satisfied with our member services?”, the answers would simply be “yes” and “no.” That wording, however, doesn’t capture the extent to which the respondent is satisfied with your member services. If two people respond “yes”, for example, one may be only mildly satisfied whereas the other is very satisfied. You would have no way of knowing this from the answers alone, and you would have to follow up with respondents to find out what they meant by “yes”. Save time and avoid ambiguity by using the Likert scale right from the beginning.
- It is important that the two extreme answers be worded in parallel. For example, it would be inadvisable to pair “very unsatisfied” with “satisfied”, or “very unsatisfied” with “very content.” The scale could appear "uneven" depending on how respondents interpret one word versus the other, biasing them toward selecting an answer on one end of the scale over the other.
- When using many scaled questions in a survey, reverse the wording once in a while so that “very unsatisfied”, “not very often”, and other negative labels appear sometimes on the left side of the scale and sometimes on the right. This helps prevent respondents from habitually clicking on the same side, which could artificially bias responses later in the survey (a sketch of how such reversed items can be re-coded for analysis follows this list).
- This scale is typically not used with demographic questions.
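To show how reversed items come out in the data, here is a minimal Python sketch (an illustration built on assumptions, not tied to Google Forms, SurveyMonkey, or any other tool): it assumes the survey tool records each answer as a position from 1 (leftmost) to 5 (rightmost), so questions displayed with the scale flipped must be re-coded before responses can be compared or averaged.

```python
# Hypothetical sketch: re-code 5-point Likert answers so a higher number always
# means "more satisfied", even for questions whose scale was displayed in reverse.
SCALE_POINTS = 5

def recode(position: int, scale_flipped: bool = False) -> int:
    """Return a 1-5 score where 5 always means the most positive answer.

    position      -- the recorded answer, 1 (leftmost choice) to 5 (rightmost)
    scale_flipped -- True if this question showed the negative label on the right
    """
    if not 1 <= position <= SCALE_POINTS:
        raise ValueError(f"position must be between 1 and {SCALE_POINTS}")
    return (SCALE_POINTS + 1 - position) if scale_flipped else position

print(recode(5))                      # 5: rightmost choice on a normal item ("very satisfied")
print(recode(5, scale_flipped=True))  # 1: rightmost choice on a flipped item ("very unsatisfied")
```

If your survey tool records the answer label itself rather than its position, this re-coding step isn't needed; check how the exported data looks before analysis.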
c. Include All Relevant Alternatives as Answer Choices
- When appropriate, include “don’t know” or “not applicable” as answer choices. “Don’t know” may be needed for a question such as “how likely are you to attend our annual fundraiser next summer?” The respondent may honestly not know whether she or he will attend, so include “don’t know” for this question. When a survey that includes the question “how satisfied are you with our member services?” is sent to both donors and potential donors, “not applicable” should be included as an answer choice, since not all recipients may be members.
- Including all relevant answer choices makes the survey less frustrating to complete; respondents will feel less like they are being manipulated into answering in a certain way.
- Similarly, include “other” or “none” as answer choices when applicable. This is especially true for questions that could theoretically have an infinite number of answers. For example, for the question “what types of organizations do you donate to at least once a year?”, the answers can include “social service”, “health”, “environmental” and “religious.” Include “other” as an answer choice, but in doing so, ensure that there is a field for the respondent to write his/her answer.
2. Tips on General Survey Design
- Include a descriptive title so respondents know what and who the survey is for (e.g. “Toronto Cancer Foundation: donor satisfaction survey”)
- The survey should take no longer than 15 minutes, on average, for a respondent to complete; ideally, it should take 5 to 10 minutes. Although it’s tempting to include many questions, the goal is to achieve as high a return rate as possible, and since most people lead busy lives, a shorter survey will maximize returns.
- It is highly recommended that the survey not require any personal information (e.g. name or email) so that respondents feel more comfortable when answering questions.
- Leave demographic questions (e.g. age, gender, marital status, income, education) until the end so as not to tire the respondent early in the survey.
- Allow some questions to be optional, especially questions of a sensitive nature such as income level or marital status.
- Include a “comment” box at the end. This is very important, as respondents may have something to say beyond the scope of the survey. Whenever a stakeholder takes the time to write additional comments (good or bad!), it means he or she is engaged, and his or her voice should be heard.
3. Deploying and Testing the Survey
- There are many free online services for deploying the survey, such as Google Forms and SurveyMonkey. Although surveys can also be printed, online deployment has many benefits, including lower cost, automatic and centralized data collection, and in some cases free analytical tools.
- Telephone and in-person surveys are additional options; however, online methods are recommended. Telephone and in-person surveys require more human resources and mean a loss of anonymity for the respondent. It is understandable, though, that these methods may be needed to increase accessibility in some cases.
- Test the survey by sending it to a small group of employees and volunteers. Ask them for feedback on wording, length, etc. Refine the survey as necessary.
- Assuming your target audience is a defined set of people (e.g. existing donors), control access by providing the link to the finalized survey only to that defined set of people. Don’t post the link in publicly accessible areas such as your website or social media channels, or you will contaminate your data set with responses from outside your target audience; one simple way to limit access is sketched below.
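As one way to keep the survey limited to your defined audience, here is a minimal, hypothetical Python sketch that generates a unique link per recipient; the base URL and token scheme are invented for illustration, and most survey platforms offer their own access controls, which are preferable when available.

```python
# Hypothetical sketch: give each intended recipient a unique link so responses
# arriving without a known token can be excluded from the data set.
import secrets

BASE_URL = "https://example.org/surveys/donor-satisfaction"  # placeholder URL

def unique_links(recipient_emails):
    """Map each recipient e-mail to a one-off survey link."""
    return {email: f"{BASE_URL}?token={secrets.token_urlsafe(16)}"
            for email in recipient_emails}

links = unique_links(["donor1@example.com", "donor2@example.com"])
for email, link in links.items():
    print(email, link)
```

Keep the e-mail-to-token mapping internal; the survey itself still doesn't need to ask for any personal information.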
4. Communications for Maximizing Returns
To maximize the number of survey returns, communicate the following to respondents, for example, in an email:
- State who you are: the organization and the main contact person for the survey, in case there are any questions or concerns.
- Give a relatively tight, specific deadline of one to two weeks (e.g. December 23, 10:00 PM). If you give respondents a month to complete it, they may postpone the survey until the very end and, in doing so, likely forget to do it at all!
- Explain what the survey is for and how it will benefit the organization, and where applicable, how it will benefit the respondent.
- When collecting information of a sensitive nature, let respondents know that the information will only be seen by internal resources.
- Let them know approximately how long the survey will take to complete so they can budget their time appropriately, increasing the chances of completion.
5. Determining the Ideal Number of Returns
The ideal number of returns depends on the size of the target audience. There are online calculators that can help you determine how many returns you need for statistically significant results (a rough version of that calculation is sketched below). Alternatively, one survey expert says it is more important to focus on the absolute number of returns: Paul Collins of Clarity Surveys suggests that 50 or more returns is optimal. What if you have less than the adequate return rate, or fewer than 50 returns, after the collection period has finished? You can send out another e-mail explaining that you have not yet received the number of returns required and asking recipients to complete the survey if they haven’t already done so. Alternatively, you can base organizational decisions on the returns you do have; in that case, pilot any changes or run the ideas by a focus group before a full roll-out.
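For readers who want to see what those online calculators are doing, here is a back-of-the-envelope Python sketch of the standard sample-size formula for a proportion, with a finite population correction; the 95% confidence level, 5% margin of error, and the example audiences of 500 and 5,000 donors are illustrative assumptions, not figures from this post.

```python
# Hypothetical sketch: how many returns are needed for a given margin of error,
# confidence level, and audience size (finite population correction applied).
import math

def required_returns(population: int, margin_of_error: float = 0.05,
                     z: float = 1.96, p: float = 0.5) -> int:
    """Sample size needed from a finite target audience.

    population      -- size of your target audience (e.g. number of existing donors)
    margin_of_error -- acceptable error, e.g. 0.05 for +/- 5%
    z               -- z-score for the confidence level (1.96 is roughly 95%)
    p               -- expected proportion; 0.5 is the most conservative choice
    """
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)  # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))    # finite population correction

print(required_returns(500))   # about 218 returns for an audience of 500
print(required_returns(5000))  # about 357 returns for an audience of 5,000
```

Note that the result is the number of completed returns, not the number of people to invite; divide by your expected response rate to estimate how many invitations to send.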
In our next blog article, we explore how to derive actionable insights from the results of your survey.
In the meanwhile, do you have any questions on the above tips? Comment below or e-mail us!