Tips from my online survey research in 2018

I have a special affinity for quantitative research. Specifically: I LOVE online survey work! It’s a very efficient method to gather lots of data at scale that is almost automatically formatted for easy analysis and visualization.

Even though I have past experience with designing, collecting and analyzing survey data around current events, politics and marketing, my academically focused work this year really sharpened my survey skills. I used Qualtrics almost exclusively as the online platform for creating and administering surveys, and I recruited larger samples than I ever have before using Amazon Mechanical Turk (MTurk), Qualtrics' own panel aggregation and a study pool run by Carnegie Mellon University's Center for Behavioral and Decision Research (CBDR). My collaborators and I also recruited participants through the US-based Survey Monkey; Prolific Academic, a UK-based company that also recruits US-based workers; and the QQ and SoJump survey platforms based in China.

The increased stakes and many unknowns I encountered as my research evolved led me to reach out on social media, at workshops and in my own labs for help and for new ideas. I also reached out to MTurk workers for advice on fine-tuning my surveys, and I signed up as an MTurk worker myself to see how surveys look from the other side. Below, I share some of what I learned during this year's academic work that may also benefit your own survey research.

Formatting a survey – on Qualtrics or any other platform

Write out the entire survey in a Microsoft Word or Google document before transferring it to the Qualtrics interface. Then: Proofread, proofread, proofread. Hire an English-fluent editor if English is not your first language. (English IS my first language, and I consider myself an expert writer, but even I have struggled to find and correct every typo – and I still received feedback in peer review on one research paper that I had misused certain words in my items.)

Check that you are asking one and only one question in each survey item – look for words like "and" to see if you are actually including two or more ideas in the same item. Fine-tune the item language to remove any ambiguity about what you mean, adding explanations of your terms inside each item, in addition to the survey instructions, if necessary.

Keep the number of questions consistent across pages. Repeat the instructions at the top if you need to break a long block of questions into multiple pages.

Include a running header on each page showing how far through the survey the participant is – such as "Page x of n." As you fine-tune this, also remember to go back to the very first page (where you put your Institutional Review Board-approved online consent form) and adjust the procedure description to accurately note the number of pages that the survey ends up broken into.
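To keep the running headers and the consent form's page count in sync, I find it easiest to compute both from one place. Here is a minimal sketch (my own illustration, not a Qualtrics feature; Qualtrics can also display its own progress bar):

```python
import math

def page_headers(total_items: int, items_per_page: int) -> list[str]:
    """Build 'Page x of n' headers for a survey split into equal-sized pages."""
    n = math.ceil(total_items / items_per_page)
    return [f"Page {x} of {n}" for x in range(1, n + 1)]

print(page_headers(total_items=38, items_per_page=10))
# ['Page 1 of 4', 'Page 2 of 4', 'Page 3 of 4', 'Page 4 of 4']
```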

Check your response scales to make sure they are consistent from block to block and with other surveys – for example, placing the extreme negative value of a scale on the left and the extreme positive on the right: "Strongly Disagree = 0 to Strongly Agree = 5." It's a good idea to stick with the patterns that your participants are already used to interacting with in other surveys (they might have more experience with survey research than you do!).
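To make that concrete, here is a minimal sketch of keeping one shared label-to-value mapping and reusing it for every block and survey; the labels and the 0-5 coding here are just illustrative:

```python
# A shared label-to-value mapping, reused for every block and survey, so that
# "Strongly Disagree" is always the low end and the coding never flips.
AGREEMENT_SCALE = {
    "Strongly Disagree": 0,
    "Disagree": 1,
    "Somewhat Disagree": 2,
    "Somewhat Agree": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

def recode(labels):
    """Convert label responses to numeric codes using the shared scale."""
    return [AGREEMENT_SCALE[label] for label in labels]

print(recode(["Strongly Agree", "Disagree"]))  # [5, 1]
```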

Repeat response scales periodically – even every question – so that the response scale labels are always in view with the question. (I hate having to keep scrolling up and down or, worse, losing my place in a long survey – it’s a big reason why I quit surveys early.)

Pilot the survey with people you can sit with and watch as they take the survey. Ask them to "think aloud" as they go through each screen so you can get a sense of their thought process. Take notes on which items seem clear to them and which they find confusing, rewording the weak items on the spot or separately drafting alternate wordings with the help of your pilot tester.

THEN, pilot your survey with workers on the same platform or website where you intend to deploy your final survey. Include an open-ended feedback field at the end of each page, at the end of the survey, or both, so workers can contribute their advice.

Getting good responses – my $0.02 on “gotchas” & screeners

I am torn on including “gotcha” or attention check questions. On the one hand, they can be a valuable tool to help remind your participant to keep their full attention on a survey and to weed out bad-faith responses. On the other hand … as a human-computer interaction expert, I am keenly aware of how difficult it can be to read and comprehend on-screen text even with the most usable interface designs, and it feels silly to reject people’s work because they neglected to catch one word in a sentence.

A Qualtrics representative advised me to 1) bold, 2) underline, and 3) italicize any words in questions designed to weed out bad responses or to catch people who are reading lazily, in order to make sure that my key text in a survey question is read. By including this kind of scaffolding for reading comprehension in the item formatting, the check questions seem fair.

If you intend to use certain questions for screening or to fill demographic quotas, make sure to put them at the front of the survey so that people don’t waste time taking a survey that they do not qualify for. You can often include programming logic that sends people who do not qualify to a “Thanks for your interest!” web page indicating why they were suddenly ejected from the survey.
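Survey platforms implement this with built-in branch or skip logic rather than code, but here is a minimal Python sketch of the routing decision, with made-up eligibility criteria, just to show the shape of it:

```python
# A sketch of screen-out routing with hypothetical criteria; on Qualtrics this
# would be configured with branch/skip logic in the survey flow, not code.
def route_participant(age: int, country: str, quota_remaining: int) -> str:
    """Decide whether a respondent continues or is routed to an exit page."""
    if country != "US" or age < 18:
        return "exit_page: Thanks for your interest! This study requires US-based adults."
    if quota_remaining <= 0:
        return "exit_page: Thanks for your interest! This demographic quota is already full."
    return "continue_to_survey"

print(route_participant(age=45, country="US", quota_remaining=12))  # continue_to_survey
```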

However, if I am not qualifying my survey respondents with screener questions, I find it best to front-load the most important items for my research, then present the other survey measures in a thematic flow so that each page helps prime the participant to answer the questions on the subsequent page, and then to put demographic items on the last page. This way, even if a participant doesn't finish the survey, I am still likely to receive a partial response that could be helpful.

Compensation – Pay a minimum of $10/hour

For our overall research project, we are using a base rate of $10/hour to compensate participants, including those taking online surveys. This is not too far from the $12-$15/hour that we pay to research assistants. We calculate the precise compensation for a survey by running pilot tests (see above) to estimate the average time it will take a participant to fill out the entire survey – for example, if the survey takes 15 minutes, we set compensation at $2.50 per participant.
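The arithmetic is simple, but I find it handy to script so the number stays right whenever the time estimate changes. A quick sketch (the function and the round-up-to-the-cent rule are my own, not any platform's tooling):

```python
import math

def per_participant_pay(estimated_minutes: float, hourly_rate: float = 10.0) -> float:
    """Pay for one completed survey, rounded up to the nearest cent."""
    return math.ceil(estimated_minutes / 60 * hourly_rate * 100) / 100

print(per_participant_pay(15))  # 2.5  (i.e., $2.50)
print(per_participant_pay(20))  # 3.34 (rounded up to the cent)
```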

Note that this is a floor, not a ceiling: on top of the base rate, we pay the platform providers for their overhead and for locating specialized participants, such as “US-located managerial workers age 50 or older” or “male internet users age 40-49 without a bachelor’s degree.” So the total cost of a research study per participant can be higher than $10/hour – and for the sake of simple math, I often just double my mental estimate to $20/hour to account for these “other costs.”

My agreement with my faculty advisors is to keep each individual survey’s budget to no more than $1,000 unless I get permission. I juggle the combination of survey duration, number of participants and overhead to stay within that guideline. For the example of a 15-minute survey, I would be able to recruit 200 participants, each receiving $2.50 per survey, with an additional $2.50 per survey budgeted for fees, qualifications or other overhead.
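Here is that budget math as a short sketch; the doubling for overhead is my rough rule of thumb from above, not a published fee schedule:

```python
def max_participants(budget: float, pay_per_survey: float, overhead_multiplier: float = 2.0) -> int:
    """Largest whole number of participants affordable within a fixed budget."""
    total_cost_per_participant = pay_per_survey * overhead_multiplier
    return int(budget // total_cost_per_participant)

# 15-minute survey at $10/hour -> $2.50 pay + ~$2.50 fees/overhead per participant.
print(max_participants(budget=1000, pay_per_survey=2.50))  # 200
```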

CMU has posted guidelines about how to compensate human subjects in research. Violating these will lead to your IRB application being denied.

Amazon Mechanical Turk – some non-obvious quirks

As a requester, I was seeing many more survey responses recorded in the Qualtrics metadata than were marked "Submitted" in Amazon Mechanical Turk's interface for Human Intelligence Tasks (HITs). I also received a few emails from MTurk workers who were unsure whether they had done something wrong, because they could not find a "Submit" button in the MTurk interface.

I found an explanation because I periodically take surveys myself as an MTurk worker. Later in 2018, I noticed that some requesters were including specific directions in their survey HITs telling workers to click the two >> arrows at the bottom of the last Qualtrics screen – the one that thanks you for recording your survey response. Aha!

Something else that wasn't obvious to me: For a while, I was asking workers not to take a repeated survey of mine if they had previously filled it out. But I received an email from a worker saying that it is very, very difficult for them to check that type of information. As a worker myself, I found that I could not easily see which surveys I had taken beyond the 45-day period listed in the HIT Dashboard. I removed that instruction from my repeat surveys.

As an MTurk worker myself, I quickly figured out how to speed through surveys – use the arrow and Tab keys! This lets a survey taker finish far, far faster than a pilot tester would. The flip side happened to me too – I would get a call, or have an office mate or pet interrupt me, and that led me to take far, far longer to complete some surveys than if I had gone straight through in one sitting.

This means there is a legitimate explanation for the wide variation you may see in the duration of each survey response – some participants will have this kind of expertise, while others will have reading difficulties (see my comments above) or distractions that cause the survey to take longer than estimated. As a result, I have been throwing out fewer survey responses simply because their durations differ from the time my pilot testers needed.
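If you do want to look at durations, one gentler approach – my own sketch, with illustrative thresholds – is to flag only the extreme outliers for manual review alongside other quality signals, rather than rejecting them outright:

```python
def flag_unusual_durations(durations_sec, pilot_median_sec, low=0.25, high=4.0):
    """Return the indices of responses much faster or slower than the pilot median."""
    flagged = []
    for i, duration in enumerate(durations_sec):
        if duration < pilot_median_sec * low or duration > pilot_median_sec * high:
            flagged.append(i)
    return flagged

# With a 15-minute (900-second) pilot median, flag the 95 s and 4,100 s responses.
print(flag_unusual_durations([95, 900, 4100], pilot_median_sec=900))  # [0, 2]
```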

I hope these tips are helpful to you! I will update this post and/or put up new posts as I encounter more issues in survey work that could be helpful to others.

Author: Cori

Cori Faklaris (aka "HeyCori") is an assistant professor at the University of North Carolina at Charlotte, Department of Software and Information Systems, College of Computing. Faklaris received her PhD in human-computer interaction in 2022 from Carnegie Mellon University's Human-Computer Interaction Institute, School of Computer Science, in Pittsburgh, PA, USA. She also is a social media expert and longtime journalist, and/or "Doer of Things No One Else Wants to Do."
