Qualtrics RelevantID, reCAPTCHA, and other tips for survey research in 2021

Use these settings to detect and prevent spam in a survey dataset.

For my dissertation research at Carnegie Mellon University, I have created a national advertising campaign to recruit interview subjects via an online survey. The resulting interviews of U.S. residents age 18 and older will, in turn, inform the design of a final national survey.

It’s fun to return to two of my passions – connecting with people online and conducting quantitative survey research – EXCEPT when my survey gets flooded with spam! Once study info gets posted to the internet, anyone can copy it to a forum or group where people try to game paid surveys with repeated and/or inauthentic responses. This could max out my quota sampling before I reach the people who actually want to be part of this research.

Below are my tips for setting up the survey in Qualtrics to detect and prevent spam in my dataset:

  • In Qualtrics’ survey settings, I have enabled RelevantID. This checks in the background for evidence that a response is a duplicate or otherwise fraudulent, and reports the score in the metadata. This helps catch, for example, someone using a different email address to take the survey more than once in order to collect extra compensation.
  • The “Prevent Ballot Box Stuffing” setting (known as “Prevent Multiple Submissions” in the newer interface) will also help guard against spam duplicates. In past surveys, I have set this to only flag the repeat responses for review. However, for this national survey, I set it to prevent multiple submissions. A message tells anyone caught by this option that they are not able to take the survey more than once.
  • Also in Qualtrics’ survey settings, I have enabled reCAPTCHA bot detection. This is not just the “Prove you are not a robot” challenge question (which I added to the second block in the survey flow). Invisible reCAPTCHA technology also scores the likelihood that the respondent is a bot and reports that score in the metadata.
  • With all of the above enabled, I can manually filter responses in Qualtrics’ Data & Analysis tab. On the top right, the Response Quality label is clickable. It takes me to a report of what issues, if any, the above checks have flagged, and gives me the option to view the problematic responses. Once in that filter, I can use the far-left column of check boxes to delete the data and decrement quotas for any or all of the selected responses.
  • Even better, though, is to screen these responses out before they start the survey. At the top of the Survey Flow, I set Embedded Data to record the fields created by the settings above. Then, I set a branch near the top with conditions matched to that Embedded Data: True for Q_BallotBoxStuffing and Q_RelevantIDDuplicate, and score thresholds for Q_DuplicateScore, Q_RecaptchaScore and Q_FraudScore. If any of these conditions are met, the branch routes to End of Survey. See the image below or the Qualtrics page for Fraud Detection for more info, and see the sketch after this list for the same screening logic applied to exported data.
  • Finally, I want to help the real humans who respond to my ads to choose not to take the survey, if they judge that it’s not worth the risk of having a response thrown out. In my survey email’s auto-responder and in the Qualtrics block with the reCAPTCHA question, I include text to this effect: Note that only one response will be accepted. We may reject responses if the survey metadata reports duplication, low response quality and/or non-U.S. location, if the duration of the survey seems inconsistent with manual human response, or if the responses fail attention checks.
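For anyone who prefers to double-check these flags outside of Qualtrics, below is a minimal sketch (in Python, using pandas) of how the same screening logic could be applied to an exported CSV of responses. It assumes the export contains the embedded-data columns named above – Q_BallotBoxStuffing, Q_RelevantIDDuplicate, Q_DuplicateScore, Q_RecaptchaScore and Q_FraudScore – and the score cutoffs shown are placeholders to adjust, not official Qualtrics recommendations.

    import pandas as pd

    # Placeholder thresholds -- tune these to match the cutoffs used in the Survey Flow branch.
    RECAPTCHA_MIN = 0.5   # reCAPTCHA scores below this suggest a bot
    DUPLICATE_MAX = 75    # duplicate scores at or above this suggest a repeat response
    FRAUD_MAX = 30        # fraud scores at or above this suggest an inauthentic response

    def is_suspect(row) -> bool:
        """Return True if a response trips any of the fraud-detection checks."""
        return (
            str(row.get("Q_BallotBoxStuffing", "")).lower() == "true"
            or str(row.get("Q_RelevantIDDuplicate", "")).lower() == "true"
            or float(row.get("Q_DuplicateScore", 0) or 0) >= DUPLICATE_MAX
            or float(row.get("Q_FraudScore", 0) or 0) >= FRAUD_MAX
            or float(row.get("Q_RecaptchaScore", 1) or 1) < RECAPTCHA_MIN
        )

    # Qualtrics CSV exports typically include two extra header rows under the column names.
    responses = pd.read_csv("survey_export.csv", skiprows=[1, 2])
    responses["suspect"] = responses.apply(is_suspect, axis=1)

    print(responses["suspect"].value_counts())
    responses[responses["suspect"]].to_csv("flagged_for_review.csv", index=False)

This is only a post-hoc check on the downloaded data; the in-survey branch described above is still what actually stops a flagged respondent before they fill a quota slot.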
Screengrab from Qualtrics showing the placement and settings for the Fraud Detection blocks in the survey flow. See https://www.qualtrics.com/support/survey-platform/survey-module/survey-checker/fraud-detection/ for more information.

‘Normal and Easy: Account Sharing Practices in the Workplace’ – new paper for CSCW 2019

Drumroll … I am now a co-author on an archival publication in the leading venue for social computing!

Our paper, “Normal and Easy: Account Sharing Practices in the Workplace,” is being published this month in Proceedings of the ACM on Human-Computer Interaction, Vol. 3, CSCW. This is part of the Conference on Computer-Supported Cooperative Work and Social Computing – which is what most of my life’s work in information technology and media has revolved around.

However, as much as I might want to be present, I am also practicing good self-care this fall – and part of that is limiting my travel so that I don’t run myself ragged trying to be in different places while keeping up with my research and personal life! So, my advisor Laura Dabbish is presenting this research on Wed., Nov. 13, in Austin, Texas, USA.

For this research paper, we conducted two online surveys. In Study 1, we asked people a series of open-ended questions to elicit their sharing practices and begin zeroing in on their key pain points. In Study 2, we collected a series of closed-ended items to gather specific details about how and why people share digital accounts with colleagues and where they struggle with those activities. We have posted these survey protocols on our website at https://socialcybersecurity.org/files/WorkplaceSharing_OpenEndedShort_Qualtrics.pdf and https://socialcybersecurity.org/files/WorkplaceSharing_ClosedEndedLong_Qualtrics.pdf.

Our results demonstrate that account sharing in the modern workplace serves as a norm rather than a simple workaround (“normal and easy”), with the key motivations being to centralize collaborative activity and to reduce the work needed to manage the boundaries around that collaboration.

However, people still struggle with a number of issues: lack of activity accountability and awareness, conflicts over simultaneous access, difficulties controlling access, and collaborative password use. (Hands up, anyone who has a sticky note taped in their work space to share passwords for key accounts?)

Our work provides insights into the current difficulties people face in workplace collaboration with online account sharing, as a result of inappropriate designs that still assume a single-user model for accounts. We highlight opportunities for CSCW and HCI researchers and designers to better support sharing by multiple people in a more usable and secure way.

This is a BIG paper, so I’ll stop restating the abstract and send you to the link on our website: 

  • Yunpeng Song, Cori Faklaris, Zhongmin Cai, Jason I. Hong, and Laura Dabbish. 2019. Normal and Easy: Account Sharing Practices in the Workplace. In Proceedings of the ACM on Human-Computer Interaction, Vol. 3, Issue CSCW, November 2019. ACM, New York, NY, USA. Available at: https://socialcybersecurity.org/files/CSCW2019_NormalAndEasy.pdf

‘A Self-Report Measure of End-User Security Attitudes (SA-6)’: New Paper

This month is a personal milestone – my FIRST first-author usability research paper is being published in the Proceedings of the Fifteenth USENIX Symposium on Usable Privacy and Security (SOUPS 2019).

I will present on Monday, Aug. 12, in Santa Clara, Calif., USA, about my creation of the SA-6 psychometric scale. This six-item scale is a lightweight tool for quantifying and comparing people’s attitudes about using expert-recommended security measures. (Examples of these include enabling two-factor authentication, going the extra mile to create longer passwords that are unique to each account, and taking care to update software and mobile apps as soon as these patches are available.)

The scale itself is reproduced below (download the PDF at https://socialcybersecurity.org/sa6.html):

  • Generally, I diligently follow a routine about security practices.
  • I always pay attention to experts’ advice about the steps I need to take to keep my online data and accounts safe. 
  • I am extremely knowledgeable about all the steps needed to keep my online data and accounts safe. 
  • I am extremely motivated to take all the steps needed to keep my online data and accounts safe.
  • I often am interested in articles about security threats. 
  • I seek out opportunities to learn about security measures that are relevant to me.

Response set: 1=Strongly disagree, 2=Somewhat disagree, 3=Neither disagree nor agree, 4=Somewhat agree, 5=Strongly agree. Score by taking the average of all six responses.
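If it helps to see the scoring spelled out, here is a minimal sketch in Python; the item names (sa6_1 through sa6_6) are placeholders for however a dataset happens to label the six items.

    def score_sa6(responses: dict) -> float:
        """Return the SA-6 score (1-5) as the mean of the six item responses."""
        items = [f"sa6_{i}" for i in range(1, 7)]
        values = [responses[item] for item in items]
        if not all(1 <= v <= 5 for v in values):
            raise ValueError("Each item must be coded 1 (Strongly disagree) to 5 (Strongly agree).")
        return sum(values) / len(values)

    # Example: a respondent who mostly agrees with the six statements scores about 4.17.
    print(score_sa6({"sa6_1": 4, "sa6_2": 5, "sa6_3": 3, "sa6_4": 4, "sa6_5": 4, "sa6_6": 5}))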

If you are a researcher who can make use of this work, please download our full research paper and cite us as follows: Cori Faklaris, Laura Dabbish and Jason I. Hong. 2019. A Self-Report Measure of End-User Security Attitudes (SA-6). In Proceedings of the Fifteenth Symposium on Usable Privacy and Security (SOUPS 2019). USENIX Association, Berkeley, CA, USA. DOI: 10.13140/RG.2.2.29840.05125/3.

Many thanks to everyone who helped me develop and bring this project in for a landing, particularly Laura and Jason, Geoff Kaufman, Maria Tomprou, Sauvik Das, Sam Reig, Vikram Kamath Cannanure, Michael Eagle, and the members of the Connected Experience and CHIMPS labs at Carnegie Mellon University’s Human-Computer Interaction Institute. Funding for our Social Cybersecurity project is provided by the U.S. National Science Foundation under grant no. CNS-1704087.