Idea #1 for future work: Investigating the role of resistance in cybersecurity adoption

I’ve noticed a peculiar pattern – or, more accurately, a non-pattern – across all my studies of usable security. At every stage of adoption, people exhibit resistance to taking up cybersecurity measures (such as installing a password manager or creating unique passwords for each online account). I expected to see a lot of resistant attitudes among people who do not adopt security practices, or who adopt them only when mandated to do so. But resistance is also high among research participants who have voluntarily adopted security practices and who seem highly engaged with cybersecurity overall.

As it happens, I have already developed a measure of security resistance that can help in these studies. To score it, take the average of a participant’s Likert-type ratings on these four items (1=Strongly Disagree to 5=Strongly Agree) [handout]; a minimal scoring sketch follows the list:

  • I am too busy to put in the effort needed to change my security behaviors.
  • I have much bigger problems than my risk of a security breach.
  • There are good reasons why I do not take the necessary steps to keep my online data and accounts safe.
  • I usually will not use security measures if they are inconvenient.
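
To make the scoring concrete, here is a minimal sketch in Python. It assumes each response has already been recorded as an integer from 1 to 5; the short item keys are hypothetical labels standing in for the four statements above, not official item IDs.

```python
def sa_resistance_score(responses: dict[str, int]) -> float:
    """Average the four resistance items; a higher score means more resistance."""
    items = ["too_busy", "bigger_problems", "good_reasons", "inconvenient"]
    ratings = [responses[item] for item in items]
    if any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("Each rating must be on the 1-5 Likert scale.")
    return sum(ratings) / len(ratings)

# Example: a participant who largely agrees with the resistance items.
print(sa_resistance_score(
    {"too_busy": 4, "bigger_problems": 5, "good_reasons": 3, "inconvenient": 4}
))  # 4.0
```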

So far, I have found that resistance alone is not a reliable differentiator of someone’s level of cybersecurity adoption. For example, in my working paper describing the development and validation of the SA-13 security attitude inventory, I find that my SA-Resistance scale (the one described above) is significantly negatively associated with self-reported Recalled Security Actions (RSec), but also significantly positively associated with Security Behavior Intentions (SeBIS). By contrast, in a more recent survey (forthcoming), I found that a measure similar to SA-Resistance was significantly positively associated with a self-report measure of password manager adoption, but significantly negatively associated with a measure of being in a pre-adoption stage similar to intention. A research assistant in the 2021 REU program, Faye Kollig, likewise found no significant differences in resistance among participants in our interview study, which aimed to identify commonalities in security adoption narratives.

At the same time, adding these resistance items to those measuring concernedness, attentiveness, and engagement (the SA-13 inventory) appears to create a reliable predictor of security behavior. In a study at Fujitsu Ltd., Terada et al. found a correlation between SA-13 and actual security behavior for both Japanese and American participants (p<.001) that was stronger than that for SA-6. The authors speculate that this is because of the inclusion of the resistance items.

Is it consistently the case that resistance helps to differentiate a person’s level of adoption only when it is balanced against other attitudes? Or is some other mechanism responsible? I hope to follow up on these results with a student assistant when I join UNC Charlotte’s Department of Software and Information Systems this fall as an assistant professor.

Coincidentally, the New Security Paradigms Workshop has a theme this year of “Resilience and Resistance.” I may submit to the workshop myself, but I also hope that other prospective attendees will find my resistance scale of use in their work.

Qualtrics RelevantID, reCAPTCHA, and other tips for survey research in 2021

Use these settings to detect and prevent spam in a survey dataset.

For my dissertation research at Carnegie Mellon University, I have created a national advertising campaign to recruit interview subjects via an online survey. The resulting interviews of U.S. residents age 18 and older will, in turn, inform the design of a final national survey.

It’s fun to return to two of my passions – connecting with people online and conducting quantitative survey research – EXCEPT when my survey gets flooded with spam! Once study info gets posted to the internet, anyone can copy it to a forum or group where people try to game paid surveys with repeated and/or inauthentic responses. This could fill my sampling quotas before I reach the people who genuinely want to be part of this research.

Below are some of my tips for setting up the survey in Qualtrics to detect and prevent spam in my dataset:

  • In Qualtrics’ survey settings, I have enabled RelevantID. This checks in the background for evidence that a response is a duplicate or otherwise fraudulent, and reports the score in the response metadata. It helps catch, for example, someone using a different email address to take the survey more than once in order to collect extra compensation.
  • The “Prevent Ballot Box Stuffing” setting (known as “Prevent Multiple Submissions” in the newer interface) will also help guard against spam duplicates. In past surveys, I have set this to only flag the repeat responses for review. However, for this national survey, I set it to prevent multiple submissions. A message tells anyone caught by this option that they are not able to take the survey more than once.
  • Also in Qualtrics’ survey settings, I have enabled reCAPTCHA bot detection. This is not just the “Prove you are not a robot” challenge question (which I added to the second block in the survey flow); an invisible reCAPTCHA check also estimates the likelihood that the respondent is a bot and reports that score in the metadata.
  • With all of the above enabled, I can manually filter responses in Qualtrics’ Data & Analysis tab. At the top right, the Response Quality label is clickable; it takes me to a report of any issues the above checks have flagged and gives me the option to view the problematic responses. From that filtered view, I can use the far-left column of checkboxes to delete the data and decrement quotas for any or all of the selected responses.
  • Even better, though, is to screen these responses out of the survey before they start. At the top of the Survey Flow, I add an Embedded Data element that records the above fraud-detection fields. Then I add a branch near the top with conditions matched to that Embedded Data: a True for Q_BallotBoxStuffing and for Q_RelevantIDDuplicate, and thresholds for Q_DuplicateScore, Q_RecaptchaScore, and Q_FraudScore. If any of these conditions is met, the branch leads to an End of Survey element. See the image below or the Qualtrics page for Fraud Detection for more info; a sketch of applying the same criteria to exported data follows this list.
  • Finally, I want the real humans who respond to my ads to be able to choose not to take the survey if they judge that it is not worth the risk of having their response thrown out. In my survey email’s auto-responder and in the Qualtrics block with the reCAPTCHA question, I include text to this effect: Note that only one response will be accepted. We may reject responses if the survey metadata reports duplication, low response quality, and/or non-U.S. location, if the duration of the survey seems inconsistent with manual human response, or if the responses fail attention checks.
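
For an extra layer of checking, the same criteria can be re-applied offline to the exported data. Below is a minimal sketch using Python and pandas. It assumes a CSV export that contains the fraud-detection columns named above (the exact column names depend on what the Embedded Data element records), and the thresholds are illustrative values to be checked against the Qualtrics Fraud Detection documentation rather than settings taken from my survey.

```python
import pandas as pd

# Assumes a Qualtrics CSV export; Qualtrics places two extra header rows
# after the column names, hence skiprows. The filename is a placeholder.
df = pd.read_csv("survey_export.csv", skiprows=[1, 2])

# Flag responses using the fraud-detection metadata. Threshold values are
# illustrative; confirm them against the Qualtrics documentation.
suspect = (
    (df["Q_BallotBoxStuffing"].astype(str).str.lower() == "true")
    | (df["Q_RelevantIDDuplicate"].astype(str).str.lower() == "true")
    | (df["Q_DuplicateScore"].astype(float) >= 75)   # likely duplicate
    | (df["Q_FraudScore"].astype(float) >= 30)       # likely fraudulent
    | (df["Q_RecaptchaScore"].astype(float) < 0.5)   # likely bot
)

print(f"Flagged {suspect.sum()} of {len(df)} responses for manual review.")
df[~suspect].to_csv("survey_clean.csv", index=False)
```
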
Screengrab from Qualtrics showing the placement and settings for the Fraud Detection blocks in the survey flow. See https://www.qualtrics.com/support/survey-platform/survey-module/survey-checker/fraud-detection/ for more information.