‘A Self-Report Measure of End-User Security Attitudes (SA-6)’: New Paper

This month is a personal milestone – my FIRST first-author usability research paper is being published in the Proceedings of the Fifteenth USENIX Symposium on Usable Privacy and Security (SOUPS 2019).

I will present the paper on Monday, Aug. 12, in Santa Clara, Calif., USA. It describes my creation of the SA-6 psychometric scale, a six-item, lightweight tool for quantifying and comparing people’s attitudes about using expert-recommended security measures. (Examples of such measures include enabling two-factor authentication, going the extra mile to create longer passwords that are unique to each account, and taking care to update software and mobile apps as soon as patches are available.)

The scale itself is reproduced below (download the PDF at https://socialcybersecurity.org/sa6.html ):

  • Generally, I diligently follow a routine about security practices.
  • I always pay attention to experts’ advice about the steps I need to take to keep my online data and accounts safe. 
  • I am extremely knowledgeable about all the steps needed to keep my online data and accounts safe. 
  • I am extremely motivated to take all the steps needed to keep my online data and accounts safe.
  • I often am interested in articles about security threats. 
  • I seek out opportunities to learn about security measures that are relevant to me.

Response set: 1=Strongly disagree, 2=Somewhat disagree, 3=Neither disagree nor agree, 4=Somewhat agree, 5=Strongly agree. Score by taking the average of all six responses.
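To illustrate the scoring, here is a minimal Python sketch; the function name and example responses are my own for illustration, not from the paper:

```python
# Minimal sketch of SA-6 scoring: each respondent's score is the
# mean of their six Likert responses (1-5). The function name and
# example data below are hypothetical, not from the paper.

def sa6_score(responses):
    """Return the SA-6 score: the average of six 1-5 Likert ratings."""
    if len(responses) != 6:
        raise ValueError("SA-6 requires exactly six item responses")
    if not all(1 <= r <= 5 for r in responses):
        raise ValueError("Each response must be on the 1-5 scale")
    return sum(responses) / 6

# Example: a respondent who answered 4, 5, 3, 4, 2, 4
print(sa6_score([4, 5, 3, 4, 2, 4]))  # prints 3.666..., i.e. about 3.67
```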

If you are a researcher who can make use of this work, please download our full research paper and cite us as follows: Cori Faklaris, Laura Dabbish and Jason I. Hong. 2019. A Self-Report Measure of End-User Security Attitudes (SA-6). In Proceedings of the Fifteenth Symposium on Usable Privacy and Security (SOUPS 2019). USENIX Association, Berkeley, CA, USA. DOI: 10.13140/RG.2.2.29840.05125/3.

Many thanks to everyone who helped me develop and bring this project in for a landing, particularly Laura and Jason, Geoff Kaufman, Maria Tomprou, Sauvik Das, Sam Reig, Vikram Kamath Cannanure, Michael Eagle, and the members of the Connected Experience and CHIMPS labs at Carnegie Mellon University’s Human-Computer Interaction Institute. Funding for our Social Cybersecurity project is provided by the U.S. National Science Foundation under grant no. CNS-1704087.

Tips from my online survey research in 2018

I have a special affinity for quantitative research. Specifically: I LOVE online survey work! It’s an efficient way to gather lots of data at scale, in a format that is almost ready-made for analysis and visualization.

Even though I have past experience with designing, collecting and analyzing survey data around current events, politics and marketing, my academically focused work this year really sharpened my survey skills. I used Qualtrics almost exclusively as the online platform for creating and administering surveys, and I recruited larger samples than ever before using Amazon Mechanical Turk (MTurk), Qualtrics’ own panel aggregation and a study pool run by Carnegie Mellon University’s Center for Behavioral and Decision Research (CBDR). My collaborators and I also recruited participants through the US-based SurveyMonkey; Prolific Academic, a UK-based company that also recruits US-based workers; and the QQ and SoJump survey platforms based in China.

The increased stakes and many unknowns I encountered as my research evolved led me to reach out for help and new ideas on social media, at workshops and in my own labs. I also asked MTurk workers for advice on fine-tuning my surveys, and I signed up as an MTurk worker myself to see how things look from the other side. Below, I share some of what I learned during this year’s academic work that may also benefit your own survey research.
