‘A Self-Report Measure of End-User Security Attitudes (SA-6)’: New Paper

This month is a personal milestone – my FIRST first-author usability research paper is being published by USENIX in the Proceedings of the Fifteenth Symposium on Usable Privacy and Security (SOUPS 2019).

I will present the paper on Monday, Aug. 12, in Santa Clara, Calif., USA. It describes my creation of the SA-6 psychometric scale, a lightweight six-item tool for quantifying and comparing people’s attitudes toward using expert-recommended security measures. (Examples include enabling two-factor authentication, going the extra mile to create longer passwords that are unique to each account, and updating software and mobile apps as soon as patches become available.)

The scale itself is reproduced below (download the PDF at https://socialcybersecurity.org/sa6.html):

  • Generally, I diligently follow a routine about security practices.
  • I always pay attention to experts’ advice about the steps I need to take to keep my online data and accounts safe. 
  • I am extremely knowledgeable about all the steps needed to keep my online data and accounts safe. 
  • I am extremely motivated to take all the steps needed to keep my online data and accounts safe.
  • I often am interested in articles about security threats. 
  • I seek out opportunities to learn about security measures that are relevant to me.

Response set: 1=Strongly disagree, 2=Somewhat disagree, 3=Neither disagree nor agree, 4=Somewhat agree, 5=Strongly agree. Score by taking the average of all six responses.
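If it helps to make the scoring concrete, here is a minimal sketch in Python (my own illustration, not code from the paper) that computes an SA-6 score from one participant’s six Likert responses:

    def sa6_score(responses):
        """Return the SA-6 score: the mean of six 5-point Likert responses.

        `responses` holds six integers, each from 1 (Strongly disagree)
        to 5 (Strongly agree), in the item order listed above.
        """
        if len(responses) != 6:
            raise ValueError("SA-6 requires exactly six item responses")
        if any(r not in (1, 2, 3, 4, 5) for r in responses):
            raise ValueError("Each response must be an integer from 1 to 5")
        return sum(responses) / 6

    # Example: a participant who somewhat agrees with most items
    print(sa6_score([4, 4, 3, 4, 5, 4]))  # 4.0

A higher average indicates a more positive attitude toward engaging with expert-recommended security measures.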

If you are a researcher who can make use of this work, please download our full research paper and cite us as follows: Cori Faklaris, Laura Dabbish and Jason I. Hong. 2019. A Self-Report Measure of End-User Security Attitudes (SA-6). In Proceedings of the Fifteenth Symposium on Usable Privacy and Security (SOUPS 2019). USENIX Association, Berkeley, CA, USA. DOI: 10.13140/RG.2.2.29840.05125/3.

Many thanks to everyone who helped me develop and bring this project in for a landing, particularly Laura and Jason, Geoff Kaufman, Maria Tomprou, Sauvik Das, Sam Reig, Vikram Kamath Cannanure, Michael Eagle, and the members of the Connected Experience and CHIMPS labs at Carnegie Mellon University’s Human-Computer Interaction Institute. Funding for our Social Cybersecurity project is provided by the U.S. National Science Foundation under grant no. CNS-1704087.

I did a podcast! ‘Cybercrime Conversations #12 – Social Cybersecurity’

It was a pleasure to speak this week with Rod Graham, an assistant professor of sociology and criminal justice at Old Dominion University, about my Social Cybersecurity research and my lifelong journey to Carnegie Mellon University. We also talked a fair amount about Zen Buddhism – it turns out he and I have that in common, too. Small world!

My ideas for ‘Theory-Driven Interface Design Strategies to Address “False News” on Social Media’

I have enjoyed my work for the past two years on our Social Cybersecurity project at the Human-Computer Interaction Institute at Carnegie Mellon University. Building on my news and social media background, I’ve also been working on some specific ideas for design strategies to address viral hoaxes, rumors and disinformation/misinformation in social computing systems. Many thanks to HCI faculty Niki Kittur and Geoff Kaufman for providing ideas for prior work to incorporate into these strategies, and to Kathleen M. Carley for her perspective as a computational sociologist.

Poster for Knight Foundation site visit to Carnegie Mellon University, April 8, 2019. Abstract: Both non-expert users and experts such as journalists can have trouble judging the quality of the content and sources that they encounter in social media. Current interface designs may not be leveraging what we know about how users perceive and judge information when they are multitasking or quickly scanning a display. Our work aims to create new design guidelines for helping busy users to assess false news, unverified rumors and hoaxes in two contexts: (1) helping users make their own judgment of which specific content should not be trusted; and (2) aiding users in judging the credibility of information sources found in social media.

Today I will present these ideas in public for the first time and speak with people from the Knight Foundation about them. Onward!