‘What Drives SMiShing Susceptibility? A U.S. Interview Study of How and Why Mobile Phone Users Judge Text Messages to be Real or Fake’ – paper at SOUPS 2024

My PhD student Sarah Tabassum is here with me at the Symposium on Usable Privacy and Security in Philadelphia, PA, USA, presenting our paper during Tuesday’s Mobile Security session: “What Drives SMiShing Susceptibility? A U.S. Interview Study of How and Why Mobile Phone Users Judge Text Messages to be Real or Fake.”

For this study, we interviewed 29 people (half of them students, half recruited off campus) about how they make sense of the flood of strange messages they receive on their phones. Texts with links were commonly judged to be “fake” (bad news for the political campaign trying to advertise a pre-primary rally!).

As an Apple user, I was surprised and pleased to see that Android owners get interface warnings of possible spam or scam texts (see pic). However, there’s no way to report those messages as smishing. (iPhone has a “Report Junk” option, but no “Report Smish” button either.)

Screenshot of an Android phone screen showing the “Why This Looks Like Spam” notification for a text message claiming to be from Chase bank.

Our SPEX Lab group is now thinking about how to better support mobile users in making sense of these messages and in learning to spot scam SMS texts (“smishing” = SMS + phishing).

Something to know: scammers now often will not send a fake link in the first text. Instead, they “soft sell,” building trust with a series of messages. Once you reply, THEN they text the link to steal your credentials, or call and claim to be a security team investigating the suspicious texts!
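
For illustration, here is the kind of quick red-flag checklist that people often apply intuitively, written out as a toy Python function. These heuristics are my own sketch, not a classifier from the paper, and as the soft-sell tactic above shows, a patient scammer can pass every one of them.

```python
import re

# Toy heuristics only: my own illustration, not a classifier from our paper.
URL_PATTERN = re.compile(r"https?://\S+|\b\w[\w-]*\.(?:com|net|info|xyz)/\S*", re.IGNORECASE)
URGENCY_PHRASES = ("verify", "suspended", "urgent", "act now", "confirm your")

def smishing_red_flags(text: str, sender_known: bool) -> list[str]:
    """Return human-readable red flags found in one text message."""
    flags = []
    if URL_PATTERN.search(text):
        flags.append("contains a link")
    if any(phrase in text.lower() for phrase in URGENCY_PHRASES):
        flags.append("uses urgency or account-alert language")
    if not sender_known:
        flags.append("sender is not in your contacts")
    return flags

# A classic credential-stealing smish trips all three flags:
print(smishing_red_flags(
    "Chase alert: your account is suspended. Verify at chse-bank.xyz/login",
    sender_known=False,
))
```

The inverse does not hold, of course: a message with no link, no urgency, and a friendly tone is exactly what the soft-sell scam looks like at first.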

  • Sarah Tabassum, Cori Faklaris, and Heather Richter Lipford. 2024. What Drives SMiShing Susceptibility? A U.S. Interview Study of How and Why Mobile Phone Users Judge Text Messages to be Real or Fake. In Proceedings of the 20th Symposium on Usable Privacy and Security. Retrieved June 25, 2024 from https://www.usenix.org/conference/soups2024/presentation/tabassum-sarah

Tweaking my ‘Policy on Use of AI and Other Creative Tools’ (version 2.0)

In Spring and Fall 2023, my students had some successes in using generative AI such as ChatGPT for coursework (without crossing the line into a stated integrity violation). Some of these uses were:

  • Creating a prototype image of a “Quantified Toilet” in situ, as part of a privacy design project (based on a thought experiment at CHI 2014, but a real-life possibility too).
  • Writing quick research summaries for a shared Google Slides deck in my Collaborative and Social Computing graduate seminar.
  • Testing whether commercially available Large Language Models (LLMs) can reliably and validly answer questions about dealing with security and privacy concerns. (A minimal sketch of this kind of probing appears after this list.)
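
For the curious, below is a minimal sketch of that third kind of probing, assuming the OpenAI Python SDK with an API key set in the environment; the model name and question are placeholders, not the students’ actual setup.

```python
# Minimal sketch: assumes `pip install openai` and OPENAI_API_KEY in the
# environment. Model and prompt are placeholders, not the students' setup.
from openai import OpenAI

client = OpenAI()

question = "Is it safe to reuse the same password across websites? Why or why not?"
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": question}],
)
print(response.choices[0].message.content)
```

Reliability here means asking the same question many times and checking that the answers agree; validity means checking those answers against vetted guidance, such as NIST’s password recommendations.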

But they also ran into a few obstacles. First, they did not know how to prompt these models effectively or how to generate variations and new iterations of a first idea. This semester, I will provide them with more guidance.

Second, students did not know enough to be on the alert for errors generated by the models. With images, an error can be as obvious as a missing flush valve on the toilet tank drawing. With text, errors are harder to notice if you are not knowledgeable about the topic. On a few occasions, I had to point out to students that papers or author names ChatGPT had generated, supposedly based on an existing research paper, simply did not exist!
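
One low-tech defense I plan to show students: look the suspect citation up in a bibliographic database before trusting it. Here is a hedged sketch using the public Crossref REST API; the endpoint is real, but the helper function is my own illustration, not a tool we used in class.

```python
import requests

def crossref_title_matches(title: str, rows: int = 3) -> list[str]:
    """Return the closest Crossref title matches for a suspected citation."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.title": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [item["title"][0] for item in items if item.get("title")]

# If nothing close to the generated title comes back, treat the reference
# as suspect and verify it by hand (Google Scholar, the DOI, the publisher).
print(crossref_title_matches("What Drives SMiShing Susceptibility?"))
```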

Georgia Tech’s Amy Bruckman has shared her draft of an AI policy that puts students on notice of potential harms. Because her courses are writing-heavy, she discourages the use of genAI, noting that it has strong potential to reduce students’ learning. Other potential harms noted in her policy: factual errors, bias, fake references, and poor style.

With this in mind, I have revised my Version 1.0 policy to use the following text (italics show emphasis in the syllabus given to students):

In this course, students are permitted to use tools such as Stable Diffusion, DALL-E, ChatGPT, Gemini (formerly Bard), and Bing Copilot. In general, permitted use of such tools is consistent with permitted use of non-AI assistants such as Grammarly, templating tools such as Canva, or images or text sourced from the internet or others’ files. No student may submit as their own an assignment or exam work that is entirely generated by means of an AI tool. If students use an AI tool or other creative tool to generate, draft, create, or compose any portion of any assignment, they must (a) credit the tool, and (b) identify what part of the work is from the AI tool and what is from themselves. Students are responsible for identifying and removing any factual errors, biases, and/or fake references that are introduced into their work through use of the AI tool.

I give a syllabus quiz at the start of every semester, and I have now written a question to reinforce what students should retain about this policy. In future class sessions, I aim to follow up with an in-class discussion of how to identify problems in generative AI output and how to remedy them.