4 tips on getting college students to fill out course evaluations

I get close to a 100% response rate on my course evaluations. Why? I apply what I know about social psychology, usability, and user engagement:

– I send an email announcement with the course link. Students are more likely to pay attention to a message from an authority figure and regular fixture in their lives (me) than to an anonymous form email from the administration.

– I create a course assignment in our Learning Management System (Canvas) containing the link, due on the final day of the course evaluation period. This is the same design pattern that I have used all semester to remind them of deliverables and nudge timely submissions. They have formed the habit of checking off the to-do list that is visible in the LMS sidebar.

– I offer 1 extra credit point to each student if the class reaches a 100% response rate on the evaluation survey. This is advertised in the email announcement, in the LMS course assignment, and in my in-class lecture. A participation incentive that recruits actually want is a key motivator in all of my survey research, and it works here too. Plus, it activates both students’ altruism and their self-interest: they help out their classmates by participating themselves and by personally encouraging others to take the survey.

– I give them 10 minutes in the last class meeting to fill out the course evaluation, if they haven’t already done so. Most have already formed the intention to act based on my previous steps, but have not yet acted on it. Making it a class activity with dedicated time lessens the inertial pull to put it off in favor of other urgent deadlines. I step out of the room, though, to mitigate the social pressure of having me present while they fill it out.

The end result is that my course evaluations are more balanced than if only the students with a grudge against me filled them out. I can trust the results as a true cross-section of students’ assessments of my work.

“A Framework for Reasoning about Social Influences on Security and Privacy Adoption” – new for CHI 2024

This framework gives structure to what is known in the literature and the SIGCHI community about the social-psychological drivers of security and privacy adoption.

Pleased to be getting a publication out from my thesis work! This short paper and poster recap the initial work to synthesize a framework that provides structure to the growing literature on social cybersecurity.

Many usable security solutions exist (such as using a password manager or reporting phishing scams), but people often are not fully aware of what these solutions do, or do not use them regularly. A conceptual model of the adoption process will help us to identify where people get stuck and how to leverage social influences to encourage secure behaviors. We will be able to form and test hypotheses and improve our designs.

Toward this goal, we have developed a framework that synthesizes our design ideation, expertise, prior work, and new interview data (N=17) into a six-step adoption process with path relationships, associated social influences, and obstacles. 

This work contributes a prototype framework that accounts for social influences by step. It adds to what is known in the literature and the SIGCHI community about the social-psychological drivers of security adoption.

Future work (from my lab, but hopefully others’ too) should establish whether this process is the same regardless of culture, demographic variation, or work vs. home context, and whether it is a reliable theoretical basis and method for designing experiments and focusing efforts where they are likely to be most productive.

  • Cori Faklaris, Laura Dabbish, and Jason I. Hong. 2024. A Framework for Reasoning about Social Influences on Security and Privacy Adoption. In Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems (CHI EA ’24), May 11–16, 2024, Honolulu, HI, USA. ACM, New York, NY, USA, 13 pages. Available at: https://corifaklaris.com/files/framework_chi2024.pdf

Tweaking my ‘Policy on Use of AI and Other Creative Tools’ (version 2.0)

In Spring and Fall 2023, my students had some successes in using generative AI such as ChatGPT for coursework (without crossing the line into a stated integrity violation). Some of these uses were:

  • Creating a prototype image of a “Quantified Toilet” in situ, as part of a privacy design project (based on a thought experiment at CHI 2014, but a real-life possibility too).
  • Writing quick research summaries for a shared Google Slides deck in my Collaborative and Social Computing graduate seminar.
  • Testing out whether commercially available Large Language Models (LLMs) can reliably and validly answer questions about dealing with security and privacy concerns.

But they also ran into a few obstacles. One, they did not know how to prompt these models effectively, or how to generate variations or new iterations of a first idea. This semester, I will provide them with more guidance.
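One pattern I want them to practice is iterating: feed the model’s previous answer back in and ask for a refinement, rather than accepting the first output. Below is a minimal sketch of that loop, assuming OpenAI’s Python client and an API key; the model name, prompts, and number of rounds are placeholder choices for illustration, not course materials.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Start with a first idea, then ask for successive refinements of it.
idea = "A 'Quantified Toilet' concept image for a privacy design critique."
for round_num in range(3):  # number of iterations is arbitrary for this sketch
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": "You are helping iterate on a design idea."},
            {"role": "user", "content": f"Here is the current idea:\n{idea}\n\n"
                                        "Suggest one improved version and explain what changed."},
        ],
    )
    # Carry the refined idea forward into the next round.
    idea = response.choices[0].message.content
    print(f"--- Iteration {round_num + 1} ---\n{idea}\n")
```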

Two, students did not know enough to be on alert for errors generated by the models. With images, the error can be as simple as a missing flush valve in the drawing of the toilet tank. With text, errors can be harder to notice if you are not knowledgeable about the topic. On a few occasions, I had to point out that the papers or author names ChatGPT generated from an existing research paper simply did not exist!
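One remedy is to check every AI-suggested citation against a bibliographic database before trusting it. Below is a minimal sketch, assuming Python with the requests library and the public Crossref API; the matching heuristic is a crude illustration only.

```python
import requests

def reference_exists(title, author=None):
    """Query the public Crossref API for a citation title.
    Returns (matched title, DOI) of the best match, or None if nothing plausible is found."""
    params = {"query.bibliographic": title, "rows": 3}
    if author:
        params["query.author"] = author
    resp = requests.get("https://api.crossref.org/works", params=params, timeout=10)
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        found_title = (item.get("title") or [""])[0]
        # Crude check: the queried title should appear within the indexed title.
        if found_title and title.lower() in found_title.lower():
            return found_title, item.get("DOI")
    return None

# Example: verify a citation before accepting it into a reference list.
print(reference_exists("A Framework for Reasoning about Social Influences on Security and Privacy Adoption"))
```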

Georgia Tech’s Amy Bruckman has shared a draft AI policy that puts students on notice of potential harms. Because her courses are writing-heavy, she discourages the use of genAI, warning that it has strong potential to reduce students’ learning. Other potential harms noted in her policy: factual errors, bias, fake references, and poor style.

With this in mind, I have revised my Version 1.0 policy to use the following text (italics show emphasis in the syllabus given to students):

In this course, students are permitted to use tools such as Stable Diffusion, DALL-E, ChatGPT, Google Gemini (formerly Bard), and Bing Copilot. In general, permitted use of such tools is consistent with permitted use of non-AI assistants such as Grammarly, templating tools such as Canva, or images or text sourced from the internet or others’ files. No student may submit as their own an assignment or exam work that is entirely generated by means of an AI tool. If students use an AI tool or other creative tool to generate, draft, create, or compose any portion of any assignment, they must (a) credit the tool, and (b) identify what part of the work is from the AI tool and what is from themselves. Students are responsible for identifying and removing any factual errors, biases, and/or fake references that are introduced into their work through use of the AI tool.

I give a syllabus quiz at the start of every semester, and I have now written a question to reinforce what students should retain about this policy. In future class sessions, I aim to follow up with an in-class discussion of how to identify problems in generative AI output and how to remedy them.