Policy on Use of AI Tools for my course syllabus, version 1.0

Ever since ChatGPT arrived, I have been talking with my students and colleagues about how best we can use it and other AI-powered creative tools, such as DALL-E and Stable Diffusion, in our work. I have also discussed with students, in particular, how these tools can mislead them, such as by “hallucinating” output that looks and feels like a real-world search result or blog post but is composed of made-up information. This is partly because I feel strongly that students should be prepared for a working world where these tools are rapidly becoming commonplace, and partly because talking it through helps me work out my own thinking about their rightful place in our workflows.

Today seemed like a good day to formalize my thoughts into a written policy for my courses. I credit the blog post cited below with inspiring my wording, but the impetus is the sheer number of conversations I have had this month with instructors who suspect AI tool use in coursework.

Here is what I have come up with:

“In this course, students are allowed to use tools such as Stable Diffusion, ChatGPT, and Bing Chat in a manner similar to their use of non-AI references, templates, images, or body text, such as those found in assigned research papers or obtained via internet search. This means that (1) no student may submit as their own an assignment or exam work that is entirely generated by an AI tool; and (2) if students use an AI tool to generate, draft, create, or compose any portion of an assignment, they must (a) credit the tool, (b) identify which parts of the work come from the AI tool and which are their own, and (c) summarize why they decided to include the AI tool’s output.”

Thanks to a timely comment from Jeff Bigham on Mastodon, I am contemplating adding the following sentence and retitling the section “Policy on Use of AI and Other Creative Tools”:

“The same requirement to credit the use of tools for generating, drafting, creating, or composing work toward deliverables also applies to use of creative tools such as Grammarly and Canva.”

Reference consulted for the above: Kristopher Purzycki. 2023. Syllabus Policy for Using AI Tools in the Writing Classroom. Medium. Retrieved March 17, 2023 from https://medium.com/@kristopherpurzycki/syllabus-policy-for-using-ai-tools-in-the-writing-classroom-8accab29e8c7

Idea #1 for future work: Investigating the role of resistance in cybersecurity adoption

I have noticed a peculiar pattern (or, more accurately, a non-pattern) across all my studies of usable security: at every stage of adoption, people exhibit resistance to cybersecurity measures such as installing a password manager or creating a unique password for each online account. I expected to see resistant attitudes in people who do not adopt security practices, or who adopt them only when mandated to do so. But resistance is also high among research participants who have voluntarily adopted security practices and who seem very engaged with cybersecurity overall.

As it happens, I have already developed a measure of security resistance that can help in these studies. The score is the average of a participant’s Likert-type ratings on the following four items (1 = Strongly Disagree to 5 = Strongly Agree) [handout]; a minimal scoring sketch follows the list:

  • I am too busy to put in the effort needed to change my security behaviors.
  • I have much bigger problems than my risk of a security breach.
  • There are good reasons why I do not take the necessary steps to keep my online data and accounts safe.
  • I usually will not use security measures if they are inconvenient.
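
For anyone who wants to apply the scale, here is a minimal scoring sketch in Python with pandas. The data layout, column names, and responses are my own hypothetical example for illustration, not data from any of my studies:

```python
import pandas as pd

# Hypothetical responses: one row per participant, one column per item,
# coded 1 = Strongly Disagree through 5 = Strongly Agree.
responses = pd.DataFrame({
    "too_busy":        [4, 2, 5, 1],
    "bigger_problems": [3, 2, 4, 1],
    "good_reasons":    [4, 1, 5, 2],
    "inconvenient":    [5, 2, 4, 1],
})

# SA-Resistance score = mean of the four item ratings (range: 1 to 5).
responses["sa_resistance"] = responses.mean(axis=1)
print(responses["sa_resistance"])
```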

So far, I have found that resistance alone is not a reliable differentiator of someone’s level of cybersecurity adoption. For example, in my working paper on the development and validation of the SA-13 security attitude inventory, my SA-Resistance scale (the one described above) is significantly negatively associated with self-reported Recalled Security Actions (RSec) but significantly positively associated with Security Behavior Intentions (SeBIS). By contrast, in a more recent survey (forthcoming), a measure similar to SA-Resistance was significantly positively associated with self-reported password manager adoption but significantly negatively associated with a measure of being in a pre-adoption stage similar to intention. Faye Kollig, a research assistant in the 2021 REU program, likewise found no significant differences in resistance among participants in our interview study of commonalities in security adoption narratives.
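
To illustrate the kind of association test I am describing, here is a sketch using Spearman’s rho on simulated data; the variables, values, and test choice are placeholders of my own, not the actual measures or results from the studies above:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
sa_resistance = rng.uniform(1, 5, size=200)        # per-participant scale scores
adopted_pw_manager = rng.integers(0, 2, size=200)  # 1 = self-reported adoption

# Spearman's rho suits ordinal, Likert-derived scores; the sign shows whether
# resistance tracks positively or negatively with adoption. (With random data
# like this, expect rho near zero and a non-significant p.)
rho, p = spearmanr(sa_resistance, adopted_pw_manager)
print(f"rho = {rho:.2f}, p = {p:.3f}")
```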

At the same time, adding these resistance items to those measuring concernedness, attentiveness, and engagement (together, the SA-13 inventory) appears to create a reliable predictor of security behavior. In a study at Fujitsu Ltd., Terada et al. found that SA-13 correlated with actual security behavior for both Japanese and American participants (p < .001), and that this correlation was stronger than SA-6’s. The authors speculate that the difference is due to the inclusion of the resistance items.

Is it consistently the case that resistance helps to differentiate a person’s level of adoption only when it is balanced against other attitudes? Or is some other mechanism responsible? I hope to follow up on these results with a student assistant when I join UNC Charlotte’s Department of Software and Information Systems this fall as an assistant professor.

Coincidentally, the New Security Paradigms Workshop has a theme this year of “Resilience and Resistance.” I may submit to the workshop myself, but I also hope that other prospective attendees will find my resistance scale useful in their work.