Tweaking my ‘Policy on Use of AI and Other Creative Tools’ (version 2.0)

A big pile of papers on a desk, similar to my own. Photo credit: IsaacMao via photopin cc

In Spring and Fall 2023, my students had some successes in using generative AI such as ChatGPT for coursework (without crossing the line into a stated integrity violation). Some of these uses were:

  • Creating a prototype image of a “Quantified Toilet” in situ, as part of a privacy design project (based on a thought experiment at CHI 2014, but a real-life possibility too).
  • Writing quick research summaries for a shared Google Slides deck in my Collaborative and Social Computing graduate seminar.
  • Testing out whether commercially available Large Language Models (LLMs) can reliably and validly answer questions about dealing with security and privacy concerns.

But they also ran into a few obstacles. One, they did not know how to prompt these models effectively, or how to generate variations and new iterations of a first idea. This semester, I will provide them with more guidance.

Two, students did not know enough to be on alert for errors generated by the models. With images, an error can be as simple as a missing flush valve on a drawing of a toilet tank. With text, errors can be harder to notice if you are not knowledgeable about the topic. On a few occasions, I had to point out to students that the papers or author names that ChatGPT generated, supposedly drawn from an existing research paper, simply did not exist!

Georgia Tech’s Amy Bruckman has shared a draft AI policy that puts students on notice of potential harms. She notes that her courses are writing-heavy, so she discourages the use of genAI, which she says has a strong potential to reduce students’ learning. Other potential harms noted in her policy: factual errors, bias, fake references, and poor style.

With this in mind, I have revised my Version 1.0 policy to use the following text (italics show emphasis in the syllabus given to students):

In this course, students are permitted to use tools such as Stable Diffusion, DALL-E, ChatGPT, Google Gemini (formerly Bard), and Bing Copilot. In general, permitted use of such tools is consistent with permitted use of non-AI assistants such as Grammarly, templating tools such as Canva, or images or text sourced from the internet or others’ files. No student may submit an assignment or work on an exam as their own that is entirely generated by means of an AI tool. If students use an AI tool or other creative tool to generate, draft, create, or compose any portion of any assignment, they must (a) credit the tool, and (b) identify what part of the work is from the AI tool and what is from themselves. Students are responsible for identifying and removing any factual errors, biases, and/or fake references that are introduced into their work through use of the AI tool.

I give a syllabus quiz at the start of every semester, and I have now written a question to reinforce what students should retain about this policy. In future class sessions, I aim to follow up with an in-class discussion of how to identify problems in generative AI output, and how to remedy those problems.

Author: Cori

Cori Faklaris (aka "HeyCori") is an assistant professor at the University of North Carolina at Charlotte, Department of Software and Information Systems, College of Computing. Faklaris received her PhD in human-computer interaction in 2022 from Carnegie Mellon University's Human-Computer Interaction Institute, School of Computer Science, in Pittsburgh, PA, USA. She also is a social media expert and longtime journalist, and/or "Doer of Things No One Else Wants to Do."
