Tweaking my ‘Policy on Use of AI and Other Creative Tools’ (version 2.0)

In Spring and Fall 2023, my students had some success using generative AI tools such as ChatGPT for coursework (without crossing the line into a stated integrity violation). Some of these uses were:

  • Creating a prototype image of a “Quantified Toilet” in situ, as part of a privacy design project (based on a thought experiment at CHI 2014, but a real-life possibility too).
  • Writing quick research summaries for a shared Google Slides deck in my Collaborative and Social Computing graduate seminar.
  • Testing out whether commercially available Large Language Models (LLMs) can reliably and validly answer questions about dealing with security and privacy concerns.

But they also ran into a few obstacles. One, they did not know how to prompt these models effectively, or how to generate new versions and iterations of an initial idea. This semester, I will provide them with more guidance.

Two, students did not know enough to be on alert for errors generated by the models. With images, that can be as simple as a missing flush valve in the toilet tank drawing. With text, the errors can be harder to notice if you are not knowledgeable about the topic. On a few occasions, I had to point out to students that the papers or author names ChatGPT generated, supposedly drawn from an existing research paper, simply did not exist!
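One concrete way to spot-check a suspicious citation is to query a bibliographic database such as Crossref. Below is a minimal sketch using the public Crossref API; the claimed title is a hypothetical placeholder, and this is only a spot-check, not a guarantee that a reference is genuine.

```python
# Minimal sketch of one way to spot-check a chatbot-generated citation against the
# public Crossref API. The claimed title below is a hypothetical placeholder.
import requests

def crossref_candidates(title, rows=5):
    """Return (title, authors, DOI) tuples for works that resemble the claimed title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    results = []
    for item in resp.json()["message"]["items"]:
        results.append((
            item.get("title", ["<untitled>"])[0],
            [a.get("family", "") for a in item.get("author", [])],
            item.get("DOI", ""),
        ))
    return results

# Hypothetical citation a chatbot produced; check whether anything like it exists.
claimed_title = "Example Paper Title Generated by a Chatbot"
for title, authors, doi in crossref_candidates(claimed_title):
    print(f"{title} | {', '.join(authors)} | https://doi.org/{doi}")
# If no candidate matches the claimed title and author list, treat the reference
# as suspect and verify it by hand before relying on it.
```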

Georgia Tech’s Amy Bruckman has shared a draft AI policy that puts students on notice of potential harms. She notes that her courses are writing-heavy, so she discourages the use of genAI because it has strong potential to reduce students’ learning. Other potential harms noted in her policy: factual errors, bias, fake references, and poor style.

With this in mind, I have revised my Version 1.0 policy to use the following text (italics show emphasis in the syllabus given to students):

In this course, students are permitted to use tools such as Stable Diffusion, DALL-E, ChatGPT, Bard/Gemini, and Bing Copilot. In general, permitted use of such tools is consistent with permitted use of non-AI assistants such as Grammarly, templating tools such as Canva, or images or text sourced from the internet or others’ files. No student may submit as their own any assignment or exam work that is entirely generated by an AI tool. If students use an AI tool or other creative tool to generate, draft, create, or compose any portion of any assignment, they must (a) credit the tool, and (b) identify which part of the work came from the AI tool and which came from themselves. Students are responsible for identifying and removing any factual errors, biases, and/or fake references introduced into their work through use of the AI tool.

I give a syllabus quiz at the start of every semester. I have now written a question to reinforce what students should retain about this policy. In future class sessions, I aim to follow up with an in-class discussion of how to identify problems in generative AI output and how to remedy them.

Qualtrics RelevantID, reCAPTCHA, and other tips for survey research in 2021

Use these settings to detect and prevent spam in a survey dataset.

For my dissertation research at Carnegie Mellon University, I have created a national advertising campaign to recruit interview subjects via an online survey. The resulting interviews of U.S. residents age 18 and older will, in turn, inform the design of a final national survey.

It’s fun to return to two of my passions – connecting with people online and conducting quantitative survey research – EXCEPT when my survey gets flooded with spam! Once study info gets posted to the internet, anyone can copy it to a forum or group where people try to game paid surveys with repeated and/or inauthentic responses. This could max out my quota sampling before I reach the people who actually want to be part of this research.

Below are some of my tips for setting up the survey in Qualtrics to detect and prevent spam in my dataset:

  • In Qualtrics’ survey settings, I have enabled RelevantID. This checks in the background for evidence that a response is a duplicate or otherwise fraudulent, and reports the score in the response metadata. This helps catch, for example, someone using a different email address to take the survey more than once in order to collect extra compensation.
  • The “Prevent Ballot Box Stuffing” setting (known as “Prevent Multiple Submissions” in the newer interface) will also help guard against spam duplicates. In past surveys, I have set this to only flag the repeat responses for review. However, for this national survey, I set it to prevent multiple submissions. A message tells anyone caught by this option that they are not able to take the survey more than once.
  • Also in Qualtrics’ survey settings, I have enabled reCAPTCHA bot detection. This is not just the “Prove you are not a robot” challenge question (which I added to the second block in the survey flow); an invisible check also estimates the likelihood that the respondent is a bot and reports the score in the metadata.
  • With all of the above enabled, I can manually filter responses in Qualtrics’ Data & Analysis tab. At the top right, the Response Quality label is clickable. It takes me to a report of what issues, if any, the above checks have flagged, and gives me the option to view the problematic responses. Once in that filter, I can use the far-left column of checkboxes to delete the data and decrement quotas for any or all of the selected responses.
  • Even better, though, is to kick these responses out of the survey before they start. At the top of the Survey Flow, I set Embedded Data elements to record the values from the settings above. Then I add a branch near the top with conditions matched to that Embedded Data: a value of True for Q_BallotBoxStuffing or Q_RelevantIDDuplicate, and thresholds for Q_DuplicateScore, Q_RecaptchaScore, and Q_FraudScore. If any of these conditions is met, the branch leads to an End of Survey element. See the image below or the Qualtrics page for Fraud Detection for more info; a sketch that re-applies the same logic to an exported dataset appears after this list.
  • Finally, I want to help the real humans who respond to my ads decide not to take the survey if they judge that it’s not worth the risk of having their response thrown out. In my survey email’s auto-responder and in the Qualtrics block with the reCAPTCHA question, I include text to this effect: Note that only one response will be accepted. We may reject responses if the survey metadata reports duplication, low response quality, and/or a non-U.S. location, if the survey duration seems inconsistent with a manual human response, or if the responses fail attention checks.
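Once the data comes back, I can also re-apply the same screening logic offline as a sanity check on the exported file. Below is a minimal sketch in Python with pandas, assuming the fraud-detection fields export under the embedded data names mentioned above (your export’s column names may differ) and using hypothetical cutoffs; it is an audit aid, not a replacement for the Survey Flow branch.

```python
# Minimal sketch: re-apply the branch conditions to an exported Qualtrics CSV.
# Column names and thresholds are assumptions; adjust them to match your export
# and the conditions you set in the Survey Flow.
import pandas as pd

# Qualtrics CSV exports include two extra header rows (question text, import IDs).
df = pd.read_csv("survey_export.csv", skiprows=[1, 2])

def is_true(column):
    # Embedded data flags may export as booleans or as the strings "true"/"True".
    return df[column].astype(str).str.lower().eq("true")

def score(column):
    # Coerce score columns to numbers; blank or malformed values become NaN.
    return pd.to_numeric(df[column], errors="coerce")

suspect = (
    is_true("Q_BallotBoxStuffing")
    | is_true("Q_RelevantIDDuplicate")
    | score("Q_DuplicateScore").ge(75)   # hypothetical duplicate threshold
    | score("Q_FraudScore").ge(30)       # hypothetical fraud threshold
    | score("Q_RecaptchaScore").lt(0.5)  # low reCAPTCHA score suggests a bot
)

print(f"{int(suspect.sum())} of {len(df)} responses flagged for review")
df[suspect].to_csv("flagged_for_review.csv", index=False)
```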
Screengrab from Qualtrics showing the placement and settings for the Fraud Detection blocks in the survey flow. See https://www.qualtrics.com/support/survey-platform/survey-module/survey-checker/fraud-detection/ for more information.