I field a lot of questions from people outside academia about what I think of Generative AI (GenAI or just “AI”) in education. I am fortunate to teach upper-level electives for undergraduates and graduate students. Because I am not teaching foundational courses, I can allow the use of any AI tool, just as I allow other productivity and creativity tools. In fact, I encourage this. I want students to try things with AI and figure out what works well and what doesn’t. I recognize that, while they need to learn college-level skills and establish their intellectual credentials, their job prospects and career success will depend on knowing the proper uses of GenAI for productivity. Learning from mistakes now will pay off later.
AI Syllabus Policy 3.0
My syllabus policy has evolved from the first version and a later tweak. For Fall 2025, it reads:
In this course, students are encouraged to use tools such as Copilot, Gemini, and Notebook LM (sign in with UNC Charlotte email) that are university-licensed and data-protected, per the “green” status on the AI Software Guidance webpage. No student may submit an assignment as their own that is entirely generated by means of an AI tool. If students use an AI tool or other creative tool, they must follow best practices by (a) crediting the tool and (b) describing its part in the work. Students are responsible for identifying and removing any factual errors, biases, and/or fake references. Ask the instructor for advice about your use of a particular AI model not listed here.
The major change in wording is to recognize and encourage the use of UNC Charlotte’s now-considerable investments in AI software. We do not want students (or faculty, or staff) to be forced to share their educational data or intellectual property, or to go out of pocket beyond what they’ve already paid, to benefit from Generative AI. I want to reinforce this message by emphasizing and linking to the provided AI services. After all, students are paying for them already through tuition and fees, and through their taxes. They should at least consider using them. And if they exercise their freedom of choice not to do so, I certainly have opinions about which other models or services are worth using.
I kept the sentence about student responsibilities for falsehoods, biases, and hallucinations in outputs. This reinforces what I teach students about AI – that it is an assistant, not a replacement, and as such, you must check over all its work, just as you would if you had hired a human intern. I also kept the requirement to cite the AI and to specify which part it contributed, and I am glad to see that more students are doing just that. Finally, I boldface the statement about what they CANNOT use the AI tool for in their schoolwork. I encourage students to consult with me if they are unsure whether their use of AI is improper.
Of course, some students do go wrong with AI – and what else is new? Cheating and shortcuts have always been with us. Most of the time, I’m more baffled by this than anything, but then, poor student practices often seem like the behavior of a foreign species.
Where Students Go Wrong with AI
Misuse can be easy to spot, even though most of the time there is no way to prove that GenAI was used to completely misrepresent the student’s contribution. One sure-fire marker, however, is narrative that includes the phrase “As an AI model … ” 😂 from a straight copy-paste. Another obvious marker is the use of fake references or “hallucinations” in students’ literature reviews. It takes literally 15 seconds for us to pop the content or reference into Google to verify that, oh, yeah, that’s definitely made-up. Ironically, the students end up being the ones who are tricked – the realistic-sounding output of so-called “deep research” models fools them into submitting it without the verification and revision that would have turned it into a legitimate submission.
Students are probably using GenAI where it would be simpler not to. Each semester, a TA will show me examples of strongly suspected AI use on a low-stakes, 1–2 point assignment. At best, an AI is only capable of B- to C-quality work in my courses. It would have taken the student less time and effort to dash off a mediocre-to-bad submission themselves than to call up the AI model, craft the prompt, run it, copy the output, and paste it into the submission box, only to end up with a mediocre-to-bad grade anyway.
Some students take short-cuts with AI that cheat them out of learning. This is the “wrong AI” use that concerns me the most. Chatbots can be helpful study buddies or final checkers for a deliverable, but they are not able to make nuanced judgments about students’ understanding. They are usually designed to stay positive rather than push back on vague answers the way a human teacher or classmate would. Nor will they ever have complete information about the course. As my doctoral student Sarah Tabassum notes, the real loss is the omission of the learning process itself – the critical reading and reflection that build lasting knowledge. Accepting AI outputs at face value means sacrificing the cognitive effort that the assignment was designed to develop.
AI misconduct seems to look like non-AI misconduct. When confronted about incorrect or outright forbidden AI use, students say something along the lines of, “I ran out of time and had to use the AI to do the work.” This is consistent with what research has found to be a chief driver of other forms of academic misconduct, such as plagiarism: poor time management leading to last-minute deadline pressure. Other reasons students have given for misconduct in prior research also seem to apply here, such as feeling overwhelmed by the workload, not having confidence that they can do a good job on their own, and not understanding what constitutes misconduct versus good citation and intellectual practices. Nobody can yet say for sure what the best practices for AI in professional work are, either, which leaves both students and faculty unsure at times about the right course of action.
Faculty and Student Use of AI are Not Equivalent
I have shared all of the above with my colleagues and my students, and I have even asked AI models for advice. Students have responded that they see faculty sharing course materials entirely generated by AI, and suspect that faculty are using AI for grading, so why shouldn’t students also outsource their work to AI? But this argument rests on a false equivalence between the nature of faculty work and the purpose of student assignments. Education is fundamentally a personal, internal process, which AI reliance robs the student of. Faculty are expert guides who have already mastered the knowledge and skills, similar to golf pros who help you improve your swing (credit goes to Kurt Vonnegut for the original analogy). Even my colleagues who may be using AI tools to generate materials whole-cloth are still using their professional judgment to brainstorm ideas for those materials, to curate the outputs, and to validate them before assigning them in their classes.
As for using AI in grading … my mind is unsettled on this point. I don’t use it, and neither do my TAs, but this may be just as much due to the impracticality and lack of accuracy in today’s models as to our human-centered values. That, however, feels like a different blog post.