Evolution of My AI Syllabus Policy and Experience

I field a lot of questions from people outside academia about what I think of Generative AI (GenAI or just “AI”) in education. I am fortunate to teach upper-level electives for undergraduates and graduate students. Because I am not teaching foundational courses, I can allow usage of any AI tool, consistent with usage of other productivity and creativity tools. In fact, I encourage this. I want students to try things with AI and figure out what works well and what doesn’t. My view is that, while they need to learn college-level skills and establish intellectual credentials, their job prospects and career success will depend on knowing the proper uses of GenAI for productivity. Learning from mistakes now will pay off later.

AI Syllabus Policy 3.0

My syllabus policy has evolved from the first version and a later tweak. For Fall 2025, it reads:

In this course, students are encouraged to use tools such as Copilot, Gemini, and Notebook LM (sign in with UNC Charlotte email) that are university-licensed and data-protected, per the “green” status on the AI Software Guidance webpage. No student may submit an assignment as their own that is entirely generated by means of an AI tool. If students use an AI tool or other creative tool, they must follow best practices by (a) crediting the tool and (b) specifying its part in the work. Students are responsible for identifying and removing any factual errors, biases, and/or fake references. Ask the instructor for advice about your use of a particular AI model not listed here.

The major change in wording is to recognize and encourage the use of UNC Charlotte’s now-considerable investments in AI software. We do not want students (or faculty, or staff) to be forced to share their educational data or intellectual property, or to go out of pocket beyond what they’ve already paid, to benefit from Generative AI. I want to reinforce this message by emphasizing and linking to the provided AI services. After all, students are paying for them already through tuition and fees, and through their taxes. They should at least consider using them. And if they exercise their freedom of choice not to, I certainly have opinions about which other models or services are worth using.

I kept the sentence about student responsibilities for falsehoods, biases, and hallucinations in outputs. This reinforces what I teach students about AI – that it is an assistant, not a replacement, and as such, you must check over all its work, just as if you hired a human intern. I also kept the requirement to cite the AI tool and identify which part of the work it contributed, and I am glad to see that more students are doing just that. Finally, I boldface the statement about what they CANNOT use the AI tool for in their schoolwork. I encourage students to consult with me if they are unsure whether their use of AI is improper.

Of course, some students do go wrong with AI – and what else is new? Cheating and shortcuts have always been with us. Most of the time, I’m more baffled by this than anything, but then, poor student practices often seem like the behavior of a foreign species.

Where Students Go Wrong with AI

Misuse can be easy to spot. Most of the time, there is no way to prove that GenAI was used to completely misrepresent the student’s contribution. One sure-fire marker, however, is narrative that includes the phrase “As an AI model … ” 😂 from a straight copy-paste. Another obvious marker is fake references or “hallucinations” in students’ literature reviews. It takes literally 15 seconds for us to pop the content or reference into Google to verify that, oh, yeah, that’s definitely made-up. Ironically, the students end up being the ones who are tricked – the realistic-sounding output of so-called “deep research” models fools them into submitting it without the verification and revision that would have turned it into a legitimate submission.

Students are probably using GenAI where it seems simpler not to. Each semester, a TA will show me examples of strongly suspected AI use on a low-stakes, 1-2 point assignment. At best, an AI is only capable of B- to C-quality work in my courses. It would have taken less time and effort for the student to dash off a mediocre-to-bad submission themselves than to call up the AI model, craft the prompt, generate the output, copy it, and paste it into the submission box – only to end up with the mediocre-to-bad grade anyway.

Some students take short-cuts with AI that cheat them out of learning. This is the “wrong AI” use that concerns me the most. Chatbots can be helpful study buddies or final checkers for a deliverable, but they are not able to make nuanced judgments about students’ understanding. They are usually designed to stay positive, not to push back on vague answers as a human teacher or class peer would. Nor will they ever have complete information about the course. As my doctoral student Sarah Tabassum notes, the real loss is the omission of the learning process itself – the critical reading and reflection that build lasting knowledge. Accepting AI outputs at face value means sacrificing the cognitive effort that the assignment was designed to develop.

AI misconduct seems to look like non-AI misconduct. When confronted about incorrect or outright forbidden AI use, students say something along the lines of, “I ran out of time and had to use the AI to do the work.” This is consistent with what research has found to be a chief driver of other forms of academic misconduct, such as plagiarism: poor time management leading to last-minute deadline pressure. Other reasons that students have given for misconduct in prior research also seem to apply here, such as feeling overwhelmed by the workload, lacking confidence that they can do a good job on their own, and not understanding what constitutes misconduct vs. good citation or intellectual practices. Nobody can yet say for sure what the best practices are for AI in professional work, either, which at times leaves both students and faculty unsure of the right course of action.

Faculty and Student Use of AI are Not Equivalent

I have shared all of the above with my colleagues and my students, and I have even asked AI models for advice. Students have responded that they see faculty sharing course materials entirely generated by AI, and suspect that faculty are using AI for grading, so why shouldn’t they also outsource their work to AI? But this argument rests on a false equivalence between the nature of faculty work and the purpose of student assignments. Education is fundamentally a personal, internal process, and over-reliance on AI robs the student of it. Faculty are expert guides who have already mastered the knowledge and skills, similar to golf pros who help you to improve your swing (credit goes to Kurt Vonnegut for the original analogy). Even my colleagues who may be using AI tools to generate materials whole-cloth are still using their professional judgment to brainstorm ideas for those materials, to curate the outputs, and to validate them before assigning them in their classes.

As for using AI in grading … my mind is unsettled on this point. I don’t use it, and neither do my TAs, but this may be just as much due to the impracticality and lack of accuracy in today’s models as to our human-centered values. That, however, feels like a different blog post.

Five (!) new papers published in the first half of 2025

I’m a fan of the African (maybe?) proverb: “If you want to go fast, go alone. If you want to go far, go together.” In research, collaboration – bringing together different perspectives and shared resources – is the special sauce that can enable long-term success.

This year has yielded a number of high-quality manuscripts from my Security and Privacy Experiences (SPEX) group and from external collaborations. I have co-authored five new papers that have been accepted for publication.

Work from SPEX Students

  • Sarah Tabassum, Nishka Mathew, and Cori Faklaris. “Privacy on the Move: Understanding Educational Migrants’ Social Media Practices through the Lens of Communication Privacy Management Theory.” In Proceedings of the ACM Journal on Computing and Sustainable Societies (COMPASS 2025) and associated conference, July 22-25, 2025, in Toronto, Canada. Association for Computing Machinery, New York, NY, USA. [Preprint]

This paper is the result of Sarah’s pre-dissertation work to identify socio-technical gaps for a key U.S. higher-ed population – educational migrants. Drawing on 40 interviews with international students from 14 countries, we introduce the concept of “triple presence” to describe migrants’ simultaneous engagement with their home country, host society, and diaspora communities. Using Communication Privacy Management (CPM) theory, the study reveals that privacy concerns shift across three migration stages—pre-migration, transition and arrival, and post-migration—highlighting increased vulnerability during transition and complex privacy negotiations post-migration. Migrants adopt strategies like platform segmentation, encrypted communication, and strategic disconnection to manage privacy turbulence caused by scams, surveillance, and cultural differences. Next step: Sarah is planning a participatory design study to probe how newer AI affordances may be useful in designing culturally responsive privacy tools and platform-level interventions.

  • Narges Zare, Cori Faklaris, Sarah Tabassum, and Heather Lipford. “Improving Mobile Security with Visual Trust Indicators for Smishing Detection.” In Proceedings of the IEEE 6th Annual World AI IoT Congress (AIIoT 2025), May 28-30, in Seattle, WA, USA. Institute of Electrical and Electronics Engineers, New York, NY, USA. [Preprint]

Since beginning her PhD in 2023, Narges has been studying how to counter the rise in mobile threats from smishing (SMS phishing). In this paper, we explore how visual trust indicators can empower mobile users to better detect these fraudulent messages. Through a user-centered design and evaluation process involving 30 participants, the study tested intuitive, color-coded icons—such as green checkmarks for legitimacy, yellow exclamation marks for caution, and red crosses for fraud—within realistic mobile messaging prototypes. Participants favored familiar, non-verbal icons for quick recognition, while tooltips offering clear, actionable guidance (like “report spam”) enhanced confidence, especially for ambiguous messages. The findings underscore the importance of accessible, customizable, and culturally sensitive design in mobile security interfaces. Next step: Narges is planning an online experiment to test hypotheses derived from this paper about which indicators are likely to perform the best.

Work with Collaborators

  • Rajatsubhra Chakraborty, Xujun Che, Depeng Xu, Cori Faklaris, Xi Niu, and Shuhan Yuan. “BiasMap: Can Cross-Attention Uncover Hidden Social Biases?” In Proceedings of the CVPR 2025 Demographic Diversity in Computer Vision Workshop (CVPR 2025 DemoDiv), June 11, 2025, in Nashville, TN, USA. IEEE Computer Society and The Computer Vision Foundation, Ithaca, NY, USA, 10 pages. [Preprint]

It has been a delight to work with Raj and with Depeng (Raj’s main PhD advisor and a UNC Charlotte faculty colleague) on tackling mitigations for biased AI-generated imagery. This paper introduces a novel framework for detecting latent biases in text-to-image diffusion models like Stable Diffusion. Unlike traditional fairness audits that focus on output demographics, BiasMap uses cross-attention attribution maps to reveal how demographic attributes (e.g., gender, race) become spatially entangled with semantic concepts (e.g., professions) during image generation. The findings show that biases originate early in the model’s U-Net architecture and persist through the generation process, highlighting the limitations of current debiasing methods. We hope that this work will pave the way for more equitable generative AI systems.
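To give a flavor of the general idea behind attention attribution (a generic, hypothetical sketch in Python, not BiasMap’s actual method or code), one could compare the spatial cross-attention map for a demographic token with the map for a concept token and score how much they overlap. The function name, the IoU-style score, and the random stand-in maps below are all illustrative assumptions.

```python
import numpy as np

def attention_overlap(attn_demographic, attn_concept, threshold=0.5):
    """Score spatial entanglement between two cross-attention maps.

    attn_demographic, attn_concept: 2D arrays (H x W) of attention weights,
    e.g., averaged over heads and diffusion steps for a demographic token
    and a profession token. Higher overlap suggests the two tokens attend
    to the same image regions (spatial entanglement).
    """
    # Normalize each map to [0, 1] so the threshold is comparable
    a = (attn_demographic - attn_demographic.min()) / (np.ptp(attn_demographic) + 1e-8)
    b = (attn_concept - attn_concept.min()) / (np.ptp(attn_concept) + 1e-8)
    # Binarize and compute an IoU-style overlap score
    mask_a, mask_b = a > threshold, b > threshold
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return intersection / union if union > 0 else 0.0

# Toy example with random maps standing in for real attention outputs
rng = np.random.default_rng(0)
demo_map = rng.random((64, 64))
concept_map = rng.random((64, 64))
print(f"Overlap score: {attention_overlap(demo_map, concept_map):.3f}")
```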

  • Noga Gercsak. “Enhancing Cybersecurity in DER-Based Smart Grids with Blockchain and Differential Privacy.” In Proceedings of the IEEE 6th Annual World AI IoT Congress (AIIoT 2025), May 28-30, in Seattle, WA, USA. Institute of Electrical and Electronics Engineers, New York, NY, USA. [Preprint]

Confession: I did not expect Noga – a student at David W. Butler High School in Matthews, NC – to get as far as she did in realizing this research vision! Noga followed up on an interest of mine to respond to the growing cybersecurity threats facing distributed energy resources (DERs) in smart grids. (DER examples: electric vehicle charging stations; smart thermostats and other home networked devices; arrays of solar panels connected to the larger electric grid.) Her paper proposes a novel framework that integrates blockchain technology and differential privacy to enhance system resilience, scalability, and data protection. The framework employs a lightweight blockchain for secure, tamper-proof communication and dynamic certificate management, while differential privacy adds noise to sensitive data to preserve anonymity without sacrificing utility. Through simulations involving certificate issuance, replay attacks, spoofing, and DDoS scenarios, the system demonstrated robust performance—achieving block creation times averaging 0.85 seconds and attack recovery in under 40 microseconds. The results show that this hybrid approach not only withstands cyberattacks but also maintains high efficiency and privacy, offering a promising path forward for securing DER-based smart grids in real-world deployments. (Earlier this year, Noga won the North Carolina engineering competition for the Junior Humanities and Science Symposium with her presentation of this work.)
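For readers new to differential privacy, here is a minimal, textbook-style sketch in Python of the Laplace mechanism that a framework like this could use to add noise to a numeric reading. It is not the paper’s implementation; the sensitivity and epsilon values are illustrative assumptions.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Return a differentially private version of a numeric value.

    Noise is drawn from Laplace(0, sensitivity / epsilon); a smaller epsilon
    means stronger privacy but a noisier output.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale)

# Example: protect a single smart-meter reading (kWh) before sharing it.
# The sensitivity of 1.0 and epsilon of 0.5 are illustrative choices only.
reading_kwh = 12.7
private_reading = laplace_mechanism(reading_kwh, sensitivity=1.0, epsilon=0.5)
print(f"Original: {reading_kwh:.2f} kWh, DP-protected: {private_reading:.2f} kWh")
```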

  • Jacob Hopkins, Carlos Rubio Medrano, and Cori Faklaris. “The Price Should Be Right: Exploring User Perspectives on Data Sharing Negotiations.” In Proceedings of the Fifteenth Usable Security and Privacy Symposium (USEC 2025), Feb. 24, 2025, in San Diego, CA, USA. Internet Society, Reston, VA, and Geneva, Switzerland. [Preprint]

Jacob’s PhD work focuses on how to rebalance the power dynamics in voluntary data-sharing events, such as when a bouncer asks for proof of your age at the bar door. He, his faculty advisor at Texas A&M-Corpus Christi, Carlos Rubio Medrano, and I aim to empower individuals—data subjects—by enabling them to negotiate what personal data is shared and how it is used, rather than passively accepting opaque terms set by data requesters. Jacob envisions a multi-track user study, involving both data subjects and data requesters, to explore what data people are willing to share, under what conditions, and what controls both parties need to feel secure and informed. The study will inform the design of a future privacy negotiation framework that supports manual, automated, and semi-automated negotiations, with the goal of increasing transparency, minimizing privacy risks, and ensuring usability for a wide range of users. I love how his vision lays the groundwork for privacy-enhancing technologies that treat data exchange as a fair and informed negotiation—not a one-sided transaction.

‘What Drives SMiShing Susceptibility? A U.S. Interview Study of How and Why Mobile Phone Users Judge Text Messages to be Real or Fake’ – paper at SOUPS 2024

My PhD student Sarah Tabassum is here with me at the Symposium on Usable Privacy and Security in Philadelphia, PA, USA, presenting our paper during Tuesday’s Mobile Security block: “What Drives SMiShing Susceptibility? A U.S. Interview Study of How and Why Mobile Phone Users Judge Text Messages to be Real or Fake.”

For this study, we interviewed 29 people (half students, half outside of campus) about how they make sense of the flood of strange messages received on their phones. Texts with links were commonly seen as “fake” (bad news for the political campaign trying to advertise a pre-primary rally!).

As an Apple user, I was surprised/pleased that Android owners get interface warnings of possible spam or scam texts (see pic). However, there’s no way to report these messages as smishing. (iPhone has a “Report Junk” option, but no dedicated “Report Smish” button either.)

[Image: Screenshot of an Android phone showing the “Why This Looks Like Spam” notification for a text message claiming to be from Chase bank.]

Our SPEX Lab group is now thinking about how to better support mobile users in making sense of these messages and learning how to spot the scam SMS-type texts (“smishing” = SMS + phishing).

Something to know – scammers now often will not send a fake link in the 1st text. Instead, they “soft sell”, building trust with a series of messages. Once you reply, THEN they text the link to steal your credentials – or call you, claiming to be a security team investigating the text!

  • Sarah Tabassum, Cori Faklaris, and Heather Richter Lipford. 2024. “What Drives SMiShing Susceptibility? A U.S. Interview Study of How and Why Mobile Phone Users Judge Text Messages to be Real or Fake.” In Proceedings of the 20th Symposium on Usable Privacy and Security (SOUPS 2024). Retrieved June 25, 2024 from https://www.usenix.org/conference/soups2024/presentation/tabassum-sarah