Walking the Talk: Why I’m Disclosing My AI Use to My Students

We spend a lot of time talking about how students should (and shouldn’t) use AI. We debate academic integrity, we draft policies, and we ask for disclosures. But there is a quieter, more controversial conversation happening in corridors and faculty meetings: How are we using it?

The reality of the modern faculty workload is intense. Among research, service, and teaching, the prep work (drafting quiz items, polishing slide decks, and organizing materials) and other day-to-day management tasks (monitoring CMS engagement, recording attendance, answering student emails, proctoring make-up work, updating documents, coordinating submissions, and consulting with the TAs on grades) can eat up the very hours we should be spending on deep mentorship and high-level instruction.

That’s why this semester, I’ve decided not just to amp up my use of AI to help me work smarter; I’m also telling my students exactly how I’m doing it.

I recently added this disclosure to my course management policy for ITIS 4360 / 5360: Human-Centered Artificial Intelligence, based on suggested language from UNC Charlotte Student Affairs:

Dr. Faklaris often uses AI tools to assist with tasks such as generating ideas, checking grammar, writing alternative quiz items, drafting slide content and in-class activities, identifying research papers, and organizing materials. The purpose is to support efficiency, not to replace her judgment or expertise. All content has been reviewed and adapted to ensure it aligns with the objectives of this course. We disclose this so you understand that AI can be a helpful resource when used responsibly and critically.

What this adds to my existing AI policy language for the syllabus:

1. Modeling Responsible Use: If we want students to be “Human-in-the-loop” practitioners, we have to show them what that looks like. By disclosing that I use AI for a first draft of quiz items or to brainstorm an in-class activity, I’m showing them that AI is a tool for augmentation, not a replacement for expertise.

2. Bridging the Trust Gap: Students are often nervous that faculty are “policing” AI while secretly using it themselves. By being upfront, I’m creating a culture of integrity that works both ways. If I expect them to adhere to best practices, I should be willing to do the same.

3. Focusing on What Matters: Using AI to help organize a bibliography or check the grammar on a slide doesn’t make me a less capable professor. It makes me a more available one. It gives me the “imaginative capacity” (to borrow a theme from our upcoming AI Summit!) to focus on the human elements of teaching that no LLM can replicate.

The Bottom Line: AI policy in the classroom isn’t just about catching cheaters. It’s about rethinking how we work. I strongly believe that AI should serve human ends. For me, that means using technology to be a more present, prepared, and transparent educator.

From Anxiety to Agency: New Research on Human-Centered Security at CHI 2026 and USEC 2026

I’m delighted to share this news — two of my latest papers with key collaborators have been conditionally accepted on their first try, and at very competitive venues! While these papers cover different topics (one focusing on the psychology of anxiety and the other on the unique safety needs of international students), they share a common goal: making digital security more inclusive, less stressful, and deeply grounded in the human experience.

Measuring Invisible Stress: The Cybersecurity Anxiety Scale (CybAS)

Conditionally accepted for CHI 2026 (Barcelona, Spain)

For years, we’ve known that users feel fatigued and concerned by the drumbeat of cybersecurity and privacy threats. However, we have lacked a validated way to measure the specific, persistent worry that comes with navigating these threats. We call this emotional state Cybersecurity Anxiety.

Led by Peter Mayer and first-authored by Nikolaj Dall and Hanno Gustav Hagge, our paper, “From Fear to Control: Developing a Three-Factor Scale for Cybersecurity Anxiety (CybAS),” introduces a new 15-item psychometric tool. Through several rounds of survey studies, we identified three core factors that define this anxiety:

  • Present: Immediate concerns and stress during security tasks.
  • Future: The “what-if” worry about anticipated threats.
  • Control: The feeling (or lack thereof) that one has the agency to stay safe.

Why it matters: By using CybAS, researchers and designers can better diagnose why a security tool might be failing. If a system makes a user feel helpless, they are more likely to disengage. CybAS allows us to build “anxiety-aware” security systems that empower users rather than scare them.

Hypothesized diagnostic categories based on CybAS subscale score combinations. More information is available in the finalized paper, including CybAS item wordings and directions for using it.
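Since the item wordings and scoring directions are reserved for the finalized paper, here is only a minimal sketch, in Python, of how a three-factor scale like this is typically scored: the 15 item responses are grouped by factor and averaged into subscale scores, whose combination can then be read diagnostically. Everything concrete in this sketch (the item-to-factor grouping, the 1-5 Likert range, the score_cybas name, and the example profile) is an illustrative assumption, not the published CybAS scoring.

```python
# A minimal, purely illustrative sketch of scoring a three-factor anxiety
# scale like CybAS. The item-to-factor grouping, 1-5 Likert range, and the
# interpretation below are assumptions; the actual item wordings and
# scoring directions are in the finalized paper.

SUBSCALES = {
    "present": range(0, 5),    # immediate stress during security tasks
    "future": range(5, 10),    # anticipatory "what-if" worry
    "control": range(10, 15),  # perceived agency to stay safe (higher = more agency here)
}

def score_cybas(responses: list[int]) -> dict[str, float]:
    """Average one respondent's 15 item responses into three subscale scores."""
    if len(responses) != 15:
        raise ValueError("expected 15 item responses")
    return {factor: sum(responses[i] for i in items) / len(items)
            for factor, items in SUBSCALES.items()}

# Made-up respondent: high worry now and about the future, low sense of control.
example = [4, 5, 3, 4, 4,   4, 4, 3, 4, 5,   1, 2, 2, 1, 2]
print(score_cybas(example))  # {'present': 4.0, 'future': 4.0, 'control': 1.6}
# A profile like this (high worry plus low perceived control) is the kind of
# combination a diagnostic category scheme might flag as at risk of disengaging.
```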

Designing Safety Tools for, and with, International Students

To be presented at USEC 2026 (San Diego, CA)

When students from the Global South move to the U.S. to study, they don’t just face a new culture; they face a new and often treacherous digital ecosystem. These educational migrants are frequently targeted by cross-channel scams (SMS, phone calls, and emails) that exploit their unfamiliarity with local institutions.

As described in “From Scam to Safety: Participatory Design of Digital Privacy and Security Tools with International Students from Global South,” lead author Sarah Tabassum conducted participatory design sessions with 22 students to imagine better safety solutions, using AI capabilities as a design material.

From this data, we identified several must-have features that current tools lack:

  • University Integration: Students trust their schools. By embedding safety support into university platforms, we can provide a trusted safety net.
  • Cross-channel filtering: Moving beyond just email spam to filter SMS and voice scams.
  • Contextual explanations: Instead of just saying “this is a scam,” tools should explain why based on the cultural cues the student might be missing.

Why it matters: This work reminds us that security is not a one-size-fits-all solution. For these users, it must account for the situational vulnerabilities of those moving across borders.


Orientation challenges experienced by educational migrants (Point 1) and the four migrant-centered security features they identified as necessary for safer digital navigation (Points 2–5). See the paper for more details and participants’ sketches and choices of AI capabilities for their tools.

Looking Ahead to 2026

These two papers exemplify the types of work I wanted to conduct when I founded my SPEX research group at UNC Charlotte. Together, and with our external collaborators, we are creating new knowledge of how to make people feel safer and more capable in a complex digital world.

I am incredibly proud of student authors Nikolaj, Hanno, Sarah and her co-author Narges Zare, and my other collaborators and lab mates for their hard work. If you are attending USEC in February or CHI in April, please come say hello! We look forward to sharing our full findings and connecting with fellow researchers who are passionate about human-centered security and privacy.

Evolution of My AI Syllabus Policy and Experience

I field a lot of questions from people outside academia about what I think of Generative AI (GenAI or just “AI”) in education. I am fortunate to teach upper-level electives for undergraduates and graduate students. Because I am not teaching foundational courses, I can allow the use of any AI tool, consistent with the use of other productivity and creativity tools. In fact, I encourage this. I want students to try things with AI and figure out what works well and what doesn’t. I recognize that, while they need to learn college-level skills and establish their intellectual credentials, their job prospects and career success will depend on knowing the proper uses of GenAI for productivity. Learning from mistakes now will pay off later.

AI Syllabus Policy 3.0

My syllabus policy has evolved from the first version and a later tweak. For Fall 2025, it reads:

In this course, students are encouraged to use tools such as Copilot, Gemini, and Notebook LM (sign in with UNC Charlotte email) that are university-licensed and data-protected, per the “green” status on the AI Software Guidance webpage. No student may submit an assignment as their own that is entirely generated by means of an AI tool. If students use an AI tool or other creative tool, they must follow best practices by (a) crediting the tool for (b) its part in the work. Students are responsible for identifying and removing any factual errors, biases, and/or fake references. Ask the instructor for advice about your use of a particular AI model not listed here.

The major change in wording is to recognize and encourage the use of UNC Charlotte’s now-considerable investments in AI software. We do not want students (or faculty, or staff) to be forced to share their educational data or intellectual property, or to go out of pocket beyond what they’ve already paid, to benefit from Generative AI. I want to reinforce this message by emphasizing and linking to the provided AI services. After all, students are already paying for them through tuition and fees, and through their taxes. They should at least consider using them. And if they exercise their freedom of choice not to, I certainly have my opinions about which other models or services are worth using.

I kept the sentence about student responsibilities for falsehoods, biases, and hallucinations in outputs. This reinforces what I teach students about AI – that it is an assistant, not a replacement, and as such, you must check over all its work, just as if you had hired a human intern. I also kept the requirement to cite the AI tool and specify which part of the work it contributed to, and I am glad to see that more students are doing just that. Finally, I boldface the statement about what they CANNOT use the AI tool for in their schoolwork. I encourage students to consult with me if they are unsure whether their use of AI is improper.

Of course, some students do go wrong with AI – and what else is new? Cheating and shortcuts have always been with us. Most of the time, I’m more baffled by this than anything, but then, poor student practices often seem like the behavior of a foreign species.

Where Students Go Wrong with AI

Misuse can be easy to spot, even when it is hard to prove. Most of the time, there is no way to demonstrate that GenAI was used to completely misrepresent the student’s contribution. One sure-fire marker, however, is narrative that includes the phrase “As an AI model … ” 😂 from a straight copy-paste. Another obvious marker is fake references, or “hallucinations,” in students’ literature reviews. It takes literally 15 seconds for us to pop the content or reference into Google to verify that, oh, yeah, that’s definitely made up. Ironically, the students end up being the ones who are tricked – the realistic-sounding output of so-called “deep research” models fools them into submitting it without the verification and revision that would have turned it into a legitimate submission.

Students are probably using GenAI even where it would be simpler not to. Each semester, a TA will show me examples of strongly suspected AI use on a low-stakes, 1- to 2-point assignment. At best, an AI is only capable of B- to C-quality work in my courses. It would have taken less time and effort for the student to dash off a mediocre-to-bad-quality submission themselves than to call up the AI model, craft the prompt, run the program, copy the output, and paste it into the submission box, only to end up with the mediocre-to-bad grade anyway.

Some students take shortcuts with AI that cheat them out of learning. This is the “wrong AI” use that concerns me the most. Chatbots can be helpful study buddies or final checkers for a deliverable, but they are not able to make nuanced judgments about students’ understanding. They are usually designed to stay positive, not to push back on vague answers as a human teacher or classmate would. Nor will they ever have complete information about the course. As my doctoral student Sarah Tabassum notes, the real loss is the omission of the learning process itself – the critical reading and reflection that build lasting knowledge. Accepting AI outputs at face value means sacrificing the cognitive effort that the assignment was designed to develop.

AI misconduct looks a lot like non-AI misconduct. When confronted about incorrect or outright forbidden AI use, students say something along the lines of, “I ran out of time and had to use the AI to do the work.” This is consistent with what research has found to be a chief driver of other forms of academic misconduct, such as plagiarism: poor time management leading to last-minute deadline pressure. Other reasons that students have given for misconduct in prior research also seem to apply here, such as feeling overwhelmed by the workload, lacking confidence that they can do a good job on their own, and not understanding what distinguishes misconduct from good citation and intellectual practices. Nobody can yet say for sure what the best practices are for AI in professional work, either, which at times leaves both students and faculty unsure of the right course of action.

Faculty and Student Use of AI are Not Equivalent

I have shared all of the above with my colleagues and my students, and I have even asked AI models for advice. Students have responded that they see faculty sharing course materials entirely generated by AI, and they suspect faculty are using AI for grading, so why shouldn’t they also outsource their work to AI? But this argument rests on a false equivalence between the nature of faculty work and the purpose of student assignments. Education is fundamentally a personal, internal process, which AI reliance robs the student of. Faculty are expert guides who have already mastered the knowledge and skills, similar to golf pros who help you improve your swing (credit goes to Kurt Vonnegut for the original analogy). Even my colleagues who may be using AI tools to generate materials whole-cloth are still using their professional judgment to brainstorm ideas for those materials, to curate the outputs, and to validate them before assigning them in their classes.

As for using AI in grading … my mind is unsettled on this point. I don’t use it, and neither do my TAs, but this may be just as much due to the impracticality and lack of accuracy in today’s models as to our human-centered values. That, however, feels like a different blog post.