Walking the Talk: Why I’m Disclosing My AI Use to My Students

We spend a lot of time talking about how students should (and shouldn’t) use AI. We debate academic integrity, we draft policies, and we ask for disclosures. But there is a quieter, more controversial conversation happening in the corridors and faculty meetings: How are we using it?

The reality of the modern faculty workload is intense. Between research, service, and teaching, the prep work (drafting quiz items, polishing slide decks, and organizing materials) and the day-to-day management tasks (monitoring CMS engagement, recording attendance, answering student emails, proctoring make-up work, updating documents, coordinating submissions, and consulting with the TAs on grades) can eat up the very hours we should be spending on deep mentorship and high-level instruction.

That’s why this semester, I’ve decided not just to amp up my use of AI to help me work smarter; I’m also telling my students exactly how I’m doing it.

I recently added this disclosure to my course management policy for ITIS 4360 / 5360: Human-Centered Artificial Intelligence, based on suggested language from UNC Charlotte Student Affairs:

Dr. Faklaris often uses AI tools to assist with tasks such as generating ideas, checking grammar, writing alternative quiz items, drafting slide content and in-class activities, identifying research papers, and organizing materials. The purpose is to support efficiency, not to replace her judgment or expertise. All content has been reviewed and adapted to ensure it aligns with the objectives of this course. We disclose this so you understand that AI can be a helpful resource when used responsibly and critically.

Here is what this adds to my existing AI policy language for the syllabus:

1. Modeling Responsible Use: If we want students to be “human-in-the-loop” practitioners, we have to show them what that looks like. By disclosing that I use AI for a first draft of quiz items or to brainstorm an in-class activity, I’m showing them that AI is a tool for augmentation, not a replacement for expertise.

2. Bridging the Trust Gap: Students are often nervous that faculty are “policing” AI while secretly using it themselves. By being upfront, I’m creating a culture of integrity that works both ways. If I expect them to adhere to best practices, I should be willing to do the same.

3. Focusing on What Matters: Using AI to help organize a bibliography or check the grammar on a slide doesn’t make me a less capable professor. It makes me a more available one. It gives me the “imaginative capacity” (to borrow a theme from our upcoming AI Summit!) to focus on the human elements of teaching that no LLM can replicate.

The Bottom Line: AI policy in the classroom isn’t just about catching cheaters. It’s about rethinking how we work. I strongly believe that AI should serve human ends. For me, that means using technology to be a more present, prepared, and transparent educator.