Walking the Fine Line: Leveraging AI in Education Through the Lens of Virtue Ethics

What lines should we not cross when it comes to leveraging AI in education?

This is both a question I've been asked many times and a question I've asked myself many times. For a long time, I patiently waited for institutional guidance to answer it. However, more recently I've turned to philosophy—more specifically, virtue ethics—for direction. 

We can wait for others to provide ethical guidance on AI use, and some guidance may come. Certain deontologies—mandated rules—may emerge; however, reflecting on our own moral compasses may offer a more meaningful course.

Consider this Case Study (Disclosure: I ChatGPTed it for expedience):

A longtime student requests a reference letter. You consider using AI to draft it but must balance efficiency with ethical concerns about authenticity and confidentiality. How should you proceed?

Being an educator involves being marooned in a perpetual state of busyness: classes to teach, papers to grade, meetings to attend, and reference letters to write. So, I empathize with those who view AI as a justified shortcut to lighten their overwhelming load. But where and when is using AI ethically justified?

Here are three virtue ethics tests that can potentially help teachers gauge whether the use of AI is ethically permissible in any given situation: the Publicity Test, the Transparency Test, and the Golden Rule Test.

The Publicity Test

The Publicity Test asks whether we'd be comfortable with a decision being public knowledge—a litmus test for ethical decision-making. I've put an educational twist on this by calling it the "Stakeholders Test." Would I use AI to write that student's reference letter if I had to disclose the use to the student, their parents, and my colleagues? If my answer is "no," then I shouldn't leverage AI in this way. By aligning our actions with this test, we act in accordance with our own moral standards rather than outsourcing our judgment to external ethical frameworks.

The Transparency Test

When conducting the Transparency Test, we must ask ourselves: Can I explain my use of AI to others? If I find myself struggling to articulate my reasoning or feeling evasive about explaining why I used AI to write the reference letter, then I probably shouldn't have done so. Regarding reference letter writing, I would feel comfortable explaining why I leveraged AI as an editing tool, but I'm less sure I could justify having ChatGPT generate the letter in my stead. As an aside, being transparent with students about why and how I use AI in my own life has sparked many great conversations.

The Golden Rule Test

The Golden Rule, with biblical roots and widely recognized even in kindergarten classrooms, advises: "Do unto others as you would have them do unto you." In practical terms, don't use AI in ways you wouldn't feel comfortable with AI being used for you. The Golden Rule's brilliance lies in its simplicity. Would you appreciate your administrator writing your reference letter with AI? Some teachers might be wholly okay with this, while others may not be. Ultimately, you'll have to draw your own line in the sand.

Final Thoughts

So, where’s the line when it comes to using AI? While there may be some institutional guidance, the answer is ultimately a personal one. In your heart of hearts, what feels right in your teaching practice? What feels right may evolve over time. I myself have leveraged AI in ways I never intended when the technology first burst onto the scene. But I still have many lines that I won't yet cross. Only time will tell how those lines may shift in the months and years to come.

AI Disclosure: I used ChatGPT to create the AI use scenario above, and I also used it to edit for punctuation and clarity. I also used Canva to generate the comic.
