OPINION: A machine called my son a cheat. As a teacher and mom, I’m worried

My 17-year-old son is a hardworking high school senior enrolled in dual credit college courses. He balances academics with sports, practices and the normal pressures of being a teenager. So when he came home frustrated after receiving a zero on a major writing assignment, I knew something was wrong.

His professor had run his paper through an artificial intelligence detection program. The software flagged it as AI-generated. No conversation. No questions. Just a zero.

“Mom, I didn’t use AI,” he told me. “I worked really hard on that paper.”

I believed him. I know his writing voice. I’ve watched him draft essays at the kitchen table. But the system didn’t believe him. And there was nothing I could do.

As both a teacher and a mother, I understand the growing concern about artificial intelligence in education. Tools like ChatGPT are widely available, and academic integrity matters. Educators want to ensure students are submitting their own work. But relying solely on AI-detection software is not the answer.

These tools are far from perfect. Research from Stanford University has shown that AI detectors are often unreliable and disproportionately flag writing by non-native English speakers as AI-generated. Other studies have found that detection programs frequently produce false positives and false negatives. In other words, human writing can be labeled as AI, and AI writing can slip through unnoticed.

Yet some educators are treating these tools as final authorities.

In my son’s case, there was no discussion about his ideas, no opportunity to explain his process, and no acknowledgment that software can be wrong. A machine made a judgment, and that judgment stood.

As a classroom teacher with more than a decade of experience, I use AI tools myself. Platforms like MagicSchool AI, Diffit and Microsoft Copilot help me differentiate instruction, generate leveled reading materials, and support students with individualized education plans and language needs. AI has allowed me to better meet the diverse needs of my sixth-grade classroom.

But I never outsource my professional judgment.

If I suspect a student used AI inappropriately, I start with a conversation. I ask them to explain their thinking. I look at their previous writing. I give them the chance to demonstrate understanding. Because education is built on relationships, not algorithms.

AI-detection tools can be part of a broader strategy, but they should never be the sole evidence used to penalize students. Overreliance on flawed software risks harming honest students, eroding trust and creating inequitable outcomes.

My son eventually rewrote his assignment. He accepted the lower grade, but the experience left a mark. What troubled me most wasn’t the grade itself. It was the message: that a machine’s conclusion mattered more than a student’s voice.

We are at a pivotal moment in education. AI is here to stay. It can enhance learning, support teachers and create new opportunities. But it can also cause harm if used carelessly.

Educators, especially in higher education, must develop thoughtful policies for AI use and detection. Detection software should be transparent about its limitations. Students should have clear avenues to appeal decisions. And faculty should be trained to interpret results critically rather than treating them as definitive proof.

Most importantly, we must remember that teaching is relational work. We know our students. We recognize their growth. We understand their strengths and struggles.

AI may assist us, but it should never replace our judgment.

If we want to prepare students for a world shaped by artificial intelligence, we must model fairness, critical thinking and humanity in how we use it. Our students deserve nothing less.

Jaycie Homer is a sixth-grade teacher. She lives in Lovington.