AI, Education, and Why I'm Cautiously Optimistic
Jun 15, 2025

Four years ago, I was introduced to the world of plagiarism detection. It was a new challenge for me, but not for Google Classroom, which had released the tool a few years earlier. The complexity behind it was immense, and its application changed my perspective on the utility of assisted grading tools. At the time (and still even now), everything revolved around the content you had access to. How can you verify that something is not-so-original with a limited corpus? We introduced new ways to tackle this problem, specifically through intra-domain detection against past student essays. This style of plagiarism detection is almost analogous to the challenges we’re seeing today. Is it possible to detect if a sample of writing was created by AI? Is it actually plagiarism if it was? These are just a couple of the fun questions that began to stir the pot.
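To give a feel for what intra-domain detection can look like in spirit, here’s a minimal sketch that compares a new submission against a school’s own corpus of past essays using TF-IDF cosine similarity. This is purely illustrative: the function name, threshold, and sample data are my own assumptions, not how Classroom’s tool actually works.

```python
# Illustrative only: a toy intra-domain check that flags a new essay when it is
# unusually similar to essays previously submitted in the same school or course.
# The threshold and sample corpus are made up; real systems are far more involved.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def flag_similar_essays(new_essay: str, past_essays: list[str], threshold: float = 0.8):
    """Return (index, similarity) pairs for past essays at or above the threshold."""
    vectorizer = TfidfVectorizer(stop_words="english")
    corpus_matrix = vectorizer.fit_transform(past_essays)  # one row per past essay
    new_vector = vectorizer.transform([new_essay])
    similarities = cosine_similarity(new_vector, corpus_matrix)[0]
    return [(i, round(float(s), 2)) for i, s in enumerate(similarities) if s >= threshold]

past = [
    "The Great Gatsby critiques the American Dream through Gatsby's rise and fall.",
    "Photosynthesis converts light energy into chemical energy stored in glucose.",
]
submission = "Fitzgerald critiques the American Dream through Gatsby's rise and fall."
print(flag_similar_essays(submission, past, threshold=0.5))
```

The interesting part isn’t the math; it’s that the corpus is local. The comparison set is the work your own students have already turned in, which is exactly why a limited corpus was the hard constraint.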
My interest in large language models (LLMs) was really piqued when their quality exploded back in 2021. Around that time, I started having conversations with professors and students about potential use cases and concerns with AI. COVID had left a lot of teachers unprepared for fully online learning environments, and I could see AI catching them off guard in a similar way. Over time, my professors and I discussed the potential for plagiarized work and AI’s influence on papers. What I didn’t fully understand was how quickly these problems would evolve. The conversations quickly shifted from a lack of effort in essays to substanceless essays riddled with incorrect information. Areas that students thought were small knowledge gaps turned out to be large enough for issues like these to slip through. The temptation to just complete an assignment was too great.
And this wasn’t limited to students. College admins were also caught making the exact same mistakes. I’ve had many professors gossip about how speeches and emails had no substance and contained incorrect information. It turned out that no one wanted to struggle to complete their tasks.
These conversations and stories are what grew my sense of unease. And every day, a new tool was being deployed with a questionable level of quality. I knew that my job in plagiarism detection wasn’t going to be the only thing to become more challenging. I worried (and still worry) about the learning outcomes for students who are introduced to these tools at the start of their education journey. I worried about the false sense of understanding these tools create, sitting where real knowledge should be. Was this becoming a problem that went beyond academic integrity and policing plagiarism?
In an attempt to better understand the problem, I started attending more of Classroom’s school visits and teacher studies. My prediction was right: teachers were caught off guard. It might be easy to catch AI-generated content now, but what about one or two years from then? Every single teacher I spoke to had a fallback method: know your student. Does my student actually write like this? Did they copy and paste everything into the Google Doc? Why is my 9th grader suddenly writing at a college level? Obviously, for an experienced teacher, it was easy to catch.
Moving into 2024, everything progressed quickly. Students and administrators became more reliant on ChatGPT, Gemini, and Claude, and their work reflected it. At the same time, the evidence became clearer: these problems aren’t new, only the technology is. Misinformation has always existed on websites, readily available via a quick Google search. Students have always shared essays with each other. Everyone has gotten incorrect information from a confident friend at some point in their lives. Still, these AI tools’ ability to combine all of those negative characteristics and make them always available was unique. And continuous improvement would only make these challenges more complex.
Despite all the concerns in these conversations, there was an undercurrent of optimism. Teachers saw these emerging tools as an opportunity to optimize the process for creating differentiated assignments and providing accessible, one-on-one tutoring.
When I first encountered differentiated learning in classrooms, it struck me how much teachers were already juggling. American University School of Education’s How Differentiated Instruction Supports All Students breaks down what teachers need to track: readiness to learn, learning preferences, prior knowledge, languages spoken, and personal interests. That’s a lot of moving pieces for one teacher managing thirty students. The thing is, teachers are already collecting mountains of data through every assignment, every question asked during class, every moment of confusion. But making sense of all that data? That’s where things get overwhelming, and where I started wondering if AI could actually help rather than complicate things.
The conversations I’ve had with teachers about AI tools for differentiated instruction have been cautiously optimistic. They see the potential for these models to spot patterns they might miss: maybe Sarah always struggles with word problems but excels at visual math, or maybe Marcus’s writing improves dramatically when assignments connect to his interests in sports. But here’s what keeps coming up in every discussion: the fear of bias creeping into these AI systems. Teachers know their students in ways that data points can’t capture, so while automated assignment creation based on student profiles sounds promising, it only works if teachers remain the final decision-makers. The goal isn’t to replace that human judgment; it’s to give teachers better tools to act on what they already know about their students.
Beyond classroom instruction, I kept hearing about another pain point: access to tutoring. Students I talk to struggle to find quality tutors who are both available and affordable. Having someone actually work with you and care about your learning requires dedication that’s hard to come by. What caught my attention about AI tutors was their promise of being there whenever students needed help. Sure, you could debate the quality of that help, but just having something available felt like progress.
Then reality started setting in through my conversations with teachers. Consistent availability means nothing if the teaching quality is terrible. Are these AI tutors actually guiding students with helpful hints, or are they just handing over answers? Do they catch the big misunderstandings that trip students up, or do they get lost in irrelevant details? Are their explanations even reliable? I started asking myself whether these AI tutors needed human supervision to be useful, or if students were better off just doing what they’ve always done: googling their questions and hoping for the best.
After all these conversations with teachers and watching how quickly everything has evolved, I keep coming back to the same realization: we’re not dealing with something entirely new here. We’ve always had tools that could either help students learn or help them avoid learning. AI just happens to be really, really good at both.
The teachers who seem to be handling this best aren’t the ones trying to ban AI or the ones letting students use it for everything. They’re the ones asking students to show their work, including their AI interactions. They’re having students fact-check and improve AI output instead of just accepting it. They’re treating AI like what it actually is: a research assistant that sometimes gets things wrong, not a replacement for thinking.
What worries me most isn’t the technology itself, but how we’re rushing to either embrace it completely or reject it entirely. Both approaches miss the point. The question isn’t whether AI will change education – it already has. The question is whether we’ll teach students to think alongside these tools, or let them outsource their thinking entirely. From where I sit, that choice is going to define what learning looks like for the next generation.