
Are LLMs the new calculators?

Published on July 18, 2025

Introduction

Am I following in previous generations' footsteps when I say "I don't think grade school students should be using language models for their work"? On one hand, language models dilute the uniqueness and quality of students' work; on the other, tools like Grammarly, SparkNotes, and WolframAlpha received the same reaction in their day. Today, LLMs are pervasive in everyday life. Most Google searches now return an AI overview. Tech companies are competing to see who can shove LLMs into more of their features. Whether you like it or not, you are surrounded by language models. But is it as bad as everyone says? Yes and no.

My credentials: avid user of LLMs

No, it's not.

When calculators first entered the classroom, they caused an uproar. Parents and teachers feared students would cheat or lose basic skills, and failed to see how much more students could accomplish with the new tools.

If used the right way, LLMs offer a similar benefit. Students can personalize how they learn by asking all the questions they want answered. Many teachers say "there are no stupid questions," but of course being surrounded by all your peers doesn't make for the most "unpressurized" environment. Students can now ask ChatGPT "Did an apple really fall on Newton's head?" without the fear of being laughed at in class. Additionally, students can tailor their prompts so that language models answer their questions in a voice of their choosing (I recently asked Gemini to "Explain Vector-Quantized Variational Auto Encoders to me as if I'm in high school.").

Of course this requires a specific setting for the use of language models. Students shouldn't just be tossing a PDF of their homework into ChatGPT.

Yes, it is.

Language models have the potential to short-circuit the learning processes grade schools aim to cultivate. Modern transformer models allow for the instant generation of essays on any topic. Students are meant to wrestle with ideas, organize thoughts, and struggle on these essays to truly learn. As someone who did not perform well in many English classes, I would know. Over-reliance on LLMs risks turning students into passive recipients of information rather than active constructors of knowledge.

Other issues also arise from the use of these LLMs. Many teachers now focus on detecting whether students used a language model rather than on grading the actual quality of the work. Students who do use language models have unintentionally cast suspicion on every other student writing their work from scratch. That being said, LLMs and their usage in the classroom are still relatively new, so I guess we'll see where things go!

Conclusion

Maybe the overuse of ChatGPT causes brain atrophy [1]. Maybe most technologies we use cause brain atrophy [2]. Students might become adept at "prompt engineering" but less skilled at critical thinking, source evaluation, or developing a unique voice. Maybe critical thinking and the whole package won't be necessary anymore. Maybe prompt engineering is the future. At least that's what Google thinks, according to the tasks it's assigning its engineers...

  • [1] León-Domínguez, U. (2024). Potential Cognitive Risks of Generative Transformer-Based AI Chatbots on Higher Order Executive Functions. Neuropsychology, 38(4), 293-308. https://doi.org/10.1037/neu0000948
  • [2] Dahmani, L., & Bohbot, V. D. (2020). Habitual use of GPS negatively impacts spatial memory during self-guided navigation. Scientific Reports, 10, 6310.