Exploring what generative AI really is, what it isn’t, and what that means for teachers and learners.
Before generative AI
Let’s step back for a moment and remind ourselves that AI has been used in education for a while already. Classification and recognition AI was central to Google’s Quick, Draw! game (2016); reinforcement learning systems were demonstrated in Minecraft: Education Edition through Project Malmo (2016); and LEGO SPIKE Prime (2020) could be used alongside AI algorithms to interpret sensor information and inform actions. When it comes to teacher productivity, predictive and analytical AI has powered Knewton Alta (2018), an adaptive learning platform that predicts student performance and recommends practice based on data, and ALEKS (McGraw Hill, 1999), which delivers personalized practice based on student knowledge and performance. Many now see generative AI as the next step up, with extra power, control, and generative properties. However, is that really the case?
Generative AI and the Illusion of Understanding
Generative AI does have a kind of magic to it, one that sets it apart from the other forms. Its responses are so human-like that many people have already been fooled into thinking it is human. But dig through the research papers and developer notes, and its shortcomings reveal the truth: it cannot really understand, at least not in the way that humans can.
It seems like it can because we have fed it nearly every literary work known to humankind, the equivalent of billions of books, images, and other forms of data. With that record of how we communicate and make meaning, given context (a prompt), it can string words together in a way that mimics human speech and reads as valid communication. This does not mean it has consciousness or awareness. It is just a statistical pattern generator, albeit a really good one. Delving into the specifics is beyond the scope of this post, but for more detail, check out Hugh Brosnahan’s great article ‘ChatGPT-5 and the Limits of Machine Intelligence’. It looks into the philosophy and psychology of humankind, offering a deeper understanding of what AI is missing, and how possible (or impossible) it may be for these machines to achieve what we imagine in the realm of artificial general intelligence.
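The “statistical pattern generator” idea can be made concrete with a toy sketch. The vocabulary and counts below are invented for illustration; a real LLM works over tens of thousands of tokens with billions of learned weights, but the principle is the same: given the words so far, pick a likely next word, with no understanding involved.

```python
import random

# Toy "language model": for each two-word context, it only knows
# how often each next word followed that context in its training data.
# All of these counts are made up for illustration.
next_word_counts = {
    ("the", "cat"): {"sat": 6, "ran": 3, "slept": 1},
    ("cat", "sat"): {"on": 9, "down": 1},
    ("sat", "on"): {"the": 10},
    ("on", "the"): {"mat": 7, "sofa": 3},
}

def pick_next(context):
    """Sample the next word in proportion to how often it followed
    this context -- pure statistics, no comprehension."""
    counts = next_word_counts[context]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

def generate(start, max_new_words):
    """Repeatedly append a statistically likely next word."""
    words = list(start)
    for _ in range(max_new_words):
        context = tuple(words[-2:])
        if context not in next_word_counts:
            break  # nothing in the "training data" for this context
        words.append(pick_next(context))
    return " ".join(words)

print(generate(("the", "cat"), 4))  # e.g. "the cat sat on the mat"
```

The output can look perfectly fluent, yet the program has no idea what a cat or a mat is; scale this up enormously and you have the essence of the illusion.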
A Metaphor for AI
Anyone who has used ChatGPT, Claude, or Gemini will know that chatbot memory is helpful but not perfect. They forget, ignore, or misremember information, and while settings can be adjusted, the flaws remain. In my experience, using these tools feels like dealing with a person who has great memory (the illusion of it), great reasoning ability (the illusion of this, too), nearly all the knowledge the world has accumulated over millennia (the training data), and the ability to manipulate and communicate that data (it does a pretty good job of this on a good day). However, that person is a five-year-old. An extremely intelligent five-year-old, but still a five-year-old: one who often forgets the topic of conversation (when the context limit is reached), starts telling you a story totally unrelated to what you are discussing (hallucination 1), or does its best to convince you that it has completed a task when there is no evidence that it has (hallucination 2). And it does all this with 100% confidence.
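The “forgets the topic of conversation” behaviour is worth a quick sketch. Chatbots typically only see the most recent messages that fit in a fixed context window; the budget, messages, and word-based counting below are all simplifications I have invented for illustration (real systems count tokens, not words).

```python
# Why a chatbot "forgets": only the newest messages that fit in a
# fixed context window are sent to the model each turn. The budget
# and conversation below are invented for illustration.
CONTEXT_LIMIT = 20  # toy budget, measured in words

history = [
    "Teacher: Today we are planning a unit on photosynthesis.",
    "AI: Great, shall we start with learning objectives?",
    "Teacher: Yes, and include a hands-on experiment.",
    "Teacher: Also add a short formative quiz at the end.",
]

def build_context(messages, limit):
    """Keep the newest messages whose combined word count fits the
    budget. Anything older silently falls out -- including the
    original topic of the conversation."""
    kept, used = [], 0
    for msg in reversed(messages):
        cost = len(msg.split())
        if used + cost > limit:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

context = build_context(history, CONTEXT_LIMIT)
print(context)  # the first message (the topic!) no longer fits
```

With this toy budget, the opening message about photosynthesis is dropped before the model ever sees it, which is why the “five-year-old” can lose the thread mid-conversation.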
So, what does this actually mean for education?
It means we have to stay mindful when we use these tools to make judgements, quite important judgements, like grading, giving appropriate feedback, and student-teacher communication. There are edtech productivity tools that have been in use for years, like the ones I mentioned in the intro, but they provide limited benefit over existing methods, and there is little evidence that generative AI does any better.
What AI can and cannot do
Let’s take the example of a teacher’s role and the three types of knowledge that support it: subject knowledge, curriculum knowledge, and knowledge of the student. I’m not including instruction separately here, as it is part of curriculum knowledge and also relies heavily on knowledge of the student. The first two are relatively straightforward for generative AI to tackle. Subject knowledge is already within its grasp, and curriculum knowledge, the order of instruction, levels of difficulty, and common misconceptions, can also be digested and recalled.
The most difficult, however, is knowledge of the student. This goes far beyond grades or test scores to include motivations, learning preferences, personal interests, struggles, strengths, family context, and the wider web of relationships that shape learning. These factors are dynamic and deeply personal. The best teachers piece them together over time and use them to shape meaningful learning experiences.
In theory, an LLM could process vast amounts of student data if we were willing to provide it. But would students, parents, or schools be comfortable entrusting information that sensitive to AI companies? The more data these systems have, the better they work, but the risks rise just as quickly. With research showing that current non-generative systems deliver only marginal gains, it is hard to argue that today’s generative AI is ready to justify that trade-off.
Combine that with the hallucinations and misbehaviour discussed in my previous post, and with evidence suggesting that we may be nearing the limits of current LLM performance, and it is far from clear how much better things can realistically get.
So what now?
My aim in writing this was not to give any specific answers, or to suggest whether we should trash generative AI or jump in head first; I don’t think we should do either. But I did want to clear up the confusion about what it is and what it can actually do, and to share my point of view as an educator who has used generative and other forms of AI fairly extensively: teaching with it, creating resources with it, developing (and still developing) secondary curricula for it, and coding edtech apps that use it. There is a lot we are still learning.
Over these two blog posts, I have not particularly sung the praises of AI, but that’s not because I don’t think it is useful. I think it is amazing! But over the last few years, there has been an unnecessary (if understandable for the tech industry) amount of hype and conjecture about this stage of AI. Let’s embrace this amazing technology, but let’s also be real about what it is and what it can genuinely do for us before we make any major decisions.
Want to read more?
Below are some links to really interesting articles that discuss the nature of AI in education and related topics, which helped me form my opinion on this issue.
- The future of teaching and learning in the context of emerging AI technologies
- The Role of AI in Modern Education
- The Impact of ChatGPT on Education: The Good and the Bad
- The Effect of ChatGPT on Students’ Learning Performance
- Drivers and Consequences of ChatGPT Use in Higher Education
- AI in Education in 2024: Mixed Feelings on the Tech’s Future
- The Illusion of Thinking
- What if A.I. Doesn’t Get Much Better Than This?