To AI or not to AI? - That Is No Longer The Edtech Question!


AI in the Movies hasn't been all roses

It has been about 3 years since OpenAI thrust us into the era of mass adoption of Artificial Intelligence. Since then, the world has undergone a paradigm shift, for better or for worse. But let’s not forget that AI was around long before this, and the term itself was coined over 70 years ago. AI is nothing new, and our opinions of it have been forming over the last few decades. Unfortunately, many media and entertainment depictions of AI have fostered negative views, through movies like '2001: A Space Odyssey' (1968); 'Blade Runner' (the 1982 one), based on the Philip K. Dick novel 'Do Androids Dream of Electric Sheep?' (1968); the Terminator movies (from 1984); 'I, Robot' (2004), where the robot was actually good, but for most of the movie the viewer was led to believe otherwise; 'The Matrix' (1999); 'Ex Machina' (2014); and 'M3GAN' (2022), the only one on this list that I have not actually seen.

So what does all this mean for edtech, which is the focus of this post? It means that those negative sentiments have now drifted into the minds of teachers, educators and parents. But do we really have anything to fear? Well, yes and no. Let me start with my less important (for this article) take on AI...


Share the blame, share the responsibility

AI is not going to eliminate us or take over the planet. Humans are ultimately in control of it (for the most part), and all the bad things we will want to blame on AI should be attributed to human incompetence, error, or intent. It will either be a mistake, or because someone meant it to happen. But AI will always remain a tool for us to use. A brilliant tool, but still just a tool. Nothing more, nothing less. We often like to pin the blame for bad things on our creations (hello, Frankenstein?), but I think some real introspection is needed here. A brief look at world history shows that we can do bad all by ourselves.

Considering all this, can we say that AI is good enough for education? Well, yes and no, again! Yes, because there is already evidence that it has benefits in the classroom: adaptive learning systems powered by AI are improving education in certain areas; where AI is used, engagement is increasing; it is enabling accessibility and inclusivity; and it has the potential to support teachers in really important and meaningful ways.


Hype and Circumstance

So is this all just pessimism? Elana Klein, in her brilliant Wired article ‘Tech in the Classroom: A History of Hype and Hysteria’, talks about how we, educators, teachers, parents and the general public at large, have always had a fear of new things. She lists technological tools and advancements like the calculator, the computer, interactive whiteboards, the internet and the iPad as things that sparked negativity and limited adoption just because they were new. She also cites The Pessimists Archive, a record of the many other tools that caused ‘moral panics’. But we now all use calculators, computers, interactive whiteboards (although not in the so-called revolutionary way they were sold to us!), the internet and iPads in the classroom, mostly without batting an eye.

So is generative AI the same? Well, yes and no (third time’s a charm?). Of course it is, for obvious reasons, but it is also much more complicated than that. Recent publicity around generative AI’s blunders is quite worrying. Yes, there were malfunctions with computers, the internet, interactive whiteboards and other tools back in the day. But Gemini told people to literally eat rocks; Bing Chat professed its undying love and told people to leave their partners; Alexa told a 10-year-old to plug a phone charger halfway into a wall outlet and touch a penny to the exposed prongs; and Grok amplified Nazi rhetoric and told offensive jokes about the Holocaust. This is not to mention that we have now accepted the hallucinating nature of LLMs as a necessary evil, and that not even the experts know exactly how they really work, or, more accurately, why they don’t work like they should. I should add that many of these issues have since been fixed, or more guardrails have been put on the LLMs to make sure they behave a bit better, but still...

Yes, you could say ‘Just don’t trust it’, or ‘Check your sources!’, but for a tool as brilliant and revolutionary as generative AI, one that holds almost the entirety of human knowledge produced and tells us the right thing 90% of the time, it is just so easy to… not check. Sometimes, even the best of us don’t. Can we let this technology loose in the classroom? Is it ready? Have we done an adequate risk assessment? What issues are we prepared to live with?

To answer these questions, let’s go back over my three yes’s and no’s:


The Yes's and No's

Do we have anything to fear from AI in education? No, if we institute safeguards, do appropriate testing, develop a better understanding of how it works, and limit its use and scope until that understanding exists for our use case. If we don’t do all these things, then yes, we do have something to fear.

Is AI good enough for education? Yes, if we make sure we are results-driven and focus on what is important in education, which is the educating itself, then it is good enough. If we rely on the current hype around AI to decide on policy and develop our next classroom tools, and if we feed the bubble that is propping up the financial markets and startups right now, then no, it is not good enough for education.

Is the moral panic around generative AI the same as it was for the other tech innovations? This is a tricky one for me. Yes, in the sense that even if there were actually no issues with it, I think we would still have a moral panic over it. But the issues are real. So my answer is no.

The concern over generative AI in education is not the same as with previous edtech advancements. Partly because we can’t ignore the fact that we still don’t really understand how generative AI works; partly because the teams responsible for researching and safeguarding these tools have been let go or disbanded due to misaligned priorities; and partly because the rather big issues that have made their way into the mainstream media are serious concerns for classroom use in a way that other things were not. The internet has come the closest to being a similar danger, and in most, if not all, schools around the world it is heavily censored (not perfectly, but well enough for most cases). This level of safety is needed in education. The LLMs belonging to OpenAI, Google, Anthropic and Meta definitely seem to have extensive censorship (Twitter’s/X’s Grok is probably a big no on this, for me). But it would be good to get some honesty and insight about why they still fail.


So, what is the question?

So, ‘To AI or not to AI?’ - that is no longer the edtech question, because despite the hype, the AI bubble that we’re in, and the issues we face in using it, I don’t think it is going to go away. It is now part of the fabric of society (to be honest, it has been for at least the last three decades, but more so now). The question now, in the field of education at least, is ‘How do we AI?’ This is the question I have been battling since I started developing SpellMe, my neurodivergent-friendly spelling app. To answer it, and to find out what it means for my app, we first need to fully understand what AI actually is. I’ll discuss that further in part 2.
