The danger of AI is weirder than you think by Janelle Shane

I continued my obsession with AI in education and found another TED Talk reflecting on the downsides of using AI and the ways it can go wrong or behave unexpectedly. Janelle Shane's talk offers insights into how we might rethink tech use in education.

In her talk, Shane walks through bizarre, whimsical, and sometimes unsettling examples of how AI “thinks” and makes mistakes: colorizing historical photos in strange hues, auto-completing nonsensical recipes, or generating weird but superficially convincing text. The core message is that AI is not human intelligence. Its patterns, biases, and oddities reveal its limits. What struck me most was how frequently we underestimate the strangeness that lies beneath the seeming “intelligence” of these systems.

A few lessons jumped out:


  • AI’s creativity is weird, not magical. The system can piece together patterns from data, but it often “hallucinates” or produces illogical results, because it lacks understanding or common sense.

  • Bias and unpredictability lurk under the surface. Because AI models are trained on data reflecting human prejudices, their outputs can mirror and amplify those biases in odd ways.

  • We must remain skeptical and human-centered. Shane's examples show that we should not treat AI outputs as fact or absolute truth. Instead, we need careful oversight, questioning, and human judgment.


She reminds us that every time AI seems incredibly clever, there is also a moment where it goes completely off the rails, and that contrast is instructive for educators.


One quote that stuck with me:

“AI is most dangerous when it seems almost right because we think we can trust it.”

This is really valuable for teachers. It warns us that just because a tool's output looks polished doesn't mean it's accurate or safe to use without scrutiny.


In my teaching context, this talk is a wake-up call. If I integrate AI into assignments or lessons, I cannot treat it as a "black box" that always works. Instead, I'll model skepticism and verification with my students: showing them how to test AI outputs, compare sources, and recognize errors.


This aligns directly with my belief that technology should amplify critical thinking, not replace it.


Source: Janelle Shane, "The danger of AI is weirder than you think," TED Talk
