
Are they right?
[snip]
Might both hypotheses be true, and worthy of serious attention as cautionary possibilities?
The MIT research, which used electroencephalography (EEG) to assess cognitive engagement while exploring the neural and behavioral consequences of LLM-assisted student essay writing, indicates that we may be sacrificing cognitive capacity and creativity for short-term convenience.
The students were divided into three study groups: one using ChatGPT, a second using Google for research, and a third relying exclusively on their own logic and reasoning (no tools).
[snip]
Notably, 83% of those who used ChatGPT to draft their work couldn’t remember a single sentence they had written just minutes earlier.
EEG monitoring showed that the ChatGPT users had significantly decreased neural engagement, while brain-only writers generated nearly double the number of connections in the alpha frequency band, which is associated with attention and creativity.
In the theta band, which is related to memory formation and deep thinking, the gap was even greater: 62 connections for brain-only writers versus 29 for ChatGPT users.
Steven Graham, Regents Professor and Warner Professor in the Division of Leadership and Innovation at Arizona State University’s Teachers College, characterizes this gap as “cognitive debt.”
[snip]
Graham’s English teachers, who reviewed the essays blind, without knowing which were AI-generated, described the ChatGPT work as having “close to perfect use of language and structure, while simultaneously failing to give personal insights or clear statements.”
The teachers also described the ChatGPT essays as “soulless” because many sentences were empty of content and lacked the “personal nuances” associated with individual thinking.
Is this, and will it continue to be, a worsening societal pattern?
Writing in RealClearScience, Bruce Abramson, director of the American Center for Education and Knowledge, doesn’t appear optimistic.
[snip]
“Why waste your time learning, studying, or internalizing information when you can just look it up on demand?”
As Abramson points out, whereas in 2011 an estimated one-third of Americans and one-quarter of American teenagers had smartphones, more than 90% of Americans and 95% of teenagers have them today.
Few of today’s college students have ever operated without the ability to scout ahead or query a “smart” device for information on an as-needed basis.
I recently received universally vacant looks when I asked some of my undergraduate students whether they knew the purpose of a slide rule, the simple tool that my generation of architects used for structural and other numerical calculations before pocket calculators.
Abramson concludes that having “outsourced knowledge, comprehension, and judgement to sterile devices easily biased to magnify public opinion,” we’ve raised a generation incapable of deep understanding.
[snip]
After all, if none of them knows what a slide rule was, they may not really be missing much: the cognitive attention once spent calculating beam, column, and truss strengths for a particular application might have been better spent exploring and comparing other structural options.
Here, let’s differentiate between cognitive pattern recognition and purpose-directed intellectual “vision.”
Just as Steven Graham’s teachers found MIT’s ChatGPT-generated essays stale and pointless, we shouldn’t expect or rely on menu-derivative large language models to replace the human “generative intelligence” that establishes goals and purpose.
[snip]
All large language search and pattern-matching models are designed to guess which word, or portion of a word, is most likely to come next in an answer, and in the process they regularly get facts wrong, invent them, jumble them, and yes, sometimes make things up entirely.
These wrong and sometimes senseless answers are known as “hallucinations” because AI apps like ChatGPT and Gemini deliver them with authoritative confidence rather than honestly admitting that they don’t know, much like clueless students guessing on multiple-choice tests.
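To make the “most likely next word” idea concrete, here is a deliberately toy sketch in Python. It is nothing like a real LLM’s neural network; the tiny corpus, word-frequency table, and predict_next helper are invented purely for illustration. Notice that it always returns an answer with equal “confidence,” even for a word it has never seen, a crude analogue of a hallucination.

```python
# Toy illustration (not how any real LLM works): predict the next word
# purely from frequency counts in a tiny corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow each word in the toy corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word.

    Even for a word never seen before, this returns *something*
    rather than admitting it doesn't know -- a crude analogue of
    an AI "hallucination."
    """
    counts = follows.get(word)
    if counts is None:
        # No data for this word: confidently guess the overall most common word.
        return Counter(corpus).most_common(1)[0][0]
    return counts.most_common(1)[0][0]

print(predict_next("the"))      # "cat" -- often follows "the" in the corpus
print(predict_next("quantum"))  # "the" -- a confident guess with no evidence at all
```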
AI can stimulate us to inquire and learn, introduce us to new patterns of possibilities, undertake enormously complex calculations in short order, and even correct our imperfect grammar.
But depending on it to think for us?
Only if we’re inhumanly dumb enough to allow that to happen.
* Original Article:
https://www.newsmax.com/larrybell/chatbot-eeg-llm/2025/08/15/id/1222646/