New Evidence Shows AI Writing Tools Can Dull Thinking

MIT Media Lab’s EEG study ties one-shot generation to lower brain connectivity and recall. The fix is timing and interface: design for human-first drafting and inquiry.
MIT Media Lab researchers have fresh, concrete evidence that how people use AI matters as much as what the model can do. On WBUR’s On Point, lead researcher Nataliya Kosmyna discussed “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task,” a study that put 54 Boston-area students through SAT-style essay prompts under three conditions: ChatGPT only, Google Search only, and brain only, with no external tools. The team measured brain connectivity with EEG, analyzed the essays with NLP, and had two independent English teachers score the writing blind to condition.
What the EEG and essays showed
The signal was clear. Brain-only participants showed the most widespread connectivity across the brain. Google Search users showed less, though still notable, activity involving visual regions. ChatGPT users showed the least, which the researchers emphasized reflects reduced connectivity, not inactivity. The outputs tracked the same pattern.
NLP analysis found the ChatGPT group produced more homogeneous vocabulary across participants. On a prompt about happiness, ChatGPT essays clustered around career and career choice, Google Search essays emphasized giving, and brain-only essays centered on “true happiness.” The human judges could identify sets of essays written by the same individual across sessions, while an AI model could not detect those micro-differences. Retention and ownership also dropped with one-shot generation: 83 percent of ChatGPT participants could not quote anything from their own essay 60 seconds after submitting it, and 15 percent reported no sense of ownership over their work.
The most actionable finding came from a crossover session with 18 returning participants. Timing changed everything. When people shifted from ChatGPT to brain-only writing, their brain connectivity did not climb back to the level of the original brain-only participants. But when they shifted from brain-only writing to ChatGPT, their connectivity was significantly higher than the original brain-only group’s.
Their behavior with the AI also changed. After drafting brain-first, they made half as many “write me an essay” requests and four times as many information-seeking requests, such as “tell me more” and asking for references. In other words, human-first work followed by AI assistance encouraged a more engaged, research-oriented use of the tool rather than offloading the task.
Design implications for builders
For founders and product teams building with large language models, this reframes the problem. The risk is not merely that LLMs produce average prose. It is that defaults that encourage automation can suppress engagement, reduce immediate recall, and homogenize outputs. That is a design problem, and the study’s strategic implication is straightforward.
Prompt augmentation, not automation. Interfaces and defaults that push users toward inquiry, references, and critique instead of “write the essay” can foster healthier cognitive patterns. The crossover data suggests that encouraging pre-writing before invoking generation, and making iterative, information-seeking interactions the path of least resistance, can improve both engagement and downstream outcomes, as in the sketch below.
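To make that concrete, here is a minimal TypeScript sketch of a request gate that lets inquiry through freely but asks for a human draft before granting full generation. Everything in it, including the MIN_DRAFT_WORDS threshold and the keyword heuristic, is a hypothetical illustration of the pattern, not anything from the study or a real product.

```typescript
type RequestKind = "generate" | "inquire";

interface UserRequest {
  text: string;
  draftWordCount: number; // words the user has already written themselves
}

const MIN_DRAFT_WORDS = 150; // arbitrary illustrative threshold

function classify(text: string): RequestKind {
  // Crude keyword heuristic; a real system would use a proper classifier.
  const automationCues = /write (me )?(an?|the) (essay|draft|post)|do it for me/i;
  return automationCues.test(text) ? "generate" : "inquire";
}

function route(req: UserRequest): string {
  if (classify(req.text) === "inquire") {
    // Inquiry is always the path of least resistance.
    return "llm: answer with references, explanations, and critique";
  }
  if (req.draftWordCount < MIN_DRAFT_WORDS) {
    // No human draft yet: redirect generation toward pre-writing support.
    return "ui: ask the user to draft first; offer an outline or questions";
  }
  // A draft exists, so generation becomes revision of the user's own work.
  return "llm: critique and revise the user's existing draft";
}

// A bare "write my essay" request with no draft gets redirected to drafting.
console.log(route({ text: "Write me an essay on happiness", draftWordCount: 0 }));
```

The design choice mirrors the crossover finding: the gate never blocks the model, it just sequences it after human effort, which is the condition under which engagement rose.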
This principle extends beyond classrooms. The study flagged risks for learning outcomes and content differentiation in educational and professional settings. It also pointed to a workable production model: hybrid pipelines that pair AI with human oversight. The radio program itself used Descript to generate a first draft of the transcript, which a producer then reviewed and corrected, demonstrating how AI can expand capacity while maintaining quality control. That is the augmentation playbook in practice: put humans in the loop where judgment and ownership matter, and use AI to speed the parts that benefit from automation. The sketch after this paragraph shows how little code that playbook takes.
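Below is a minimal sketch of such a draft-then-review pipeline. The names here, aiDraft, humanReview, and the Transcript type, are invented for illustration; they are not Descript’s API, just stand-ins for whatever transcription backend you use.

```typescript
interface Transcript {
  text: string;
  status: "ai_draft" | "human_reviewed";
}

// AI stage: fast, cheap, and allowed to be imperfect.
async function aiDraft(audioUrl: string): Promise<Transcript> {
  // A real implementation would call a speech-to-text API here.
  return { text: `[auto transcript of ${audioUrl}]`, status: "ai_draft" };
}

// Human stage: the producer's corrections are the source of truth.
function humanReview(draft: Transcript, corrections: string): Transcript {
  // No corrections submitted means no sign-off; keep the draft flagged.
  if (!corrections.trim()) return draft;
  return { text: corrections.trim(), status: "human_reviewed" };
}

async function publish(audioUrl: string, reviewerEdits: string): Promise<void> {
  const draft = await aiDraft(audioUrl);
  const final = humanReview(draft, reviewerEdits);
  // The pipeline refuses to ship anything a human has not signed off on.
  if (final.status !== "human_reviewed") {
    throw new Error("unreviewed transcript");
  }
  console.log("published:", final.text);
}

publish("episode-042.mp3", "Corrected transcript text from the producer.");
```

The point is structural, not clever: the AI stage expands capacity, and the status check makes human review a hard gate rather than an optional courtesy.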
A related finding from a separate EEG study on handwriting versus typing underscores the same direction of travel. Students wearing high-density EEG nets showed higher brain connectivity when handwriting than when typing, engaging the visual processing, sensorimotor integration, and motor cortex activity associated with learning and memory. Mode shapes mind. If your product pushes users to offload too much, expect weaker engagement and retention.
Demand for better guidance is already here. Approximately 3,600 teachers have contacted the MIT research team seeking help integrating AI responsibly. That is a sizable market signal for evidence-based edtech tools, training, and policies that embed augmentation patterns by design. The opportunity is to build systems that make thoughtful work easier and mindless delegation harder.
The takeaway is simple. Model performance is not the constraint; user cognition is. Design AI that keeps people thinking: human-first drafting, AI as explainer and critic, interfaces that reward inquiry. If you build for augmentation, you will ship products that help users do better work today and become better thinkers over time.