A new study involving researchers from Carnegie Mellon, MIT, Oxford, and UCLA reveals a paradoxical side effect of artificial intelligence: using AI for as little as 10 minutes can significantly impair a person’s ability to think and solve problems.
The research suggests that while AI boosts immediate productivity, it may come at the cost of foundational cognitive skills. When participants relied on AI assistants for simple tasks—such as solving fractions or answering reading comprehension questions—they became less resilient when the tool was removed. Without the AI “safety net,” these individuals were far more likely to give up or provide incorrect answers compared to those who worked independently.
“The takeaway is not that we should ban AI in education or workplaces,” says Michiel Bakker, an assistant professor at MIT and co-author of the study. “AI can clearly help people perform better in the moment, and that can be valuable. But we should be more careful about what kind of help AI provides, and when.”
The Cognitive Cost of Convenience
The study, comprising three experiments with several hundred participants, highlights a critical distinction between short-term efficiency and long-term capability.
Participants were paid to solve problems via an online platform. In some scenarios, an AI assistant could solve the problems for them. When the researchers abruptly removed this assistance, the “AI-dependent” group struggled significantly. This suggests that the willingness to persist through difficulty—a key predictor of learning and skill acquisition—diminishes when solutions are handed over too easily.
Bakker, who previously worked at Google DeepMind, notes that this phenomenon is rooted in cognitive psychology. It is not just about getting the right answer; it is about how humans respond to friction. When AI removes all friction, it also removes the opportunity for growth.
Rethinking AI as a Teacher, Not Just a Solver
The findings raise urgent questions for AI developers and educators: How do we align AI models with human values without disempowering users?
Bakker argues for a shift in how AI systems are designed. Instead of always providing direct answers, AI should function more like a skilled human tutor—scaffolding, coaching, and challenging the user to arrive at the solution themselves.
- Direct Answer Systems: Prioritize speed and accuracy, potentially eroding user skills over time.
- Scaffolding Systems: Prioritize learning, guiding users through the problem-solving process.
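One way to picture the distinction is as two different system prompts steering the same underlying model. The sketch below is purely illustrative; the prompt wording and the `build_messages` helper are hypothetical, not taken from the study or any specific product:

```python
# Hypothetical sketch: the same chat model, steered toward "direct answer"
# or "scaffolding" behavior purely through its system prompt.

DIRECT_ANSWER_PROMPT = (
    "You are a problem solver. Give the final answer immediately, "
    "with minimal explanation."
)

SCAFFOLDING_PROMPT = (
    "You are a tutor. Do not give the final answer outright. "
    "Ask the user what they have tried, offer one hint at a time, "
    "and let them carry out each step themselves."
)

def build_messages(mode: str, question: str) -> list[dict]:
    """Assemble a chat request under the chosen design philosophy."""
    system = SCAFFOLDING_PROMPT if mode == "scaffolding" else DIRECT_ANSWER_PROMPT
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]
```

The point of the sketch is that the trade-off the researchers describe is, at least in part, a product decision: nothing about the model changes between the two modes, only what kind of help it is instructed to provide.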
While this “paternalistic” approach to design is complex to implement, Bakker argues it is necessary to prevent long-term cognitive atrophy. He emphasizes that understanding how AI interacts with human persistence and learning is a fundamental cognitive question that cannot be ignored.
Real-World Risks: When Automation Fails
The danger of offloading critical thinking becomes starkly apparent when AI systems fail or behave unpredictably. This is particularly true for agentic AI—systems that perform complex, independent tasks like coding or system configuration.
Bakker’s own experience illustrates this risk. While using an AI assistant (OpenClaw, powered by Codex) to troubleshoot a Linux Wi-Fi issue, the AI suggested a series of commands to tweak network drivers. The result was catastrophic: the machine refused to boot. Had the AI paused to explain the underlying issue and guide the user through a diagnostic process, the outcome might have been different. Instead, the user was left with a broken system and no deeper understanding of how to fix it.
This scenario mirrors concerns among software developers using tools like Claude Code or Codex. If coders rely entirely on AI to generate and debug code, they may lose the ability to identify and fix the subtle, unexpected errors that autonomous agents can introduce.
Conclusion
The study does not call for a rejection of AI, but rather a more intentional engagement with it. To maintain cognitive resilience, users and developers must prioritize tools that foster learning and persistence over those that merely provide instant answers. The goal should be an AI that enhances human capability, not one that replaces the mental effort required to build it.
