Is AI making us stupid?
On cognitive atrophy, intellectual ownership, and the tension between ease and excellence
For the past two years, I’ve used generative AI nearly every day. It’s efficient, versatile, and frankly, addictive. Yet, a creeping fear has taken hold: Is this making me intellectually lazy? What began as a tool for augmenting my work is starting to feel like an intellectual crutch, and for someone whose greatest fear is losing mental acuity, this is troubling.
So, is this real?
Am I just getting older?
Can AI actually slow our brains down? And even if it can, does it really matter?
The calculator problem
History offers us a tempting comparison. When calculators first entered classrooms, they were met with scepticism. Would students lose their ability to do arithmetic by hand? The initial fears weren’t entirely baseless—studies show that overusing calculators can weaken foundational arithmetic skills—but calculators also freed up brain space for more complex problem-solving. By letting students focus on harder problems, calculators didn’t make us dumber; they reallocated our mental effort.
But while calculators allowed us to reallocate mental effort without fundamentally altering the way we think, AI is not a calculator. Calculators solve a specific category of problems. Generative AI is general-purpose. It doesn’t just assist with arithmetic or narrowly defined tasks—it integrates into virtually every aspect of our cognitive lives, from writing to brainstorming to emotional expression. My dad uses AI to write WhatsApp messages for birthdays. I use it to organise thoughts, refine ideas, plan meals, and even investigate why my hair is falling out. Its applications are endless—and that’s precisely what makes it so different.
Unlike calculators, AI doesn’t just offload tasks. It mediates how we approach problems in the first place. This raises the stakes: Is AI merely shifting our mental effort, or is it displacing something essential in the way we think?
Cognitive offloading: the trade-offs
The concept of cognitive offloading—the act of delegating mental tasks to external tools—can help us understand the trade-offs. Research shows that by allowing us to focus on the “big picture”, AI reduces mental load and improves performance in the short term. But there’s a downside: long-term reliance on offloading can diminish memory retention, problem-solving skills, and even creativity, especially when we stop actively engaging with the tasks we’ve outsourced (Grinschgl et al., 2020).
Offloading isn’t inherently bad. Calculators didn’t make us less capable; they changed how we used our mental resources. But offloading is only helpful when we do it intentionally and stay actively engaged in the process. The real concern arises when reliance on these tools becomes automatic and unconscious, leaving us disconnected from the very skills we’re delegating.
Take GPS as a case study. It’s undeniably convenient, yet studies show it has weakened traditional navigation skills like reading a map. People who rely heavily on GPS often struggle to form mental maps, losing a spatial awareness that was once second nature. The concern with AI is similar but broader. As we offload more than just memory—as we outsource creative thinking, problem-solving, and even decision-making—are we hollowing out the very capacities that define intellectual autonomy?
The functionalist temptation
This brings us to a philosophical question: If AI achieves the desired outcome, does the process of getting there still matter?
From a functionalist perspective (tl;dr: the ends justify the means), intelligence is measured by adaptability. If AI allows us to achieve goals more efficiently—whether writing a report or crafting a compelling argument—does it matter that we didn’t “struggle” to get there? Perhaps this is the evolution of intelligence: outsourcing lower-order tasks to focus on higher-order challenges. One of the first skills we have to master as entrepreneurs is delegation. Is AI any different?
But this view has limits. There’s a difference between instrumental intelligence (achieving results) and formative intelligence (cultivating the mental rigour that comes from effort). Aristotle, for instance, argued that human flourishing (eudaimonia) isn’t just about outcomes; it’s about the process of growth—developing virtues like discipline, resilience, and intellectual curiosity. From this perspective, struggle isn’t a bug to be eliminated; it’s a feature of intentional engagement.
If we treat all effort as mere inefficiency, we risk losing something essential. There’s satisfaction, even joy, in mastering a difficult task—whether it’s writing a complex essay or solving a problem through sustained effort. The risk of AI isn’t that it makes things easier; it’s that it could strip us of the opportunities for growth that come from doing hard things.
Ease versus excellence
AI has risen in a post-pandemic world grappling with burnout and a collective rejection of hustle culture. We’ve embraced the “work smarter, not harder” mindset, and AI fits perfectly into this narrative. It promises to alleviate the drudgery of work, making our lives easier and more efficient.
But have we swung too far? In our quest to reject overwork, are we cultivating a reliance on ease that erodes our resilience? There’s a paradox here: The very qualities that help us navigate burnout—grit, problem-solving, intellectual sharpness—are developed through effort. By outsourcing even minor tasks to AI, are we weakening the “muscles” that enable us to tackle larger challenges?
The counterpoint to this is that relying on AI, like working remotely, isn’t inherently harmful, but it requires us to be more intentional about maintaining habits—like intellectual engagement or physical activity—that used to arise naturally.
While I agree in theory, I think most people aren’t intentional in practice. We’re often too exhausted, too burnt out, to add new habits to already packed schedules. For example, if you work a manual labour job, your body stays fit as part of the job; if you work in a mentally demanding field, your brain stays sharp by necessity. But when daily routines no longer provide these benefits—as in jobs that demand long hours without mental or physical exertion—you suddenly have to carve out time for them. What used to be a natural by-product of work now requires deliberate effort, which is harder when we’re already stretched thin.
A framework for intentional AI use
If any of you read the Sunday Dispatch back in the Scribesmith days, you might remember my obsession with curated consumption and why I try not to feed my brain junk food. I think a similar logic applies to AI: if curating the media we consume is akin to eating a healthy diet, then having a framework for intentional AI use is the equivalent of going to the gym.
Here are a few things I’m considering.
Selective offloading
I like to reserve AI for tasks that don’t compromise critical thinking. For example, I’ll use AI to clean up a Slack message from a voice transcript, but not to summarise a book I’m reading. I want to engage deeply where it matters.
Does it matter to me that it’s mine?
How much do I care about this specifically? Does it matter to me that it’s mine or does it matter to me that it’s done?
YMMV, but I’m a purist about a lot of things. I blame years of martial arts for turning me into a glutton for punishment: I need to feel like I worked hard in order to get a sense of satisfaction. But if it’s something where the outcome matters more than the process, e.g. cleaning up a work email, it’s AI all the way.
Is it shaping my thought process?
AI is prone to both bias and hallucination. It can see patterns where we don’t, but if the ideas and choices it presents are already shaped by its biases, am I really making a true choice? If I can’t understand or see ‘the working’ behind its suggestions, how do I know I’m not just being nudged in a direction I didn’t consciously choose?
Reinforcement learning versus primary source
Rather than using it as my single source of truth, I use AI to reinforce things I’ve already read or learnt, to prop up my (abysmal) memory. One of my favourite ways to do this is to take what I’ve saved to Obsidian over the previous week (after reading it, of course) and feed it into NotebookLM to create a custom podcast that shores up what I learnt.
Skill preservation
Am I still capable of doing the task manually, or has reliance dulled my ability?
This is a tricky one. I’m typically hyperaware of how I’m functioning, so if things take me longer than usual, that’s a good sign to recalibrate. For example, when I noticed a correlation between my dwindling attention span and TikTok use, it was time to phase out the latter. When I notice I’m struggling to articulate my thoughts clearly without throwing them into ChatGPT, it’s time to take a break for a week or two.
The new shape of intelligence
“They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.” (Plato, Phaedrus)
Plato feared that writing would weaken memory, and in a sense, he was right. Writing externalised memory—but it also expanded human thought. Similarly, AI is reshaping the way we think and interact with knowledge, pushing us to consider not just what it can do, but how we choose to integrate it into our lives.
AI isn’t inherently making us less intelligent; the responsibility for that lies with us. In a world where we’re tired, stressed, and more burnt out than ever, the real risk is slipping into unthinking reliance. When convenience takes priority, we risk losing the intentionality needed to ask the hard questions about how we think, grow, and flourish.
Thanks to Lauren Razavi for reading drafts of this.
"Selective offloading" is the best way to describe my own approach to using AI tools in my everyday workflows.
Your conclusion is on point, is down to us ultimately. We must not fall into the trap of become slaves to the tools and offloading every single intellectual tasks… otherwise we are doomed. The ones who will shine in the next era will be the ones who can extract the power of AI.