I think we have all heard something along these lines in the last few months: “Hey there! You! Yeah, you! If you don’t learn how to use AI, your ass is grass! Society is going to leave you in the dust, you aren’t going to find a job, you’re going to be worthless!”
And these claims can’t be dismissed anymore. Shopify CEO Tobias Lütke and Duolingo CEO Luis von Ahn have begun implementing policies against hiring people for roles that Artificial Intelligence (AI) could fill. In May 2025, Microsoft laid off around 6,000 employees and claimed AI now writes up to 30% of its code. And this is just the tip of the iceberg: it seems like AI is good for those who believe in never-ending profits.
But is it actually good for all of us? Good for our mental health? Our souls? Our thinking? Is it really going to benefit us?
First, we must make it clear that overuse of AI increases our tendency to cognitively offload, which in turn erodes critical thinking skills. Cognitive offloading simply means handing a mental task, such as writing a paper, over to AI rather than doing it ourselves.
So why is this happening? As students, we are familiar with the hustle needed to maintain that oh-so-precious GPA, but let’s get into the weeds. The three stressors that most readily drive students to AI are exam stress, boring readings and feeling stuck. On exam stress: a 2024 survey of Harvard’s student body found that nearly half (48%) of students used AI to take at-home tests or quizzes, while 53% used it to write entire essays. On boring readings: I’ve heard countless stories of students using AI to summarize assigned texts, whether novels or very technical papers. Finally, on feeling stuck: students use AI to generate thoughts or organize them more logically. This helps some students get around writer’s block, and the temptation returns every time they find themselves in the same situation. Why not just use AI? Everyone else is.
Here’s the problem: every time you use AI to complete a task, habitual behavior kicks in, engaging the brain’s reward system and releasing dopamine, the “feel-good” neurotransmitter. When you are racing against a deadline and craving a non-judgmental study buddy, ChatGPT becomes the perfect match, handing out information in spades until those small helping hands start to feel like crutches instead of catalysts. The relationship becomes addictive: each tiny assist delivers a dopamine hit and forges a habit loop in which self-regulation steps aside. What began as a convenience becomes an automatic reflex, turning occasional use into daily dependency.
Intuitively, when you offload tasks onto generative AI, you won’t learn nearly as much as you would have otherwise. Your brain isn’t lifting the weight it otherwise would have. To illustrate this, I will share the following embarrassing story.
I was in calculus class and, instead of actually doing my homework, I had started using online calculators (shoutout Mathway and Symbolab) to save time and effort. But, of course, I wasn’t actually learning anything. Sure, I would look at the outputs, but I think we can all agree that is nowhere near the same as working the problems myself. Math units build on each other, and because I hadn’t learned the prior material, I was totally helpless when the next unit started. Because I didn’t lift the cognitive weight of that first week and learn the foundation, I wasn’t ready when the teacher threw on added weight with the next unit. So what does this mean for societal progression, and more importantly, what will happen to us because of it?
Scientific ideas build on each other, writers get better as they write more and artists develop their own style the more they create. It’s kind of obvious, right? Doing the task over and over again makes you better at said task. This falls in line with the ideas of “practice makes perfect” and taking it “one step at a time.” If we start to use AI too much, it will become the baseline for our thoughts. While this would benefit the companies developing these algorithms, giving them both immense wealth and power over what and how we think, it wouldn’t lead to altruistic human ingenuity or creativity.
Assuming all of this reasoning is sound, we are left with two options in my eyes.
Option 1: Restructure society
Higher education would have to foster a passion and love for thinking and learning, something it currently fails at. That fostering may only be possible if we abandon whatever kind of capitalism we have now and pivot to stakeholder capitalism or even socialism. From there, society would be far more equal and could make an informed decision about where AI should be used. I believe that decision would land where most people see the most benefit: medicine and science in general. But this is unlikely. So what else do we have?
Option 2: Don’t rely on the thinking machines!
Why?
Every time you use AI, you aren’t building and flexing as many of the brain muscles you should be. Using your brain for a task, no matter how insignificant it may seem, takes you one step closer to doing something new, something different. AI is built on the status quo and will keep reinforcing it. It’s up to us to do something new.
The best and brightest students won’t be the ones who use AI the most. They will be the ones who know when not to and who have built habits that reflect that discipline. In the age of machines, the boldest act is to stay human.
Justus Swan believes we’re all definitely doomed, but in a way that’s actually intuitive.