
ChatGPT Is Making You Dumb and You Like It.

4 min read · Jun 2, 2025

I, like millions of other people, have begun incorporating the use of large language models, such as ChatGPT, into my everyday life. What started as a gimmicky toy that I thought was ‘cool’ has evolved into a helpful AI assistant that can do things like answer complex questions and find information when I’m too lazy to Google it. Want to know about some random topic? Why break a sweat searching through a couple of pages on Google to find the answer when you can ask your friendly LLM? It struck me one day that in the pursuit of ‘productivity’ and ‘knowledge,’ we’re likely becoming less critical thinkers and, in essence, less intelligent.

What got me thinking about all of this, you ask? Snowboarding! A friend invited me to go indoor snowboarding with him over the weekend. I’d never been before, but I thought it would be something fun to try. We reached the top of the slope, and before going down, he took the time to give me a rundown on how to snowboard. “Bend your knees like this,” “Keep your posture like this,” “Focus your body weight on this leg.” All the knowledge he had accumulated from snowboarding was distilled into a couple of sentences. I was ready! My feet clipped into the board, and standing at the top of the slope, I leaned forward to start my descent. I started sliding down the hill… facing the wrong way, flailing my arms around like a deranged chicken for about ten seconds before my board hit a patch of uneven snow and I fell face-first into the cold, crunchy stuff. But why? I had all the knowledge I needed to snowboard! It was given to me in the form of a couple of sentences, conveniently similar to how ChatGPT would have provided it.

That’s not how we learn things, now is it?

Me still not quite getting how it all works

A couple of weeks later, one late Saturday night, I was working on a side project of mine. I’ve been obsessed with recreating one of my all-time favourite games, Far Cry 2. One of its key features is the vast expanse of savannah grass blowing gently in the wind, which I find truly beautiful. Rendering large amounts of animated grass in a real-time game takes a boatload of black magic. You need to understand vector math and how compute, vertex, and pixel shaders work, and that doesn’t even touch techniques like frustum culling, level of detail, and so many other black-magic spells. That evening, though, I was confident: I had ChatGPT by my side. A couple of hours later, I had the first version of my scene, featuring hundreds of thousands of blades of beautiful, swaying grass. ChatGPT had provided working code for practically every part needed to render the grass in Unity. The strange thing is, while I could read the code, I had absolutely no idea how the hell it worked. I couldn’t make any meaningful changes without asking ChatGPT.
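To give a flavour of the vector math I’d skipped over, here’s a toy sketch of the sine-based sway a grass shader typically applies to each blade. This is purely illustrative — the real thing runs per-vertex on the GPU, and none of these names come from my actual project or from ChatGPT’s output; it’s just the kind of fundamental I should have been comfortable with first.

```python
import math

def sway_offset(blade_x, blade_z, t, wind_dir=(1.0, 0.0),
                amplitude=0.1, frequency=1.5):
    """Horizontal displacement of one grass blade's tip at time t.

    The phase is offset by the blade's world position so neighbouring
    blades don't move in lockstep -- the same trick a vertex shader
    uses to make a field ripple instead of wobbling as one rigid sheet.
    """
    phase = blade_x * 0.7 + blade_z * 1.3  # position-based phase offset
    s = amplitude * math.sin(frequency * t + phase)
    # Push the tip along the wind direction, scaled by the sine wave.
    return (wind_dir[0] * s, wind_dir[1] * s)

# Two blades at different positions sway out of phase at the same instant:
a = sway_offset(0.0, 0.0, t=1.0)
b = sway_offset(5.0, 2.0, t=1.0)
```

Trivial once you see it, but that’s the point: if you can’t write this from scratch, you can’t meaningfully change the hundreds of lines ChatGPT hands you either.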

Here’s the problem: just like with snowboarding, I was given a distilled version of the knowledge. I wasn’t comfortable with the fundamentals, and I’d argue that this is the key to truly learning something. In snowboarding, getting comfortable with how shifting your weight changes your speed and direction is fundamental, and that only comes from trying it yourself until it clicks. For the coding example, understanding vector math and the other elemental techniques is fundamental. This is where the problem comes in: we are lazy creatures. We want the results without the effort required to achieve them. And this is precisely what LLMs like ChatGPT allow us to do. Yes, you could use an LLM to teach you the fundamentals, then practise them until you’re comfortable moving on to more complex ideas and techniques. But why do that when you can get the final result with a single prompt?

The thing that scares me is that I’m seeing this behaviour not only in myself as a techie but also in people from all walks of life. People use LLMs to write blog posts and articles without understanding the topic themselves, and those writing business plans often do so without first grasping how a business works. And that’s before we even get to software engineering, where LLMs and ‘agents’ are being promoted in the name of ‘productivity’. The number of code reviews where I push back on something and get “Oh, yeah, Copilot did that” frightens me. People are giving up their critical thinking for convenience, and they are all too happy to do it.

Don’t get me wrong, I don’t want to be that old man groaning, “You children with your silly ChatGBT! Back in my day, we used to get our information by reading it off stone tablets using candlelight.” LLMs and AI can provide immense value. We’re living in an era where the world’s information is truly accessible in the palm of your hand and can be presented in the form best suited to each individual. We’re at a critical fork in the road. One path leads to a future of deep knowledge abundance, where we leverage AI to truly understand any topic we want. The other leads to a future where we have lost our ability to think critically and instead have only surface-level knowledge, having outsourced our critical thinking to AI.


Written by Keagan Ladds

Software engineer @ Tesla & serial tinkerer, writing about the tech things that keep me up at night.
