Will Artificial Intelligence decrease my intelligence? #
There is a narrative that AI is here to replace people. That might mean developers, writers, engineers, or whatever role happens to be under discussion that week.
Honestly, this is a fair concern, and to some degree it has happened already. Computerised systems have been replacing roles for years. Go into your local supermarket. How many self-checkouts are there now?
It may eventually replace my role, but until then I’m embracing its capabilities. I do not use AI to replace core thinking, and I certainly do not rely on it to produce finished work without scrutiny.
Instead, I use it in much the same way I have always used tools like spreadsheets, or websites like Wikipedia and Stack Overflow. Dare I say the odd “RTFM” moment when things got really hard?
When I am working through something like a Docker build that refuses to behave, a Python function that is slowly evolving into something unmaintainable, or a fight with this Hugo blog theme that appears to have its own interpretation of reality, AI helps me explore options more quickly. It helps me sanity check my assumptions, and it explains concepts in a way that actually sticks.
Rubber Duck evolved #
In many ways, AI feels like an evolution of rubber duck debugging. Developers have long relied on the simple act of explaining a problem out loud to uncover the solution. It’s an old but extremely effective tool.
Talking out loud about your code to a rubber duck lets you hear the logic flaws, bugs, and glitches that would otherwise go unnoticed. Sometimes you can’t see the wood for the trees, and you need another perspective.
AI takes that concept a step further by responding, suggesting alternatives, and occasionally offering something genuinely useful. It is not perfect, and it can be confidently wrong in ways that are almost impressive.
That imperfection is part of what makes it useful. You shouldn’t blindly trust AI output. This forces a level of engagement and critical thinking that passive documentation never really demanded. If anything, using AI regularly has made me more sceptical and more inclined to validate what I am given, rather than less.
Terminal Velocity #
The real value, at least from my perspective, is not automation but acceleration. AI does not remove the need to do the work. It reduces the time it takes to get to a point where meaningful work can happen.
Tasks that would previously involve trawling through documentation, opening countless browser tabs, and slowly piecing together an answer can now be compressed into a much shorter feedback loop.
That shift changes how you approach problems. You are more willing to experiment, more inclined to try alternative approaches, and less concerned about getting it wrong on the first attempt. The cost of iteration is so much lower.
WD40 #
This fits neatly into how I enjoy learning: iterative steps. I build something, get a bit overconfident, and inevitably reach the point where I break it. If I can get it working again, I’ve earned some pretty decent fundamental knowledge.
“Build. Break. Understand” is my site’s motto. Most of what exists under techielab.org is not there because it needed to be built, but because I wanted to understand how it worked.
WD-40 stands for Water Displacement, 40th attempt. The previous 39 attempts didn’t work; the 40th did.
Misleading #
Perhaps that is why the term “artificial intelligence” feels slightly misleading. It frames the technology as an imitation rather than a teammate or collaborator.
In practice, what I experience is far closer to a mentor. I ask questions, receive structured answers, sometimes questionable ones, and then refine those answers through my own understanding.
That loop feels much closer to working with another person than it does to using a traditional tool. AI never tires, never complains, and occasionally invents entirely new APIs with remarkable confidence.
I joke that AI is like a puppy. It will bring you great joy, but it will also crap on your carpet and eat your shoes. You cannot leave it alone, don’t get it wet, and never feed it after midnight (I may be confusing puppies with gremlins, but if you know, you know). With time the relationship grows and evolves, and the accidents slow down. They never go away entirely, but they become less of a daily issue, with only the odd blip.
The right word? #
Is “artificial” really the right word? I am not convinced that it is.
Artificial suggests something fake or inferior. What we are dealing with feels probabilistic, context-aware, occasionally insightful, and occasionally completely wrong. In other words, it is not entirely unlike human intelligence.
Perhaps the more interesting question is whether we have always overestimated how precise or deterministic intelligence needed to be in the first place.
AI has not fundamentally changed how I work, but it has significantly altered how quickly I can think, test ideas, and learn from them.
If that is what we choose to label as artificial, then it may say more about our expectations of intelligence than it does about the technology itself.