Forgot How to Learn
The Problem
As AI has become increasingly powerful and correct, I have found myself overly reliant on it as a fast answer to most, if not all, of the technical challenges I encounter. The resulting boost in throughput at work acts as positive reinforcement: what I'm doing must be right, so I should lean into it more. I am no longer taking my time questioning the AI or studying what it is telling me. In other words, I've found myself starting to vibe code.
This realization hit me the other week while I was setting up a K8s cluster for my homelab. I don't have any professional experience with K8s, so I planned to use this as an opportunity to learn and grow my skills: set up a reproducible system with NixOS, then use FluxCD to manage the K8s cluster.
While scoping out the project with AI, I quickly found it offering to implement these things for me, and it's as if I forgot that the purpose of this was to learn and do it myself. Before I knew it, I had a fully wired up K8s cluster with everything automated through FluxCD. Don't get me wrong, I have a rough understanding of what was going on from my conversation with AI during the setup, but if I were asked to quickly debug something, I would not have a great idea of where to start without asking AI first.
That is the root of my problem: I'm so used to letting AI take the reins that when it came time to do a project "old school," I had basically forgotten how.
How I’m Combatting This
After this realization, I went back through the repo and dove into the project's structure and the YAML files for individual services, asking AI questions along the way rather than asking it to do things for me.
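To give a sense of what I was actually sitting with, here's a minimal sketch of the kind of Flux Kustomization manifest that drives each service. The name `podinfo` and the path are placeholders, not from my actual repo:

```yaml
# Tells Flux to reconcile the manifests under a path in the Git repo.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: podinfo            # placeholder service name
  namespace: flux-system
spec:
  interval: 10m            # how often Flux re-checks the source
  path: ./apps/podinfo     # placeholder path within the repo
  prune: true              # delete cluster resources removed from Git
  sourceRef:
    kind: GitRepository
    name: flux-system      # the GitRepository object Flux watches
```

Reading files like this one at a time, and asking AI *what* each field meant rather than asking it to write the next one, is what finally made the cluster feel like mine.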
There was something unexpectedly nostalgic about it. It brought me back to a few years ago, when this was just how you learned things: you sat with the material, got stuck, figured it out, and moved on with a solid hit of dopamine. I had forgotten how satisfying that loop was. Somewhere along the way, AI had short-circuited it so completely that I hadn't even noticed it was gone. Going back to it felt like rediscovering why I got into this in the first place.
By the end, I was able to add services and make tweaks to the cluster without reaching for AI first. More importantly, I actually understood what I was changing and why.
Final Thoughts
Going forward, I want to be more intentional about how I use AI rather than how much. The goal isn't to avoid it; it's an incredible tool, and I'm not interested in kneecapping myself out of principle. But there's a meaningful difference between using AI to accelerate your understanding and using it as a replacement for understanding altogether.
The risk of the latter is subtle. You don’t notice it happening, and the short-term results actually look great. It’s only when something breaks, or when you need to make a deliberate architectural decision, that the gap shows up. By that point, you may be several layers deep into a system built on choices you didn’t fully think through.
The fundamentals still matter. AI is at its best when it's working with an engineer who understands the problem, not carrying one who doesn't.