Don’t Become a Vibe Coder AKA Prompt Engineer

I’m glad I learnt how to code before the rise of AI coding tools such as GitHub Copilot and Cursor. There’s already enough spaghetti code out there, and I’d like to try not to contribute to it.

Don’t get me wrong, these AI tools are great and can boost developer productivity. However, I do think they have their place, and that place is not in the hands of novice developers who are just getting into coding and relying on them for everything.

How This Relates To The Dunning-Kruger Effect

The Dunning-Kruger effect is a cognitive bias in which people with limited competence in a given field overestimate their abilities.

This is an excellent concept to consider when we look at how AI limits the learning process of new developers and the depth of their knowledge. They will only ever be as advanced as the code the LLM outputs, with a shallow understanding of what that code is doing and, more importantly, why it does it.

With AI we can generate lots of code, leading us to assume we are seasoned developers who can build the next multi-billion-dollar startup. While this might seem advantageous at first, in time we come to realise we are generating unmaintainable code riddled with security flaws, with no understanding of what our codebase is doing under the hood.

Seems like a great time to be a cyber security engineer, don’t you think?

I don’t think this only applies to new developers either. It is very easy to let an AI tool take the wheel as we work on a project because it feels easy. The problem is that, over time, we come to rely on the AI tool so heavily that coding without it becomes a lot harder.

We learn by getting stuck in and putting our minds to work on complex problems; this is where the real growth happens. I don’t want to be in a situation down the line where I have to ask an interviewer to let me fire up my AI tool to answer a simple code-related problem.

A great example of over-reliance on technology is constantly using a GPS to get from point A to point B while remaining completely oblivious to our surroundings. We build a strong belief over time that we know how to navigate to point B, when in reality, without our trusty GPS, we may be unable to recall the way to our destination.

Final Thoughts

In short, we should use these tools with great care and always question why an LLM has generated a certain line of code to solve a problem. By changing our perspective ever so slightly, we should be able to leverage these tools for the better, as opposed to handing AI the steering wheel and letting it blindly drive us to our destination.