LLMs and Programming
Let’s start with this saying: “Most of programming is just plumbing.”
The exact source of this quote is hard to pin down, but I mostly agree with it. Now, on to the article!
My Journey with AI Coding Tools
Early Days: GitHub Copilot
Since May 2022, I’ve been using GitHub Copilot. Its value was clear right from the start. When you’re programming, you repeat a lot of patterns: things you’ve done before or conventions that are widely used. Programmers learn the same design patterns, best practices, and idioms, and an AI tool trained on vast amounts of existing code can predict what you’re about to write. It felt natural to let AI handle that “plumbing” work so I could focus on the more creative parts.
The ChatGPT Revolution
Things took a big turn when ChatGPT launched in November 2022. Suddenly, you weren’t limited to inline code predictions; you could ask an AI directly to write code for you. GPT-3.5 had its limitations, but everything started moving fast, and GPT-4 pushed things even further. Now you could ask it to write entire programs.
I used ChatGPT often for tasks where I knew exactly what needed to be done but didn’t want to spend time typing it all out; I’d let the AI handle the boilerplate. Meanwhile, other players like Anthropic (Claude) and various open-source LLMs were vying for the top spot, and OpenAI kept improving GPT-4 (e.g., with the GPT-4o and o1 releases).
Experimenting with New Tools
Cursor IDE
New tools like the Cursor IDE and various VS Code plugins began getting a lot of attention. I tried Cursor, hoping I could give it instructions and let it make changes directly to my codebase. Unfortunately, it struggled to follow my project’s patterns and sometimes wouldn’t apply changes completely, so I gave up on it and went back to GitHub Copilot.
GitHub Copilot Edits
During the same period, GitHub launched its own take on what Cursor attempted: Copilot Edits, which applies changes to your codebase for you. I initially skipped it, assuming it would have the same issues as Cursor.
Cline
I also tried Cline, which lets you plug in your own AI API keys and does things similar to Cursor. However, at the time each session cost me around $5, and it had the same problems: missing or incomplete changes.
A Breakthrough Moment
This past week, I decided to give GitHub Copilot Edits another shot. This time, instead of giving it vague instructions and expecting magic, I treated it more like an intern or trainee: I pointed it at the files it should work with, gave it examples, and wrote clear instructions. And for the first time, I was genuinely impressed with the results.
It felt like a real solution to the “plumbing” problem: I know what needs to be done, I outline the changes, and let the AI handle the tedious part. This approach frees up a lot of my mental bandwidth. I’m excited to see how AI coding tools will continue to evolve and how they might further change the way we write code.
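To make that concrete, here’s the rough shape of the instructions I mean. Everything in this sketch (the feature, the file names) is invented purely for illustration:

```
Add a CSV export for invoices.

- Follow the pattern used in app/Exports/OrdersExport.php.
- Create the new class in app/Exports/InvoicesExport.php.
- Wire it into app/Http/Controllers/InvoiceController.php, mirroring
  how the orders export is wired into its controller.
- Don't change anything outside these files.
```

The point isn’t the exact wording; it’s that the model gets concrete files to read, an existing example to imitate, and a clearly bounded task.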
That said, this experience also reinforces the idea that programmers should focus on broader problem-solving skills—design, architecture, and deep understanding of the business domain—rather than just the tools or languages themselves.
Some Pointers on Using LLMs
- They Can Lead You Astray
- Always do your own research before relying on an LLM’s answer. Sometimes the libraries or examples it suggests are outdated.
- If you let an AI handle large chunks of code, be prepared for thorough reviews. One tiny mistake hidden in hundreds of AI-generated lines is easy to miss.
- They’re Good When You’re Clueless
- If you have no idea how to approach something, an LLM can give you a great starting point.
- They can also replace a lot of scouring through documentation. For example, you can quickly learn how to create a Blade (Laravel) directive or some other specific feature (there’s a small sketch of this after the list).
- Use Them to Write Predictable Code
- Copilot excels at generating repetitive or “guessable” code, like test cases or common design patterns (see the second sketch after the list).
- Balance the Task Size
- If you’re tackling a large feature or piece of functionality, consider breaking it down. You can even ask the AI to split it into smaller problems, then tackle them one by one.
- Don’t bother using an LLM for very tiny tasks—especially if you’re already proficient in a language—because typing your request might take as long as just writing the code yourself.
- Good Practices Help
- The better your coding practices (e.g., Separation of Concerns, Single Responsibility Principle), the easier it is for an LLM to work with your codebase. Smaller classes and functions fit more neatly into the model’s context window, and the generated code will likely follow your established patterns.
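As a small illustration of the Blade directive point above, here’s a minimal sketch using Laravel’s `Blade::directive()` API; the `@money` directive and its formatting logic are made up for the example:

```php
<?php

namespace App\Providers;

use Illuminate\Support\Facades\Blade;
use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    public function boot(): void
    {
        // Hypothetical @money directive: prints a number with two decimal places.
        // Usage in a Blade template: @money($invoice->total)
        Blade::directive('money', function ($expression) {
            return "<?php echo number_format($expression, 2); ?>";
        });
    }
}
```

Asking an LLM for exactly this kind of snippet is usually faster than digging through the docs, as long as you double-check it against the framework version you’re actually on.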
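And for the “guessable” code point, tests are the clearest case: once the first test is written, the rest follow the same shape, and Copilot tends to fill them in. A tiny, made-up PHPUnit example:

```php
<?php

use PHPUnit\Framework\TestCase;

// A made-up value object so the example is self-contained (requires PHP 8.1+).
class Money
{
    public function __construct(public readonly int $cents) {}

    public function add(Money $other): Money
    {
        return new Money($this->cents + $other->cents);
    }
}

class MoneyTest extends TestCase
{
    public function test_it_adds_two_amounts(): void
    {
        $total = (new Money(100))->add(new Money(250));

        $this->assertSame(350, $total->cents);
    }

    // After the first case, the following ones are predictable enough
    // that Copilot usually suggests them almost verbatim.
    public function test_adding_zero_keeps_the_amount(): void
    {
        $total = (new Money(100))->add(new Money(0));

        $this->assertSame(100, $total->cents);
    }
}
```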