It’s clear that generative AI is already being used by the vast majority of programmers. That’s great. Even if the productivity gains are smaller than many people claim, 15-20% is significant. And it’s no small thing that it makes it easier to learn programming and start a productive career. We were all impressed when Simon Willison asked ChatGPT to help him learn Rust. It’s amazing to have that kind of power at your fingertips.
Still, there is one concern I share with many other software developers: Will generative AI widen the gap between junior and senior developers?
Generative AI makes a lot of things easier. When I write Python, I often forget the colons where they’re needed. I often forget the parentheses when calling print(), even though I’ve never used Python 2 (old habits die hard, and there are plenty of older languages where print is a command rather than a function call). I usually have to look up the name of whatever pandas function I need, even though I use pandas a lot. Generative AI solves that problem, whether you use GitHub Copilot, Gemini, or something else. And I’ve written before that, for beginners, generative AI saves a lot of time, frustration, and mental space by reducing the need to memorize the arcane details of library functions and language syntax, details that keep growing as every language feels the need to catch up with its competitors. (The walrus operator? Really?)
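To make the point concrete, here is a small illustration of the kind of syntax details I mean (a toy snippet, not code from any real project):

```python
# Compound statements need a trailing colon...
for n in [1, 2, 3]:
    # ...and print() is a function call in Python 3, not a statement
    # as it was in Python 2.
    print(n)

# The walrus operator (:=), added in Python 3.8, binds a name inside
# an expression: one more detail to remember.
if (total := sum([1, 2, 3])) > 5:
    print(f"total is {total}")
```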
But there’s another side to that story. We’re all lazy; nobody wants to memorize the names and signatures of all the functions in the libraries we use. But isn’t it a good thing to know them? Fluency exists in programming languages just as it does in human languages. You don’t get fluent by using a phrasebook. You might survive a summer backpacking around Europe with one, but you’ll have to do much better than that to get a job there. The same is true in almost every field. I have a PhD in English literature. I know that Wordsworth was born in 1770, the same year as Beethoven, and that Coleridge was born in 1772. Many important texts in Germany and England were published in 1798, plus or minus a few years. The French Revolution happened in 1789. Do those facts mean anything? Did something important happen, something beyond Wordsworth and Coleridge writing a few poems and Beethoven writing a few symphonies? Yes, indeed. But how can someone who doesn’t know these basic facts expect an AI to tell them what happened when all these separate events collided? Will we think to ask about the connections between Wordsworth, Coleridge, and German thought? Will we think to formulate ideas about a Romantic movement that transcends individuals and even European nations? Or will we be stuck on islands of unconnected knowledge? It’s we, not the AI, who have to think of the connections. The problem isn’t that the AI can’t make them; it’s that we won’t think to ask it to make them.
The same problem shows up in programming. To write a program, you need to know what you want to do. But you also need some idea of how to get non-trivial results out of an AI: you need to know what to ask and, perhaps surprisingly, how to ask it. I experienced this a while ago. I was doing some simple data analysis with Python and pandas. Partly as an experiment, and partly because I don’t use pandas often enough, I used a language model as a line-at-a-time assistant (much like GitHub Copilot), asking “How do I do this?” for every line of code I needed. And the model backed me into a corner that I had to hack my way out of. How did I get into that corner? It wasn’t the quality of the answers: every answer to every question I asked was correct. In the postmortem, I checked the documentation and tested the sample code the model provided. But I was left with one question I absolutely had to ask. I went to another language model, wrote a longer prompt that described the entire problem I was trying to solve, compared its answer to my sloppy hack, and then asked, “What does reset_index() do? How do I use it?” I felt like a novice who knew nothing, and that feeling wasn’t wrong. If I had known to ask the first model about resetting the index, I wouldn’t have been in that predicament.
You could read this example as saying, “Look, you don’t need to know all the details of pandas. Just write better prompts and ask the AI to solve the whole problem.” Fair enough. But I think the real lesson is that you do need to be good with the details. If you don’t know what your language model is doing, then whether you’re generating code in big chunks or one line at a time, either approach will get you into trouble sooner or later. You don’t need to know every detail of pandas’ groupby() feature, but you do need to know that it’s there. And you need to know that reset_index() is there. I had to ask GPT, “Wouldn’t it work better to use groupby()?” because when I asked it to write the program a line at a time, groupby() should have been the obvious solution, but it wasn’t. And you may still need to know whether the model used groupby() correctly. Testing and debugging have not gone away and will not go away.
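For readers who don’t live in pandas, here is a minimal sketch of the pattern in question, with made-up data and column names rather than the dataset from my experiment: groupby() returns a result indexed by the group keys, and reset_index() flattens it back into an ordinary DataFrame.

```python
import pandas as pd

# Toy data; the column names are invented for illustration.
df = pd.DataFrame({
    "region": ["east", "east", "west", "west"],
    "sales":  [100, 150, 200, 250],
})

# groupby() aggregates, but the result is indexed by the group keys...
totals = df.groupby("region")["sales"].sum()

# ...and reset_index() turns it back into an ordinary DataFrame with
# "region" as a plain column, which is exactly the kind of detail
# that can leave you stuck.
flat = totals.reset_index()
print(flat)
```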
Why is this important? Let’s not worry about some distant future in which programming itself is no longer necessary. We should ask how a junior programmer entering the field today can become a senior programmer if he or she becomes overly reliant on tools like Copilot and ChatGPT. This is not to say that such tools shouldn’t be used. Programmers have always built better tools for themselves; generative AI is just the latest generation, and one aspect of fluency has always been knowing how to use tools to become more productive. But unlike previous generations of tools, generative AI can easily become a crutch. It can hinder learning rather than facilitate it. And a junior programmer who never becomes fluent, who always needs the phrasebook, will have a hard time making the transition to seniority.
And that’s the problem. As many have said, people who learn to use AI won’t have to worry about losing their jobs to AI. But there’s another side to that: people who merely learn to use AI, without becoming fluent in what the AI is doing for them, will have to worry about losing their jobs to AI. They will be replaceable precisely because they can’t do the things the AI can’t do. They won’t be able to come up with good prompts, because they’ll have a hard time imagining what’s possible. They’ll have a hard time figuring out how to test, and a hard time debugging when the AI fails. What should they learn? That’s a hard question, and my idea of fluency may not be the right one. But I’d wager that people who are fluent in the languages and tools they use will be more productive users of AI than those who aren’t. I’m also convinced that learning to see the big picture, rather than just the little pieces of code they’re working on, will take them far. Finally, the ability to connect the big picture to the microscopic world of fine details is a skill that few people have. I don’t. And if it’s any consolation, I don’t think AI does either.
So learn how to use AI. Learn how to write good prompts. The ability to use AI has become a “must have” for jobs, and rightly so. But don’t stop there. Don’t let AI limit what you learn, and don’t fall into the trap of thinking, “The AI knows this, so I don’t need to.” AI can help you become fluent. Asking “What does reset_index() do?” felt humbling, but it was a meaningful question, and it’s a function I certainly won’t forget now. Learn to ask big-picture questions, too: what context does this code fit into? Asking questions like these, rather than just accepting the AI’s output, is the difference between using AI as a crutch and using it as a learning tool.