I don't get this over-excitement about AI-generated code.
(This article first appeared on LinkedIn.)
We had this brilliant idea of combining Leibniz's binary system, Gödel numbering, and Turing's universal machine into very efficient transistor-based machines, capable of executing an incredible number of simple operations per second.
The only problem was/is that translating the information we exchange daily (numbers, texts, sounds, pictures) into 0s and 1s is a terribly tedious process, and even worse is translating our "ideas" about manipulating that information into the basic operations a binary computer understands.
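To make that concrete, here is a tiny, purely illustrative sketch of my own (not part of the original argument) showing what even a two-letter greeting becomes once it is pushed down to the bits a machine actually works with:

```cpp
#include <bitset>
#include <iostream>
#include <string>

int main() {
    // Even a tiny piece of everyday information becomes a long run of bits.
    std::string text = "Hi";
    for (unsigned char c : text) {
        std::cout << std::bitset<8>(c) << ' ';  // prints 01001000 01101001
    }
    std::cout << '\n';
}
```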
We started with punch cards, then binary code entered by hand, then assembler, then ... (insert here all the programming languages we have invented over time).
The key point is that formal (aka programming) languages were/are not meant for computers, but for humans.
Every programming language is born to help humans express their "intentions" in the most convenient and understandable way, so that compilers (again, another piece of software written to spare humans from entering binary code...) can do their job as well as possible.
Programming paradigms have nothing to do with what a computer actually does when it executes code. They are attempts to help humans think about what they need to do, avoid common errors, and still understand, some time later, what they actually intended to do...
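A toy illustration (a C++ sketch of mine, but any compiled language would do): the two functions below express the same intention in an imperative and a declarative style, and a decent optimizing compiler will typically reduce both to essentially the same machine code. The paradigm is for the reader, not for the CPU.

```cpp
#include <numeric>
#include <vector>

// "Imperative" intention: walk the vector and accumulate by hand.
int sum_loop(const std::vector<int>& v) {
    int total = 0;
    for (int x : v) total += x;
    return total;
}

// "Declarative" intention: just describe the reduction.
int sum_algo(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0);
}
```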
If we look at the software industry as a whole, most of the code we develop is there to help humans reduce their effort and their mistakes.
Take all the operating systems, libraries, and frameworks developed again and again in different languages.
This incredible effort (and the consequent dependency hell...) exists to keep each developer from "reinventing the wheel" every time and from repeating the same mistakes over and over.
But if we look at compilers (the bridge between humans and computers) and take a closer look at their optimization strategies, we can see that all those "aids" (i.e. languages, paradigms, frameworks) are, no matter how hard we try, actual impediments to getting the best out of binary code (see this provocative talk about clean code...).
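To give an idea of what that talk is getting at, here is a hedged sketch of my own (types and names are hypothetical, not taken from the talk): the "clean", polymorphic version hides the computation behind a virtual call, while the flat version lays it bare for the optimizer.

```cpp
#include <cstdint>

// "Clean" OO version: each shape hides its area computation behind a virtual call.
struct Shape  { virtual ~Shape() = default; virtual double area() const = 0; };
struct Square : Shape { double side{};   double area() const override { return side * side; } };
struct Circle : Shape { double radius{}; double area() const override { return 3.14159265 * radius * radius; } };

// "Flat" version: one tag, one switch; trivially inlinable by the compiler.
enum class Kind : std::uint8_t { Square, Circle };
struct FlatShape { Kind kind; double dim; };

double area(const FlatShape& s) {
    switch (s.kind) {
        case Kind::Square: return s.dim * s.dim;
        case Kind::Circle: return 3.14159265 * s.dim * s.dim;
    }
    return 0.0;
}
```

The indirection that helps the human reader is exactly what the optimizer struggles to see through, and that is precisely the talk's provocation.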
We always need more RAM, faster CPUs, and more storage to do pretty much the same things we did 10 years ago.
Now, going back to Copilot and ChatGPT and all this genAI everybody is talking about.
We are investing an incredible amount of resources to train computers to understand a less-than-ideal language for expressing our intentions (i.e. natural language), so that they can produce code in a human-oriented formal language (i.e. a programming language) to make it understandable to those same humans, which then needs to be translated by yet another program (i.e. a compiler) into optimized binary code... what could go wrong?
Why are we not removing the human-in-the-middle problem altogether and asking the computer to generate optimized binary code directly from our ideas, expressed in a "disciplined" natural language?
What if we could finally get rid of programming languages, libraries, and frameworks altogether, focus only on what really matters (i.e. algorithms and data structures), and finally restart Thinking Above the Code?
This way we would no longer have to worry about the effort of reinventing the wheel every time. Computers are faster than us and less prone to complaining about repetitive tasks, after all.
But of course, we need a way to build such a system, and therefore we need to develop the "right" programming languages, frameworks, and paradigms.
We are only human, after all... ;)