AI is advancing at a rapid clip this year, with the debut of several technologies boosting its adoption. Generative Pre-trained Transformer (GPT), for example, is a large-scale natural language technology that uses deep learning to produce human-like text.
The third generation, GPT-3, predicts the most likely next word in a sentence based on its accumulated training data. It can write stories, songs, poetry, and even computer code.
By fine-tuning GPT-3 on more than 100 gigabytes of code from GitHub, an online software repository, OpenAI recently created Codex. The software can write code when prompted with an everyday description of what it’s supposed to do, for instance, counting the vowels in a string of text. But researchers have found that it performs poorly when tasked with tricky problems.
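The vowel-counting task mentioned above is the kind of thing such a tool generates from a plain-English prompt. Written out by hand, it might look like this (a minimal Python sketch, not Codex’s actual output):

```python
def count_vowels(text: str) -> int:
    """Count the vowels in a string, case-insensitively."""
    return sum(1 for ch in text.lower() if ch in "aeiou")

print(count_vowels("Hello, World!"))  # 3
```

Trivial tasks like this are where such tools shine; as the researchers note, harder problems trip them up.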
Ricardo Michel Reyes, AI expert and co-founder and chief science officer of Erudit, discussed the growing use of AI in programming and whether its advancement is something programmers should fear when it comes to their job security. He believes that AI tech in programming can bring a lot to the table to support programmers rather than replace them. Reyes fell in love with artificial intelligence (AI) in 2009 and has not stopped researching and developing neural networks since. He has founded and developed multiple companies in AI and tech.
What are the challenges programmers are facing when it comes to working with AI? Is there talk about replacing these roles with these technologies?
Should programmers fear that AI will replace their jobs? No matter how advanced the robots become, we have not yet programmed them to want or desire anything. They cannot give themselves their own prompts. Whatever needs to be coded, there has to be a motor of desire guiding the work, and in the case of AI in programming, that’s a human programmer. Plus, the code an AI suggests is just what most programmers would have written, not necessarily the best way to write it. There is still that random, messy way of connecting things we call creativity, and it’s missing from the AI game. Replicating art is not being an artist! Instead of fearing it, programmers should focus on what needs to be coded, in which order, whether it’s the right feature, and how to prevent bugs, and harness the power of AI to write all of the boilerplate. Then, just review whether it makes sense. That can save tons of hours of carpal-tunneled wrists typing the same code over and over.
What are the benefits of using AI in this area, and how are programmers applying it?
There are many AI tools that are very useful to programmers. They can give you code-complexity metrics, point out lines you could simplify, or flag potential runtime errors. They can also estimate how long a task will take a programmer to finish, which is very useful for project management as well as for managing client expectations.
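As a toy illustration of the complexity metrics mentioned above (not any particular tool’s algorithm), one can approximate complexity by counting branching constructs in Python source with the standard `ast` module:

```python
import ast

def branch_count(source: str) -> int:
    """Crude complexity proxy: count branching nodes in Python source."""
    tree = ast.parse(source)
    branch_nodes = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    return sum(isinstance(node, branch_nodes) for node in ast.walk(tree))

code = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "no small factors"
"""
print(branch_count(code))  # 3: one for-loop and two if-statements
```

Real analyzers compute richer metrics (cyclomatic complexity, data flow, type inference), but the idea is the same: parse the code and measure its structure.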
The biggest advances are GitHub Copilot and ChatGPT. Copilot is a smart autocomplete that can suggest the missing part of the line of code you’re writing, or even the full next line. ChatGPT is a chatbot that can, among many other things, write a full program if asked to do so. Just type: “write a program to convert Fahrenheit degrees to Celsius in Python,” et voilà! AI tech in programming is growing really fast and will continue to accelerate because of all the value it can bring to the table.
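The Fahrenheit-to-Celsius prompt above would yield something like the following short program (a hand-written sketch of typical output, not an actual ChatGPT transcript):

```python
def fahrenheit_to_celsius(fahrenheit: float) -> float:
    """Convert a temperature from degrees Fahrenheit to degrees Celsius."""
    return (fahrenheit - 32) * 5 / 9

if __name__ == "__main__":
    # Water's boiling point: 212°F should convert to 100°C.
    print(f"{fahrenheit_to_celsius(212):.1f}")  # 100.0
```

The point is the workflow, not the arithmetic: a one-sentence request produces runnable code that the programmer then reviews.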
With the introduction of generative AI tools like ChatGPT, and the newly announced Google Bard and Microsoft AI offerings, what are the concerns surrounding this?
One big concern is that people trust these models too much and stop verifying facts and correctness. They stop reviewing the code AI produces for security vulnerabilities, infinite loops, and so on. Another concern is that the AI starts autocompleting with other companies’ intellectual property because programmers were not careful enough to opt out of having their code sent along as training data for Copilot. If we keep going down this rabbit hole, we find even more issues: worsening inequality, concentrating more power in Microsoft and Google, creating more dependency, cheating on technical tests in recruitment, and so on. On the other hand, AI does provide opportunities for junior programmers and for the underprivileged who lack access to education. I think these benefits, the opening of doors for the underprivileged plus the time AI can save, can compensate for the issues, assuming we continue striving for the thoughtful and ethical use of AI.
What is Erudit doing to address these issues?
We are pro-generative AI. We are also part of The Good AI, and we follow recommendations from the Montreal AI Ethics Institute and the Future of Life Institute. We use tools to save coding time for common tasks, but always review and test thoroughly, and the core parts prone to error or bias are peer-reviewed and discussed by the team.
What can organizations do to respond to these issues in 2023?
Continue educating themselves on developments in AI and technology. Don’t be afraid to test these tools in small projects where results can be reviewed and where limitations and opportunities can be identified. Always stay critical, and test with beta users who can give feedback across multiple contexts and scenarios. Also, have conversations with people who work with AI. Message me on LinkedIn! Most of us are very open to discussing both the opportunities and the limitations of the technology.
What does the future hold in this area and what can programmers do to be prepared?
Humans have tried to make everything cheaper and faster since the beginning of time with the help of new tools. It’s important not to treat AI as a separate entity from humans or as a new species. There’s no way to create without destroying, and programmers took a lot of jobs from other people when computers became a staple of work. This is the cycle of transformation and innovation. If we think we’re done with learning after college, that’s a problem. Learning is a lifelong process, and we need to reskill ourselves as technology grows. Even if ChatGPT automates all programming, there will always be new jobs: prompt writer, prompt optimizer, AI orchestrator, AI-human safety and bias supervisor. It’s better to adopt it early and learn fast.

What does the future hold? Programming languages emerged as a way to help humans control computers, and when AI gets capable enough, we might not even need code. Right now, computers run on logic and math instructions in binary code, but quantum computers will change the game again. I see the programmers of the future as designers and orchestrators who organize actions and manage AIs that write the instructions in binary code and quantum circuits directly, with no need for the inefficient, error-prone compilation process. The only thing I’m really afraid of is the day we invent artificial desire, pain, and pleasure, but that’s another discussion!