“When you start coding, it makes you feel smart in itself like you’re in the Matrix [film],” says Janine Luk, a 26-year-old software engineer who works in London.
Born in Hong Kong, she started her career in yacht marketing in the south of France but found it “a bit repetitive and superficial”. So she started teaching herself to code after work, followed by a 15-week coding boot camp. On the camp’s last day, she applied for a job at the cyber-security software company Avast, and started working there a week later.
“Two and a half years later, I really think it’s the best decision I ever made.”
When she started at the company, she was the first woman developer working on her team. She now spends her spare time encouraging other women, people of colour, and LGBT people to try coding.
For programmers like her, she says, the most interesting recent shift has been the rise of artificial intelligence (AI) tools that can bite off ever bigger chunks of programming by themselves.
“The single most mind-blowing application of machine learning I’ve ever seen,” Instagram’s co-founder Mike Krieger enthused about Copilot, an AI assistant from the code-hosting site GitHub that suggests code to programmers as they type.
It is based on an artificial intelligence called GPT-3, released last summer by OpenAI, a San Francisco-based AI lab co-founded by Elon Musk. The GPT engine does a “very simple but very large thing – predicting the next letter in a text,” explains Grzegorz Jakacki, the Warsaw-based founder of Codility, which makes a popular hiring test. OpenAI trained the AI on text already available online, such as books, Wikipedia and hundreds of thousands of web pages, a diet that was “somewhat curated but in all possible human languages”. “And spookily, it wasn’t taught the rules of any particular language,” adds Mr Jakacki.
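To see what that means in miniature, here is a toy next-letter predictor in Python. It is a hypothetical sketch, nothing like GPT-3’s scale or architecture: it simply counts which character follows each short run of characters in its training text, then guesses the likeliest one.

    # Toy illustration of "predicting the next letter in a text":
    # count which character follows each three-letter context,
    # then pick the most common one.
    from collections import Counter, defaultdict

    ORDER = 3  # how many preceding characters to look at

    def train(text):
        model = defaultdict(Counter)
        for i in range(len(text) - ORDER):
            context, nxt = text[i:i + ORDER], text[i + ORDER]
            model[context][nxt] += 1
        return model

    def predict_next(model, text):
        counts = model.get(text[-ORDER:])
        return counts.most_common(1)[0][0] if counts else None

    corpus = "the cat sat on the mat. the cat sat on the hat."
    model = train(corpus)
    print(predict_next(model, "the c"))  # prints 'a'

GPT-3 applies the same next-step principle at vastly greater scale, across its enormous training diet.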
The result was plausible passages of text. People have subsequently asked it to write in a variety of styles, for example, new Harry Potter stories in the style of Ernest Hemingway or Raymond Chandler.
And, since the AI has been fed code written by professional programmers, it’s really helping coders draw on their colleagues’ collective wisdom, says Dina Muscanell, Vermont-based senior programmer at the open-source software company Red Hat.
There are already coding-community websites like Stack Exchange, where programmers can pose questions and get suggestions.
“If you think about getting that feedback instantaneously as you’re typing, that’s pretty awesome,” she says. “You have a team of people feeding you this code, even if there is an AI assembling it.”
But professional programmers also have a few qualms about the new AI kid on the block. In software engineering, “you’re lucky where the garbage [rubbish] is very obvious, but this thing can generate very subtle garbage,” says Mr Jakacki. Subtle mistakes in code can be especially costly and very hard to find. A possible future answer could involve using AI to detect bugs: for instance, noticing that some sequences of button presses on a microwave “are valid inputs, but do not make sense”. In the meantime, “if you’re not experienced, and you’re just trying to learn, you could be doing something bad without being aware of that,” warns Ms Muscanell.
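To illustrate the kind of “subtle garbage” Mr Jakacki means, here is a hypothetical Python snippet of the sort an assistant might plausibly suggest. It looks reasonable and runs without error, yet quietly loses data.

    # Splits a list into fixed-size chunks -- or appears to.
    # The range stops before any final partial chunk, so leftover
    # items are silently dropped whenever the list length is not
    # a multiple of the chunk size.
    def chunk(data, size):
        return [data[i:i + size]
                for i in range(0, len(data) - size + 1, size)]

    print(chunk([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4]] -- the 5 vanishes

The fix is tiny (iterate over range(0, len(data), size)), but a reviewer skimming plausible-looking code can easily sail past it.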
The hype over GPT-3 was “way too much”, and people needed reminding that the AI “sometimes makes very silly mistakes”, tweeted Sam Altman, OpenAI’s chief executive.
Still, GitHub decided to train up another, similar model, this time on software source code, feeding Copilot a healthy diet of public code. As a result, Copilot can provide “relatively good solutions, even though sometimes it requires some tweaking”, according to Miss Luk, who has tried giving the AI coding challenges.
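The interaction typically looks like the Python sketch below: the programmer writes a name, signature and comment, and the assistant proposes a body. The completion here is a hypothetical illustration, not actual Copilot output.

    # Written by the programmer: a signature and docstring.
    def is_palindrome(text: str) -> bool:
        """Return True if text reads the same forwards and backwards,
        ignoring case, spaces and punctuation."""
        # A body an assistant might suggest. The "tweaking" Miss Luk
        # mentions could be, say, deciding how empty strings behave.
        cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
        return cleaned == cleaned[::-1]

    print(is_palindrome("A man, a plan, a canal: Panama"))  # True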
As a programmer, far from seeing the tool as a threat to her job, she likes the idea of having AI support her with “the more boring parts” of coding, like the convoluted text-matching patterns known as regular expressions, which she always has to “quadruple check”.
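Regular expressions are terse pattern-matching recipes in which a single misplaced character changes what gets matched, which is why they reward that quadruple-checking. A hypothetical Python example, a deliberately simplified email matcher:

    import re

    # Simplified email pattern; real address rules are far messier,
    # and one forgotten anchor or character class breaks it subtly.
    EMAIL = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

    print(bool(EMAIL.match("jane.doe@example.com")))  # True
    print(bool(EMAIL.match("jane.doe@example")))      # False: no dot-suffix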
Another big question involves ownership of this auto-generated code. What if Copilot, which has been trained on other people’s programs, dishes up something near-identical to code another programmer has written, and then you use it? Miss Luk argues:
“Using the AI tool can potentially violate open source licences because it can cite something from the training set.”
And that could land you in hot water for plagiarism. It’s all an area “where the law is not catching up with technology,” Mr Jakacki says.
In theory, you could measure how much generated code owed to one particular piece of training code: train up a different AI on all the other source code, leaving that piece out, and see whether the near-identical output disappears. But doing this would be “extremely costly,” observes Mr Jakacki. In reality, at the moment the AI only provides short passages of code, not fully fledged software programs.
By comparison, Mr Jakacki says, a website needs a minimum of about 10,000 lines of code before “you’re getting some meaningful functionality”. So it’s not quite ready to replace human programmers yet, or to bring about the fabled AI singularity – an idea often traced back to mathematician John von Neumann, in which computer intelligence enters a runaway cycle of self-improvement and quickly far surpasses human intelligence. More to the point, for coders like Miss Luk, “even though it does help, it doesn’t necessarily mean the workload is alleviated”.
Code still needs to be thoroughly reviewed and tested, both for how it works on its own and for how it fits with other pieces of code. And the chief reason Miss Luk enjoys coding is the problem-solving element. “If everything is already done for you, it takes the fun out of it,” she reflects.
If computers do too much of the thinking, “you don’t get the satisfaction after solving an issue”. And while she thinks AI programming tools will keep learning and adapting, “hopefully not so soon that we won’t be needed any more,” she laughs.