For software developers working at breakneck speed to keep up with a growing list of demands and obligations, the arrival of artificial intelligence (AI) coding assistants several years ago was a blessing. Developers quickly became avid users of the generative AI models that accelerated code creation and sped up delivery. But on the heels of that undeniable initial benefit, the other shoe has dropped, adding layers of complexity to an already complex environment. Securing the attack surface was a Sisyphean task before; AI coding is pushing it further out of reach.
Beyond adding further complexity to the codebase, AI models also lack the contextual nuance that is often necessary for creating high-quality, secure code, particularly when they are used by developers who lack security knowledge. As a result, vulnerabilities and other flaws are being introduced at a pace never seen before.
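To make that point concrete, here is a minimal, hypothetical sketch of the kind of context an assistant can miss: a user-lookup function that builds its SQL query through string interpolation, a classic injection flaw, shown next to the parameterized version a security-aware developer would write. The function names and schema are invented for illustration.

```python
import sqlite3

# Hypothetical example: the table, columns, and function names are invented.

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # The kind of code an assistant may produce from a naive prompt:
    # the query is built by string interpolation, so a crafted username
    # such as "' OR '1'='1" changes the query's meaning (SQL injection).
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # The security-aware version: a parameterized query keeps user input
    # as data, never as SQL, regardless of what the caller passes in.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

Both versions behave identically on well-formed input, which is exactly why a developer without security training can accept the first one without a second look.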
The current software environment has grown out of control from a security standpoint, and it shows no signs of slowing down. But there is hope for slaying these twin dragons of complexity and insecurity. Organizations must step into the dragon’s lair armed with strong developer risk management, backed by education and upskilling that gives developers the tools they need to bring software under control.
AI Assistants Increase Complexity and Code Maintainability Issues
When OpenAI’s ChatGPT brought generative AI into the mainstream in November 2022, developers were quick to take advantage, soon using GenAI models to speed up code creation and software development. By June 2023, 92% of U.S. developers were using AI tools for work or personal use, according to a GitHub survey. Developers largely saw accelerated code creation as a benefit, and using AI tools quickly became routine.
However, although subsequent surveys, such as one by Snyk, found that about three-quarters of developers considered AI-generated code more secure than code written by humans, they also found that AI was nevertheless introducing errors into more than half of the code it produced. What’s more, 80% of developers were ignoring secure AI coding policies, passing up any chance of catching those mistakes as they happened.
With AI assistants accelerating the process, a mushrooming amount of vulnerable software is being released into an environment that, regardless of how the code was created, is already rife with security flaws.
More recent research by GitClear sheds light on how AI-generated code increases complexity and compounds the challenge of maintaining and securing software late in the software development lifecycle (SDLC). GitClear analyzed four years of changed code—about 153 million lines—created between January 2020 and December 2023, and found alarming results concerning code churn and the rate of copied or pasted code.
“Code churn,” defined as code that is changed or updated within two weeks of being written, was projected to double in 2024 compared with its 2021, pre-AI baseline. Over the same period, the amount of copied and pasted code grew faster than code that was updated, deleted, or moved, signaling a drift away from DRY (Don’t Repeat Yourself) practices, a trend that tends to multiply software flaws.
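As a minimal sketch of why copied and pasted code erodes security over time, consider validation logic duplicated across two handlers: a later hardening fix applied to one copy silently misses the other, whereas a single shared helper picks up the fix everywhere. The functions and filename rules below are invented for illustration.

```python
import re

# Hypothetical illustration: duplicated vs. shared validation logic.

# Copy/paste approach: the same check pasted into two handlers. If the rule
# is later tightened in one place (say, to block path traversal), the other
# copy silently keeps the old, weaker behavior.
def handle_upload(filename: str) -> bool:
    return re.fullmatch(r"[\w.-]+", filename) is not None

def handle_download(filename: str) -> bool:
    return re.fullmatch(r"[\w.-]+", filename) is not None

# DRY approach: one shared helper, so a hardening fix (here, rejecting "..")
# lands in every caller at once.
def is_safe_filename(filename: str) -> bool:
    return re.fullmatch(r"[\w.-]+", filename) is not None and ".." not in filename

def handle_upload_dry(filename: str) -> bool:
    return is_safe_filename(filename)

def handle_download_dry(filename: str) -> bool:
    return is_safe_filename(filename)
```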
Both bad practices amplify the complexity of applications, which drives up support costs while increasing the difficulty of securing software. The speed of software production, accelerated by AI, puts more vulnerabilities into the pipeline before they can be fixed, which also considerably lengthens the time it takes for security to catch up. A study by the National Institute of Standards and Technology (NIST) found that, compared with correcting flaws at the start of the SDLC, fixing defects during testing takes 15 times longer. And fixing them during deployment/maintenance can take 30 to 100 times longer.
AI tools increase the speed of code delivery, enhancing efficiency in raw production, but those early productivity gains are being overwhelmed by code maintainability issues later in the SDLC. The answer is to address those issues at the beginning, before they put applications and data at risk.
Dragon Hunting With An Armory of Upskilling
Organizations involved in software creation need to change their culture, adopting a security-first mindset in which secure software is seen not just as a technical issue but as a business priority. Persistent attacks and high-profile data breaches have become too common for boardrooms and CEOs to ignore. Secure software is at the foundation of a business’ productivity, reputation, and viability, making a commitment to a robust security culture a necessity. And at the foundation of a secure culture is developer risk management.
Implementing an education program that upskills developers on writing secure code and correcting errors in AI-generated or third-party code can keep those increasingly common defects out of the pipeline, reducing complexity (the first dragon) while improving security (the second dragon) and overall software quality.
Companies need to invest in a program that provides agile, hands-on and continuous learning, and gives security a prominent place among their key performance indicators. A learning program should establish a baseline of skills developers need, and it should include both internal and industry baselines to gauge their progress. It should address real-world problems and be tailored to developers’ work, in formats that fit within their schedules and involve the programming languages in which they actually work. That kind of upskilling feeds into a security culture in which developers work with security teams to ensure that the best security practices are followed at the start of the SDLC, which has proved to be the most effective (and cost-effective) way of ensuring software security.
A crucial aspect of any education program is knowing that it is working: that developers have absorbed their new skills and are applying them consistently.
The advantages AI tools deliver in speed and efficiency are impossible for time-crunched developers to resist. But the complexity and risk created by AI-generated code can’t be ignored either.
Organizations need to thoroughly upskill developers so that they can work with security professionals to nip software security problems in the bud. Only by managing developer risk can the twin dragons of complexity and insecurity be slain so that code, whether generated by AI or humans, can be made safe, secure, and free from vulnerabilities.
Related: How to Eliminate “Shadow AI” in Software Development
Related: Semgrep Raises $100M for AI-Powered Code Security Platform