The good and the bad of AI generated code

Generative AI tools are transforming the coding landscape, making both skilled and novice developers more efficient. However, the same tools that boost productivity can also generate flawed and potentially dangerous code. As the use of AI in coding continues to rise, understanding its limitations and risks becomes crucial.

AI coding assistants can significantly speed up the development process. GitHub found that developers using its AI assistant worked 55% faster. Gartner predicts that by 2028, 75% of software engineers will use AI code assistants, up from less than 10% in early 2023. Major tech companies like OpenAI, Meta, Microsoft, Google, and Amazon all offer AI coding tools.

But these tools are not all equal. The tech publication ZDNet recently tested the major AI models. ChatGPT passed all of its coding tests, while Google's Gemini Advanced, Meta AI, and Meta Code Llama failed most of them. Curiously, Microsoft Copilot failed them all, which is strange given that Microsoft's AI engine is supposedly based on OpenAI's ChatGPT.

Further, despite the productivity benefits, a number of studies find that AI-generated code comes with significant risks:

– Security Concerns: A Stanford study found that programmers using AI assistants wrote less secure code.
– Incorrect Code: Research from Bilkent University showed that 30.5% of AI-generated code was incorrect, with another 23.2% partially incorrect.
– Reverted Changes: GitClear reported that the rise of AI coding assistants correlated with an increase in code changes that needed to be fixed within two weeks of being authored.
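To illustrate the security finding above, here is a hypothetical sketch (not from any of the cited studies) of a pattern that security reviews frequently flag in generated code: building an SQL query by string formatting instead of using a parameterized query. The table, data, and function names are invented for illustration.

```python
import sqlite3

# In-memory database with a made-up users table, purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

def find_user_insecure(name):
    # Pattern often seen in generated code: the query is assembled with
    # string formatting, so a crafted input like "' OR '1'='1" matches
    # every row (SQL injection).
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver handles escaping, so the same
    # malicious input simply matches nothing.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

malicious = "' OR '1'='1"
print(len(find_user_insecure(malicious)))  # 2 -- leaks every row
print(len(find_user_safe(malicious)))      # 0 -- no match
```

Both versions look plausible at a glance, which is exactly why such flaws slip past developers who accept AI suggestions without review.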

Programmers are aware of the potential issues. Over half of developers have concerns about the quality of AI-generated code. Human coders also make mistakes, but AI assistants can introduce different kinds of errors, such as logical errors in numbers and loops, and problems with mathematical operations.
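The loop and arithmetic errors mentioned above are typically subtle. As a hypothetical example (the function names and the bug are invented for illustration), an off-by-one mistake in a loop bound can silently drop the last element of a result:

```python
def moving_sum_buggy(values, window):
    # Off-by-one in the range bound: the final full window is
    # silently skipped -- a classic subtle logic error in loops.
    return [sum(values[i:i + window])
            for i in range(len(values) - window)]

def moving_sum_fixed(values, window):
    # Correct bound: "+ 1" includes the last full window.
    return [sum(values[i:i + window])
            for i in range(len(values) - window + 1)]

data = [1, 2, 3, 4]
print(moving_sum_buggy(data, 2))  # [3, 5] -- last window missing
print(moving_sum_fixed(data, 2))  # [3, 5, 7]
```

Bugs like this pass casual inspection and even many happy-path tests, which is why reviewers need to scrutinize generated loops and boundary conditions rather than trust that the code compiles and runs.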

Currently, bad AI-generated code tends to produce messy code libraries or minor issues. Those of us who used code generators in a past life recall that while they produced code quickly, the quality and maintainability of that code were often terrible. AI-generated code is better, but reportedly still has some weaknesses.

However, as AI tools improve and take on more autonomous roles, the potential for more significant problems grows. Complex architectural decisions and precise requirements are areas where AI tools still struggle.

Generative AI tools save time and money upfront, but they may also lead to increased complexity and support costs, at least in their current incarnation. There is no doubt that these tools will be improved over time.

In the meantime, the software industry must balance the benefits of rapid deployment with the need for thorough review and correction of AI-generated code to avoid future disasters.
