Pros & Cons of AI Coding Assistants: Speed vs Quality


AI coding assistants may be good for speed, but are they helping developers write quality code?

With the launch of Devin AI this month, coding with artificial intelligence takes on a new life: a chatbot handling the entire process, from prompting to code generation to testing, bug-fixing, and even deployment.

If the crystal ball suggests anything, it is that the role of the software engineer is, at the very least, going to change. In the short term, though, we investigate the pros and cons of AI coding assistants.

Key Takeaways

  • AI coding assistants like GitHub Copilot or ChatGPT are increasingly popular among developers, offering speed and productivity boosts by suggesting code completions based on context.
  • But despite these benefits, the quality of code produced by AI assistants has to be questioned: is it creating code churn and storing up future technical debt?
  • AI coding assistants may speed up development but compromise code quality if not used carefully — and code must be reviewed and verified forensically.
  • Developers must be careful about the risks of writing less secure code while being overconfident in its security.
  • We speak to a range of developers and code specialists for analysis from the frontline of AI code development.

Why AI Coding Matters

At the crux of every software application are lines of code. Depending on factors like intended functionality and architectural decisions, the codebase can range from thousands to millions of lines. This code often relies on reusable libraries and interfaces with other components through Application Programming Interfaces (APIs) and may be organized into modular containers and microservices to manage complexity.

Given all the rigors associated with the software development lifecycle, writing code manually from scratch has always been challenging for developers.

In our new AI-enabled world, many developers are turning to AI coding assistants such as Devin AI, GitHub Copilot, or ChatGPT, with research from GitHub showing that 92% of US-based developers now use AI coding tools to enhance coding speed and productivity.


Despite the gains, a new study by GitClear has questioned the belief that AI coding assistants are advancing software development without downsides.

The report analyzed over 150 million lines of code and found concerning increases in code churn and decreases in reuse since the rise of AI model-based development.

This raises the question: if AI coding assistants are great for speed, as GitHub claims, what can we say about the quality and safety of the code they generate?

AI Coding Assistants & How They Work

AI coding assistants such as Devin AI, GitHub Copilot, Divi AI, and Amazon CodeWhisperer are tools that use AI to assist software developers in writing, reviewing, and debugging code.

They leverage generative AI models and large language models (LLMs) that are trained on large code datasets to suggest completions for lines and blocks of code in an integrated development environment (IDE).

As a developer types, the assistant provides recommendations in real-time to autocomplete common syntax, names of variables/functions, and even entire code snippets based on context. For example, if a developer types “for i in…“, the assistant may suggest “for i in range(10):” to complete the for loop syntax.

This is not your generic auto-complete, but instead, it is AI coding with you — and following your context to make (hopefully) the correct suggestions.

Besides autocompleting code, some assistants can also generate entire function definitions or classes if the developer describes the logic in plain language comments.
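For illustration, here is the kind of suggestion such a tool might produce from a plain-language comment. The function name and behavior here are hypothetical, invented for this sketch rather than drawn from any specific assistant:

```python
# The developer writes only the comment below; an assistant might then
# propose the complete function body from that description.
# "Return the n most frequent words in a piece of text, ignoring case."

from collections import Counter

def top_words(text: str, n: int) -> list[str]:
    """Return the n most frequent words in text, ignoring case."""
    words = text.lower().split()
    return [word for word, _ in Counter(words).most_common(n)]
```

The developer still has to read the suggestion critically: an assistant could just as easily produce a version that miscounts punctuation-adjacent words or ignores the case requirement.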

For instance, a project manager at Deloitte shared how he built and launched an app from scratch using AI coding assistants within 30 days.

Meanwhile, Devin AI has demonstrated successfully handling freelance tasks on Upwork.

Other useful features of AI programming tools include error detection, documentation assistance through comments and READMEs, and explanations or examples of code usage.

AI Coding Assistants Offer Speed — But Do They Guarantee Quality Code?

One of the main motivations for adopting AI coding assistants is to speed up software development and cut out mundane tasks for developers, Vaclav Vincalek, founder of 555vCTO, told Techopedia in an email.

However, Karthik Sj, VP of Product Management & Marketing at Aisera, argues this speed may come at the cost of software quality, which can have negative consequences for both developers and users.

He said:

“In terms of quality, the underlying LLMs are far from perfect, and the results can be unpredictable in some cases. This is why developers should always review the code generated.”

According to GitClear’s report, while AI tools help developers write code faster, this does not account for “bad code that shouldn’t be written in the first place.”

The report argues that the raw speed of generating code is not the only important factor — the quality and necessity of the code matter as well. Simply writing more code faster can potentially introduce technical debt, maintenance issues, and complexity down the line if it is low-quality or unnecessary code.

Nazmul Hasan, founder & CIO at AI Buster, concurs with GitClear, adding that low-quality code may cause significant pain and slowdowns for future readers and maintainers trying to understand and update it.

Explaining this in a chat with Techopedia, he said: “Integrating AI coding assistants into my workflow has had mixed impacts on code maintainability and readability.”

“These assistants streamline coding and help uphold standards, but there’s always a risk that the AI-generated code might not align with my project’s specific guidelines or be easily understood by my teammates.

 

I’ve observed instances where this led to inconsistencies and introduced technical debt. It’s become clear to me that while leveraging AI for efficiency, it’s equally important to ensure the code remains clean, well-documented, and consistent with our architectural principles.”

AI Coding Assistants: A Recipe for Software Vulnerabilities

With generative AI security still a subject of global concern, it’s fair to question the increasing reliance on AI coding assistants.

A recent study from a team of researchers at Cornell University examining how programmers use AI coding assistants uncovered troubling trends regarding security vulnerabilities. The researchers found that developers with access to an AI coding tool wrote significantly less secure code compared to those without AI assistance.

Additionally, programmers were more likely to believe their AI-assisted code was secure, even when it contained more vulnerabilities.

The study also indicated that users who trusted AI programming tools less and customized their prompts more carefully introduced fewer security issues.

The researchers found that while AI coding tools can boost efficiency, they may foster overconfidence and lower code quality for security-sensitive applications.

Given these significant risks to the safety and security of the code they generate, have we reached the point where we can depend on AI coding assistants?

The answer is not straightforward, says Hasan, as it depends on several factors, such as the type, complexity, and domain of the code, the quality and reliability of the AI coding assistant, and the level of human oversight and verification.

Safety is an interesting angle here, David Brauchler, principal security consultant at NCC Group, pointed out in an email chat with Techopedia.

“We also need to consider that these systems are not trained to generate good code. They’re trained to generate humanlike code, including its flaws, assumptions, and vulnerabilities.

 

And it’s very possible that as these models consume more data, they’ll infer more about the kinds of mistakes people make when writing code, leading to difficult-to-detect issues.”

How Developers Can Minimize Risk While Using AI Coding Assistants

The rise of AI coding assistants poses a dilemma for software teams: either use them and risk lower-quality code, or ditch them entirely and potentially fall behind on time to market.

But there may be a third option — getting the benefits while minimizing the risks.

On this, Sj recommends that developers improve their AI prompting skills during training and set clear guidelines on what to use the tools for.

Peter McKee, Head of Developer Relations at SonarSource, suggests developers may need to introduce code-scanning tools into their development journey to mitigate potential GenAI coding errors.

He told Techopedia:

“Introducing code-scanning into the CI/CD process can help continuously monitor AI-generated code for bugs and issues.

 

This allows developers to continue focusing on the more high-level project problems, gives them insight into errors, and allows for speedier resolutions.”
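McKee's suggestion can be illustrated with a toy static check. Real scanners such as SonarSource's tooling are far more sophisticated, but even Python's standard `ast` module can flag obviously risky constructs in generated code. The rule set below is purely illustrative:

```python
# Toy static check in the spirit of CI code-scanning: walk a module's
# syntax tree and flag calls to eval/exec, two common red flags in
# generated code. This is an illustration, not a real scanning tool.
import ast

RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str) -> list[int]:
    """Return sorted line numbers where risky built-in calls appear."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            hits.append(node.lineno)
    return sorted(hits)

sample = "x = eval(user_input)\ny = 1 + 1\n"
print(find_risky_calls(sample))  # prints [1]
```

Wired into a CI pipeline, a check like this fails the build before a flagged snippet ever reaches review, which is exactly the "continuous monitoring" McKee describes.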

To spot vulnerabilities in GenAI-developed code, McKee also suggests employing static application security testing (SAST), which can help developers identify security vulnerabilities before they push code into production.
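As a sketch of the kind of issue SAST tools flag, consider SQL built by string interpolation versus a parameterized query. This minimal example uses Python's built-in `sqlite3`; the table and data are invented for illustration:

```python
# Illustrative example of a vulnerability class SAST tools flag:
# SQL built with string interpolation (injectable) vs. parameters (safe).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name: str):
    # Flagged by SAST: untrusted input interpolated directly into SQL.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the value as data, not SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

# The classic injection payload returns every row via the unsafe path:
payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # leaks all rows
print(find_user_safe(payload))    # returns nothing
```

An assistant can plausibly generate either version; a SAST pass in the pipeline is what catches the first before it ships.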

The Bottom Line

AI coding assistants are powerful tools that can speed up software development and improve developer productivity.

However, they are not a silver bullet, and they may also produce low-quality or insecure code, reduce developer skills and creativity, and raise ethical and legal issues.

Therefore, developers need to use AI coding assistants with caution and discretion and not blindly rely on them.

Developers also need to verify, review, and test the code generated by AI coding assistants and ensure it meets the requirements and standards of the programming language and industry in which it is used.


Franklin Okeke
Technology Journalist

Franklin Okeke is an author and tech journalist with over seven years of IT experience. Coming from a software development background, his writing spans cybersecurity, AI, cloud computing, IoT, and software development. In addition to pursuing a Master's degree in Cybersecurity & Human Factors from Bournemouth University, Franklin has two published books and four academic papers to his name. His writing has been featured in tech publications such as TechRepublic, The Register, Computing, TechInformed, Moonlock and other top technology publications. When he is not reading or writing, Franklin trains at a boxing gym and plays the piano.
