Jump to content

AI-assisted software development

From Wikipedia, the free encyclopedia

AI-assisted software development is the use of artificial intelligence agents to augment the software development life cycle. It uses large language models (LLMs), natural language processing, and other AI technologies to assist software developers in a range of tasks from initial code generation to subsequent debugging, testing and documentation.[1]

Technologies

[edit]

Code generation

[edit]

LLMs that have been trained on source code repositories are able to generate functional code from natural language prompts. Such models have knowledge of programming syntax, common design patterns and best practices in a variety of programming languages.[2]

Intelligent code completion

[edit]

AI agents using pre-trained and fine-tuned LLMs can predict and suggest code completions based on context, going beyond simple keyword matching to infer the developer's intent and picture the broader structure of the developing codebase. An analysis has shown that such use of LLMs significantly enhances code completion performance across several programming languages and contexts, and the resulting capability of predicting relevant code snippets based on context and partial input boosts developer productivity substantially.[3]

Testing, debugging, code review and analysis

[edit]

AI is used to automatically generate test cases, identify potential bugs, and suggest fixes. LLMs trained on historical bug data can enable prediction of likely failure points in generated code. Similarly, AI agents are used to perform static code analysis, identify security vulnerabilities, suggest performance improvements and ensure adherence to coding standards and best practices.[1]

Beyond detection, researchers have explored using LLMs for automated program repair, where models propose candidate patches for buggy code. Off-the-shelf LLMs have been reported to repair some security-relevant defects in a zero-shot setting (i.e., without task-specific fine-tuning), including issues categorized by the Common Weakness Enumeration (CWE),[4] being comparable to contemporary, non-AI bug fixing tools. These approaches build on LLMs’ code-generation capability and the resulting patches still require validation through software testing, static program analysis, and human code review.[4][5]

Challenges

[edit]

The incorporation of AI tools has introduced new ethical dilemmas and intellectual property challenges. The ownership of AI-generated code is unclear: who is responsible for the generated end-product? Also unclear are the ethical responsibilities of generated code.[6] Changes in the role of software engineers are inevitable.[7][8]

Governance and oversight

[edit]

The outputs from AI-assisted software development require to be validated through a combination of automated testing, static analysis tools and human review, creating a governance layer that acts as a safeguard ensuring quality and accountability.[9]

Security risks and challenges

[edit]

AI-assisted software development introduces novel security risks that extend beyond traditional software vulnerabilities.

Increased vulnerability introduction

[edit]

Research indicates that developers using AI coding assistants frequently encounter security issues in AI-generated code. A 2023 survey by Snyk found that 56.4% of developers reported that AI coding tools sometimes or frequently introduce security vulnerabilities, yet 80% of developers bypass established security policies when using these tools.[10] The vulnerabilities stem partly from AI models being trained on historical code repositories that may contain outdated or insecure patterns, which the models then suggest with high confidence to developers.[11]

Prompt injection attacks

[edit]

AI coding tools are susceptible to prompt injection attacks, where malicious instructions embedded in external data sources—such as documents, codebases, or web content—can manipulate the AI's behavior in unintended ways.[12] The OWASP Top 10 for Large Language Model Applications, first published in 2023, identifies prompt injection as the highest-priority vulnerability for LLM-based systems.[13] These attacks can be direct, where users craft malicious prompts, or indirect, where malicious content is embedded in external sources that the AI processes.[14]

Agentic AI risks

[edit]

Modern AI coding tools increasingly operate as autonomous agents with access to file systems, terminals, and network resources. In September 2024, Anthropic reported that a suspected Chinese state-sponsored threat actor manipulated their Claude Code tool to autonomously target approximately 30 global organizations across technology, finance, chemical manufacturing, and government sectors.[15] Anthropic characterized this as "the first documented case of a large-scale cyberattack executed without substantial human intervention," with AI performing 80-90% of the operation independently.[16][17]

Shadow AI

[edit]

Organizations face risks from unauthorized use of AI coding tools by employees, sometimes called "shadow AI." Developers may inadvertently expose proprietary code, credentials, or sensitive data to external AI services without organizational oversight, creating compliance and data governance challenges.[18] Snyk's research found that 80% of developers admitted to bypassing security policies when using AI coding tools, and only 10% scan most of the AI-generated code they use.[19]

Mitigation approaches

[edit]

Security frameworks are emerging to address AI-specific development risks. The OWASP Top 10 for Large Language Model Applications provides guidance on vulnerabilities including prompt injection, insecure output handling, training data poisoning, and excessive agency.[20] Recommended mitigations include enforcing least-privilege access controls, implementing human-in-the-loop approval for sensitive operations, segregating untrusted content from user prompts, and validating AI outputs before execution.[21] Organizations such as MITRE have developed frameworks like ATLAS (Adversarial Threat Landscape for AI Systems) to catalog AI-specific attack techniques and inform defensive strategies.[22]

Industry perspectives

[edit]

Technology sector leaders have highlighted the transformative potential of AI-assisted software development. In an 'Unlocking AI Potential' session of 'Advancing AI 2025' hosted by AMD Developer Central, Andrew Ng and Lisa Su emphasized the strategic and operational implications of integrating AI tools into development workflows. Ng noted that AI systems are increasingly capable of "helping programmers focus on higher-level problem solving", while Su framed the shift as "an opportunity to redefine performance and productivity across industries."[23]

See also

[edit]

References

[edit]
  1. ^ a b "Transforming software with generative AI". MIT Technology Review Insights. 17 October 2024. Retrieved 5 July 2025.
  2. ^ Soral, Sulabh (6 November 2024). "The future of coding is here: How AI is reshaping software development". Anthropic. Retrieved 6 July 2025.
  3. ^ Husein, Rasha Ahmad; Aburajouh, Hala; Catal, Cagatay (12 June 2025). "Large language models for code completion: A systematic literature review". Computer Standards & Interfaces. 92 (C) 103917. doi:10.1016/j.csi.2024.103917 – via ACM Digital Library.
  4. ^ a b Pearce, Hammond; Tan, Benjamin; Ahmad, Baleegh; Karri, Ramesh; Dolan-Gavitt, Brendan (2022-08-15), Examining Zero-Shot Vulnerability Repair with Large Language Models, arXiv, doi:10.48550/arXiv.2112.02125, arXiv:2112.02125, retrieved 2025-11-09
  5. ^ Chen, Mark; Tworek, Jerry; Jun, Heewoo; Yuan, Qiming; Pinto, Henrique Ponde de Oliveira; Kaplan, Jared; Edwards, Harri; Burda, Yuri; Joseph, Nicholas (2021-07-14), Evaluating Large Language Models Trained on Code, arXiv, doi:10.48550/arXiv.2107.03374, arXiv:2107.03374, retrieved 2025-11-09
  6. ^ Sauvola, Jaakko; Tarkoma, Sasu; Klemettinen, Mika; Riekki, Jukka; Doermann, David (11 March 2024). "Future of software development with generative AI". Automated Software Engineering. 31 (26) 26. doi:10.1007/s10515-024-00426-z – via Springer Nature Link.
  7. ^ Dryka, Marcin; Pluszczewska, Bianka (9 May 2025). "Is There a Future for Software Engineers? The Impact of AI [2025]". Brainhub. Retrieved 5 July 2025.
  8. ^ Walsh, Philip; Gupta, Gunjan; Poitevin, Helen; Mann, Keith; Micko, Dave; Bhat, Manjunath (30 August 2024). "AI Will Not Replace Software Engineers (and May, in Fact, Require More)". Gartner Research. Retrieved 5 July 2025.
  9. ^ "AI-assisted software engineering: Rewriting the build versus buy playbook". Deloitte. 14 May 2025. Retrieved 30 August 2025.
  10. ^ "AI-generated code leads to security issues for most businesses: report". Cybersecurity Dive. January 30, 2024. Retrieved December 14, 2025.
  11. ^ "Snyk's AI Code Security Report Reveals Software Developers' False Sense of Security". Cloud Wars. January 12, 2024. Retrieved December 14, 2025.
  12. ^ "LLM01: Prompt Injection". OWASP GenAI Security Project. 2025. Retrieved December 14, 2025.
  13. ^ "OWASP Top 10 for LLM Applications". OWASP GenAI Security Project. Retrieved December 14, 2025.
  14. ^ "LLM Prompt Injection Prevention Cheat Sheet". OWASP Cheat Sheet Series. Retrieved December 14, 2025.
  15. ^ "Disrupting AI-enabled espionage". Anthropic. November 13, 2025. Retrieved December 14, 2025.
  16. ^ "Anthropic Says Claude AI Powered 90% of Chinese Espionage Campaign". SecurityWeek. November 2025. Retrieved December 14, 2025.
  17. ^ "Anthropic warns state-linked actor abused its AI tool in sophisticated espionage campaign". Cybersecurity Dive. November 2025. Retrieved December 14, 2025.
  18. ^ "Secure adoption in the GenAI era". Snyk. Retrieved December 14, 2025.
  19. ^ "Snyk's AI Code Security Report Reveals Software Developers' False Sense of Security". Cloud Wars. January 12, 2024. Retrieved December 14, 2025.
  20. ^ "OWASP Top 10 for LLM Applications". OWASP GenAI Security Project. Retrieved December 14, 2025.
  21. ^ "LLM01: Prompt Injection". OWASP GenAI Security Project. 2025. Retrieved December 14, 2025.
  22. ^ "MITRE ATLAS". MITRE. Retrieved December 14, 2025.
  23. ^ Andrew Ng, Lisa Su (2025-07-01). Unlocking AI Potential: Insights from Dr. Andrew Ng & Dr. Lisa Su (YouTube video). AMD Developer Central. Retrieved 2025-07-09.