Draft:GitClear
Submission declined on 10 July 2025 by Pythoncoder (talk).
Comment: In accordance with Wikipedia's Conflict of interest policy, I disclose that I have a conflict of interest regarding the subject of this article. Edibill (talk) 04:44, 10 July 2025 (UTC)
GitClear is a developer analytics platform and code review tool that measures software engineering activity using custom metrics and visualizations. Launched in 2019 by the Seattle-based company Alloy.dev, GitClear provides insight into code changes with the aim of quantifying team progress and the impact of AI on code. GitClear gained industry attention for its research on how AI-assisted coding affects code quality, which has been cited by outlets including TechCrunch and GeekWire.
AI Code Quality Research
In early 2024, GitClear published a study, “Coding on Copilot,” analyzing over 153 million lines of code written between 2020 and 2023 to investigate how AI coding assistants such as GitHub Copilot were affecting code quality[1]. The research identified several concerning trends. Code churn – the percentage of lines that are reverted or modified within two weeks of being written – had risen sharply, with the churn rate projected to double by 2024 compared with pre-AI levels[2][1]. The study also found that the share of code characterized as “copy/pasted” was growing much faster than the share of code that was updated, deleted, or moved (refactored) in the codebase. In effect, AI-generated code tended to resemble the output of “an itinerant contributor” who duplicates code rather than adhering to the “Don’t Repeat Yourself” (DRY) principle of reusing existing code[1]. These patterns suggested a decline in maintainability: rather than refining proven code, developers using AI tools appeared more prone to introducing redundant code.
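The churn metric defined above (share of lines reverted or modified within two weeks of being written) can be illustrated with a short sketch. This is an illustrative approximation only, not GitClear’s actual implementation; the `line_events` input format is assumed for the example.

```python
from datetime import datetime, timedelta

def churn_rate(line_events, window=timedelta(days=14)):
    """Fraction of added lines later modified or reverted within `window`.

    `line_events` (hypothetical format) maps a line identifier to a pair
    (added_at, changed_at), where changed_at is None if the line was
    never subsequently touched.
    """
    added = len(line_events)
    churned = sum(
        1
        for added_at, changed_at in line_events.values()
        if changed_at is not None and changed_at - added_at <= window
    )
    return churned / added if added else 0.0

# Example: four lines added on Jan 1; only one is changed within 14 days.
d = datetime(2024, 1, 1)
events = {
    "line1": (d, d + timedelta(days=3)),   # churned (changed within window)
    "line2": (d, None),                    # never changed
    "line3": (d, d + timedelta(days=30)),  # changed, but outside the window
    "line4": (d, None),                    # never changed
}
```

Under this toy input, one of four lines counts as churned, giving a rate of 0.25.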
GitClear’s 2024 report prompted analysis across the software industry[3][4][5]. A second study, released in 2025 with an even larger dataset, analyzed 211 million changed lines of code from 2020 to 2024 and corroborated a continued decline in code quality metrics as AI usage grew[6]. By 2024, the frequency of large duplicated code blocks (five or more identical lines) had increased roughly eightfold compared with two years earlier[6]. Over the same period, the proportion of “moved” lines (a metric GitClear uses to track code refactoring and reuse) had decreased significantly. Bill Harding, GitClear’s founder, warned that AI assistants excel at rapidly generating code, but that hastily added code later burdens teams with maintenance and fixes. “Fast code-adding is desirable if you’re working in isolation or on a greenfield project,” Harding noted, “but hastily added code is caustic to the teams expected to maintain it afterward."[2]
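A duplicated-block measure of the kind described above (five or more identical consecutive lines) can be approximated with a sliding-window count. This is a hypothetical sketch of the general technique, not GitClear’s published methodology.

```python
from collections import Counter

def count_duplicate_blocks(lines, block_len=5):
    """Count repeated `block_len`-line windows in a list of source lines.

    Each window of `block_len` consecutive lines is hashed as a tuple;
    a window that appears N times contributes N - 1 duplicates.
    """
    windows = Counter(
        tuple(lines[i:i + block_len])
        for i in range(len(lines) - block_len + 1)
    )
    return sum(count - 1 for count in windows.values() if count > 1)

# Example: a 5-line block repeated verbatim yields one duplicate window.
repeated = ["a", "b", "c", "d", "e"] * 2
```

A real analysis would typically normalize whitespace and ignore trivial lines (braces, blank lines) before counting; this sketch omits those refinements.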
TechCrunch characterized GitClear's findings as “a remarkable decline in code reuse last year [2024]”[7], coinciding with the surge in AI-assisted coding (63%) noted in Stack Overflow's annual developer survey[8]. The prevalence of duplicate code and reduced refactoring observed in the first study persisted into 2024, raising concerns about the long-term maintainability of software developed with heavy AI assistance. The studies’ findings – that AI coding tools can generate large volumes of code at the expense of readability, reuse, and robustness – were covered by major tech media and have contributed to an ongoing debate about balancing developer productivity against code quality in the age of AI[9][1][7].
References
1. Ramel, David (January 25, 2024). "New GitHub Copilot Research Finds 'Downward Pressure on Code Quality'". Visual Studio Magazine.
2. Soper, Taylor (January 23, 2024). "New study on coding behavior raises questions about impact of AI on software development". GeekWire.
3. Arc Team (April 17, 2024). "Humans do it better: GitClear analyzes 153M lines of code, finds risks of AI". Arc.dev.
4. Popper, Ben (March 4, 2024). "Is AI making your code worse?". Stack Overflow.
5. Asay, Matt (February 12, 2024). "Is AI making our code stupid?". InfoWorld.
6. Doerrfeld, Bill (February 19, 2025). "How AI generated code compounds technical debt". LeadDev.
7. Wiggers, Kyle (February 21, 2025). "AI coding assistants aren't a panacea". TechCrunch.
8. "2024 Developer Survey". Stack Overflow. February 2024.
9. Fenton, Steve (March 11, 2025). "What's Missing With AI-Generated Code? Refactoring". TheNewStack.
- Promotional tone, editorializing and other words to watch
- Vague, generic, and speculative statements extrapolated from similar subjects
- Essay-like writing
- Hallucinations (plausible-sounding, but false information) and non-existent references
- Close paraphrasing
Please address these issues. The best way to do it is usually to read reliable sources and summarize them, instead of using a large language model. See our help page on large language models.