Wikipedia talk:WikiProject AI Cleanup/Archive 6
| This is an archive of past discussions about Wikipedia:WikiProject AI Cleanup. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
| Archive 1 | ← | Archive 4 | Archive 5 | Archive 6 |
Copyediting isn't the problem that needs solving
Inspired in part by some comments at Wikipedia:Administrators' noticeboard/Incidents, I've been looking at the wording of Template:AI-generated and wanted to see if it could be improved.
In particular, the template's current text:
- This article may incorporate text from a large language model. It may include hallucinated information, copyright violations, claims not verified in cited sources, original research, or fictitious references. Any such material should be removed and content with an unencyclopedic tone should be rewritten.
seems to imply that the main problem with AI-generated text is that it needs copyediting for MOS:TONE issues. In other words, it's all about the wording.
Having been told that there's a problem with the wording, editors are reaching out to ask, "Which wording do I need to change, to resolve this complaint about unencyclopedic tone?"
IMO what we want them to hear is something closer to "Please find and remove factual errors". Therefore, I suggest that we consider re-writing the template to be a bit more focused, e.g.:
- This article may incorporate text from a large language model. It may include hallucinated information, claims not supported by the cited sources, claims that cannot be supported by any reliable sources, and fictitious references. Please carefully check every claim and source in this article and remove all factual errors.
Additionally, I think we should add some step-by-step instructions to Help:Maintenance template removal#Specific template guidance, probably including a recommendation to log your review on the article's talk page (e.g., "I have confirmed that all the citations are real sources, but I haven't checked whether they support the text because several are WP:PAYWALLED"). A new checklist-style template might help editors understand what needs to be done.
What do you think? WhatamIdoing (talk) 19:22, 7 January 2026 (UTC)
- Disclaimer here, I edited the text a while back -- I think the original tag mentioned hallucinated information and fake references only, which aren't as much of an issue with newer LLMs as real sources interpreted poorly. That being said, the tone does often need to be dealt with too. A lot of the complaints come from people who don't see any issue with blatantly promotional text, synthesis, etc., or who don't see why stuff like
Each film, with its unique narrative and setting, contributes to the diverse portrayal of Earth Religion in popular culture
(added to Earth religion 10/23/23) or
Throughout history, public speaking has held significant cultural, religious, and political importance, emphasizing the necessity of effective rhetorical skills.
(added to Public speaking 8/19/24) is problematic.
- Otherwise I think the new edit is OK except for
in this article
-- another common complaint is people thinking the template applies to every single word. I like the idea of a checklist for tracking work. Gnomingstuff (talk) 20:36, 7 January 2026 (UTC)
- In re "in this article": I assume the tag supports the |section parameter. WhatamIdoing (talk) 21:06, 7 January 2026 (UTC)
- A lot of times it's not confined to any specific section, or it's many sections at once -- it seems better to have one tag at the top than 8 tags scattered around different sections. Gnomingstuff (talk) 22:19, 7 January 2026 (UTC)
- Maintenance tags operate at three scales:
- Template:AI-generated for whole articles
- Template:AI-generated with |section for a section
- Template:AI-generated inline for individual lines
- If the problem is scattered around in multiple sections, then the top-level tag is probably the best we can do. WhatamIdoing (talk) 06:53, 8 January 2026 (UTC)
- The problem is getting people to actually believe this. No one reads "this product may contain nuts" on a food package and thinks that they're about to consume an entire jar of nuts. Yet when people read "this article may contain text from a large language model," for some reason the logic totally breaks down and people freak out -- even over articles that, in many cases, aren't even "theirs." Gnomingstuff (talk) 14:05, 12 January 2026 (UTC)
- It's like the sewage and wine analogy - a single drop of wine in sewage and it's still sewage, a single drop of sewage in wine and it's now sewage. There are people who genuinely believe that LLMs are sewage and the encyclopaedia (or at least encyclopaedia articles) are wine. In the real world it's nowhere near as simple as that - articles are not wine that can be tainted, LLMs are not sewage that can do nothing but taint, but the internet is where nuance goes to die. Thryduulf (talk) 14:14, 12 January 2026 (UTC)
- Or it is like "This product may contain nuts", and the rare person who is allergic to nuts will treat that package as if it really did contain a large enough amount of nuts to make the whole thing unacceptable.
- One of our systemic problems is that not taking an action is invisible. You can have a dozen editors look at something (e.g., a new article; an addition to an article; a source) and think it's okay, but the only one we can see is the one who disagrees. That person has no idea that a dozen others have already looked at it; he thinks he's the lone person, or one of very few, reviewing the backlogs. WhatamIdoing (talk) 06:28, 13 January 2026 (UTC)
seems to imply that the main problem with AI-generated text is that it needs copyediting
– The first five possible issues it lists have nothing to do with tone, and the first proposed remedy is removal, not copyediting. I don't see how this could imply tone is the main problem; editors believing otherwise and asking only how to
resolve this complaint about unencyclopedic tone
would appear to have not read the full notice.
- Tone/style is a frequent issue and should be mentioned. Model-introduced puffery, promotional constructs, weasely language, wikivoice misuse, etc. need to be corrected to ensure content is encyclopedic. I will say that a link to MOS:TONE is too narrow though; it would be better if there were a broader link target that covered more of these issues in general (but less broad than a link to the WP:MOS).
- Step-by-step instructions could be nice, but I'm doubtful many editors would engage with them. fifteen thousand two hundred twenty four (talk) 20:41, 7 January 2026 (UTC)
- The template has three sentences:
- Identification of the problem/rule violation: "text from a large language model"
- Information on what a rule violation might look like: "hallucinated information, copyright violations, claims not verified in cited sources, original research, or fictitious references"
- Instructions on how to resolve the problem: "should be removed and content with an unencyclopedic tone should be rewritten"
- People who can't figure out which information is LLM generated can't remove it; people who believe LLM generation is reasonable won't remove all of it. But they'll want to do something, so they focus on "tone".
- It sounds like you might want a link to Wikipedia:Manual of Style/Words to watch, which is narrower than MOS:TONE but quite possibly more relevant. WhatamIdoing (talk) 21:11, 7 January 2026 (UTC)
people who believe LLM generation is reasonable won't remove all of it. But they'll want to do something, so they focus on "tone"
– This would seem to be less of an issue with the template and more of an issue with an editor then. Perhaps this could be alleviated somewhat by mentioning tone alongside, instead of apart from, the other issues?
fifteen thousand two hundred twenty four (talk) 21:24, 7 January 2026 (UTC)
− ... It may include hallucinated information, copyright violations, claims not verified in cited sources, original research, or fictitious references. Any such material should be removed and content with an unencyclopedic tone should be rewritten.
+ ... It may include hallucinated information, copyright violations, claims not verified in cited sources, original research, fictitious references, or unencyclopedic prose. Any such material should be corrected or removed.
- That would be an improvement, but I still think it will result in people trying to correct the smaller problem. WhatamIdoing (talk) 06:55, 8 January 2026 (UTC)
- There's only so much clarification that can be done without removing necessary information. At some point it's an issue of editor competence, not the message. fifteen thousand two hundred twenty four (talk) 16:37, 8 January 2026 (UTC)
- In addition to the problem WAID mentions, the current text "any such material" is ambiguous between just the text with the issues identified, all the "text from a large language model" or indeed everything in "This article". How about "Any problematic material..."? I think the proposed change is better but "unencyclopedic prose" doesn't to me indicate a tone or stylistic problem. It just says any text in prose form that you don't think belongs in an encyclopaedia. Is there a better way of describing this?
- I think the "When to remove" section is overreaching in its requirement "once the problems are undetectable or demonstrably non-existent", which goes beyond WP:WTRMT. As someone who complained at length about student editing, there are parallels for me here. We didn't go around posting templates saying that this article contains text added by a student who only just enrolled on a course tangentially linked to this article topic, and who very likely doesn't have the first clue and most certainly has copy-pasted from their sources into the article text without engaging enough of their brain cells to do a decent job. Nor do we deal with activist editing by posting templates that must remain until the NPOV and V problems are "undetectable or demonstrably non-existent". The issues that are specific to LLMs are worth highlighting, but copyright violations and claims failing WP:V were routine issues long before LLMs, and fixing them is no different. The processes and behaviour for "fixing" these should be no different to any other text with issues, where we agree they are sufficiently resolved in good faith after a fair effort. -- Colin°Talk 17:45, 8 January 2026 (UTC)
- I like "Any problematic material". Or even "All problematic material".
- Maybe we need two tags: one for checking the sources, and another for copyediting problems. (Many articles would get tagged with both, but the two problems could be addressed separately.) WhatamIdoing (talk) 22:06, 8 January 2026 (UTC)
- I can't think of any situation in which both wouldn't apply, or at least any situation where checking the sources wouldn't apply. (Unless there are no sources I guess in which case there are even bigger problems) Gnomingstuff (talk) 01:06, 9 January 2026 (UTC)
- I can imagine a situation in which both originally applied, and someone did a quick copyedit, and now what's left is the (IMO more important) sourcing problem.
- I want this:
- Alice: I fixed the WP:PUFFERY.
- Bob: I fixed the sources.
- I don't want this:
- Alice: I fixed the WP:PUFFERY.
- Bob: I fixed the WP:PUFFERY.
- Chris: I fixed the WP:PUFFERY.
- Dave: I fixed the WP:PUFFERY.
- Eve: Who tagged this? What's wrong with the wording in this article?
- WhatamIdoing (talk) 03:18, 9 January 2026 (UTC)
- Fair, this is definitely something that happens a lot. That's why I think your checklist idea is a good one. Gnomingstuff (talk) 05:03, 9 January 2026 (UTC)
- It's possible to set up a checklist in the tag itself. (WP:MILHIST does this for B-class ratings on the talk page.) But it'd be more usual to have an ordinary/separate page with a list of instructions. Which approach appeals to you at the moment? WhatamIdoing (talk) 05:43, 9 January 2026 (UTC)
- Either one is fine Gnomingstuff (talk) 15:24, 9 January 2026 (UTC)
- I'd suggest starting with the ordinary page, and if it proves to not be adequate, we can always "upgrade" to an in-template checklist.
- I think it should be more like "You need to check for WP:PUFFERY and remove any" than like WP:AITELLS. Is there an existing page that describes this? WhatamIdoing (talk) 19:03, 9 January 2026 (UTC)
- (edit conflict) I can very easily think of a situation: Where one of the above has been checked but the other hasn't. That might be by the original editor or it might be by someone else. Thryduulf (talk) 03:20, 9 January 2026 (UTC)
Do we leave talk page messages when we add the AI-generated template?
When identifying LLM content and adding {{AI-generated}}, is it good practice to leave a message on their talk page asking about LLM content (or directly asking them to stop if it's obvious)? The biggest threat right now is that one editor can introduce massive amounts of artificially generated content, so each editor we stop early saves us a lot of work down the line. Thebiguglyalien (talk) 🛸 02:47, 15 January 2026 (UTC)
- Yes. {{uw-ai1}} and related templates exist for this purpose. SuperPianoMan9167 (talk) 05:38, 15 January 2026 (UTC)
LLM-generated article published in The Signpost
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Project members may care to express their opinion about the presence of LLMSIGNS in an article that was recently published in The Signpost. Yours, &c. RGloucester — ☎ 23:42, 17 January 2026 (UTC)
- lol, lmao
- that being said not sure this is really on topic, how does or should one even “clean this up” Gnomingstuff (talk) 00:37, 18 January 2026 (UTC)
AI is not a tool; it is a paradigm shift.
– Remarkable.
I've moved this discussion from WP:LLMN since it's targeted to project members, and there's not any cleanup for the noticeboard to perform. fifteen thousand two hundred twenty four (talk) 02:32, 18 January 2026 (UTC)
- An article created using AI lauds AI as "transformational". Shocking.[sarcasm] SuperPianoMan9167 (talk) 02:35, 18 January 2026 (UTC)
This article was written by an editor who incorporated copyedits from Claude Opus 4.5. Claude is a closed-source large language model sold by Anthropic PBC; those who find this offensive, disturbing or unpleasant may wish to avoid reading it.
Smh. Some1 (talk) 05:23, 18 January 2026 (UTC)
The "golden age" we mourn was never golden for the Global South. It was an English-language project built by English-language editors from English-language sources.
You know, AI aside, watching someone use the plight of disadvantaged groups to sell a product that will further disadvantage those groups, and enrich the advantaged, is disgusting. Human-written expert-vetted WP articles (and stock price spikes) for us, ad-laden LLM summaries for "them". WeirdNAnnoyed (talk) 13:24, 18 January 2026 (UTC)
Feedback on AI detection quiz
I have mostly finished creating a quiz to test one's AI detection ability (thanks to Altoids0 and Gnomingstuff for helping to compile examples). Most of the time, there is no feedback on whether your judgment was correct or not, so this would serve as a valuable resource for editors who work on cleaning up AI.
On each page, there is an answer and the signs of AI writing present. The explanations are wholly written by me, so I am seeking others to contribute their own rationales.
I believe it would also be a good idea to test on talk page comments as well. I'd appreciate it if project members could contribute examples they have found. Ca talk to me! 13:38, 19 January 2026 (UTC)
- I'd say I'm pretty impressed by GPTZero's performance on the quiz. I'm sure there are edge cases where it fails, but it got 10/10 at least for this quiz. Ca talk to me! 14:34, 19 January 2026 (UTC)
- Nice! Adding a few things to the signs where applicable. Gnomingstuff (talk) 15:43, 19 January 2026 (UTC)
Two RfCs about LLM use
There are currently two active requests for comment regarding LLM use on Wikipedia:
- Wikipedia talk:Translation § Request for comment: "Should the following be adopted as a guideline for using LLM-assisted machine translation tools?"
- Wikipedia:Village pump (proposals) § RfC: Turning LLMCOMM into a guideline: "Should the proposed guideline at User:Athanelar/Don't use LLMs to talk for you be accepted?"
If you are interested, you are welcome to participate through the links above. — Newslinger talk 14:24, 16 January 2026 (UTC)
- Also see Wikipedia:Village pump (proposals)#Sub-proposal: have an AI do the merges where some bright editor has suggested turning over the task of merging articles to LLMs. —David Eppstein (talk) 07:34, 20 January 2026 (UTC)
AN discussion about OKA
The OKA non-profit, an AIC regular, is currently being discussed on AN:
--Gurkubondinn (talk) 13:36, 20 January 2026 (UTC)
Fewer hallucinated references
In the past month I have been seeing fewer broken URLs or hallucinated ISBNs while patrolling the edit filters. Has anyone else noticed this? NicheSports (talk) 01:15, 23 December 2025 (UTC)
- Be careful of the base rate fallacy. Have there been fewer edits in general this past month? ([1] doesn't have December numbers yet.) Apocheir (talk) 02:49, 23 December 2025 (UTC)
- Be careful of the red herring fallacy. You're focusing on technicalities around the timeline without addressing the user's main point: there have been fewer low-quality, unsourced AI articles lately. I noticed it too... that's what I came here to talk about, and it happens to be the first comment I read.
- There might be a lot of AI slop from older models, especially from over a year ago, but newer models might actually have a positive impact on the site when regulated. Bocanegris (talk) 15:58, 19 January 2026 (UTC)
- It's always possible that you are seeing fewer articles with hallucinated sources because said articles are being deleted. SuperPianoMan9167 (talk) 16:13, 19 January 2026 (UTC)
- Per your own link: Category:Candidates for speedy deletion as unreviewed LLM-generated content - the number of candidates for speedy deletion is literally five.
- No, there hasn't been a large-scale purge of AI-generated content on Wikipedia. It looks like things are getting better because AI tools themselves are getting better.
- I've been using AI to help me research and format articles, and not only have I received zero complaints, but I’ve actually received compliments from my mentor. It also helps me with grammar since English is my second language. Without these tools, I couldn't be of service to the organization, and I'm pretty sure I'm not the only one in this position. Bocanegris (talk) 07:09, 20 January 2026 (UTC)
- If it is the AI that is suggesting using forum threads to source edits, then please consider finding other ways to research. Was this ever a real source? One of the ISBNs seems made up. Fixing grammar is much easier for other editors than fixing sourcing issues, so trading better grammar for problematic sourcing is a bad trade off for the organization. CMD (talk) 07:22, 20 January 2026 (UTC)
- Yes, that is a real source, I fixed the source URL, thank you for pointing it out. The ISBNs are also real; I fixed the formatting. Bocanegris (talk) 08:06, 20 January 2026 (UTC)
- That category is regularly cleared as admins process WP:G15 requests, the amount of articles in it at any given time doesn't indicate anything (unless there's a lot, in which case it indicates an admin backlog).
- Had a look at some of your edits, I do have some complaints.
- This edit contains markdown (please read what you copy-and-paste) and fails source-text verification, the source does not mention Nomad, nor does it state that Hubzilla was built on Zot (it states that it supports multiple protocols, zot included, and that zot6 would become the "primary protocol" in the future), and there is no "nomadic identity" quote in the source.
- The generated WP:OR in Special:Diff/1324865434 is entirely unencyclopedic,
with Albarrán and Psykini providing vocals that often function more as atmospheric instruments than traditional lyrical delivery
and
The tracks incorporate influences from Eastern philosophies and meditative practices, evident in song titles referencing mantras and Buddhist concepts (e.g., "Tayata Om..." and "Yatha Butha Ñana Dhassana")
- In this edit (your most recent content addition) the very first source and claim fail verification. Page 311 of the 2010 release of DC Comics Year By Year: A Visual Chronicle does not mention Superman: Birthright at all, let alone issue counts or dates.
- I don't care to look more, and the above edits were only looked at briefly. fifteen thousand two hundred twenty four (talk) 08:04, 20 January 2026 (UTC)
- Last edit was bothering me too much to not look closer. In addition to the above mentioned hallucinated source, [2] and [3] also don't exist. One of the remaining sources is WP:USERGENERATED [4]. The final source appears good [5], but it doesn't support the accompanying text, no mention of Silver Age, Byrne, vegetarianism, hope, crest, aura, or soul vision. It's very clear that the edit was generated and next to no review was performed. I've reverted it entirely. fifteen thousand two hundred twenty four (talk) 08:25, 20 January 2026 (UTC)
- Thank you for pointing out some of your perceived problems with my edits. It's a little nitpicky, but I will fix the Markdown (I'm still learning Wikipedia formatting, and I don't rely on AI for everything - I did that by hand).
- I added a reliable source from a mainstream newspaper and dictionary definitions to the quote on the "Bienvenido al sueño" article you mentioned, so it's no longer "OR".
- As for the Superman article, I will look into your suggestions and try to correct your perceived issues. It would have been more helpful if you had tried to fix the errors instead of making a complete reversal. But I'll work on it, thank you!
- See? We can work this out. Little by little, these tools will make us better editors. That's why I posted my original comment here. Bocanegris (talk) 08:30, 20 January 2026 (UTC)
- Using a tool to add unencyclopedic content to an encyclopedia is to the betterment of no one. Not the editor adding it, not the editors who must clean it up, nor the readers who are misled. It would be better to make no edit at all than to make edits that add hallucinations and information which appears verifiable, but isn't.
- The citations added to Bienvenido al Sueño don't support the text either. The first makes no mention of "programmed beats, ambient textures, and synthesizers" nor of vocals as "atmospheric instruments", and the other two sources are more original research. Please read the no original research policy, particularly the WP:SYNTH section. If you have questions about editing Wikipedia, please visit the Teahouse and ask a human, not a model. fifteen thousand two hundred twenty four (talk) 08:52, 20 January 2026 (UTC)
- Noted. I will definitely do that, thank you. Bocanegris (talk) 08:57, 20 January 2026 (UTC)
- I will just jump in here to say that I suspect that this user is using A.I. (which they previously admitted using) during their editing, as I had to reverse edits on one page, which had hallucinated sources. I talked about it at User talk:Bocanegris#Unhelpful edits on Disney and LGBTQ representation in animation (their talk page). Historyday01 (talk) 13:50, 20 January 2026 (UTC)
Presumptive reversion
Community consensus seems to be pretty firmly in favour of presumptive reversion of all edits made by chronic LLM misusers (see the unanimous poll at WP:ANI#Another continually unconstructive LLM editor).
I don't think another RfC for an entirely new proposal would be a good idea right now given the community's expressed 'RfC fatigue' about LLM topics. Given the seemingly uncritical support we could probably just BOLDly write this down somewhere.
The question is, do we boldly create a new guideline page, or a new essay page, or do we add it somewhere like WP:LLM? Do we add it to WP:PDEL even though that's a subsection of the copyvio policy? Do we spin WP:PDEL out into its own page which includes both copyvio-related PDEL and AI-related PDEL? Athanelar (talk) 11:12, 28 January 2026 (UTC)
- A new guideline is not needed. Reverting is a standard Wikipedia process, just be prepared to discuss the reversion (BRD). CMD (talk) 12:39, 28 January 2026 (UTC)
- Then why is pdel specified for copyvio cases? Athanelar (talk) 13:18, 28 January 2026 (UTC)
- Copyright issues create a known legal liability. CMD (talk) 13:35, 28 January 2026 (UTC)
- Then why is pdel specified for copyvio cases? Athanelar (talk) 13:18, 28 January 2026 (UTC)
So... what's the plan with the open cases?
The project started tracking the cases at Category:WikiProject AI Cleanup open cases in September, but unsurprisingly the edits that need fixing are coming in much more quickly than people are fixing them. We need an active process to find and stop these editors while their edit counts are still low, and we need a way to toss their destructive edits without spending more time looking at the edits than they did making them. What have we come up with so far? The other issue is that the longer a case is pending, the more time there is for new edits to mix with and bury the damaging edits, so we need to set it up so we're prioritizing the older cases. Thebiguglyalien (talk) 🛸 05:49, 23 January 2026 (UTC)
- For early interception there are a few edit filters [6][7][8][9], but not much more can be done aside from standard patrolling. Some editors have luck with performing searches for common LLM constructions, but that's not guaranteed to return recent results. What would help most is increasing general editor awareness of the project, noticeboard, and signs of LLM use.
- Outside of G15 (when applicable) and tags there's nothing special available, it's the same processes as with any other unconstructive edits. I have performed some bold blanket draftifications in more egregious cases, but good luck trying to form consensus to codify anything like that.
- The ISO date-first naming convention means that the oldest cases are listed first within the category, so age-based prioritization is already built in.
- fifteen thousand two hundred twenty four (talk) 06:32, 23 January 2026 (UTC)
- To be fair it's possible to tip the scales on searches to return newer articles -- "stands as a testament" is going to get you 2023-2024 stuff while, say, "coverage" "regional media" is going to get you 2025-2026 stuff.
- I think it's good to have people covering both new and old articles -- really appreciate the people staying on top of the edit filters, for instance. Gnomingstuff (talk) 18:33, 23 January 2026 (UTC)
- The older cases don't really get updated any more once they're off of the noticeboard. We could really benefit from something more systematic, but I unfortunately don't know what that would be. Thebiguglyalien (talk) 🛸 04:41, 24 January 2026 (UTC)
- What would help most is more awareness. I've considered if adding a banner to the top of LLMN which would randomly present some cases from Category:WikiProject AI Cleanup open cases alongside a call to action would help, but such things are prone to banner fatigue. That and cleanup is less straightforward than a similar board like WP:CCI, there's some domain-specific knowledge needed to contribute constructively. fifteen thousand two hundred twenty four (talk) 06:12, 24 January 2026 (UTC)
- What about the many sections on the noticeboard labeled "Cleanup has been requested" but don't have a tracking page? Are they just waiting for tracking pages to be created, or are these ones distinct somehow? Thebiguglyalien (talk) 🛸 06:25, 26 January 2026 (UTC)
- "Cleanup has been requested" just means cleanup has been requested but hasn't yet begun. There's no special status to it.
- Some reports won't benefit from a tracking page, like these two. Some would benefit, but creating a good tracking page requires a fair amount of work. fifteen thousand two hundred twenty four (talk) 07:10, 26 January 2026 (UTC)
- Fifteen thousand two hundred twenty four, do you think there would be a benefit to having a category for cases where cleanup is finished? Right now, once they're moved out of Category:WikiProject AI Cleanup open cases they're just uncategorized. Thebiguglyalien (talk) 🛸 01:17, 28 January 2026 (UTC)
- I don't see any great benefit and it would add an extra step when closing reports. fifteen thousand two hundred twenty four (talk) 02:40, 28 January 2026 (UTC)
- I made a suggestion at the annual plan to have an edit notice for edits that show AI signs but am not hopeful Kowal2701 (talk) 12:34, 23 January 2026 (UTC)
- AI use is really bad at NPP and AfC. I'd wager that the majority of articles from new editors are AI-assisted. There are so many start-class articles with perfect grammar and no typos nowadays. It's obvious people are using AI. For example, I tagged this as clearly AI-assisted. So the editor removed the tag after asking their AI to rewrite the article, as is evident from the part of the article that says "**Heroism and Decay:**". And that's just an obvious case; people are getting better at removing signs of AI from their articles. As for a solution, I'm not sure I have one. For some reason, people don't seem to care about this flood of AI use. ~WikiOriginal-9~ (talk) 14:18, 23 January 2026 (UTC)
- The slow movement on measures against AI editing is only going to force us to be more aggressive with editors who make these edits. That's not a desirable outcome for anyone, but it's the one we're currently moving toward. Thebiguglyalien (talk) 🛸 04:43, 24 January 2026 (UTC)
- I've been sending egregious yet unG15able examples to AfD citing WP:NEWLLM and I have so far been successful. I just did so for the page you linked. Ca talk to me! 08:06, 24 January 2026 (UTC)
- The only way, unfortunately, is for a lot more people to work on the backlogs. The math is against us: There are only so many of us and so many hours in the day, and detecting and fixing AI text takes much much longer than producing it.
- (Some low-hanging fruit is monitoring the Newcomer Tasks feeds for expand and copyedit tasks, since those frequently produce AI text by nature of it being fast to crank out. I was doing this for a bit but have been bogged down in other stuff.) Gnomingstuff (talk) 17:55, 23 January 2026 (UTC)
- I think that a little more support from WMF could do a great deal... Even if a human eye is still always necessary, AI detectors are improving, and wikimedians active in these areas should be able to use the best possible instruments without being limited to free plans and websites. We should either be able to use a WMF-made app, or there should be something similar to the Wikipedia Library but for users active in AI cleanup. I don't know if they're already working on this, but I think it's safe to say that we need help. --Friniate ✉ 15:07, 25 January 2026 (UTC)
- a pangram subscription would be nice; unfortunately with the exception of wikiedu they seem to be taking the opposite stance and you get tone policed to hell if you deign to criticize this rather than shutting up like a good little boy or girl Gnomingstuff (talk) 22:52, 25 January 2026 (UTC)
- idk who you'd have to get in touch with for that, but probably worth asking Sohom Datta Kowal2701 (talk) 23:14, 25 January 2026 (UTC)
- paging Sohom Datta... ClaudineChionh (she/her · talk · email · global) 00:43, 26 January 2026 (UTC)
- I can definitely try to ask the WMF to see if they are interested in looking to develop AI-cleanup tools (or at the very least give us access to the Pangram API of some sort). Sohom (talk) 00:47, 26 January 2026 (UTC)
- Thanks! The latter would probably be more feasible; have been tossing around the idea of making a rudimentary plugin of some sort myself, basically just automating what I already look for. Gnomingstuff (talk) 20:23, 28 January 2026 (UTC)
Possible AI communication
I find that DangerousEagles's comments on talk pages, including their own, seem like AI writing. I messaged them about it and they said that they did use AI before but don't now. However, their message still smells strongly of LLM writing, as do their other comments at the military-industrial complex RfC. What do you guys think? Chorchapu (talk | edits) 16:07, 25 January 2026 (UTC)
- After going through the user's contributions on the talk pages, it seems reasonably likely that they've used an LLM. That said, cleanup may be needed where necessary. sjones23 (talk - contributions) 12:05, 30 January 2026 (UTC)