Wikipedia talk:WikiProject AI Cleanup
This is the talk page for discussing WikiProject AI Cleanup and anything related to its purposes and tasks. To report issues with AI use, please refer to Wikipedia:WikiProject AI Cleanup/Noticeboard.
Archives: 1, 2, 3, 4 (auto-archiving period: 30 days)
This project page does not require a rating on Wikipedia's content assessment scale.
| To help centralize discussions and keep related topics together, all non-archive subpages of this talk page redirect here. | 
This page has been mentioned by multiple media organizations.
Quiz to test AI-detection abilities?
A few months ago, I tried to create a page (Wikipedia:WikiProject AI Cleanup/AI or not) in which editors can test their AI detection skills à la Tony1's copyediting exercises. At first I was copy-pasting novel examples that I generated myself, but I think compiling real-life AI-generated and human-generated articles would create a more accurate testing environment. Any help collecting examples (whether it be writing from pre-November 2022 or from editors who disclosed their LLM usage) will be welcome. Ca talk to me! 00:24, 23 September 2025 (UTC)
- Some species examples:
 - AI: here (easy)
 - AI: here (a little harder)
 - AI with substantial review: here (this user has disclosed their use of LLMs, including a few they apparently trained themselves, and their review process)
 - Non-AI: here (pre-2022) Gnomingstuff (talk) 00:38, 23 September 2025 (UTC)
 - I've added a pretty hilarious one into the "Hard" section of that subpage, if that helps. Altoids0 (talk) 16:15, 15 October 2025 (UTC)
 
Notifications?
Given that this WikiProject is becoming a bit more like an AI noticeboard, somewhat akin to the CCI or COIN noticeboards, any thoughts about adding the standard boilerplate "notify people if you bring them up here, and try to talk through the problem with them first" notice and banner? Not a hard and fast rule, of course, and I'm sure it's something experienced editors already do, but anyway. Thoughts? GreenLipstickLesbian💌🦋 08:57, 23 September 2025 (UTC)
- As a suggestion sure, but I would disagree with indicating it's a requirement. fifteen thousand two hundred twenty four (talk) 09:59, 23 September 2025 (UTC)
 - No problem with the idea, just a lot of editors brought up might no longer be around. Gnomingstuff (talk) 14:19, 23 September 2025 (UTC)
- Well, if they're no longer around then they won't mind the notification! (And it's good for CYA reasons.) GreenLipstickLesbian💌🦋 02:13, 24 September 2025 (UTC)
 
 - ARandomName123 and Newslinger both suggested setting up a CCI for LLM misuse, which makes sense and is maybe where your suggested language could go? I don't think this project should only be a noticeboard; I'd like to see a dedicated page for that and keep this open for broader topics like research to quantify the impact of LLM use (#/% of AfC declines for AI content over time, etc.) and discussing potential policies NicheSports (talk) 15:18, 23 September 2025 (UTC)
- I fully agree with this proposal – it would still be very helpful to be able to discuss broader topics, and they can easily get drowned in the amount of individual reports. Chaotic Enby (talk · contribs) 03:56, 3 October 2025 (UTC)
- Hey @Chaotic Enby - any chance you can help with a short term solution? It would be good to create a "Cases" or "Investigations" tab in the project (i.e. WikiProject AI Cleanup/Cases) so we can separate cleanup cases from general discussion. Idk how hard that is but I see that you have some experience with templates so figured I'd ask :) NicheSports (talk) 20:05, 15 October 2025 (UTC)
- Doing it at Wikipedia:WikiProject AI Cleanup/Noticeboard! Chaotic Enby (talk · contribs) 16:30, 17 October 2025 (UTC)
- heck yeah, thank you. Is it possible to move cases from this page (including those in the archives) to the new noticeboard? NicheSports (talk) 16:45, 17 October 2025 (UTC)
- I would be happy to, but I'm afraid that editors subscribed to the current threads might lose them. If that isn't the case, it's fine with me! Chaotic Enby (talk · contribs) 16:51, 17 October 2025 (UTC)
- Is there some way to copy them over to the board while leaving the original threads intact? Sarsenet•he/they•(talk) 17:19, 17 October 2025 (UTC)
- Looking at H:TALKPERMALINK and WP:SUBSCRIBE, just moving the discussions should do the trick, I'm going to do it. Chaotic Enby (talk · contribs) 17:50, 17 October 2025 (UTC)
- And it's done! Chaotic Enby (talk · contribs) 17:53, 17 October 2025 (UTC)
- thanks. Can we get a Noticeboard-specific archive set up? I'm cleaning up and trying to archive the posts about one-off articles/edits, because those aren't in scope there. But my one-click archiver says it is going to archive to "WP:WikiProject AI Cleanup/Archive 1" when I hover over the link NicheSports (talk) 18:40, 17 October 2025 (UTC)
- I just created Wikipedia:WikiProject AI Cleanup/Noticeboard/Archive 1, this should work now! Chaotic Enby (talk · contribs) 19:16, 17 October 2025 (UTC)
- thanks. I did some cleanup of the page
- Archiving inactive sections that clearly do not meet the criteria of "repeated" LLM misuse
 - Tagging + collapsing completed cleanup cases and archiving them (only one so far)
 - Tagging + collapsing cases that are being significantly worked on
 
 - It's a start! NicheSports (talk) 19:45, 17 October 2025 (UTC)
- I picked that criterion to avoid minor issues proliferating on the noticeboard, but I'm realizing that some major issues on a single article (e.g. Wikipedia:WikiProject AI Cleanup/Noticeboard#Need a second or third opinion on AI signs) absolutely did warrant a report, so I might lower the threshold. Chaotic Enby (talk · contribs) 20:24, 17 October 2025 (UTC)
 
 
 
Template
Within the past few days, a user not associated with this WikiProject (or at least not in the members list on the project's front page) created a new {{AI citations}} template to flag possibly AI-generated citations. Significantly, they tried to make it file pages in a dated-monthly maintenance category, Category:Articles containing possible AI-generated citations from October 2025, that obviously doesn't exist — I couldn't leave a redlinked category on the pages, but I obviously wasn't going to create the category without consulting with you lot first (especially since even if it is desirable, it obviously still won't get used adequately if you guys don't even know it exists), so I had to remove the category from the template so that it currently files pages nowhere.
So do you want a template and categories like this, or is it just unnecessarily duplicating another process queue that you already have? If you want it, then somebody needs to create the categories that would go with it, and if you don't want it, then it can just be redirected or deleted as needed — but obviously, one way or the other, it requires project attention rather than just being left as a one-person creation that files articles in a maintenance queue that doesn't exist. Thanks. Bearcat (talk) 16:37, 11 October 2025 (UTC)
- Additional context: the editor who created it appears to be using it to flag articles with utm_source=chatgpt.com-type citations [1][2][3]. There exists an edit filter, 1346, which also logs the addition of those types of citations. fifteen thousand two hundred twenty four (talk) 17:24, 11 October 2025 (UTC)
- I don't think we need this template at this time. To fifteen's point we already have Special:AbuseFilter/1346 to track that. Although I find Special:AbuseFilter/1325 more useful NicheSports (talk) 19:10, 11 October 2025 (UTC)
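For anyone curious what flagging such citations looks like programmatically, here is a minimal sketch (Python; the function name and example URL are placeholders, and it does not reproduce the actual rules of Special:AbuseFilter/1346):

```python
from urllib.parse import urlparse, parse_qs

def has_chatgpt_utm(url: str) -> bool:
    """Return True if a citation URL carries a utm_source=chatgpt.com parameter.

    Illustrative only -- the actual edit filter logic is maintained on-wiki
    at Special:AbuseFilter/1346 and may differ.
    """
    query = parse_qs(urlparse(url).query)
    return "chatgpt.com" in query.get("utm_source", [])

# Hypothetical example URL:
print(has_chatgpt_utm("https://example.org/article?utm_source=chatgpt.com"))  # True
```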
 - It's redundant but I don't really think it's bad -- the abuse filter is editor-facing, the template would be more for the benefit of readers since almost none of them are going to look at the filter or the tags. Gnomingstuff (talk) 02:05, 13 October 2025 (UTC)
- I think the use of an inline template, one like {{verify source}} but tailored for LLMs, would probably be more appropriate than tagging an entire article for this specific issue, especially when considering that if there are signs of model-generated text then {{AI-generated}} is more appropriate. fifteen thousand two hundred twenty four (talk) 02:41, 13 October 2025 (UTC)
- This is the approach I would prefer. If they're only flagging sources with chatgpt.com in the URL, then the individual source is what needs to be verified. And these sources aren't necessarily problematic, because a lot of people use LLMs in place of search engines and just lazily copy-paste the URLs after visiting the sites. That's much less concerning than using an LLM to actually write article content. WeirdNAnnoyed (talk) 11:16, 14 October 2025 (UTC)
- Agree with this, although ChatGPT also adds UTM parameters when generating content itself. Maybe a template looking like [AI-retrieved source] could work? We already have {{AI-generated source}}, but it applies to sources that themselves contain AI-generated content, rather than sources that were linked by an AI model. Chaotic Enby (talk · contribs) 19:59, 16 October 2025 (UTC)
- [AI-retrieved source] is probably the best way to phrase it, and it should link to something explaining what the tag is indicating. I suppose after a tagged ref is reviewed the utm_source should be removed, or the template should have a checked=date parameter that will hide it from view to prevent retagging. fifteen thousand two hundred twenty four (talk) 20:38, 16 October 2025 (UTC)
- I already made it so the hover text displays more information, but if you have an advice page in mind, that would be great! I can throw something together quickly based on {{verify source}}. Chaotic Enby (talk · contribs) 20:58, 16 October 2025 (UTC)
- Done with {{AI-retrieved source}}! Chaotic Enby (talk · contribs) 21:21, 16 October 2025 (UTC)
- Looks good!
 - I don't think linking a full advice page is necessary, just a small section to explain what "AI-retrieved" means should be adequate. To that end I've made this edit to TM:AI-retrieved source/doc to try to make it a suitable landing page for such a link. Further improvements are welcome, and if it's not an improvement a revert is welcome also. fifteen thousand two hundred twenty four (talk) 21:50, 16 October 2025 (UTC)
- Thanks for the improvements! Chaotic Enby (talk · contribs) 22:05, 16 October 2025 (UTC)
 
 
 
 - If the template is kept, probably easiest to have it file in Category:Articles containing suspected AI-generated texts from October 2025 for now and then split it if it actually sees use. Alpha3031 (t • c) 09:56, 14 October 2025 (UTC)
- I've modified the template to file articles in that category, although as of now it doesn't appear to actually be in use on any pages at all anymore. Beyond that, I'll leave it to you guys to decide whether to use it or get rid of it. Bearcat (talk) 13:26, 19 October 2025 (UTC)
 
 
More AI signs
Whatever the recent ChatGPT (Copilot?) update was, it has hilariously started adding maintenance banners to generated text occasionally - see User:SGIDavid/sandbox for a current example. Sarsenet•he/they•(talk) 09:34, 15 October 2025 (UTC)
- Google Gemini now correctly identifies certain situations, like {{Single source}}, inserts a hatnote, and suggests adding more sources. Викидим (talk) 03:00, 3 November 2025 (UTC)
 

There is a Miscellany for deletion discussion at Wikipedia:Miscellany for deletion/Wikipedia:Case against LLM-generated articles that may be of interest to members of this WikiProject.—Alalch E. 10:12, 15 October 2025 (UTC)
Just talk
I just read an article that posed the question Also: If Wikipedia is "generally not considered a reliable source itself because it is a tertiary source that synthesizes information from other places," then what does that make a chatbot? I thought it was a good question, so I asked Google's AI: [4]. I thought it was a pretty good answer, but I'm not sure other people will see what I see; maybe it changes per location etc. Gråbergs Gråa Sång (talk) 14:56, 18 October 2025 (UTC)
- courtesy ping to @Gråbergs Gråa Sång as I moved this here. WP:AINB is for reporting potential LLM misuse - it was created yesterday, so you are actually the first person to post anything there! WT:AIC is the place for general LLM-related discussion NicheSports (talk) 15:16, 18 October 2025 (UTC)
- Thanks, it does indeed fit better at a Wikiproject. Gråbergs Gråa Sång (talk) 15:25, 18 October 2025 (UTC)
 
 - Google's AI model isn't available in my location, but I'll add that Wikipedia not being a reliable source is not due to it being a tertiary source. While there isn't much of a need to cite tertiary sources in general (as they don't add any new information compared to the secondary sources they synthesize), the reasons why Wikipedia isn't considered reliable go beyond that, and are because it is user-generated content and especially because of the risk of circular sourcing. Chaotic Enby (talk · contribs) 21:33, 19 October 2025 (UTC)
 
Analyzing AI word/phrase frequency
Posted about this on the signs of AI writing page, but people here might be interested as well: I decided to loosely replicate one of the studies analyzing the frequency of certain words in AI- versus human-generated research abstracts, except with (probably) AI-written versus (almost definitely) human-written articles.
Here are some preliminary results. It's a work in progress, not perfect by any means and there are flukes, but the data probably won't surprise anyone here. Gnomingstuff (talk) 02:37, 19 October 2025 (UTC)
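For anyone who wants to try a similar comparison, here is a minimal sketch of a per-million-word frequency comparison of the kind described above (Python; the corpus file names, noise floor, and cut-off are placeholders, not the actual datasets or thresholds used):

```python
from collections import Counter
import re

def word_freq(path: str) -> tuple[Counter, int]:
    """Count lowercased word tokens in a plain-text corpus, stripping punctuation."""
    text = open(path, encoding="utf-8").read().lower()
    tokens = re.findall(r"[a-z']+", text)
    return Counter(tokens), len(tokens)

# Placeholder corpus files standing in for the (probably) AI-written and
# (almost definitely) human-written article samples described above.
ai_counts, ai_total = word_freq("ai_corpus.txt")
human_counts, human_total = word_freq("human_corpus.txt")

# Flag words markedly more common in the AI corpus, on a per-million-word basis.
for word in sorted(set(ai_counts) | set(human_counts)):
    if ai_counts[word] + human_counts[word] < 20:   # arbitrary noise floor
        continue
    ai_rate = ai_counts[word] / ai_total * 1_000_000
    human_rate = human_counts[word] / human_total * 1_000_000
    if ai_rate > 5 * max(human_rate, 0.1):          # arbitrary "5x more common" cut-off
        print(f"{word}: {ai_rate:.0f} vs {human_rate:.1f} per million words")
```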
- Amazing, pretty great start! I wonder if it could be possible to run them through a tokenizer? Right now, "species," is treated as a different word from "species" and appears to be overrepresented in the AI dataset, likely by a statistical fluke. Chaotic Enby (talk · contribs) 21:35, 19 October 2025 (UTC)
- Possibly! I'm hesitant to do too much cleaning of the text because I feel like it's likely that AI might have punctuation or capitalization or syntax quirks. For instance "Additionally," with the comma and capitalization, is pretty high up, while "Additionally" and "additionally" (comma or no) aren't even on the list. I don't know whether that's because those words just aren't in any of the human articles -- the original study has a min-occurrences-per-million-words comparison to deal with this scenario, but I don't have a million words in either data set yet -- or because of the anecdotal thing where AI really likes to start sentences/paragraphs with "Additionally, blah blah blah...." The species thing seems to be a fluke but a dataset rather than punctuation fluke, since both those two words as well as "species." are all more common in the AI text. (I would guess it's probably due to how many of our old species articles are stubs from a database.) Gnomingstuff (talk) 23:55, 19 October 2025 (UTC)
- Concerning the additionally thing, it's possible that the latter is true. I read in a preprint that LLMs seem to not just have catchphrases but also slightly disfavor certain grammatical words, like "is" and "are". It's likely that lowercase "additionally" is another example! Assuming your current dataset is representative, anyhoo.
 - Awesome work by the way Gnoming! I am delighted to put this page in my bookmarks 
 Altoids0 (talk) 16:29, 20 October 2025 (UTC)
- Thanks for the new preprint! Just skimming: anecdotally speaking, I don't feel like I see "delve" in newer articles as much. "delves" is the only form that shows up in any of these word/phrase lists using these thresholds, and it doesn't place high on any of them; "delve" and "delving" are both gone.
 - My datasets are probably godawful -- I tried to think of as many problems they might have as I could; the problem of course is what to do about it besides tracking down more articles to feed it and hoping it all averages out. I could just generate articles with ChatGPT or whatever myself, but I don't use it and don't intend to start, and I don't know what prompts people are using. Gnomingstuff (talk) 17:11, 20 October 2025 (UTC)
 
 - Hi @Gnomingstuff! I conducted an interview with Businessweek today about WikiProject AI Cleanup, and briefly mentioned this preliminary result – is that okay with you? Chaotic Enby (talk · contribs) 18:11, 24 October 2025 (UTC)
- Oh... I'm not sure that is a good idea per WP:BEANS. I'm already suspicious LLM designers are using the project (and things like the EF 1325 criteria) to train "better" models - ones with less puffery but just as much reference hallucination. If I'm wrong, seems better to not publicize what we're doing here? NicheSports (talk) 18:30, 24 October 2025 (UTC)
 - No objections on my end.
 - Not that concerned about WP:BEANS -- this is something researchers have published a ton of data on already. The thing is basically just me dumping Wikipedia text into a dumbed-down version of the Juzek/Ward study and going "fuck it, we're doing four words," doubt that's anything LLM designers haven't thought of already. Gnomingstuff (talk) 18:46, 24 October 2025 (UTC)
 
 
Template
@Chaotic Enby: do you think something like User:EF5/AINB-notice would be beneficial? EF5 17:05, 22 October 2025 (UTC)
- That could work great! Helpful for editors who might want to defend themselves from accusations, clarify the specifics of how they used AI, or just help clean up/double-check their own edits. Chaotic Enby (talk · contribs) 17:12, 22 October 2025 (UTC)
- Pinging @Athanelar and @NicheSports from the noticeboard discussion. Chaotic Enby (talk · contribs) 17:16, 22 October 2025 (UTC)
 
 - The notice looks fine to me so long as its use isn't required. fifteen thousand two hundred twenty four (talk) 17:23, 22 October 2025 (UTC)
- @Fifteen thousand two hundred twenty four: maybe there could be a notice like at WP:ANI that says "it is recommended, but not required, that you inform involved editors of a discussion", or something else of the like. EF5  17:33, 22 October 2025 (UTC)
- Something like: "The {{AINB-notice}} template can be used to inform an editor of a discussion involving them", would be my suggestion. fifteen thousand two hundred twenty four (talk) 17:57, 22 October 2025 (UTC)
- Yep, just having it as an optional tool is enough – having it be recommended can easily shift into a de facto requirement. Chaotic Enby (talk · contribs) 18:02, 22 October 2025 (UTC)
- Alright, I've WP:BOLDly moved my userspace draft to a template; no harm in having it. What to do with the notice can continue to be discussed. EF5  18:33, 22 October 2025 (UTC)
- I added documentation on the template parameters to Template:AINB-notice/doc. --Gurkubondinn (talk) 12:41, 23 October 2025 (UTC)
- As there didn't appear to be any opposition, I've added the template to the page instructions for the noticeboard in this edit: 
The {{AINB-notice}} template can be used to inform an editor of a discussion involving them, but this is not a requirement.
Anyone who disagrees with this phrasing, feel free to revert and we can continue discussing here. fifteen thousand two hundred twenty four (talk) 19:48, 26 October 2025 (UTC) 
 
Book citation verification strategies
Hi @NicheSports! Responding to your recent comment at ANI re edits to Social conservatism in the United States, but my reply is not really on topic there, so I figured I'd pop in here. For most of the suspected-LLM-generated book citations I've run into, I've been able to look up the book in Google Books and view the cited page. I just sometimes have to be a little crafty to stay within the number of pages that Google Books allocates for previews, such as opening the link in an incognito window and/or adjusting the Google Books URL to get to the cited page number. (There are useful template URLs at Wikipedia:Citing sources#Linking to Google Books pages.) My backup method is to check whether there's a scan of the book at the Internet Archive, in case the cited page is within the preview allocation there.
Checking book citations is not a new topic, but it's much more important now that it's so easy for even well-intentioned editors to add realistic but fake citations. The section at Wikipedia:Signs of AI writing#Citations is helpful, but specific validation approaches seem out of scope there. The linked page at Wikipedia:Fictitious references would benefit from a thorough update for the era of LLM-generated sources, including more details about checking book sources. Dreamyshade (talk) 19:36, 22 October 2025 (UTC)
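As an illustration of the URL adjustment Dreamyshade describes, here is a minimal sketch (Python; the volume ID and page number are made-up placeholders, for illustration only):

```python
def google_books_page_url(volume_id: str, page: int) -> str:
    """Build a Google Books URL pointing at a specific printed page.

    volume_id is the book's Google Books ID (the ?id= parameter) and 'PA'
    prefixes the page number; whether the page is actually viewable still
    depends on that book's preview allocation.
    """
    return f"https://books.google.com/books?id={volume_id}&pg=PA{page}"

# Hypothetical volume ID and page number:
print(google_books_page_url("AbCdEfGhIjK", 123))
```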
- Another trick for finagling text out of Google Books' paywall is to search for any phrases that are cut off at the end/beginning of the preview, and/or your guess at them. So if the preview cuts off at "The film won the Academy" you can try searching "the Academy Award," sometimes that'll give you the next page, sometimes not. Gnomingstuff (talk) 20:40, 22 October 2025 (UTC)
 - Can't we have a bot that verifies existence of ISBNs? Bogazicili (talk) 12:09, 23 October 2025 (UTC)
- The vast majority of made up ISBNs will throw checksum errors, like ISBN 978-456454-554-3 (Check |isbn= value: checksum). Headbomb {t · c · p · b} 15:57, 23 October 2025 (UTC)
- We can expect a link to google books and try the strategy below, "Category:Hidden categories, such as unchecked sources with no links"? Bogazicili (talk) 16:04, 23 October 2025 (UTC)
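To illustrate the checksum point Headbomb makes above, here is a minimal sketch of an ISBN-13 check-digit test (Python; illustrative only, and a passing checksum still does not prove the book exists):

```python
def isbn13_checksum_ok(isbn: str) -> bool:
    """Return True if an ISBN-13 passes its check-digit test.

    Illustrative only: a passing checksum means the number is well-formed,
    not that it is actually assigned to a real book.
    """
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits))
    return total % 10 == 0

print(isbn13_checksum_ok("978-456454-554-3"))  # False -- fails the checksum, as noted above
```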
 
 
 - Thank you for the recommendation. I was wary about trying to verify article content cited to books because I recently made a mistake while doing so. But I will try again with this approach NicheSports (talk) 21:42, 23 October 2025 (UTC)
 
Fake citations with fake links
Fake links seem to go to 404s; can't we have a bot that checks if recently added sources with links resolve to valid webpages? 404s can then be marked or something. Bogazicili (talk) 12:11, 23 October 2025 (UTC)
- Interesting idea, but I think there may be technical limitations in having a bot testing tons of URLs across many articles. But it may be easier to create a bot that users can run on a single article or diff, for example to test for G15 criteria. Pinging @Sohom - are either of the above possible? NicheSports (talk) 12:52, 23 October 2025 (UTC)
- How about just the recently added ones? Something daily?
 - Single article would work too.
- The bot could combine multiple things. For example:
- 404s
 - No match for ISBN if it's a book source
 - Recently added source with no links or ISBNs. To identify the source in this hoax article for example: Wikipedia:List_of_hoaxes_on_Wikipedia/Amberlihisar#References. I mean they should be tagged somehow. Bogazicili (talk) 13:00, 23 October 2025 (UTC)
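A minimal sketch of the 404 check proposed above (Python using the requests library; a real bot would also need rate limiting, retries, a registered user agent, and handling for soft 404s, paywalls, and bot-blocking sites):

```python
import requests

def check_urls(urls: list[str], timeout: float = 10.0) -> dict[str, str]:
    """Report a rough status per URL: 'broken (404)', 'ok (<code>)', or 'error: ...'.

    Illustrative only; it simply issues HEAD requests and reads the status code.
    """
    results = {}
    headers = {"User-Agent": "citation-link-check-sketch/0.1 (illustrative)"}
    for url in urls:
        try:
            resp = requests.head(url, headers=headers, timeout=timeout, allow_redirects=True)
            results[url] = "broken (404)" if resp.status_code == 404 else f"ok ({resp.status_code})"
        except requests.RequestException as exc:
            results[url] = f"error: {exc}"
    return results

# Hypothetical URLs pulled from a recent diff:
print(check_urls(["https://example.org/real-page", "https://example.org/made-up-page"]))
```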
 
 - There's link-dispenser which you can use on a single article. It helps sieve down a lot of URLs to some potentially dodgy ones. Cheers, SunloungerFrog (talk) 13:07, 23 October 2025 (UTC)
- Nice, thanks. Would be great if this could run on a diff as well NicheSports (talk) 13:11, 23 October 2025 (UTC)
 - @SunloungerFrog: what about a tool that combines multiple things mentioned above?
 - It can also check sources without links and add a Category:Hidden categories, such as unchecked sources with no links. We may need a new reference variable for genuine sources with no links such as |no-link-check Bogazicili (talk) 13:42, 23 October 2025 (UTC)
 - So the tool is written by @Sohom Datta not me; they will better know the art of the possible in terms of additional features. Cheers, SunloungerFrog (talk) 14:01, 23 October 2025 (UTC)
- Ideally, we should have something like "AI generated source check" or more broadly "source existence verification" similar to "Fix dead links" or "Copyvio Detector" when you click page history Bogazicili (talk) 15:25, 23 October 2025 (UTC)
 
 
 
 
 You are invited to join the discussion at Wikipedia talk:Writing articles with large language models § RfC, which is within the scope of this WikiProject.  Chaotic Enby (talk · contribs) 23:18, 24 October 2025 (UTC)
Format for the noticeboard
[edit]Could we create a template for a collapsible table similar to Fifteen thousand two hundred twenty four's comment here so people can keep track of work done and work still to do (by we I mean someone else that knows how). On the user side it could look something like {{AIC table|collapsed=yes|user=Steven|article1=Fishcakes|clean=y|g15=n|article2=History of Fishcakes|clean=n|g15=y}}. Most efficient, if possible, would probably be to have the default being no for "clean" and "g15" parameters, and it only needing a yes in one of those per article entry where the other parameter/box gets greyed-out. Would people find that useful? Kowal2701 (talk) 21:51, 29 October 2025 (UTC)
- Many cases require cleanup across hundreds of articles and I don't think this is viable at that scale. I also have some feedback on the columns in this table. I often don't tag articles associated with an AINB case and just fix them instead. I have cleaned hundreds of articles and I don't think I have ever G15'd an article associated with an open case at AINB - other editors not associated with the project typically get to those first (often via AfC or NPP). When I G15 its normally by finding stuff via the edit filter logs. My actions while cleaning are typically: revert, stubify, rewrite, tag and leave, or (rarely) AfD. NicheSports (talk) 22:01, 29 October 2025 (UTC)
- That said I would love a way to both clerk and track our cases better. I have been meaning to look at CCI to see how they do it there NicheSports (talk) 22:04, 29 October 2025 (UTC)
- Agreed, and I think a table is a reasonable way to track cleanup progress. Small cleanup efforts could use a table in situ, larger efforts could be hosted on a subpage with a hatnote added to the top of the report linking to the subpage and indicating if cleanup has been completed or not. An example LLMN discussion might look like: Mass LLM misuse by User:Example
 - Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis no...
 - The CCI process looks too formal for the way reports here and at LLMN have worked, ideally tracking efforts should be low-friction and kept tightly coupled to the original reports. fifteen thousand two hundred twenty four (talk) 23:31, 29 October 2025 (UTC)
 - Yeah I've just been tagging stuff if it isn't something I can revert without splash damage -- I assume a lot of those articles have been quietly untagged but I haven't been keeping track because there is so, so much and I just don't have the patience to argue over them all. Open to a more formalized tracking system though. Gnomingstuff (talk) 02:18, 30 October 2025 (UTC)
 
 
Icon/topicon/logo
I don't want to unarchive the discussion, but did you ever reach consensus? It doesn't seem like you did, and I want the topicon I make to use the right one. ~ Argenti Aertheri(Chat?) 06:12, 2 November 2025 (UTC)
 Courtesy link: Wikipedia talk:WikiProject AI Cleanup/Archive 3 § New logo?
- Consensus was formed to not use the proposed new logo, with six editors opposing (myself included) and two supporting changing to it. fifteen thousand two hundred twenty four (talk) 06:25, 2 November 2025 (UTC)
- Okay, this little guy it is then: Cute, I like it. ~ Argenti Aertheri(Chat?) 06:40, 2 November 2025 (UTC)
- Template:Wikiproject_AI_Cleanup_topicon, now I won't have to hunt for the project page. ~ Argenti Aertheri(Chat?) 10:53, 2 November 2025 (UTC)
 
 
how the absolute fuck do you explain this shit to people
I am at my wits' end. If I have to explain to one more furious person the very clear signs of AI writing -- which are corroborated by actual fucking research -- I am going to lose my goddamn mind.
How are you managing to actually explain this to people? Gnomingstuff (talk) 17:31, 3 November 2025 (UTC)
- You're not going to, because we're not arguing from the same perspective. I highly doubt anyone here uses LLMs regularly; they do, and think they are editing it well enough. We're arguing with the people making the fucking mess; they think they know what they're doing but lack WP:COMPETENCY. But oh! We don't need an LLM policy, they violated a different one. 
 Facepalm  ~ Argenti Aertheri(Chat?) 17:48, 3 November 2025 (UTC)
- To be clear this isn't about the people making the edits, it's people "well you have no proof," which, no shit I don't have any proof, it's impossible to have proof unless you go back in time and shoulder surf, but I have read several studies on the linguistic distribution of AI text, done a rough approximation of one such study on Wikipedia edits, and viewed enough thousands of diffs from pre-2023 and post-2023 to be able to pinpoint patterns. I have no idea how to explain to people that something fits an aggregate pattern. See my talk page for a good example -- literally, in the data I have run, some of these indicators are at 5200% more common in AI text, 2400%, 700%, etc., what more evidence is even fucking Gnomingstuff (talk) 17:59, 3 November 2025 (UTC)
- I have a much harder time trying to explain to people why LLM outputs are nearly useless fucking trash. No, it isn't the magical box you want just because it sounds plausible, and no, I'm still not impressed or wooed because a computer used correct grammar to "tell" you something! I swear these things wouldn't be half as annoying if someone hadn't thought of letting people use them with a chat interface. Why does anyone want to think less? Why do they think that comparing it with horses and machinery is smart? How am I supposed to explain this to people who seemingly are actively disinterested in actual thought? What is going on? Why do I even try? --Gurkubondinn (talk) 23:18, 3 November 2025 (UTC)
- We are all here to create an encyclopedia, so eventually the cool heads will prevail (asking the opponents essentially, "what do you want Wikipedia to be: a mountain of AI trash, or a source to train AI on?"). That said, many LLM outputs are not trash. For example, AI already does an excellent job translating articles between languages (wa-a-a-a-y better than yours truly for French and German). It is also pretty good at summarizing, so an article mostly based on a single review-type source would be OK (see, for example, San Felipe Creek (Texas)). What we need to explain to other editors IMHO is that humans are responsible for following the rules, so (1) select the sources yourself; this eliminates hallucinated sources. (2) Feed the text of these sources into the AI explicitly; this eliminates the AI guessing at the content of sources that are not easily accessible. (3) The prompt should guide the AI to generate wikitext based on these sources only and provide page numbers; this eliminates hallucinated claims. (4) Verify claims against pages manually. (5) Use the best model available; this will save your time on #4 (in my experience, Gemini Pro Deep Think is decent with page numbers, the free models of the same Gemini will hallucinate pages happily). Викидим (talk) 01:17, 4 November 2025 (UTC)
 
 
 - Perhaps we should set up our own LLM to explain it so we don't have to.[Joke] - ZLEA TǀC 17:52, 3 November 2025 (UTC)
 - This is why all the proposed restrictions on AI are focussing on the wrong thing. You can't prove that someone used AI, and it's not really relevant whether they did or not. What matters is whether the content they produce matches our standards or not - if it doesn't then tag it, fix it, revert it or delete it as appropriate for whatever the actual problem with the content is. If you can't point out what the actual problem with the content is then there isn't a problem that needs fixing and you're wasting your (and others') time and energy. So don't say "this is bad because I think you used AI" say "this is bad because the included reference doesn't support it" or "this is bad because it's written in overly-flowery language". Thryduulf (talk) 18:24, 3 November 2025 (UTC)
- I agree wholeheartedly with Thryduulf. It makes more sense to remove or change content based on whether there are actual content problems instead of simply asking editors to blindly accept removing huge swaths of content and crying "leave me alone". Thank you. Sundayclose (talk) 18:32, 3 November 2025 (UTC)
 - @Thryduulf Perfectly encapsulates what I think an LLM policy, whenever we finally get one, should say. qcne (talk) 18:33, 3 November 2025 (UTC)
 - @Thryduulf I do not think your perspective reflects the impact that these tools are having on the project. LLMs are the problem because they:
- Introduce WP:V failures at far higher rates than humans
 - Generate content far faster than humans
 
 - There are hundreds of LLM-generated articles or article expansions being added to the project every day; almost all of these contain extensive policy violations, often with WP:NPOV, but the most problematic and time-consuming are the source-to-text integrity issues. There are about 10 editors who prioritize catching and addressing this content; we are utterly insufficient to "fix" it as you describe. The #1 thing we can do is to tell users to not use LLMs. I believe in WP:AGF when it comes to human editors - a majority will listen, and overnight there will be a huge reduction in the volume of these edits. NicheSports (talk) 18:34, 3 November 2025 (UTC)
 literally, in the data I have run, some of these indicators are at 5200% more common in AI text, 2400%, 700%, etc.
You can't prove that someone used AI
- This is a good example of why some of us are getting sweary; it's like we're not even speaking the same language. ~ Argenti Aertheri(Chat?) 19:16, 3 November 2025 (UTC)
There are hundreds of LLM-generated articles or article expansions being added to the project every day; almost all of these contain extensive policy violations [...] Introduce WP:V failures at far higher rates than humans [...]
All this is just more of the same being blind to the actual problem. The problem is the policy violations (e.g. WP:V failures), not the tool used to create the text that contains the policy violations. If text fails WP:V it fails WP:V regardless of whether it was written by a human or by an LLM; the problem with it is that it fails WP:V, not who wrote it. If you spend hours shouting at new users for using LLMs they won't understand what the problem is, because you aren't telling them what the actual problem is. Thryduulf (talk) 21:33, 3 November 2025 (UTC)
- I didn't mention anything about policy. Nor am I "shouting at new users" -- the shouting tends to come from the opposite direction, which is why I generally do not engage with editors at all; I do not have the patience for trying to explain the same thing to dozens or hundreds of people.
 - For example, I have explained at least dozens or hundreds of times that with AI-generated text, you do not and cannot know what the problems are without going over every single source and claim, a process that takes hours and usually requires access to the sources and/or subject-matter expertise. If you do not know text is AI-generated, you do not know that this level of scrutiny is necessary. Basically, it's a code smell. Please go read that page; I don't know how else to explain this in a way that will get people to listen. Gnomingstuff (talk) 21:45, 3 November 2025 (UTC)
 
 - I don’t think that’s enough, Thryduulf. Having an LLM write text, even as far as whole articles, is not only antithetical to the point and philosophy of Wikipedia, it’s deeply unethical in general. I believe many people would like to see it purged from Wikipedia as a tool, period. If article content is ‘art’, and the LLM its artist, this is an example of times you cannot separate them. Whatever the end product, damage to the global information web on that subject has still been done in the background - and continues to be done as long as the LLM-derived text stays on Wikipedia and is fed back into that web. Kingsif (talk) 20:50, 3 November 2025 (UTC)
 - I think this is a "guns don't kill people, people kill people" type of argument. The fundamental problem is that genAI makes it so much easier to write articles that say things the references don't support or that use overly-flowery language. A novice can get a plausible and persuasive-sounding WP article in seconds using an LLM, and it takes hours to track down the mistakes and misrepresented sources. Sure, the violations of policy are the problem. But AI has removed the barriers to creating such a problem.
 - Furthermore, it isn't just linguistic and reference features that are problematic. Many LLM-written articles I've had to correct in recent weeks have been factually accurate, but uselessly unfocused and meandering, because the LLM doesn't (can't) know what is the main point and what is ancillary...so the articles drift from the main topic to side topics, dedicate entire paragraphs to niche offshoot applications or tangentially-related subjects, and treat single-mention sources and authoritative reviews with equal weight. (See this diff for an example, if you can parse it...this took me hours to bring to presentable form). I'm 100% certain the person who added the AI edits to that article could not explain what a "protease" is without asking ChatGPT, and those of us who have spent literal decades working in the field have to clean it all up.
 - As for getting people to listen, I don't think it can happen without a zero-tolerance LLM policy (and good luck enforcing that).  I'm starting to think genAI is a cult.  The true believers can't be swayed, and the people outside simply can't believe the people inside don't see through all the malarkey. WeirdNAnnoyed (talk) 23:37, 3 November 2025 (UTC)
I think this is a "guns don't kill people, people kill people" type of argument.
- Great to know that I wasn't alone in thinking that. Tried to formulate it but gave up, so thanks for putting it into words. Bad tools are a problem, and LLM chatbots are infuriatingly destructive to enwiki. --Gurkubondinn (talk) 23:49, 3 November 2025 (UTC)
 
 - Thryduulf is there an LLM policy you would accept? What actual text would you propose? ~ Argenti Aertheri(Chat?) 00:16, 4 November 2025 (UTC)
- What we should have is a combined information and guideline page that explains in clear language that LLM-generated text often violates content policies like WP:V, and as such it is essential that, if you use an LLM, you carefully review the text before submitting it to ensure that the content is accurate, that you verify the references exist and do support the text, etc. Making it clear that using LLMs is not prohibited, especially for things like checking spelling and grammar, as long as there are no problems with the content, but that very obviously unreviewed submissions will be speedily deleted.
 - Additionally it should make it clear that many editors strongly dislike LLMs' typical writing style, and your work will generally be better received if you copyedit it to better match Wikipedia's encyclopaedic tone.
 - I don't have any suggested wording for this - writing this sort of page is hard and crafting explanations that are all of clear, concise and simple is not my forte. Thryduulf (talk) 01:22, 4 November 2025 (UTC)
 
 
 - @Gnomingstuff totally hear your frustration, I often feel the same. Our policies are grossly insufficient. Are you fine if I archive this? I'm planning on opening an RFCAFTER (lol) here once the Wikipedia talk:Writing articles with large language models § RfC closes, to discuss a more robust (and restrictive) LLM policy. I want to ask admins to participate in that, and curse words scare admins away. NicheSports (talk) 18:51, 3 November 2025 (UTC)
- I don't know, the whole thing got derailed anyway into whether we should allow AI when it wasn't even about that in the first place. Gnomingstuff (talk) 21:49, 3 November 2025 (UTC)