Wikipedia talk:Good article nominations

This is the discussion page for good article nominations (GAN) and the good articles process in general. To ask a question or start a discussion about the good article nomination process, click the Add topic link above. Please check and see if your question may already be answered; click the link to the FAQ above or search the archives below. If you are here to discuss concerns with a specific review, please consider discussing things with the reviewer first before posting here.
To help centralize discussions and keep related topics together, several other GA talk pages redirect here.
The GAN page lists GA nominations in alphabetical order by category: agriculture, then art, and so on down to video games, warfare and miscellaneous. This ordering creates a bias: agriculture articles tend to wait less time between nomination and review than articles in categories listed towards the bottom of the page. Wait times therefore depend only on the category, not on the quality of the article, and over time this can discourage nominators who work in under-reviewed categories.
Here's what I propose: we should implement a rotating system for the order in which categories are listed on the page. Every two to four weeks (or once per GA backlog drive), we should reorder the list of categories cyclically so that a new category is on top. The way I see it, the rotation could be managed by a bot, a template update, or it could be done manually. If a reviewer were only interested in a particular category, we could make a subpage that only lists GA nominees in that category (e.g. WP:GAN/Video games) or something of the sort.
The benefits of this idea are that it promotes fairer exposure across all GA topic areas by distributing reviewer attention a little more evenly, and it reduces unintentional backlog accumulation in later-listed categories (looking at you, warfare).
Thoughts? Gommeh 📖 🎮 22:21, 10 October 2025 (UTC)
- @Kusma @HurricaneZeta this discussion may be of interest to you based on our Discord conversations over the past day or so. Gommeh 📖 🎮 22:31, 10 October 2025 (UTC)
 - What's the evidence behind the opening paragraph? ~~ AirshipJungleman29 (talk) 22:50, 10 October 2025 (UTC)
- Well, there's usually less than a month of waiting time for the agriculture, food, and drink category, while categories lower than that have waiting times up to 10 months, with nominations from January or February being frequent. HurricaneZeta (T) (C) 23:15, 10 October 2025 (UTC)
- So the proposal is about rotating which category is at the top, not the whole order? ~~ AirshipJungleman29 (talk) 23:31, 10 October 2025 (UTC)
- Yeah. I would be OK reshuffling the whole order too, but this proposal would only change which category gets listed at the top. The goal is to reduce the average wait time. Gommeh 📖   🎮 23:37, 10 October 2025 (UTC)
- I agree this is worth considering. But as a nerdy maths teacher, I am going to point out that the average wait time shouldn't be affected, it is the standard deviation that would decrease (equally worth doing, don't get me wrong. I'm just being pedantic). SSSB (talk) 15:39, 11 October 2025 (UTC)
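For illustration, a minimal Python sketch of the kind of toy model behind that point; every number here (category count, nomination rate, reviewer throughput) is made up for illustration, not real GAN data. With the same arrivals and the same total review capacity, rotating which category sits at the top mainly narrows the gap between categories rather than lowering the overall average wait.

import random
import statistics

CATEGORIES = 6          # hypothetical number of GAN sections
REVIEWS_PER_WEEK = 13   # hypothetical total reviews completed each week
WEEKS = 500

def simulate(rotate):
    random.seed(42)                               # same arrivals for both runs
    queues = [[] for _ in range(CATEGORIES)]      # week each pending nomination arrived
    waits = [[] for _ in range(CATEGORIES)]       # recorded waits per category
    for week in range(WEEKS):
        for c in range(CATEGORIES):
            queues[c].extend([week] * random.randint(0, 4))   # ~2 noms/week/category
        order = list(range(CATEGORIES))
        if rotate:                                # cycle which category is listed first
            shift = week % CATEGORIES
            order = order[shift:] + order[:shift]
        remaining = REVIEWS_PER_WEEK
        for c in order:                           # reviewers work down the page
            while remaining and queues[c]:
                waits[c].append(week - queues[c].pop(0))
                remaining -= 1
    overall = statistics.mean(w for ws in waits for w in ws)
    per_category = [statistics.mean(ws) for ws in waits if ws]
    return overall, statistics.pstdev(per_category)

for label, rotate in (("fixed order", False), ("rotating", True)):
    mean_wait, spread = simulate(rotate)
    print(f"{label}: mean wait {mean_wait:.2f} weeks, between-category spread {spread:.2f}")

In this toy run the overall mean wait comes out roughly the same either way, while the spread between categories shrinks under rotation, which is the standard-deviation point made above.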
 - I know that I suggested this on Discord, but I am not sure the effect would be worth the disruption of shuffling the page so often. We could highlight a different category as "focus category" for each month though, similar to the "subject sweeps" we had recently. —Kusma (talk) 07:01, 11 October 2025 (UTC)
- For example, the WP:GAG could feature one section of GAN per edition and maybe highlight five open nominations (perhaps the oldest open nomination of each nominator in the section). —Kusma (talk) 16:44, 11 October 2025 (UTC)
 - I think I'd prefer this over cycling the categories, this would let us better target categories with larger backlogs. Putting it on the Gazette would be nice too, but I think a section would be more visible. Gramix13 (talk) 23:03, 24 October 2025 (UTC)
 
 - I wouldn't see the harm in it. I mean, science and music are dead at the moment whilst engineering is slowly getting hacked away. Icepinner 15:05, 11 October 2025 (UTC)
 - We already have topic list subpages, they are listed at Wikipedia:Good article nominations/Topic lists. CMD (talk) 15:39, 11 October 2025 (UTC)
 - If this imbalance is a real issue (and I don't see why it wouldn't be), I support choosing a random topic to appear on top as a "focus subject" for the month or week JustARandomSquid (talk) 17:13, 17 October 2025 (UTC)
- Shall I put together an RFC to get more people's input on this? I can see it has at least a little support. Gommeh 📖   🎮 17:18, 17 October 2025 (UTC)
- Please don't, the support is marginal at best (call me an oppose if you like). Kusma's idea of having a featured section might be workable, but rotation would be a faff and it'd just make finding one's usual section more difficult. Chiswick Chap (talk) 17:38, 17 October 2025 (UTC)
- Having a featured section would also be OK and I'd be more than OK with discussing that in the RFC as well. Gommeh 📖 🎮 17:44, 17 October 2025 (UTC)
- Comment - is anyone able to run the numbers and verify empirically that the effect described is real? If so, then I would be inclined to support it. That evidence (if it exists) should form the basis of the proposed RFC.  — Amakuru (talk) 17:48, 17 October 2025 (UTC)
- This is a question particularly amenable to statistics! As of writing, correlating list position with the quantity of articles pending across the 33 GAN categories (incl. misc.) returns a Pearson correlation coefficient of 0.165, indicating only a very weak linear tendency for lower-listed categories to have more pending articles. Thus, I do not think there is sufficient value in rotating the category positions. ViridianPenguin🐧 (💬) 02:21, 25 October 2025 (UTC)
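For transparency, a minimal Python sketch of the calculation being described, i.e. correlating each section's position down the page with its number of pending nominations. The pending counts below are made-up placeholders, not the figures ViridianPenguin actually used.

from statistics import correlation  # Pearson's r; available in Python 3.10+

# positions 1..10 down the GAN page and hypothetical pending counts per section
positions = list(range(1, 11))
pending = [3, 5, 2, 8, 6, 9, 4, 12, 7, 10]

r = correlation(positions, pending)
print(f"Pearson r = {r:.3f}")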
- I'm not even going to pretend I understand statistics, but would a more useful metric not be how long an article has spent in the queue per category? JustARandomSquid (talk) 11:30, 26 October 2025 (UTC)
 - Whatever statistic is used will need to appropriately take into account how long the nominations have been pending, because that correlation coefficient can be explained simply by more articles in the lower categories getting nominated. IAWW (talk) 13:36, 26 October 2025 (UTC)
- I think we looked at this before and we found that Agriculture does empty very fast, however the 'being at the front' bonus doesn't really extend far below it. CMD (talk) 14:36, 26 October 2025 (UTC)
- Which is why the idea of fully random order for the categories was dismissed here pretty quickly. JustARandomSquid (talk) 15:15, 26 October 2025 (UTC)
Are page numbers and ranges required in citations for GAN?
I'm conducting my first GAN review since spot-checks became a requirement, and I've run into an issue where some citations don't have page numbers or specific page ranges, making them hard to check. So are page numbers/ranges required or not for GA? And if they are, how precise should they be? The currently written criteria make no mention of this apart from articles having to be verifiable. FunkMonk (talk) 14:42, 21 October 2025 (UTC)
- I think not. Our only criterion related to the citations themselves (as opposed to the sources or content) is #2a, which is all about layout. I think it's reasonable for reviewers to demand page ranges from nominators, as proper reviewing would be impractical without them. Firefangledfeathers (talk / contribs) 15:17, 21 October 2025 (UTC)
- If we assume it is not required, how does that gel with spot-checks recently having become a requirement for reviewers? If nominators aren't required to give citation ranges, it's almost impossible to spot-check. FunkMonk (talk) 16:11, 21 October 2025 (UTC)
- The nominator isn't required to give page numbers, and can instead provide relevant quotes from the source upon the reviewer's request. ~~ AirshipJungleman29 (talk) 16:16, 21 October 2025 (UTC)
- That's a pretty weird, one-sided halfway point between the former GAN and FAC reviews. So we're not provided the means to check the sources ourselves, but we should ask nominators for quotes? Sure, assume good faith and all, but if someone wanted to tamper with the quotes, that would be pretty easy. What is the point of this extra hoop for reviewers then if the burden doesn't go both ways? FunkMonk (talk) 16:31, 21 October 2025 (UTC)
- I think the idea behind that is that it's fairly common for somebody to misread a source; it's fairly rare for somebody to deliberately falsify one (though that does happen!) GreenLipstickLesbian💌🦋 16:40, 21 October 2025 (UTC)
 - The burden is all on the nominator, who has to run around finding the pages and quotations (and possibly falsifying them, as you suggest). The case you are outlining, where the reviewer could access the source, but not identify the page, and the nominator takes the very risky opportunity to tamper with it, seems unlikely to happen. ~~ AirshipJungleman29 (talk) 16:44, 21 October 2025 (UTC)
- There is no running around if it's already there. I've nominated dozens of articles, I know both ends of the process very well, it seems just like basic verifiability to have page numbers. And yes, falsification is a real issue when it comes to contentious topics or paid editing. I reviewed Imelda Marcos, for example, which was later demoted for the sources basically not supporting the text. FunkMonk (talk) 16:49, 21 October 2025 (UTC)
- Yes, I also don’t know why you wouldn’t use page numbers. But if you are reviewing a WP:OFFLINE source you cannot access, you will need to AGF that the nominator provides the correct quote. It seems silly to allow that but to effectively forbid trusting nominators in other cases. ~~ AirshipJungleman29 (talk) 17:07, 21 October 2025 (UTC)
- Yeah, I use kindle ebooks (and other, completely legally acquired ebooks) as sources because they're more accessible. These don't have page numbers; personally, I do my best to provide chapter/section names, but these aren't always obvious in an ebook format. I actually don't think a demand or expectation of page numbers is reasonable, but asking is fine (as long as you respect why somebody might say "no, I can't"). GreenLipstickLesbian💌🦋 16:22, 21 October 2025 (UTC)
 
- they are not formally required, but it is always reasonable to ask for page numbers when reviewing, and it is generally expected for the purposes of Wikipedia:Text-source integrity and ease of verification. personally, i avoid ranges of more than maybe 3 pages. ... sawyer * any/all * talk 16:34, 21 October 2025 (UTC)
 - We have in practice supported requesting page numbers for large sources for the reason Firefangledfeathers mentions. Not a precise art, so hard to put something strict in the criteria: if the source is a pdf of 2-4 pages of large text I probably wouldn't raise it. However, a reviewer (who represents a reader who might be one of the small percentage that do check sources) should ideally not have to hunt through reams of text to figure out where a particular idea is being cited from. Other factors, such as a direct gbooks link to the page, a quote as mentioned above (although perhaps in the source template rather than just the GAN page), or a lack of page numbers in the source, also mean a firm rule on page numbers might slightly miss its intended purpose, but in practice page numbers are often the most convenient way to narrow the portion of a long source a reviewer needs to look through. CMD (talk) 16:36, 21 October 2025 (UTC)
- for the record, gbooks is not a very lightfast way to link page numbers - see WP:GBWP, which is an essay but presents a good case in my opinion. ... sawyer * any/all * talk 16:50, 21 October 2025 (UTC)
- I don't necessarily disagree, however when it works it does facilitate the spotcheck, and I'd be reluctant to require nominators to find other methods unless there is a firmer GACR guideline. CMD (talk) 16:55, 21 October 2025 (UTC)
- definitely, i just thought it worth mentioning. i'm not suggesting we demand nominators jump through hoops ... sawyer * any/all * talk 16:59, 21 October 2025 (UTC)
 - I agree with CMD's comments. I'd add that although it's not a fail if there are no page ranges, when you do the spotcheck this means, as someone says above, that the nominator will have to hunt through the book for the information to satisfy the spotcheck request. You might as well suggest to the nominator then that they add the page range. Similarly with dead links, which are allowed; you can still ask for the source information for a spotcheck, and if no archived copy can be found the source has to be removed, but if an archive is found you can suggest to the nominator that they add it. Mike Christie (talk - contribs - library) 18:28, 21 October 2025 (UTC)
 - There is a bit of variation. I would not require page numbers in citations of relatively short scientific articles, especially if the typical citation culture of the discipline does not use them (Mathematics is an example here). But if you cite a 600-page book, you should make it easy to locate what is being cited (by using page numbers or chapter numbers or theorem numbers or whatever works). —Kusma (talk) 18:41, 21 October 2025 (UTC)
- More than merely not requiring page numbers in citations of short scientific articles, our standard citation templates do not provide any mechanism to specify page numbers within citations of articles. Their page parameters are only correctly used to give the page range of the entire article. (Do not talk to me about {{rp}}; it is an abomination and must be destroyed.)
 - So requiring GA nominators to do something that they cannot even do would be a recipe for frustration. Page numbers within books are another matter; they sometimes do not exist or cannot be found in some formats, but when they exist they are useful information to include in a citation.
 - The only actual GA requirement is that we cite reliable sources and that the source being cited is identifiable. We don't even require consistent citation formatting, let alone inclusions of this and that piece of metadata. —David Eppstein (talk) 23:12, 21 October 2025 (UTC)
- There are multiple ways to specify different page ranges within the same sources, and doing that is even required for longer sources at FAC. An example of a co-nomination of mine where it was demanded and followed through: Heterodontosaurus FunkMonk (talk) 09:00, 22 October 2025 (UTC)
- The work cited in a different format than all the others there, with separate page numbers, is a book-length work even though maybe officially it is a journal article. But I thought that, unlike GA, FA required consistency of reference formatting? —David Eppstein (talk) 05:49, 23 October 2025 (UTC)
 - They are not always necessary. If it is fairly difficult to find the part of the source that supports the text (e.g. the source is long or offline, or the article text is a pseudo-summary of a large part of the source), then they should be considered necessary in order to reasonably verify the article text. It is reasonable to encourage them, of course. Kingsif (talk) 12:39, 22 October 2025 (UTC)
 - As another way of expressing what I believe is already the common theme in these responses: I've always understood that editors are not obligated to provide anything more in articles than "reliable sources cited inline" and "enough information to identify the source", but they are obligated to provide in the review anything a reviewer desires in order to verify a particular part of their spot check. (Page numbers, scans, quotes, etc.) If a page number is acquired during the review it would of course be sensible for either the writer or the reviewer to go add it to the article, but neither is obliged to do so, and it would kind of exceed a "spot" check to request details for every citation. I think it's important for GA to avoid scope creep (especially regarding citation formatting) to keep reviewing sustainable, so in addition to considering this the status quo I also think it's a reasonable way to balance article expectations and spot check feasibility. ~ L 🌸 (talk) 23:33, 22 October 2025 (UTC)
- My argument is that "enough information to identify the source" can include page numbers, treating "source" not just as a particular work but the actual text (or sound, etc.) that information is 'sourced' from. A reader with access should be able to verify themselves in the same way a spot-checker is trying to. For similar reasons, access-dates are another important inclusion that is not just citation formatting. CMD (talk) 03:11, 23 October 2025 (UTC)
- I agree with this interpretation. Especially with more and more students realising they can’t cite Wikipedia but they can cite our sources (if only they could find the relevant part)! Kingsif (talk) 12:18, 23 October 2025 (UTC)
 - A page number is nice, but I've never yet had a situation where ctrl-F and/or the book's index failed me. Perhaps because I enjoy source checks so much, it does not bother me if they take a little more time. ~ L 🌸 (talk) 00:14, 25 October 2025 (UTC)
- Please note that some ebooks don't have page numbers. I have run into this problem before. Yours, &c. RGloucester  — ☎ 00:31, 25 October 2025 (UTC)
- FWIW, I assumed a "where possible" was implied. As noted above, the cite journal template doesn't actually have the parameters to do so. Kingsif (talk) 00:39, 25 October 2025 (UTC)
 
 
AI
I'm seeing more and more Good Articles being passed that were AI-generated, and that have the usual problems with AI-generated writing, up to and including hallucinations. Here's a recent example, passed yesterday, of an article generated nearly entirely with AI (the article is primarily by one editor who has a pattern of confirmed AI use, and there are ChatGPT source parameters in some of the revisions here), which has major problems that persist to the current version.
The fact that this is repeatedly happening suggests that there's something broken in the review process. At the very least reviewers need to be familiar with the indicators of AI writing; the example above was not particularly hard to identify as AI even before digging into the revision history. Gnomingstuff (talk) 18:07, 22 October 2025 (UTC)
- If you are sure of what you say (and I trust you fully), I would just undo the GA rather than send it through GAR. Bgsu98 (Talk) 18:10, 22 October 2025 (UTC)
 - @Gnomingstuff The editor who passed that article has fewer than 75 edits; have you tried talking with them at all about this? GreenLipstickLesbian💌🦋 18:10, 22 October 2025 (UTC)
- I tagged them on the talk page, but given that I noticed this like 30 minutes ago and the reviewer doesn't seem active right now, that's all that has happened so far.
 - At any rate, I didn't post this topic as a call-out of any particular reviewer, and if it came off that way I apologize. This isn't an isolated incident, it's a structural issue: The GA criteria that worked well before 2022 don't seem to be working as well anymore with AI in the mix. Some of the criteria can be applied to AI text -- it tends to violate 1b, 2c, sometimes 2d, and 4 -- but the spot-checking in #3 doesn't seem to be cutting it anymore, and a lot of AI-type editorializing ("played a pivotal role, underscoring the rich tapestry of its impact and aligning with its enduring resonance") is slipping through. Gnomingstuff (talk) 18:22, 22 October 2025 (UTC)
- If there's an appetite for changing the GA criteria, could we start an RfC on this talk page? It would need pre-RfC discussion to settle on proposals for the community to vote up or down on. I'd like to hear from more people before taking any action. Trainsandotherthings (talk) 20:38, 22 October 2025 (UTC)
- Regardless of the specific allegations addressed here (whether the author used AI or not), I do think this is a rare opportunity for change. We have many folks assembled here and I think it would be alright to add a seventh pillar to the Good Article criteria: at the time of passing, all text must be written by a human. In the edge case where an editor relies on AI as a crutch but is responsible, they should review any generated text and the cited source word by word and take full responsibility for the contribution as their own.
 - We can't hand over the keys to robots at this point, or maybe ever. I believe AI writing as it stands cannot reliably satisfy criteria 1b), 2c), and 2d) (especially). We'll really need a seventh point to make that clear though, and nip the problem in the bud. Bremps... 02:14, 23 October 2025 (UTC)
- I get where this idea is coming from, but my understanding is that LLM detectors are notoriously inaccurate. (At least, that is what I was told recently when trying to clean up an article suspected of having been partially AI-generated.) This could unfortunately be abused to quick-fail articles that are entirely human-written, on the basis of an LLM detector incorrectly identifying something as AI. (You can test this by running the US Declaration of Independence or some similarly old document through an AI detector; in many cases, at least some of the text is incorrectly flagged as AI.) Having all text be human-written is something I'd strive for, but it could also throw up false positives in AI detectors. Epicgenius (talk) 02:56, 23 October 2025 (UTC)
- As a continuation of what I said above, ZeroGPT claims that the United States Declaration of Independence (1776) is "91.69% AI GPT", and writer.com claims that it is only "73% human-generated". I think this proves my case that LLM detectors are not useful for actually detecting AI, and that requiring all text to be written by a human would lead to GANs being incorrectly failed en masse. – Epicgenius (talk) 03:15, 23 October 2025 (UTC)
 - I would say that a good first step here would be to provide evidence that the current criteria are missing sourcing issues.
 - I'm less concerned about AI and violations of 1b and 4, as the editorial language and bad prose/formatting issues tend to be quite obvious.
 - However, I think with respect to violations of criterion 2 and to some extent criterion 4, these are already time-consuming checks, and accordingly I think the burden of proof required for making them more strict should be appropriately high (as in, we need some solid evidence that the current system is failing to catch errors before saying the already somewhat time-consuming process should be more detailed).
 - This is not me saying that we shouldn't be strict with sourcing, or that nothing needs to change, more so that we should start collecting evidence of failures in our current system if we want to change things. IntentionallyDense (Contribs) 22:33, 22 October 2025 (UTC)
 - Not to hijack this thread (and I’m not commenting on this specific article) but it’s also not exactly helpful to open GARs without discussion and just make blanket accusations of AI and not elaborate, as you did with Wikipedia:Good article reassessment/Epilepsy/1 which you opened for concerns of OR and SYNTH but haven’t been able to tell anyone which (or any) parts of the article are OR/SYNTH.
 - I don’t mean this to come off as accusatory; I do believe you’re fighting a good fight here, especially with medical topics. However, I do believe that if we want to have a discussion about GAs and AI, we need to include a discussion of how these issues get brought up.
 - In regards to the Epilepsy article itself, I’m almost done adding sources to the unsourced info (so far I’ve only had to remove about 1 sentence), but I did try to ask you over a month ago what sourcing concerns led you to open the reassessment and have not gotten a reply. It's very hard to improve an article when someone is claiming there are overt issues but not backing up those claims, as people can improve issues that they know exist but can’t improve issues if they don’t know what they are.
 - I’m rambling but TLDR: we should have a (probably separate) conversation about how we bring these issues up with existing GAs. IntentionallyDense (Contribs) 20:19, 22 October 2025 (UTC)
- I explained what parts of the article were synthesis and OR, with direct quotations, at least three times. I stopped responding because no one was listening to me even after repeated explanations, and I did not open up any further GARs because I simply do not have it in me to have to repeat myself again. Gnomingstuff (talk) 17:23, 26 October 2025 (UTC)
- You did point out unsourced information and wording you thought was proof of AI use, but you never expanded on the synthesis part. As per WP:SYNTH: "If one reliable source says A and another reliable source says B, do not join A and B together to imply a conclusion C not mentioned by either of the sources." All I was looking for was which sources you thought were synthesized to come to an inappropriate conclusion.
- I do absolutely agree that you showed evidence of OR, as shown by the several uncited paragraphs, but that is a relatively easy fix: find sources and add them. However, an accusation of SYNTH requires an editor to go back and check sources to make sure that there are no inappropriate misrepresentations of sources. This is time consuming and hence why I wanted just one example of it before moving on.
 - I never got a single example of SYNTH from you nor did I find any in my work on the article. Which leads me to wonder which sources you found that were synthesized. In general, when making a GAR make sure that for every claim you make, you have something to back up that claim or else it gets really difficult and an article that does actually have issues (such as the epilepsy one) may get overlooked due to confusion.
 - My point is that if you had nominated that page for GAR with claims of poor prose and OR, it would have been a pretty simple process for everyone, as you had clear examples of that and others could easily see that. But adding in other accusations, and sticking to them while also refusing to show evidence of them, derailed the process. I didn't want to spend time arguing over SYNTH when I could have been improving an article, and obviously you didn't either. I know you are very involved with finding AI on Wikipedia and I think that's especially important with GA, so I don't want you to feel like you can't open GARs; there are just ways to do so, and ways that will unintentionally make things harder. IntentionallyDense (Contribs) 20:20, 26 October 2025 (UTC)
- The problem is that LLM output is synthesis by nature -- even when told to restrict its output to a given source, it still has the rest of the language model -- but that since LLMs are a black box, it's impossible for end observers to know what other sources are in the mix. Even if I was employed by OpenAI and had access to the code, it would be virtually impossible.
 - So, unfortunately, since we don't have an AI policy, "this text was written by AI" is not something I can point to in a review. All I can do is point out things like how "the need for X" is an opinion and not a fact, and then have this very basic assertion be "debunked" somehow. Under those conditions, no, I don't actually feel like I can open a GAR, not without losing my damn mind. Gnomingstuff (talk) 16:53, 27 October 2025 (UTC)
- An opinion is an opinion and a fact is a fact, regardless of whoever wrote it, be it a human, an AI, an alien, the Holy Spirit or whatever. The problems you mentioned earlier can be easily solved by asking for a second opinion. Cambalachero (talk) 17:02, 27 October 2025 (UTC)
 - Just to be clear, did you check any of the sources before making an accusation of SYNTH? Because it seems to me (not trying to make any assumptions here, simply trying to understand) that you saw prose issues and then assumed those must correlate with sourcing integrity.
 - Slightly off topic, but "the need for" can be a fact, not an opinion, and I've seen almost that exact wording in many high-quality secondary sources. I don't love the wording and typically try to word it more as "research has shown xyz leading to efforts to improve xyz" etc.
 - Again, usually that wording kinda raises a red flag for lack of understanding of the topic and parroting back info, but the phrase itself and the context it was used in are appropriate. IntentionallyDense (Contribs) 17:10, 27 October 2025 (UTC)
 - Another issue is AI-generated reviews, wherein some reviewers either entirely or partially use AI. These reviews are often inaccurate, and I have seen a couple of these. HurricaneZeta (T) (C) 22:48, 22 October 2025 (UTC)
- If you see an AI generated review please raise it, we had a number of these early on in ChatGPT days and developed a pretty quick practice of reversal. CMD (talk) 03:14, 23 October 2025 (UTC)
- So far they've all been vacated/invalidated by the time I got there, but will raise it if I see any HurricaneZeta (T) (C) 12:50, 23 October 2025 (UTC)
 - I’m not yet persuaded that this is an “AI” problem rather than an “unskilled reviewing” problem. If someone uses an AI while producing an article that meets the GA criteria, including NPOV and verifiability, there’s no problem. If someone passes a GA that doesn’t meet those criteria, there’s a problem. The latter is also a problem we can identify definitively (unlike AI use, which can’t be proven), another reason to focus any problem-solving there. New reviewers may sometimes not be fully skilled yet and a personal talk page message is the best first step to help them. ~ L 🌸 (talk) 23:36, 22 October 2025 (UTC)
 - Can we agree that any sort of substantial usage of AI in an article (drafting more than a sentence) should disqualify the article from contention as a GA until the text has been thoroughly replaced by human-generated text?
 - Chatbots are black boxes that can reason well sometimes (or at least simulate it with word prediction) and hallucinate other times. They are fed off of websites like Wikipedia; putting AI text back on Wikipedia would result in an infinite GIGO loop, as AI trains on AI and generates more subpar text.
 - Not having substantial AI text should be a prerequisite to an article being unbannered at all, let alone being a GA. Bremps... 02:03, 23 October 2025 (UTC)
- Completely agree. GA is not only a stamp put on an article, it is a community process that demands the engagement of a nominator and a reviewer. It is an insult to a reviewer to expect them to review the syntactical output of a text generator and not the written word of a human being. ♠PMC♠ (talk) 02:08, 23 October 2025 (UTC)
- This is a competence issue. Editors who use AI tools in the manner being discussed here (without disclosure, mind you) cannot be trusted to produce articles that meet the GA standard. Do we really want to showcase machine-written articles as 'good'? What is the point of the GA process at all if it does not showcase quality articles written by human volunteers? Yours, &c. RGloucester — ☎ 04:14, 23 October 2025 (UTC)
 
 - No, I don’t agree. Any focus on the tool rather than the text itself is just ripe for needless interpersonal conflict. We can never know or prove exactly how someone wrote an article. We can know whether the article is, eg, neutral and verifiable. ~ L 🌸 (talk) 04:45, 23 October 2025 (UTC)
 - If the output from the model meets the GA criteria and is well-written, what good does it do to disqualify it on the sole basis of "being from an AI"? Outputs can have hallucinatory text, but they also can not. We ought to assess them on a case-by-case basis rather than applying a blanket rule, just like we do with human-made text. nub :) 05:04, 23 October 2025 (UTC)
- I understand that a lot of the allure of Wikipedia is that it is written by humans and by humans alone, but my view is that if a person uses the AI as a tool rather than a replacement (using it to overcome writer's block, draft a section, summarize for a lede, etc., before human refinement), there's no reason to dismiss it—it reads, cites, and informs well, does it not? nub :) 05:21, 23 October 2025 (UTC)
- "it reads, cites, and informs well, does it not" No? Rarely and unreliably at best? I recently, due to another user disappointingly advocating for AI, googled three articles I worked on with AI mode turned on and the original conclusions those summaries drew from text I wrote is something I hope nobody ever adds. If an AI tool is prompted to summarise, explain, or analyse then it *will* make up something that it has determined is in line with expectations and similar text, but which it hasn’t directly drawn from relevant facts. It informs about as well as handing a child a full novel and asking them for critical analysis: you’d have to put in so much effort identifying anything useful, before even working to present that in a useful way, you’d have been better doing it yourself. Kingsif (talk) 12:31, 23 October 2025 (UTC)
- Ah, I should have clarified. I meant that as an if–then statement. If it reads, cites, and informs well, then why disallow it solely for its AI use—it reads, cites, and informs well, does it not? That being said, I understand that the vast majority of people who'd use AI as a "tool" would simply take its output as fact and never refine or check it. That's a product of human laziness. But I also understand that there are people who will refine it and check it. They will put in the work. nub :) 15:06, 23 October 2025 (UTC)
- If someone is going to spend the time to refine the product of a syntax generator that way, they should spend the time to write the fucking article properly instead. ♠PMC♠ (talk) 21:29, 23 October 2025 (UTC)
- Why? If both means are relatively harmless and (may) lead to the same end, why should someone be forced to restrict themselves to just one of those means? Also, what is the "proper" way to write an article? nub :) 21:48, 23 October 2025 (UTC)
- There's no "both means". You can look at the sources, identify prominent viewpoints, summarise them clearly and concisely with citations, and polish to your heart's content. If you want to use an AI at any stage of the above process, you then have to go back and do all the previous stages again because the AI cannot be trusted with any of it (not even to polish because it can and will make things up). A finished proper article will have been inspected at every stage of the writing process by a brain with the ability to think critically. If you want to use AI, you're just wasting your own time. ~~ AirshipJungleman29 (talk) 22:09, 23 October 2025 (UTC)
- I'm not convinced that AI cannot be trusted with anything. Sure, they are "just text predictors", but they are damn good ones at that. I agree that trying to use an AI to wholly find and synthesize sources isn't okay and will likely contain untrue information, but, from what I've seen, an AI (like ChatGPT) could very easily polish a paragraph without making anything up if it is prompted correctly. I think this is a problem of the user, not inherently of the tool. You have to know when you shouldn't use a tool and you have to know how to use it correctly before applying it in an article on the information superhighway known for its verifiability. So you're right: everything must go through a human. But I disagree that AI should be wholly disallowed or that it cannot be trusted. nub :) 22:20, 23 October 2025 (UTC)
- The ultimate argument of AI proponents is always "you must be prompting it wrong". No amount of prompt fiddling can substitute for human cognition. The process of researching and writing is in itself valuable; twiddling with the output of a text generator is not the same thing. ♠PMC♠ (talk) 23:30, 23 October 2025 (UTC)
- You're right. Those two concepts are not the same thing. I agree that human research and article development is in itself valuable. But the way I see it, the use of AI can undergird the human process. Say, for the banal task of improving clarity or phrasing in a sentence. Some humans wouldn't even catch potentially inefficient or discursive language. Or maybe the task of summarizing the body of an article for its lede. Some humans really struggle with synthesizing the info into a concise 400 words. So again I ask: why disallow AI in the entire Wikipedia workflow, if it (1) is relatively harmless, and (2) helps humans express the information more clearly? nub :) 23:52, 23 October 2025 (UTC)
- An LLM does not understand the text it generates. It might simplify a sentence, if you're lucky. If you're unlucky, it might make you think it has while changing the meaning or otherwise causing errors. Relying on a syntax generator means you're never going to develop an actual facility at doing the task yourself, and I will die on the hill of defending the writing process as intrinsically valuable not just to Wikipedia but to human reasoning as a critical skill. ♠PMC♠ (talk) 00:10, 24 October 2025 (UTC)
- I agree. The problem occurs when people using it to write articles don't recognize that it is engaging in original (or nonsensical) research and add what it gives them to new or existing articles. For example, I am interested in Feigenbaum constants (who isn't) and how they are used in articles outside of science on Wikipedia. I asked ChatGPT why this is (why is there a paucity of real-world articles on applications) and it recommended I start a new article on the subject of situational elasticity to discuss the constants. It then wrote 500 words telling me all about it. The only problem is, there is no such thing as situational elasticity. It invented the idea as a conceptual metaphor and then built an entire world around it. Viriditas (talk) 00:27, 24 October 2025 (UTC)
 - Honestly, seeing this comment, I think I mostly share your conviction about writing as reasoning. Truly, one of the most rewarding things about contributing to Wikipedia (and life in general, to be honest) is the ability to work through the truckload of sources you compile, being able to realize each puzzle piece into a neat whole, and sharing that product with the world. And no machine should ever replace human cognition or reason. I will never argue for that.
 - However, I don't see using AI as a tool as necessarily diminishing or replacing that process. When people use it to replace their critical thinking and writing skills, they aren't using it as a tool. They're using it as a replacement. But I will always advocate for the ability (and encouragement) to use AI as refinement—just like spellcheckers or, and I know I might be treading on some hot water here, the input of another editor. From what I've seen, LLMs are especially good at explaining where people might be misusing a term or have a grammatical error so that they can rectify it for next time. Of course, this doesn't mean that you should take its suggestion without a grain of salt, but AI can be an exceptional tool. I mean, when have you last heard of a machine being able to further cancer research!
 - AI obviously operates on a more complex and risk-prone level than any human could, and I agree it can easily lead to intellectual laziness or, worse, the uncritical propagation of falsehoods. That’s a serious issue. But I don’t think the presence of that danger automatically disqualifies all careful use. If it is used critically, it can be a way to streamline someone's workflow, and if I'm being honest, I think that's really fucking cool. nub :) 02:43, 24 October 2025 (UTC)
 - Would you mind clarifying how a human knows that the prompt is correct? ~~ AirshipJungleman29 (talk) 08:23, 24 October 2025 (UTC)
 - I think you provided the foremost answer to this (why not allow both methods if ultimately same effort) yourself, in your first reply to me: people who would choose the method to use a gen AI tool to do the work, will not do the checks. More pertinently, in the instances of people using an AI tool, they can not know enough to accurately check the output without going and doing all the work the AI did (e.g. reviewing the sources it used, if any) for themselves. It's madness, I think, to even consider using it if you actually want accurate - and accountable - results. Kingsif (talk) 20:49, 24 October 2025 (UTC)
 - For the reason I mentioned above, I don't fully agree with this. Use of AI without any manual checking whatsoever is indeed problematic and is why we have WP:G15 for newly-created pages. I think we should focus on whether the article meets core policies like WP:NPOV and WP:V (which are already in the GA criteria). AI-generated text usually has problems meeting these criteria, and people who continuously use AI indiscriminately, without any review, should probably be restricted or sanctioned in some way. That being said, failing an article solely because it was AI-generated seems to be focusing on the means and not the ends. Would we be failing GANs because the nominator did their research online instead of physically going to a library? I'd hope not. – Epicgenius (talk) 15:05, 23 October 2025 (UTC)
 - I have sent this particular article to GAR here. Bgsu98 (Talk) 14:26, 23 October 2025 (UTC)
 - My opinion is that AI is a tool that can be used for good too. I don't care how much AI was used to write an article as long as it meets the GA criteria. IAWW (talk) 15:38, 23 October 2025 (UTC)
 
AI is not a problem in and of itself; it is a tool, and it can help to get past writer's block when starting a new page from scratch. But it is not enough: it may provide a starting text, but the user should refine the text and add references where needed (AI may include references, but they are not always valid for Wikipedia, such as IMDB entries). In-text praising and puffery is a problem whether AI-generated or not ("Words to avoid" has been there since long before the AI boom); the writer should have fixed that either way, and the reviewer should have not accepted it either way. Cambalachero (talk) 19:01, 22 October 2025 (UTC)
- Users should definitely not refine a text and add references where needed; that is entirely WP:BACKWARDS and produces all sorts of issues. This was a problem even before llms. CMD (talk) 03:15, 23 October 2025 (UTC)
- That essay is pure nonsense. There are policies and guidelines on how an article should be, but the process from blank page to a finished article (or from finished basic article to good or featured article) is open to personal preference. Cambalachero (talk) 03:35, 23 October 2025 (UTC)
- No personal preference allows for WP:OR, whether that OR is done by a person or an llm. CMD (talk) 04:40, 23 October 2025 (UTC)
- And now you're just randomly dropping policy shortcuts. How can it be original research to add references to a text? Cambalachero (talk) 13:30, 23 October 2025 (UTC)
- The original research is the automatic generation of unsourced text. CMD (talk) 13:31, 23 October 2025 (UTC)
- No, it isn't. "On Wikipedia, original research means material—such as facts, allegations, and ideas—for which no reliable, published source exists". Please read the pages instead of linking blindly. You don't need to cite that the sky is blue, and if something does not have a reference right now but it can have one because such references do exist, then the problem is just adding such references, not the unreferenced but otherwise correct text. Cambalachero (talk) 14:18, 23 October 2025 (UTC)
- I wonder if CMD may be thinking of something closer to WP:SYNTH, if they mean an AI compiling multiple sources of mixed relevance and authority to write text with a perhaps liberal interpretation of them all… Kingsif (talk) 14:24, 23 October 2025 (UTC)
- Llms don't compile sources. Llms abstract language into a mathematical formula, and produce a response through that process. It is not linked to whether cited sources exists at all, or whether the sky is blue. Generating article text with llms is one of the core potential uses that almost everyone has opposed in llm discussions. CMD (talk) 02:01, 24 October 2025 (UTC)
- So that kind of AI is not what you meant ':) But yes, using entirely gen AI to write articles is... not sharing knowledge, to put it mildly. Kingsif (talk) 20:55, 24 October 2025 (UTC)
- Obvious AI use should be a quick-fail criterion; reviewers should be obliged to perform such a check. Yours, &c. RGloucester — ☎ 00:55, 23 October 2025 (UTC)
 - I think the consensus here is firmly against AI, and is swinging more towards complete ban (which I’d support) and quick fail rather than just requiring rewrites. Based on this, would there be any appetite for a WikiProject or similar of users advocating against AI Wikipedia-wide? With things like that human-computer partnership Wikiproject starting up, it might also serve as a useful antidote. Kingsif (talk) 12:31, 23 October 2025 (UTC)
- Something like WP:WikiProject AI Cleanup perhaps? (Nearly two years old!) ClaudineChionh (she/her · talk · email · global) 12:44, 23 October 2025 (UTC)
- Hmm, a start, but still explicitly accepting and implicitly encouraging. Kingsif (talk) 14:26, 23 October 2025 (UTC)
 
 
- Wikiprojects are not meant to serve as activist fifth columns. If we want to change some aspect of policy, the right method is to propose a change to the community in an RfC. I would be happy to propose adding obvious AI use to the GA quick-fail criteria. Yours, &c. RGloucester  — ☎ 23:46, 23 October 2025 (UTC)
- No, but I'd imagine a Wikiproject for identifying and cleaning up AI text and references would at least endorse some essays about the harms of pretty much every kind of AI in the information space - things that can be handily pointed to as examples of opinions and arguments when they approach editors adding AI, or for quick linking on a local consensus in future RfCs. Kingsif (talk) 20:58, 24 October 2025 (UTC)
 
 
- My question is how we are determining whether AI is being used. If there are somehow no violations of the current GAN criteria but a reviewer believes an editor used AI, on what grounds would that be disallowed, and how would that decision be made? We know that AI detectors are unreliable, but I'm also not sure what human-assessed criteria to detect AI would be effective (again, are our current GA criteria not already written in a way that would prevent AI use?).
 - I could see the reasoning behind having nominators agree, when they nominate an article, that to their knowledge the article is human-written. This allows people to QF if there is concrete evidence of AI usage and to focus more on the current criteria, while also making nominators stop and think.
 - Perhaps an automatic statement like:
 
- “By creating this page (the nomination page) you agree to the following:
 
- To your knowledge all prose is human generated
 - To your knowledge AI has not been used to add references, modify prose, or write significant amounts of the article
 - To your knowledge there are no current concerns about AI being used inappropriately voiced on noticeboards or the talk page”
 
- I get the point of this, but "To your knowledge all prose is human generated" could actually have the unintended side effect of banning articles about AI from the GA process. Maybe tweaking it to remove the word "all" might work though; we'd still want most text to be human-generated, which is what point 2 covers. (Not to mention that reviewers unscrupulously using AI detectors may erroneously flag a human-generated sentence as AI, and fail solely on that basis.) Epicgenius (talk) 02:01, 24 October 2025 (UTC)
- I don't think we can mandate text be human generated, even though I personally agree with it. What do you consider to be human generated? Where do we draw that line? It is my opinion that most AI text already fails our policies and guidelines on numerous levels. Be it the numerous weird mannerisms of AI text, the faked references, or the inability to think and process the way a human brain can, there's a lot we can use against AI writing as it stands. I do see the rising number of issues with AI misuse (most uses are misuse if you ask me) and understand the appetite to do something about it. I'm just concerned that a ban won't stop people, in the same way we've declined to formally ban paid editing because simply banning it won't stop it, just make it harder to detect. Trainsandotherthings (talk) 02:42, 24 October 2025 (UTC)
- What I think we could ban is AI-generated references. Every reference should be created and checked by a human without the use of any AI tool. AI misuse is at its worst when it involves falsifying references, which is very much a blockable offense since it's introducing misinformation. Trainsandotherthings (talk) 02:44, 24 October 2025 (UTC)
- Yeah - that, I think, is uncontroversial. Falsified references are a direct violation of WP:V. On the other hand, mandating that GA noms contain only human-generated text (while something that I strive for, personally) is very likely to result in a situation where an overzealous reviewer uses an LLM detector, decides that something like the United States Declaration of Independence is AI-generated, and then fails the article. Not to mention that not disallowing AI-generated text, ironically, makes it much easier to detect poorly-written AI-generated text, since people won't be trying to find creative ways to evade such an AI ban. We already have six criteria that can in most cases weed out the AI garbage, which often tends to be non-neutral, makes up hallucinated references, and so on and so forth. – Epicgenius (talk) 02:57, 24 October 2025 (UTC)
- We don't need to mandate LLM detectors. A simple human check for AI-generated references, AI prompt or markdown should be performed, and if any of these are found, the article should be quick-failed. Checking whether the article itself was machine-generated may be impossible, but the presence of any hallmarks of AI-generated content should immediately disqualify the article on the grounds that the article creator has a competence issue. Yours, &c. RGloucester  — ☎ 04:02, 24 October 2025 (UTC)
- "A simple human check for AI-generated references, AI prompt or markdown should be performed" – that sounds reasonable; my concern is just that the reviewer would (ironically) try to use AI (i.e. the AI detectors) to weed out AI. So if we were to add an anti-AI pledge for nominators, we might as well have the reviewer do the same. – Epicgenius (talk) 11:04, 24 October 2025 (UTC)
- There's no irony in that. The community seems to have little issue with editors using AI for non-writing purposes, such as finding sources. CMD (talk) 13:02, 24 October 2025 (UTC)
- If the output doesn't affect the quality of the article, sure. The thing is that using an LLM detector to check for AI does impact the quality of the article. (As an aside, WP:AICATCH has an entire section dedicated to catching edits that add sources which have an AI tracking indicator in the URL.) – Epicgenius (talk) 13:57, 24 October 2025 (UTC)
- I'm not seeing a direct link between the use of a checker and article quality. The url tracking indicators are useful due to high correlation, but in all the discussions I have seen they are considered as one factor in the wider assessment (similar to llm detectors in that sense I suppose). CMD (talk) 14:10, 24 October 2025 (UTC)
- My point is that, if an editor is using an LLM detector to assess whether a GAN is or isn't using AI-generated text, then it is impacting the outcome of the GAN process. The GAN process is one of the ways where an article's quality can be checked against objective criteria. That is why I said that the use of an LLM detector is affecting the quality of the article. – Epicgenius (talk) 16:02, 24 October 2025 (UTC)
 
 - To add to that, Headbomb's unreliable source user script has a feature that automatically highlights sources with those AI trackers in URLs, making them easy to spot. DrOrinScrivello (talk) 15:55, 24 October 2025 (UTC)
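(For illustration only, and not part of any script mentioned above: a minimal Python sketch of what a check for such tracking parameters could look like. The utm_source value used here is an assumed example; WP:AICATCH documents the indicators actually tracked.)

from urllib.parse import urlparse, parse_qs

# Assumed example value only; see WP:AICATCH for the indicators actually tracked.
SUSPECT_UTM_SOURCES = {"chatgpt.com"}

def has_ai_tracker(url: str) -> bool:
    """Return True if the URL's query string carries a suspect utm_source value."""
    query = parse_qs(urlparse(url).query)
    return any(value in SUSPECT_UTM_SOURCES for value in query.get("utm_source", []))

# Example: a reference URL copied straight out of a chatbot answer vs. a clean one
print(has_ai_tracker("https://example.com/study?utm_source=chatgpt.com"))  # True
print(has_ai_tracker("https://example.com/study"))                         # False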
 
 - You don't even need to do that, just CTRL-F for any of the unholy pantheon of AI verbiage ("highlighting," "underscoring," "aligns with," "emphasizing," etc.) and check whatever source is attached. You will likely find issues. (Recent example: Talk:Abracadabra (Lady Gaga song).) Gnomingstuff (talk) 22:19, 27 October 2025 (UTC)
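(Again purely illustrative: a small Python sketch of that CTRL-F pass, using only the phrases named above as the word list. It is not a detector; each hit just tells a human which sentence, and therefore which attached source, to check.)

import re

# Only the example phrases named in this thread; hits still require human judgement.
AI_TELLS = ["highlighting", "underscoring", "aligns with", "emphasizing"]

def find_ai_tells(text: str) -> list[tuple[str, str]]:
    """Return (phrase, sentence) pairs so the source cited in that sentence can be checked."""
    hits = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        lowered = sentence.lower()
        for phrase in AI_TELLS:
            if phrase in lowered:
                hits.append((phrase, sentence.strip()))
    return hits

sample = "The findings are significant, underscoring a broader trend. The lake froze over in 1921."
for phrase, sentence in find_ai_tells(sample):
    print(f"{phrase!r}: {sentence}")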
 
 
- human generated = prose is written (as in words put together into sentences to form bodies of text) by a human (inputting the words themselves, not simply copying and pasting from ChatGPT). I think the emphasis here should be on a human linking together words to form sentences and convey meaning. That is where AI messes up. Again, I do not think this will stop people from using AI, it just allows people to at least take action regarding it. IntentionallyDense (Contribs) 05:51, 24 October 2025 (UTC)
 
 - That's rather pointless. The problem is puffery and referencing, not the use of AI in and of itself. Unreviewed AI text may contain such problems, but reviewed AI texts are perfectly fine. Focus on the finished product, not the production method. Cambalachero (talk) 14:01, 24 October 2025 (UTC)
- Would this be the first rule that applies to a production method rather than actual article content? I disagree with crossing that line for several reasons. IAWW (talk) 21:04, 24 October 2025 (UTC)
- Honestly, in the case of gen AI I think there are definite arguments for focusing on the production method, things that set it apart. There's the fact Wikipedia's purpose is to share the sum of all human knowledge (emphasis mine), and things like (mentioned in a comment just below) how some of the necessary human touches like awareness of a wider subject to judge significance at a specific sub-topic article are never going to be replicated. There's the fact AI will struggle to distinguish sources, and that important parts of a topic covered in fewer or more advanced sources could be ignored, for a non-expert reviewer to miss. There's the simple fact that all the AI work should be checked by a human for simple verifiability, which theoretically means finding everywhere the AI gleaned its information from, and then reading around to make sure it didn't miss or misrepresent - a headache for a nominator and an unnecessarily unfair workload put on the reviewer if a nominator hasn't done that. Kingsif (talk) 21:14, 24 October 2025 (UTC)
- I think these issues with the tool are nowhere near strong enough to ban it in people's workflow. Also, the tool can still be used effectively even though it has its issues in my opinion. I use GPT-5 three separate times in my GA workflow: to add rowscopes to tables, to double check the data in the tables (very commonly one or two numbers are wrong from human error copying the source), and then to find grammar mistakes and typos in my finished prose. AI is in my opinion the best tool for these tasks. It saves me time so I can spend more time on the important stuff like evaluating sources and writing prose. IAWW (talk) 21:47, 24 October 2025 (UTC)
- We're clearly talking about gen AI, not longstanding organisational tools. While I think you should do a grammar/typo check yourself, because automated tools typically love to make everything sixth-grade standard and there's something to be said for personality in writing style, there's a big difference between a form filler & spellcheck, and gen AI spewing out nonsense. Kingsif (talk) 23:56, 24 October 2025 (UTC)
- Yes, but the line between acceptable use and unacceptable use is not clear, and banning all use certainly is not the solution. IAWW (talk) 10:53, 26 October 2025 (UTC)
- If I can be perfectly honest, I don't know where you're getting the idea that people want to ban all kinds of AI and all (including non-content) uses. I have seen people express that as a personal opinion within the context of proposing gen AI content-creation bans, but not yet propose it. That you are so concerned about the possibility, though, somewhat makes me worry you feel too reliant on using the worst spellcheck in the world. Kingsif (talk) 00:33, 27 October 2025 (UTC)
- Earlier in the conversation you said: "I think the consensus here is firmly against AI, and is swinging more towards complete ban (which I’d support)." That's where I "got the idea". IAWW (talk) 07:59, 27 October 2025 (UTC)
 
 
 - "There's the fact Wikipedia's purpose is to share the sum of all human knowledge (emphasis mine)"
- First, prefacing a statement with "the fact that..." or variants of it is usually done to make it sound uncontroversial, while it is anything but.
 - And second, with the exception of alien civilizations that we have not met yet, all knowledge is human knowledge, so the whole point is moot. Cambalachero (talk) 00:04, 25 October 2025 (UTC)
- Okay, so Wikipedia's stated purpose is not highly controversial, calm down mate.
 - Second, you are missing the point, in a way that feels deliberate. But let's play ball, sure, AI can't have knowledge. But how we share knowledge is communicating information. And what AI is actually kinda known for is communicating disinformation. It cannot reliably share human knowledge, so we cannot utilise it to try. Kingsif (talk) 00:12, 25 October 2025 (UTC)
- No, it's completely on-point. You talk about "human knowledge" and that "AI can't have knowledge" as if we were in the classic sci-fi scenario where Skynet, Ultron, VIKY, Bender and all robots develop artificial general intelligence and become competitors of the human race. That's just sci-fi, and AI as we know it in these discussions is nowhere near that. It is a tool to obtain knowledge, with advantages and disadvantages like any other source of information. And what do we do with sources of information with disadvantages? We use them, with care.
 - Consider for comparison biased and opinionated sources. They are bad sources of information, and a non-partisan reader must set apart the raw facts from the opinions and the subtle and unsubtle manipulative narratives; and yet nobody considers banning them. Cambalachero (talk) 02:15, 25 October 2025 (UTC)
- I am literally talking about AI generated content not being reliable for sharing what we consider knowledge. I think that has been clear. AIs spitting out hallucinations is a pretty undisputed thing. I have not once suggested there is some robot 'alternative knowledge' base being created to rise up and wipe out human knowledge, and your attempt to ignore my explanations to make it seem like I only have sci-fi concerns is a ridiculous straw man.
 - A human can parse those biased sources in a way your precious AI can't, that's why they're not banned. Congrats on only making arguments against yourself. Kingsif (talk) 00:24, 27 October 2025 (UTC)
 - I agree there's no reliable AI detector, so some kind of self-declaration sounds like a good idea. Now, while I do think we should outright ban AI, I also think there would need to be a discussion before doing that because of the 0.1% chance an AI-generated article theoretically meets all the present GA criteria. However, as mentioned, AI-generated references clearly don't meet those criteria, so it should be common sense to ban such. That could be achieved in an additional specific note to the GA crit.
 - Weasel words and puffery, other big concerns, are also covered by the GA crit, as would be things like obviously incorrect/non-verifiable content an AI has hallucinated. What is harder to identify, but terrible for an article and something gen AI does a lot, is to not weigh sources. While DUE and broadness are covered in the GA crit, I think it can be hard for a non-subject expert to necessarily identify where things may be factually supported by one source or another, but still misrepresented due to an AI not having background awareness to judge or contextualise. Kingsif (talk) 21:07, 24 October 2025 (UTC)
- Isn't weighing sources the one thing LLMs get right with the help of gradient descent?  One of the problems I've seen recently is that the models aren't updated enough to have the most current information. That means weight can be off due to relying on older data. Viriditas (talk) 21:18, 24 October 2025 (UTC)
- Not really. They will have the biases of their data, which (besides obvious potential neutrality problems) means they give more weight to sources that are structured similarly to the largest dataset. Apparently, longer sources also get preference. This skews weighing sources in favour of, in my interpretation of those characteristics, traditional sources. This is not always appropriate. It also means the approach LLMs seem to take in weighing sources is unlike that of humans (they seek objective patterns which have little bearing on the appropriateness of any given source for any particular subject), which I would describe as very flawed. Kingsif (talk) 00:05, 25 October 2025 (UTC)
 - How would gradient descent make LLMs good at assessing the contextual reliability of sources? jlwoodwa (talk) 00:41, 25 October 2025 (UTC)
- When I perform a CRAAP test, I'm using conceptual pattern matching just like an LLM is doing it statistically. I can meta-evaluate, but the LLM can't. Viriditas (talk) 01:30, 25 October 2025 (UTC)
RfC on new quick fail criterion
I have opened an RfC to evaluate whether there is community consensus for a new quick-fail criterion. I should be most happy if interested parties would express their opinion at Wikipedia:VPR#RFC: New GA quick fail criterion for AI. Yours, &c. RGloucester — ☎ 10:14, 26 October 2025 (UTC)
Another seemingly AI-generated review
See at Talk:Silver Lake (Whatcom County, Washington)/GA1. Can we have this put back in the queue? Generalissima (talk) (it/she) 14:28, 26 October 2025 (UTC)
- What evidence is there that this review is AI-generated? Bgsu98 (Talk) 14:34, 26 October 2025 (UTC)
- I would not be 100% confident it’s AI generated, but the signs are very much there. Wikitext errors (the ==== not creating a subheading sometimes and appearing in plaintext as if it’s been copied from non-code instead of typed), the review content is almost entirely just restatement of the GA criteria (feels like someone has asked a chatbot outright if the article meets the criteria, without how or why), and the conclusion at the end recommending someone decide it passes all criteria.
 - The exception is in the crit 3 review, with a line of personal opinion. But then you could worry it’s an AI that’s read enough other reviews to add a benign opinion because that’s what happens.
 - Regardless, I would send it back for not really being a review. Even when I find nothing wrong, I try to detail how a crit is met, this review has no feedback for anyone to know why it thinks the crit are met. Kingsif (talk) 14:53, 26 October 2025 (UTC)
 - The fact that it was created in one go with inexplicable nowiki errors? That it contains nothing that could be interpreted as an actual human critique? That it contains meaningless statements like "The references are authoritative, and tiny improvements for citation linking only would polish it further."? That it contains no evidence of a spot check having been conducted? ♠PMC♠ (talk) 14:57, 26 October 2025 (UTC)
- In fact, I would suggest that Talk:Canon EOS/GA2 also bears indications of being LLM-generated. The duplication of the "GA review" header with different capitalizations, the similar yet slightly different styling for two reviews done an hour apart, and most damningly, the plausible-sounding bullshit like "As it stands, it has a ‘list’ signature". (And again no evidence of an actual spot check in the sense of looking at any references and seeing if they verify the text). ♠PMC♠ (talk) 15:12, 26 October 2025 (UTC)
- I am also almost 100% sure that is AI-generated. The phrasing is so off. IAWW (talk) 16:13, 26 October 2025 (UTC)
 
 - The critiques also seem quite off, "the writing avoids jargon" seems unlikely to be a human conclusion, "developed steadily" is simply wrong on its face, and talking about the same images twice is strange. CMD (talk) 17:06, 26 October 2025 (UTC)
- Also "most the images are on Commons"... all of the images are on Commons. IAWW (talk) 18:10, 26 October 2025 (UTC)
 
 
 - I think the review is of such bad quality that it can be considered a drive-by review and the nomination can be put back in the queue regardless of whether it is AI-generated or not. IAWW (talk) 15:01, 26 October 2025 (UTC)
- I cannot get involved here because this editor did Talk:Cup of China/GA1, which is obviously one of my articles. At the same time, I sent another article the user had written to CSD15 for being 100% AI-generated (Anoxic microsites in soil). The thread on the user’s talk page about Bee also smells of AI, but a check of it said no. So, I don’t know what’s going on here with this user. If you think the review is insufficient, then by all means, it should probably be put back in queue, but I can’t be the one to do it. Bgsu98 (Talk) 15:09, 26 October 2025 (UTC)
 - This, send back as not a review, ask the reviewer about it and go from there. Kingsif (talk) 15:18, 26 October 2025 (UTC)
 - I agree with this as well. AI or not, it fails to engage with the article in a meaningful way. We'd typically vacate a review like this regardless, and that's what should be done here. Trainsandotherthings (talk) 18:40, 26 October 2025 (UTC)
- I agree. I have deleted the Silver Lake review per G15. (G15 requires specific markers of being AI-generated but I think the line at the end about recommending a pass counts as "communication intended for the user", and the markup errors are also strongly suggestive although not part of the G15 requirements.) I think that should suffice to return it to the queue, as the review's recommendation that someone pass this article appears not to have been acted on. Along with Talk:Canon EOS/GA2, the same reviewer's other review Talk:Cup of China/GA1 probably needs scrutiny. —David Eppstein (talk) 20:00, 26 October 2025 (UTC)
- I stand behind Cup of China and welcome any re-inspection. Bgsu98 (Talk) 22:18, 26 October 2025 (UTC)
- Totally fair, no one is scrutinizing you for having the misfortune to be subjected to an LLM-generated review. ♠PMC♠ (talk) 22:43, 26 October 2025 (UTC)
 - FWIW, based on a quick look, I would not GAR it, even if I feel there can be improvements. Kingsif (talk) 00:19, 27 October 2025 (UTC)
 
 - Thanks for doing that IAWW (talk) 08:26, 27 October 2025 (UTC)
 
Renominating an article that failed because of reviewer concerns
This is for the Trichy assault rifle article that didn't pass a Good article nomination. I'm trying to address the reviewer's assessment as much as I can. Should I use GAN again? Ominae (talk) 13:14, 30 October 2025 (UTC)
- The concerns were quite vague, but probably valid. I suggest reaching out to the reviewer to see if their concerns are mostly alleviated, and if not, ask them for some specific examples or help. If the original reviewer is not available and you have fixed the concerns to the best of your ability, I suggest listing it at WP:PEERREVIEW, where you are more likely to get helpful general feedback than in a GA review. IAWW (talk) 13:35, 30 October 2025 (UTC)
- I reached out. Reviewer seemed okay with the changes I did. Ominae (talk) 16:39, 30 October 2025 (UTC)
 
 
Log of GAs?
Is there anywhere that keeps a log of the GA reviews that we have done? I've not bothered to keep a record, and now I wish I had. Bgsu98 (Talk) 16:39, 3 November 2025 (UTC)
- @Bgsu98: There is this  SSSB (talk) 16:46, 3 November 2025 (UTC)
- We really do need to store these tools somewhere more accessible. Thinking about the best place for 'backend' editors to access it, my current best idea is to put it on a page which creates a short header for the fully bot-generated Wikipedia:Good article nominations/Report, which is the best back-end place we have aside from this talk page. CMD (talk) 17:27, 3 November 2025 (UTC)
- I added this a few days ago to the list of tools at Wikipedia:Good articles, although a better name may be needed. Rollinginhisgrave (talk | edits) 17:37, 3 November 2025 (UTC)
- I don't object if there's no more consolidated place. I've added it to the See also at Wikipedia:Good article statistics too, as that is one place I've looked for it in the past. CMD (talk) 17:50, 3 November 2025 (UTC)
 
 
 - That is exactly what I needed; thank you! Bgsu98 (Talk) 17:55, 3 November 2025 (UTC)
 
Where to place GA on the list of GA?
Hi, I just finished reviewing Formula One, but I'm unsure where to put it on the list, as under Motorsports there's only "Races and seasons" and "Racers, racecars, and tracks", which don't apply to the article. Would I have to create another section, and if so, what should it be titled? Fwedthebwead (talk) 19:22, 3 November 2025 (UTC)