Wikipedia:Village pump (proposals)
The proposals section of the village pump is used to offer specific changes for discussion. Before submitting:
- Check to see whether your proposal is already described at Perennial proposals. You may also wish to search the FAQ.
- This page is for concrete, actionable proposals. Consider developing earlier-stage proposals at Village pump (idea lab).
- This is a high-visibility page intended for proposals with significant impact. Proposals that affect only a single page or small group of pages should be held at a corresponding talk page.
- Proposed policy changes belong at Village pump (policy).
- Proposed WikiProjects or task forces may be submitted at Wikipedia:WikiProject Council/Proposals.
- Proposed new articles belong at Wikipedia:Requested articles.
- Discussions or proposals which warrant the attention or involvement of the Wikimedia Foundation belong at Wikipedia:Village pump (WMF).
- Software changes which have consensus should be filed at Phabricator.
Discussions are automatically archived after remaining inactive for 7 days.
RFC: Should slogans be removed from ALL infoboxes?
There have been some perennial discussions about removal of |slogan= from various infoboxes, but I could not find a case that discussed making WP:SLOGAN essentially policy.
In recent years, the slogan parameter has been removed from {{Infobox bus company}}, {{Infobox airline}} and the widely used {{Infobox company}} (see the MANY discussions about removing it from Infobox company).
Now WP:SLOGAN is just an essay, which I know many people object to; hence this RFC. I encourage everyone to read the essay, but here are the key points (copied from WP:SLOGAN):
Mission statements generally suffer from some fundamental problems that are incompatible with Wikipedia style guidelines:
Even though mission statements are verifiable, they are written by the company itself, which makes them a primary source.
They contain boastful words and puffed-up, flowery language.
They contain vague unsubstantiated claims such as We are the industry leaders in commitment to <insert industry here> excellence.
They focus on the speculation about the future of the company: becoming the industry leader, or the top producer, or the most reliable manufacturer.
They are promotional in both tone and purpose.
They are not usually verifiable in third party sources.
Per this search there are at least 37 infoboxes that have some form of slogan in them. The question is: should all of those be removed? This does not mean that slogans cannot be mentioned in the body of an article; whether they meet notability and are encyclopedic is another conversation. My question is purely: do they belong in the infobox?
In addition to this, what about mottos? It seems as though they are used rather interchangeably in Infoboxes... This search shows at least 72 infoboxes with a motto type parameter. Should some of those be removed? Personally I'd say keep it for settlement type infoboxes, but the way it is used on {{Infobox laboratory}} or {{Infobox ambulance company}}, it is performing the same functionality as a slogan and has the same issues.
Look forward to everyone's thoughts! - Zackmann (Talk to me/What I been doing) 22:29, 20 October 2025 (UTC)
Discussion (removal slogans)
- Yes (I don't have anything more to add; the arguments in favor of removal have been explained sufficiently in the nomination statement) * Pppery * it has begun... 23:29, 20 October 2025 (UTC)
- No A slogan is one of those trivial things people go on Wikipedia to find out. (What company's slogan is "leave the driving to us"?) The claim that they conflict with Wikipedia style guidelines is nonsense. Quoting a slogan isn't endorsing it, any more than the quotations in Mein Kampf endorse Nazism. --Isaac Rabinovitch (talk) 01:19, 21 October 2025 (UTC)
- No Infoboxes are intended for auxiliary information that serves as a useful reference. Agree with Isaac Rabinovitch. Ca talk to me! 02:08, 21 October 2025 (UTC)
- I don't care how this cookie crumbles, but slogans coming from primary sources, or "not being verifiable through third party sources", really is irrelevant to whether or not to include them. Headbomb {t · c · p · b} 02:29, 21 October 2025 (UTC)
- I will add that mottos for Countries/Cities/States/Provinces are very different than corporate mottos. Canada's A mari usque ad mare is very different than McDonald's I'm Lovin' It. Headbomb {t · c · p · b} 02:34, 21 October 2025 (UTC)
- No (and doubly no for mottos). But I do think editors should use some discretion when deciding whether to include one. Nike can have Just Do It. Apple Inc. can have Think different. Disneyland can have "Happiest Place on Earth". M&M's should have "Melts in Your Mouth, Not in Your Hands". But slogans that almost nobody recognizes should be excluded through editorial judgement, not through removing the option entirely from the infobox. WhatamIdoing (talk) 02:40, 21 October 2025 (UTC)
- Also, a Slogan is not the same thing as a Mission statement. Mission statements are internal facing and IMO should normally be excluded. Slogans are about marketing and branding; they exist for an external audience. WhatamIdoing (talk) 02:44, 21 October 2025 (UTC)
- No. Mottos are absolutely often promotional, but oftentimes so are names/logos/etc. They can still be essential pieces of information about an organization. I'd rather we encourage tight editorial discretion about which mottos are notable enough to warrant inclusion than ban them outright by removing the fields for them. Perhaps a good minimum standard would be secondary coverage (i.e. a source explicitly noting that they have a particular motto). Sdkb talk 04:51, 21 October 2025 (UTC)
- No each use should be determined on a case-by-case basis. If it is a famous slogan (finger licking good) or (the fish others reject) then may as well include it. But if it is excessive or ridiculous, then omit it. Graeme Bartlett (talk) 08:25, 21 October 2025 (UTC)
- Comment the RFC question is not neutral -- it has a deletionist bias. If the arguments given in the No-votes above gain consensus, the slogan parameter should be restored in the infoboxes it was removed from. This would be a new global consensus overriding the local consensus at the infobox talk page archive. Joe vom Titan (talk) 14:14, 21 October 2025 (UTC)
- Agree with both. FaviFake (talk) 16:10, 21 October 2025 (UTC)
- Mixed opinion - WP:SPS and WP:About self both caution against promotional material… so at minimum we would need the slogan to be mentioned by an independent secondary source. Blueboar (talk) 18:25, 21 October 2025 (UTC)
- No per Sdkb and Isaac Rabinovitch, and restore any that have been removed without a specific consensus discussion per Joe vom Titan. Thryduulf (talk) 21:28, 21 October 2025 (UTC)
- I'm with Blueboar on this, if secondary sources are mentioning it then we should too. I'd also add that we are a global site, writing for a global audience; I doubt all of these slogans are global, or even consistent across the Anglosphere. ϢereSpielChequers 09:55, 23 October 2025 (UTC)
- No. Slogans, mission statements, etc. are a basic piece of information about a company. They are reasonable to include and inclusion is not really promotional. We include logos and mention marketing stylization like all-caps but don't consider these promotional. A primary/self-published source is fine for this. Readers know what a slogan is and seeing one reproduced in an infobox is not going to be interpreted as Wikipedia declaring its accuracy. Secondary sources should be used to resolve any discrepancies or doubt, I suppose. All that said, I don't know that every company article needs to have the slogan included. Individual cases should be discussed on talk. —Myceteae🍄🟫(talk) 01:01, 25 October 2025 (UTC)
- Yes per MOS:INFOBOXPURPOSE. The infobox is not for every true fact about a topic, it is about basic, uncontroversial facts, and ideally the kind that change rarely. If a slogan is relevant, great, cover it in prose. It doesn't have to be in the infobox. SnowFire (talk) 18:58, 30 October 2025 (UTC)
- Slogans are clearly notable, we have articles on some of them (Category:Slogans), and clearly where appropriate it would be part of our prime directive to include mention of them, as indicated in the WP:MISSION essay which prompted this discussion: "Slogans may be worth mentioning briefly as part of a description of the organization's marketing approach." As such we shouldn't set about forbidding editors to include these details. A company's mission statement may also be worth noting, even though most are not - it comes down to judgement and consensus of the editors working on the article. I feel the Mission/Slogan essay is a useful guideline, leaving decision to editors working on the articles. I don't think we should be about imposing restrictions which may limit or restrict appropriate, notable, and useful encyclopedic knowledge. Blind, sweeping restrictions are rarely useful. So, of course, No. SilkTork (talk) 14:53, 31 October 2025 (UTC)
RFC: What should be done about unknown birth/death dates?
With the implementation of Module:Person date, all |birth_date= and |death_date= values in Infoboxes (except for deities and fictional characters) are now parsed and age automatically calculated when possible.
With this implementation, it was found that there are a large number of cases (currently 4534) where the birth/death date is set to Unk, Unknown, ? or ##?? (such as 19??). Full disclosure: Module:Person date was created by me, and because of an issue early on I added a number of instances of |death_date=Unknown to articles a few weeks ago. (I had not yet been informed about the MOS guidance I link to below; that's my bad.)
Per MOS:INFOBOX: "If a parameter is not applicable, or no information is available, it should be left blank, and the template coded to selectively hide information or provide default values for parameters that are not defined."
There is also the essay WP:UNKNOWN which says, in short: "Don't say something is unknown just because you don't know."
So the question is what to do about these values? Currently Module:Person date is simply tracking them and placing those pages in Category:Pages with invalid birth or death dates (4,534). It has been growing by the minute since I added that tracking. Now I am NOT proposing that this sort of tracking be done for every parameter in every infobox... There are plenty of cases of |some_param=Unknown, but with this module we have a unique opportunity to address one of them.
I tried to find a good case where the |death_date= truly is Unknown, but all the cases I could think of use |disappeared_date= instead. (See Amelia Earhart for example).
- The way I see it there are a few options:
- Option A - Essentially do nothing. Keep the tracking category but make no actual changes to the pages.
- Option B - Implement a {{preview warning}} that would say This value "VALUE" is invalid per MOS:INFOBOX & WP:UNKNOWN. (Obviously open to suggestions on better language).
- Option C - Take B one step further and actually suppress the value. Display a preview warning that says This value "VALUE" is invalid per MOS:INFOBOX & WP:UNKNOWN. It will not be displayed when saved. then display nothing on the page. In other words treat |death_date=Unknown the same as |death_date=. (Again open to suggestions on better language for the preview warning; a rough sketch of the mechanics follows this list.)
- Option D - Some other solution, please explain.
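To make Options B and C concrete, here is a minimal sketch of the kind of check involved, in Lua (the language Wikipedia modules are written in). This is an editor's illustration only, not the actual Module:Person date source; the function names and the exact pattern list are assumptions.

    -- Illustrative sketch only, not the real Module:Person date code.
    -- Flags placeholder "dates" such as Unk, Unknown, ? or 19??.
    local p = {}

    local placeholderPatterns = {
        '^[Uu]nk$',      -- Unk
        '^[Uu]nknown$',  -- Unknown
        '^%?+$',         -- ?, ??, ...
        '^%d%d%?%?$',    -- 19??
    }

    -- True if the value should be treated as invalid
    -- per MOS:INFOBOX and WP:UNKNOWN.
    function p.isPlaceholder(value)
        value = mw.text.trim(value or '')
        for _, pattern in ipairs(placeholderPatterns) do
            if mw.ustring.find(value, pattern) then
                return true
            end
        end
        return false
    end

    -- Option C behaviour: suppress the value and keep only the
    -- tracking category. Option B would instead emit a preview
    -- warning at this same decision point and still show the value.
    function p.renderDate(value)
        if p.isPlaceholder(value) then
            return '[[Category:Pages with invalid birth or death dates]]'
        end
        return value
    end

    return p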
Thanks in advance! --Zackmann (Talk to me/What I been doing) 23:43, 21 October 2025 (UTC)
Discussion (birth/death unknown)
- We definitely shouldn't be using things like "Unk" or "?" - if we want to say this is not known we should explicitly say "Unknown". Should we ever say "unknown" though? Yes, but for births only when we have reliable sources that explicitly say the date is unknown to a degree that makes values like "circa" or "before" unhelpful - even "early 20th Century" is more useful imo than "unknown". "Unknown" is better than leaving it blank when we have a known date of birth but no known date of death (e.g. Chick Albion). I'm not sure how this fits into your options. Thryduulf (talk) 00:24, 22 October 2025 (UTC)
- Agreed. There are cases where no exact date is given but MOS:INFOBOX and WP:UNKNOWN do not apply because the lack of known date can be sourced reliably. If the module cannot account for this, I really think only option A is acceptable. —Rutebega (talk) 18:15, 22 October 2025 (UTC)
- @Rutebega and Thryduulf: So I can very easily make it so that |..._date=Unknown<ref>... is allowed but just plain |..._date=Unknown is not. That is just a matter of tweaking the regular expression. Not hard to do at all. That being said (mostly for curiosity's sake) can you give me an example of a page where "the lack of known date can be sourced reliably"? Every case I could think of (and I really did try to find one) either has a relevant |disappeared_date= (so you don't need to specify that |death_date=Unknown) or you can at least provide approximate dates (i.e. {{circa|1910}}, 1620s or 12th century). Zackmann (Talk to me/What I been doing) 18:23, 22 October 2025 (UTC)
- Metrodora isn't quite date unknown, but the only fixed date we have is the manuscript which preserves her text (c.1100 AD), and her floruit has been variously estimated between the first and sixth centuries AD. Of course, so little is known for certain about Metrodora that every single infobox field would be "unknown" were it filled in, and therefore there's little point having an infobox at all.
- Corinna's dates are disputed: she was traditionally a contemporary of Pindar (thus born late 6th century and active in the fifth century BC) but some modern scholars argue for a third-century date. If the article had an infobox, a case could be made for listing her floruit as "unknown", "disputed", "5th–3rd century BC", "before 1st century BC" (the date of the first source to mention her), or simply omitting it entirely.
- I'm open to convincing about how these cases should be handled; my inclination is that any historical figure whose date fields alone need this much nuance is probably a bad fit for an infobox, but the size of Category:Pages with invalid birth or death dates suggests that not everybody agrees with me! Caeciliusinhorto-public (talk) 08:56, 23 October 2025 (UTC)
- @Caeciliusinhorto-public: thanks for some real examples. I think your point that so little is known that Infoboxes don't make sense is a good one... If there were other info that made sense to have in an Infobox I think the dates would still be able to be estimated (even if the range is hundreds of years). You could still put |birth_date=5th-3rd century BC or, of course, just leave it blank! Leaving it blank to me implies that it is Unknown, though it does leave ambiguous whether it is Unknown because no editor has taken the time to figure it out or whether it is Unknown because the person lived some 2,200 years ago and we have no real way of knowing when they were born... Zackmann (Talk to me/What I been doing) 09:05, 23 October 2025 (UTC)
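As a rough illustration of the regular-expression tweak Zackmann describes above (assuming Lua patterns and Scribunto's mw library; the helper name is hypothetical): MediaWiki replaces <ref>...</ref> with a strip marker before a module sees a parameter value, so a pattern anchored at both ends rejects bare Unknown while letting a cited Unknown pass through.

    -- Hypothetical helper for the tweak described above. Bare "Unknown"
    -- matches and gets flagged; "Unknown<ref>...</ref>" reaches the
    -- module with extra text after the word (a ref strip marker),
    -- so the fully anchored pattern no longer matches it.
    local function isBareUnknown(value)
        value = mw.text.trim(value or '')
        return mw.ustring.find(value, '^[Uu]nknown$') ~= nil
    end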
- This is above my pay grade but can you give us an idea of how much "It has been growing by the minute"? The scale of those additions may inform our view as to how best to deal with it. Lukewarmbeer (talk) 16:34, 22 October 2025 (UTC)
- @Lukewarmbeer: so this is mostly a caching issue. I don't think very many new instances of this are being created each day, it just takes a while for the code to propagate. I really don't have an objective way of saying how many new instances are being created daily... Zackmann (Talk to me/What I been doing) 17:13, 22 October 2025 (UTC)
- FWIW, about 15% of our biographies of living people have unknown birthdates (based on a count by category I did in 2023). I would assume that deceased biographies are perhaps more likely to miss this data, so we're looking at a number in the low hundreds of thousands? Not all of those will have infoboxes, of course. Andrew Gray (talk) 20:39, 22 October 2025 (UTC)
- @Andrew Gray: when you say "have unknown birthdates" do you mean "no birthdates are given"? Because that is NOT what we are talking about here... We are talking about |birth_date=Unknown, where someone has specifically stated that the date is Unknown, not just left it blank. Zackmann (Talk to me/What I been doing) 20:42, 22 October 2025 (UTC)
- @Zackmann08 ah, right - I think I misunderstood, apologies. If the module does nothing when the birthdate field is blank or missing, that sounds good.
- I think the simple tracking category for non-date values sounds fine for now. Andrew Gray (talk) 20:52, 22 October 2025 (UTC)
- Perhaps the problem is the multiple meanings of "Unknown". Some may have filled it in meaning "nobody knows about the early life of this historical guy, only that he became relevant during the X events, already an adult", and others "unknown because I don't know". We may make it so that "Unknown" has the same effect as an empty field, and require a special input for people with truly unknown dates. And note that any biography after whatever point birth and death certificates became ubiquitous should be treated as the second case. Cambalachero (talk) 14:09, 23 October 2025 (UTC)
- Option D The variant on option C where it's permitted iff there's a citation seems like a good solution to me. By a similar argument to WP:ALWAYSCITELEAD, I think a citation should always be required to assert that someone's date of death is outside the scope of human knowledge. From WP:V we should always cite "material that is likely to be challenged", and I think the assertion that someone's date of death is "unknown" falls well within that scope; in particular I myself will always challenge it if unsourced. lp0 on fire () 16:32, 23 October 2025 (UTC)
- I think whether someone's date of birth or death being unknown falls into the category of "material that is likely to be challenged" is partly a factor of when and where they were born, the time, place and manner of their death, and how much we know about them generally. It is not at all surprising to me that we don't know the date of birth or death of a 3rd century saint or 18th century enslaved person, or when a Peruvian athlete who competed in the 1930s died; we do need a citation to say that we only know the approximate date of death for Dennis Ritchie and Gene Hackman. Thryduulf (talk) 16:50, 23 October 2025 (UTC)
- I think whether someone's date of birth or death being unknown falls into the category of
- Option D Allow Unknown but not other abbreviations. Require citations for dates. Rationale: Looking at the Sven Aggesen article, it's easy to see that "Unknown" is helpful because it's communicating that the person is dead. In my opinion it's still stating a fact. So Unknown should be allowed; "?" should not. It seems like dates of birth and death should always be cited. Thanks for your work on this!! Dw31415 (talk) 17:54, 23 October 2025 (UTC)
- In the case of Sven Aggesen I think we could reasonably expect a reader to infer from "born: 1140? or 1150?" that he is probably dead! In the case of people born recently enough that there might be confusion, I can't imagine there are many cases where both (a) they are known to be dead and (b) their date of death is known so imprecisely that we don't have a more useful value than "unknown" for the infobox. Caeciliusinhorto (talk) 20:35, 23 October 2025 (UTC)
RfC: Aligning community CTOPs with ArbCom CTOPs
Should the community harmonize the rules that govern community-designated contentious topics (which are general sanctions authorized by the community) with WP:CTOP? If so, how? 19:55, 22 October 2025 (UTC)
Before 2022, the contentious topics process (CTOP) was known as "discretionary sanctions" (DS). Discretionary sanctions were authorized in a number of topic areas, first by the Arbitration Committee and then by the community (under its general sanctions authority).
In 2022, ArbCom made a number of significant changes to the DS process, including by renaming it to contentious topics and by changing the set of sanctions that can be issued, awareness requirements, and other procedural requirements (see WP:CTVSDS for a comparison). But because the community's general sanctions are independent of ArbCom, these changes did not automatically apply to community-authorized discretionary sanctions enacted before that date.[a]
In an April 2024 RfC, the community decided that there should be "clarity and consistency regarding general sanctions language" and decided to rename community-authorized discretionary sanctions to "contentious topics". However, the community did not reach consensus on several implementation details, most prominently whether the enforcement of community CTOPs should occur at the arbitration enforcement noticeboard (AE) instead of the administrators' noticeboard (AN), as is now allowed (but not required) by ArbCom's contentious topics procedure.[b]
Because of the lack of consensus, no changes were made to the community-designated contentious topics other than the naming. As a result, there currently exist 24 ArbCom-designated contentious topics and 7 community-designated contentious topics, and the rules between the two systems differ as documented primarily at WP:OLDDS.
- Question 1: Should the community align the rules that currently apply in community-designated contentious topics with WP:CTOP, mutatis mutandis (making the necessary changes) for their community-designated nature?
- Question 2: Should the community authorize enforcement of community contentious topics at AE (in addition to AN, where appeals and enforcement requests currently go)?
- If questions 1 and 2 both pass, the text at Wikipedia:Contentious topics (community-designated)/Proposal with AE would be adopted as an information page.
- If question 1 passes and question 2 fails, the text at Wikipedia:Contentious topics (community-designated)/Proposal without AE would be adopted as an information page.
In either case above, all existing community CTOPs would be amended by linking to the new information page to document the applicable provisions.
If question 1 fails, no changes would be made.
Notes
- ^ WP:GS/SCW&ISIL, WP:GS/UKU, WP:GS/Crypto, WP:GS/PW, WP:GS/MJ, and WP:GS/UYGHUR follow WP:OLDDS. WP:GS/ACAS was enacted after December 2022 and therefore follows the current ArbCom contentious topics procedure.
- ^ Specifically, AE may consider "requests or appeals pursuant to community-imposed remedies which match the contentious topics procedure, if those requests or appeals are assigned to the arbitration enforcement noticeboard by the community." – Wikipedia:Arbitration Committee/Procedures § Noticeboard scope 2
Survey (Q1&Q2)
- The following discussion is an archived record of a request for comment. Please do not modify it. No further edits should be made to this discussion. A summary of the conclusions reached follows.
- Yes to both questions. For almost three years now, we have had two different systems called "contentious topics" but with different rules around awareness, enforcement, allowable restrictions, etc. In fact, because WP:GS/ACAS follows the new CTOP procedure but without AE enforcement, we actually have three different systems. We should take this chance to make the process meaningfully less confusing. There is no substantive reason why the enforcement of, for example, WP:GS/UYGHUR and WP:CT/AI should differ in subtle but important ways. As for using AE, AE is designed for and specialized around CTOP enforcement requests and appeals. AE admins are used to maintaining appropriate order and have the benefit of standard templates, word limits, etc., while AN or ANI are not specialized around this purpose. As a result of WP:CT2022, ArbCom now specifically allows AE to hear "requests or appeals pursuant to community-imposed remedies which match the contentious topics procedure, if those requests or appeals are assigned to the arbitration enforcement noticeboard by the community". We should take them up on the offer as Barkeep49 first suggested at the previous RfC. FYI, I am notifying all participants in the previous RfC, as this RfC is focused on the same topic. Best, KevinL (aka L235 · t · c) 19:57, 22 October 2025 (UTC)
- Yes to both - I don't see a downside to this standardization, and it would appear to both make the system as a whole easier to understand, and allow admins to take advantage of the automated protection logging bot for the currently-GS topics. signed, Rosguill talk 20:01, 22 October 2025 (UTC)
- Yes to both. The CTOP system is complicated even without these three different regimes and confuses almost everyone involved. AE can be a great option for reducing noise in discussions, compared to AN. —Femke 🐦 (talk) 20:20, 22 October 2025 (UTC)
- Yes to both as standardization can help clarify confusion especially among newcomers about contentious topics. Aasim (話す) 20:29, 22 October 2025 (UTC)
- Yes to both but as I said in the previous RFC, if we're going to go in this direction, we should also be moving towards a process where the community eventually takes over older ArbCom-imposed CTOPs, especially in areas where the immediate on-wiki disruption that required ArbCom intervention has mostly settled down but the topic itself remains indefinitely contentious for off-wiki reasons. ArbCom was intended as the court of last resort for things the community failed to handle; it's not supposed to create policy. Yet currently, huge swaths of our most heavily-trafficked articles are under perpetual ArbCom sanctions, which can only be modified via appeal to ArbCom itself, and which are functionally the same as policy across much of the wiki. This isn't desirable; when ArbCom creates long-term systems like this, we need a way for the community to eventually assume control of them. We need to go back to treating ArbCom as a court of last resort, not as an eternal dumping ground for everything controversial, and unifying ArbCom and community sanctions creates an opportunity to do so by asking ArbCom to agree to (with the community's agreement to endorse them) convert some of the older existing ArbCom CTOPs into community ones. --Aquillion (talk) 20:51, 22 October 2025 (UTC)
- Yes to both per nom. Consistency is great, and eliminating the byzantine awareness system (where you need an alert every 12 months) is essential. WP:AE is a miracle of a noticeboard (how is the noticeboard with the contentious issues the relatively tame one?), and we as a community should take advantage of ArbCom's offer to let us use it. Best, HouseBlaster (talk • he/they) 22:10, 22 October 2025 (UTC)
- Yes to both. This is a huge step in the right direction. Toadspike [Talk] 22:16, 22 October 2025 (UTC)
- Yes to both, and a full-throated "yes" for using AE in particular. The other noticeboards are not fit for purpose with respect to handling CTOP disruption. Vanamonde93 (talk) 22:24, 22 October 2025 (UTC)
- Yes to both – This has been a mess for more than a decade. Harmonising the community and ArbCom general sanctions regimes will cut red tape, and eliminate confusion over which rules apply in any given case. I am also strongly in favour of allowing community sanctions to be enforced at WP:AE. Previously, there were numerous proposals to create a separate board for community enforcement, such as User:Callanecc/Essay/Community discretionary sanctions, but all failed to go anywhere. In my opinion, the most important aspect of community sanctions (as opposed to ArbCom sanctions) is that the community authorises them, and retains control over their governance. Enforcement at AE does nothing to reduce the community's power to enact sanctions; if anything, it will ensure that these regimes are enforced with the same rapidity as ArbCom sanctions. It would be foolish to not take advantage of ArbCom's offer to allow us to use their existing infrastructure. Yours, &c. RGloucester — ☎ 23:54, 22 October 2025 (UTC)
- Yes to both. I was in favor of this during the March 2024 RfC but was reluctant to push it too hard since I was then on ArbCom. I am no longer on ArbCom and thus can freely and fully support this thoughtful and wise proposal for the same reasons I hinted at in the previous discussion. Best, Barkeep49 (talk) 02:00, 23 October 2025 (UTC)
- Yes to both, and future changes to either sanction procedure should be considered for both. Not to be unduly repetitive of others above, but the system is more complex than it needs to be. AE as an additional option is a positive. CMD (talk) 04:38, 23 October 2025 (UTC)
- Yes to both and thank you to L235 for working on this. As RGloucester I'd worked on this previously so am definitely supportive. Callanecc (talk • contribs • logs) 07:11, 23 October 2025 (UTC)
- Yes to both, per my comment in the 2024 RfC. It is not reasonable to expect new editors to familiarize themselves with multiple slightly different sanctions systems that emphasize procedural compliance. — Newslinger talk 08:20, 23 October 2025 (UTC)
- Yes to both. Let's not make the CT system more complicated and impenetrable than it needs to be already; consistency can only be good here. Caeciliusinhorto-public (talk) 09:07, 23 October 2025 (UTC)
- Yes to both, long overdue. ~ Jenson (SilverLocust 💬) 16:38, 23 October 2025 (UTC)
- Yes to both, with the same caveats as Aquillion lp0 on fire () 16:39, 23 October 2025 (UTC)
- Yes to both for consistency. Chaotic Enby (talk · contribs) 16:59, 23 October 2025 (UTC)
- Yes to both. We already have overlapping CSes (Arbcom-imposed) and GSes (community-imposed) - A-A and KURD, at least, where the community chose to impose stricter sanctions on a topic area than ArbCom mandated (in both of those cases, the community chose to ECR the topic area). This has caused confusion for me as an admin a few times, for a regular user it can only be more so. Harmonizing the restrictions, with the only difference being who imposed them, can only make sense. - The Bushranger One ping only 20:02, 23 October 2025 (UTC)
- Yes and Yes - The same procedures should apply to topics that the ArbCom has found to be contentious as to topics which the community has found to be contentious. The differences have only caused confusion. Robert McClenon (talk) 20:56, 23 October 2025 (UTC)
- Yes to both CTs (whether issued by Arbcom or the community) should be treated the same regardless of whoever issued it. JuniperChill (talk) 11:04, 24 October 2025 (UTC)
- Yes to both: A long time coming. This centralization will clean up so much unnecessary red tape. — EarthDude (Talk) 13:26, 24 October 2025 (UTC)
- No I understand what Arbcom is per WP:ARBCOM and it seems to be a reasonably well-organised body with good legitimacy due to it being elected. But what's the community? Per WP:COMMUNITY and Wikipedia community, it seems to be any and all Wikipedians and this seems quite amorphous and uncertain. Asking such a vague community to do something is not sensible. In practice, I suppose the sanctions were cooked up at places like WP:ANI which is a notoriously dysfunctional and toxic forum. That's not a sensible place to get anything done.
- I looked at one of these community sanctions as an example, and it was some special measure for conflict about units of measurement in the UK: WP:GS/UKU. Now I'm in the UK and so might easily run afoul of this but this is the first I heard of this being an especially hot topic. And I've been actively editing for nigh on 20 years. Our general policies about edit-warring, disruption and tendentious editing seem quite adequate for such an issue and so WP:CREEP applies. That sanction was created over 10 years ago and so should be allowed to expire rather than harmonised. The other general sanctions concern such topics as Michael Jackson, who died 16 years ago, and that too seems quite dated.
- So, I suggest that all the general sanctions be retired. If problems with those topics then recur, fresh sanctions can be established using the new WP:CTOP process and so we'll then all be on the same page.
- Andrew🐉(talk) 16:24, 25 October 2025 (UTC)
- I will note that policy assigns to the community the primary responsibility to resolve disputes, and allows ArbCom to intervene in "serious conduct disputes the community has been unable to resolve" (Wikipedia:Arbitration/Policy § Scope and responsibilities) (emphasis added). That is to say, ArbCom's role is to supplement the community when the community's efforts are unsuccessful. I think that's why there should be some harmonized community CTOP process that can be applied for all extant community CTOPs. I understand that it may be time to revisit some of the community-designated CTOPs, which I support – when I was on ArbCom, I was a drafter for the WP:DS2021 initiative which among other things rescinded old remedies from over half a dozen old cases. But that seems to be a different question than whether to harmonize the community structure with ArbCom's. Best, KevinL (aka L235 · t · c) 14:32, 29 October 2025 (UTC)
- "A motion to revoke authorisation for this sanctions regime was filed at the administrators' noticeboard on 17 April 2020. The motion did not gain community consensus. 09:47, 22 April 2020 (UTC)" — Wikipedia:General sanctions/Units in the United Kingdom#Motion
"At this time there is no consensus to lift these sanctions, with a majority opposed. People are concerned that disputes might flare up again if sanctions are removed: Give them an inch and they will take a kilometer ..." — User:Sandstein 00:00, 12 November 2020 (UTC)
Aaron Liu (talk) 02:16, 31 October 2025 (UTC)
- Yes to both - By having two systems with the same name, we should then avoid differences in the rules. I say this because if the rules are different, then a user will need to be aware of who designated an area as a contentious topic before reporting or handling reports. For example, if we had the two systems use the same rules but different reporting pages (with no overlap in which pages can be used), then I expect that users will by mistake post to the wrong pages. Dreamy Jazz talk to me | my contributions 20:56, 1 November 2025 (UTC)
Question 3. How should we handle logging of community contentious topics?
1. Use the Arbitration Enforcement Log (WP:AELOG)
2. Create a new page such as Wikipedia:Contentious topics/Log, which can be separated into two sections: one for community actions, and one that transcludes WP:AELOG
3. Create a new page such as Wikipedia:General sanctions/Log, which would only log enforcement actions for community contentious topics (subpages would be years)
4. Continue logging at each relevant page describing the community contentious topics (Wikipedia:General sanctions/Topic area); if 2 or 3 is chosen, that page would transclude these relevant pages.
— Preceding unsigned comment added by Awesome Aasim (talk • contribs) 20:42, 22 October 2025 (UTC)
- 2+3+4 as proposer, one of the problems I do notice is that loading WP:AELOG does take a lot of time because the page has a lot of enforcement actions. The advantage of 2 is having a single page that can be quickly searched. Aasim (話す) 20:42, 22 October 2025 (UTC)
- BTW, apart from option 1, the options are not mutually exclusive; if option 1 is chosen, options 2-4 are irrelevant. I am not asking people to pick one and be done; people can choose any combination. Aasim (話す) 21:32, 23 October 2025 (UTC)
- 2 > 1 – Both ArbCom and community CT are forms of general sanctions (see my incomplete essay on the subject); the only distinction is who authorises them. For this reason, '3' does not make sense. Eliminating the sprawling log pages that currently exist for community-authorised regimes should be a priority if our goal is to eliminate red tape, therefore '4' does not make sense either. That leaves me with 2, which allows for a centralised log for both forms of sanctions. I am perfectly fine with creating subpages as needed, but centralisation is paramount in my mind. Yours, &c. RGloucester — ☎ 00:02, 23 October 2025 (UTC)
- I support option 4. I think it continues to make sense to log individual actions for a given topic area to the corresponding subpage of Wikipedia:General sanctions. isaacl (talk) 01:12, 23 October 2025 (UTC)
- In the past, there have been concerns raised about it being clear if the enacting authority is the arbitration committee or the community. Thus I do not feel option 1 is the best choice.
- Regarding searching: I feel the typical use case is to search for actions performed within a specific topic area. If necessary, Wikipedia search with a page prefix criterion can be used to search multiple subpages. isaacl (talk) 16:19, 23 October 2025 (UTC)
- I will note that having the logpages as subpages of Wikipedia:General sanctions (rather than a more tailored page) makes searchability much harder, which is why scripts like WP:SUPERLINKS don't surface community CTOP enforcement entries even though it does surface WP:AELOG entries. Best, KevinL (aka L235 · t · c) 16:21, 23 October 2025 (UTC)
- 2 I am in favor of fewer, larger pages because they are easier to find and to search. If a searcher needs to confirm that something isn't there, for example, fewer pages, even if very large, are much easier to work with. Darkfrog24 (talk) 13:33, 23 October 2025 (UTC)
- 1 - in keeping with the spirit for Q1 and Q2, the whole point here is to merge everything into a single system that is simpler to follow. We already have a practice of splitting off subpages when specific sections in the log get too large. signed, Rosguill talk 13:52, 23 October 2025 (UTC)
- 2 as a first choice, as centralization is helpful, but the current WP:AELOG is ultimately an ArbCom page and shouldn't have jurisdiction over community sanctions. I agree with Rosguill's point about splitting off subpages, and I presume this would be encouraged to a greater extent here. I could also be convinced by 1 (to avoid an unnecessary transclusion, although it should be made clear that it isn't an ArbCom-only page anymore) or by a temporary 3 (to avoid a lag spike until the main subpages are sorted out). Chaotic Enby (talk · contribs) 17:04, 23 October 2025 (UTC)
- Actually, I'm realizing that 2 doesn't help with centralization compared to 3, and creates a bit of an inconsistency between some topics being directly logged there and others being transcluded. Count 3 as my first choice, with the possibility of a combined log transcluding both for reference. Chaotic Enby (talk · contribs) 19:34, 23 October 2025 (UTC)
- 1 > 3 > 4, but my actual preference is to delegate this to a local consensus of those who are involved in implementing this. 1 is my preference, like Rosguill, because centralizing where the existing logs live promotes simplicity and would avoid the need for admins to check which types of CTOPs are which (one goal I have is for the community CTOPs and ArbCom CTOPs to feel almost identical). Not to mention, it would preserve compatibility with tools like WP:SUPERLINKS that check AELOG but not other pages. The biggest hurdle in my mind is that #1 would require ArbCom approval, which I think is likely but not certain (given that ArbCom allows AE for community CTOPS, why not AELOG?). Best, KevinL (aka L235 · t · c) 19:29, 23 October 2025 (UTC)
- 3, but there is a nuance: include the recently-changed bit about protections being automatically logged, as part of a unified page at Wikipedia:Arbitration enforcement log/Protections. Protections for the "overlapping" CT/GS regions (A-A and KURD) are already logged there (as, technically, they fall under both) so this would make, and keep, things simple. - The Bushranger One ping only 20:04, 23 October 2025 (UTC)
- 5 There should be one system, not two. As noted above, the community is too amorphous and uncertain to be the basis for this. Andrew🐉(talk) 17:28, 25 October 2025 (UTC)
- 1 > 2 These should be standardized as much as possible. It's already the most confusing and obfuscated system of policies on Wikipedia; we should strive to eliminate as much confusion and pointless red tape as possible. Apart from where actions are logged, there are now pretty much no practical differences between ArbCom and community CTOPs: they are imposed by different bodies, enforced identically, and logged in different places. I agree with others that these systems should feel identical; this would have the additional advantage of making Aquillion's vague long-term proposal, to have old ArbCom topics "expire" into community ones if deemed no longer pertinent, seem like a realistic option. lp0 on fire () 22:49, 10 November 2025 (UTC)
Discussion (CTOP)
- Comment I understand the functional difference between an AE sanction and AN sanction is that an AE sanction can be removed only by a) the exact same admin who placed it, called the "enforcing admin", or b) a clearly-more-than-half balance of AE admins at an AE appeal, while a sanction placed at AN can be removed by c) any sufficiently convinced admin acting alone. To give an example of how this would change things, I found myself in a situation in which I was indefinitely blocked at AE and then the enforcing admin left Wikipedia, which removed one of my options for lifting a sanction. Some of our fellow Wikipedians will think making it easier to get a sanction lifted is a good thing and others will think it's a bad thing, but we should be clear about that so we can all make our decision. Am I correct about how these changes would affect those seeking to have sanctions removed? Darkfrog24 (talk) 13:31, 23 October 2025 (UTC)
- @Darkfrog24: I think this is incorrect. As it stands now, restrictions imposed under community CTOPs are only appealable to the enforcing administrator or to AN (see, e.g., WP:GS/Crypto, which says "Sanctions imposed may be appealed to the imposing administrator or at the appropriate administrators' noticeboard."). Q1 is about aligning the more subtle but still important differences between community CTOPs and ArbCom CTOPs, while Q2 is about adding AE as a place (but not changing the substantive amount of agreement needed) for enforcement requests and appeals. Best, KevinL (aka L235 · t · c) 13:43, 23 October 2025 (UTC)
- Thanks, KevinL. I will ponder this and make my decision. Darkfrog24 (talk) 13:50, 23 October 2025 (UTC)
- Comment: Is there any way that we could implement the semi-automated logging process that is used for page protection of CTOPS here? Is there any expectation that if any of these options were chosen, that process would revert to manual? ⇒SWATJester Shoot Blues, Tell VileRat! 18:17, 23 October 2025 (UTC)
- Pinging @L235 whose bot is in charge of that – for the Twinkle integration of the CTOP logging, I'm currently working on a pull request that would work for both. Chaotic Enby (talk · contribs) 19:08, 23 October 2025 (UTC)
- I bet the bot could be adapted to whichever option the community opts for! KevinL (aka L235 · t · c) 19:24, 23 October 2025 (UTC)
- Comment – If we are to create a separate log for community-authorised contentious topics as in alternative 3, it should not be a subpage of Wikipedia:General sanctions. 'General sanctions' is a broad category that includes ArbCom sanctions, and also non-contentious topics remedies such as the extended confirmed restriction. This is a recipe for confusion. Please consider an alternative naming scheme. Yours, &c. RGloucester — ☎ 00:19, 24 October 2025 (UTC)
- The title can always be different. The title I named was just an example title to explain the purpose of the question. Aasim (話す) 01:30, 24 October 2025 (UTC)
- Given this has now passed (aside from the nitty-gritty of logging), does this mean community GSes imposing ECR now conform to Wikipedia:Arbitration Committee/Procedures#Extended confirmed restriction, specifically the portion about "Non-extended-confirmed editors may use the "Talk:" namespace only to make edit requests related to articles within the topic area, provided they are not disruptive"? Because the fact that, at least previously, that did not apply to community-imposed GSes has tripped me up in the past. - The Bushranger One ping only 23:32, 4 November 2025 (UTC)
- The extended confirmed restriction is a separate kind of general sanction, not part of contentious topics. Nothing in this discussion should apply to community-imposed extended confirmed restrictions. Yours, &c. RGloucester — ☎ 23:39, 4 November 2025 (UTC)
RFC: New GA quick fail criterion for AI
Should the following be added to the 'Immediate failures' section of the good article criteria?
6. It contains obvious evidence of LLM use, such as AI-generated references or remnants of AI prompts.
Proposed after discussion at Wikipedia talk:Good articles#AI. Yours, &c. RGloucester — ☎ 10:08, 26 October 2025 (UTC)
Survey (GA quick fail)
- Support – Articles that contain obvious evidence of unreviewed AI use are evidence of a competence issue on the part of their creator that is not compatible with the GA process. Having reviewers perform a spot check for obvious signs of AI use will help militate against the recent problem whereby AI-generated articles are being promoted to GA status without sufficient review. Yours, &c. RGloucester — ☎ 10:08, 26 October 2025 (UTC)
- Support. Hardly needs saying. The use of AI is fundamentally contrary to the process of encyclopaedic writing. AndyTheGrump (talk) 10:14, 26 October 2025 (UTC)
- Strong Support: No article of real quality would ever have signs of AI use that are easy to see. CabinetCavers (talk) 20:10, 10 November 2025 (UTC)
- Support This is an excellent proposal to help stop Wikipedia falling into absolute disrepute Billsmith60 (talk) 10:21, 26 October 2025 (UTC)
Support Billsmith60 (talk) 10:22, 26 October 2025 (UTC)
- Billsmith60, presumably you didn't mean to enter two supports? Mike Christie (talk - contribs - library) 12:41, 26 October 2025 (UTC)
- Sorry about that Mike. Was on my phone for this and it is always temperamental Billsmith60 (talk) 01:02, 27 October 2025 (UTC)
- Support Per nomination. This would not prohibit AI use per se, but would rule out promoting any low effort usage of AI. AI use in this manner could be argued to be a failure of GA criteria 1 and 2 as well, but explicitly stating as such will give a bit more weight to reviewers' decisions. --Grnrchst (talk) 10:45, 26 October 2025 (UTC)
- Oppose Per comment in discussion. Rollinginhisgrave (talk | contributions) 10:47, 26 October 2025 (UTC)
- Support Per nom. Vacant0 (talk • contribs) 10:54, 26 October 2025 (UTC)
- Oppose per comment in discussion IAWW (talk) 11:05, 26 October 2025 (UTC)
- Oppose. GAs should pass or fail based only and strictly only on the quality of the article. If there are AI-generated references then they either support the article text or they don't, if they don't then the article already fails criteria 2 and the proposal is redundant. If the reference does verify the text it supports then there is no problem. If there are left-over prompts then it already fails criteria 1 and so this proposal is redundant. If the AI-generated text is a copyright violation, then it's already an immediate failure and so the proposal is redundant. If the generated text is rambly, non-neutral, veers off topic, or similar issues then it already fails one or more criteria and so this proposal is redundant. Thryduulf (talk) 12:20, 26 October 2025 (UTC)
- As I see it, this proposal as-written is actually quite limited in scope and is not doing anything beyond saving resources. Obvious unreviewed AI use will not meet all criteria, but at the moment a reviewer of the GAN is still expected to do a full review. This proposal if passed would effectively codify that obvious AI is considered (by consensus of users of the GA process) to mean the article has insurmountable issues in its current state and should be worked on first before a full review. Kingsif (talk) 14:07, 26 October 2025 (UTC)
- Support, we can't afford to waste precious reviewer time (a very scarce resource) on stuff with fake references. —Kusma (talk) 12:29, 26 October 2025 (UTC)
- If there are fake references then it's already a fail for verifiability. This proposal does not save any additional reviewer time. Thryduulf (talk) 12:42, 26 October 2025 (UTC)
- It turns a fail into a quick fail, which of course saves reviewer time. —Kusma (talk) 12:49, 26 October 2025 (UTC)
- See my comment below in the discussion section. Thryduulf (talk) 12:57, 26 October 2025 (UTC)
- It turns a fail into a quick fail, which of course saves reviewer time. —Kusma (talk) 12:49, 26 October 2025 (UTC)
- If there are fake references then it's already a fail for verifiability. This proposal does not save any additional reviewer time. Thryduulf (talk) 12:42, 26 October 2025 (UTC)
- Oppose per IAWW and Thryduulf. All issues arising from AI use are already covered by other criteria, and there are legitimate uses of AI, which should not be prohibited. Kovcszaln6 (talk) 12:33, 26 October 2025 (UTC)
- Oppose. I sympathize with the intent of this RfC but it's the state of the article, not the process by which it got there, that GA criteria should address. Mike Christie (talk - contribs - library) 12:39, 26 October 2025 (UTC)
- Oppose. I agree with this in spirit, but I don't think it would be a useful addition. If a reviewer spots blatant and problematic AI usage (e.g. AI-generated references), almost all would quickfail the article immediately anyway. I can't imagine this proposal saving any additional reviewer time or reducing the handful of flawed articles that slip through that process. But if a nominator used AI for something entirely unproblematic and left an edit summary saying something like "used ChatGPT to change table formatting" or "fixed typos identified by ChatGPT", that would be "obvious evidence of LLM usage" and yet clearly doesn't warrant a quickfail. MCE89 (talk) 12:52, 26 October 2025 (UTC)
- Hmm, if "content" was in the proposed text somewhere, would that assuage your legitimate use thoughts? Kingsif (talk) 14:15, 26 October 2025 (UTC)
- I think that would be slightly better, but I still don't really see what actual problem this proposal is trying to solve. If an article consists of unreviewed or obviously problematic LLM output and contains things like fake references, reviewers aren't going to hesitate to quickfail it (and potentially G15 it) already. I don't see any signs that GAN is currently overwhelmed by AI-generated articles that reviewers just don't have the tools to deal with. And given that lack of a clear benefit, I'm more worried about the potential for endless arguments about process rather than content in the marginal cases (e.g. Can an article be quickfailed if the creator discloses that they used ChatGPT to help copyedit? What if they say they've manually verified and rewritten the LLM output? What is the burden of proof to say that LLM usage is "obvious", e.g. could I quickfail an article solely based on GPTZero?) MCE89 (talk) 15:11, 26 October 2025 (UTC)
- About the problems, I have a lot of thoughts and happy to discuss, perhaps we should move it to the section below? I also assume and hope people take obvious to mean obvious: if it’s marginal, it’s not obvious. Genuine text/code leftovers from copypasting LLM output is obvious, having to ask a different AI isn’t. Kingsif (talk) 15:29, 26 October 2025 (UTC)
- Oppose largely per Thryduulf, except that I don't believe that AI content necessarily violates criterion 1. AI style is often recognisable but if it's well-written then I wouldn't care and we should investigate if the sources were not hallucinated. Fake references (as opposed to incomplete/obscure/not readily available references) should be an instafail reason. Szmenderowiecki (talk) 13:13, 26 October 2025 (UTC)
- Support Per my comments in discussion and here. I also see no objection that couldn’t be quelled by the proposed text already having the qualifier “obvious”: the proposal includes benefit of the doubt, even if I personally would take it much further. Kingsif (talk) 14:12, 26 October 2025 (UTC)
- Support per Kingsif and following the spirit of the guidance contained in WP:HATGPT and adjacently WP:G15. —Fortuna, imperatrix 14:36, 26 October 2025 (UTC)
- Support. On a volunteer-led project, it is an insult to expect a reviewer to engage with the extruded output of a syntax generator and not the work of a human volunteer. I am not interested in debating this; please don't ping me to explain that I'm being a Luddite in holding this view. ♠PMC♠ (talk) 14:52, 26 October 2025 (UTC)
- Oppose per MCE89. LLM use isn't necessarily problematic (even if it often is), and the proposed wording would discourage people from disclosing LLM use in their edit summaries. Anne drew (talk · contribs) 15:28, 26 October 2025 (UTC)
- Weak support -- Did not realize this discussion had been ongoing, I noped out because I was frankly way too exhausted to sisypheanly re-explain things I had already tried to explain. Anyway, I don't object to these criteria per se but this is a really low bar. What I would really support is mandatory disclosure of any AI use, because if AI was used then the spot-checking that is required in GA review is not going to be nearly enough. Nor is the problem really fake sources anymore, the problem is "interpretations" of sources that might not seem worth checking if you don't know what AI text sounds like, but if you do know what AI text sounds like, are huge blaring alarms that the text is probably wrong. Here's an example (albeit for a Featured Article and not a Good Article). All the sources were real, but the text describing the sources was fabricated. And it took me about 15 minutes to zero in on the references that were likely to have issues because I know how LLMs word things; without AI disclosure, reviewers are likely to spot-check the wrong things (as happened here). Gnomingstuff (talk) 17:34, 26 October 2025 (UTC)
- Weak oppose While I fully agree with the intent of this proposal, in practice I am concerned that this is subject to misuse by labeling anything as "AI". I agree with Thryduulf and others that any sort of poorly done AI use (which is almost all of it) will already be failable per the existing GA criteria. I share others' concern about the proliferation of AI generated articles and reviews but I'm not convinced this is the solution. Trainsandotherthings (talk) 18:37, 26 October 2025 (UTC)
- Weak oppose per my comments at WT:GAN. I also agree that LLM-generated articles are problematic, but the existing criteria already cover most of what's proposed - for instance, evidence of persistent failed verification is already a valid reason to quickfail. I'm concerned that a reviewer would use an LLM detector to check an article, the detector would incorrectly say that the article is AI, and the reviewer would fail the nomination on that basis. AI detectors are notoriously unreliable - you can run a really old document, like the United States Declaration of Independence, through an AI detector to see what I'm talking about. (Edit - I would support changing WP:GACR criterion 3 -
It has, or needs, cleanup banners that are unquestionably still valid. These include {{cleanup}}, {{POV}}, {{unreferenced}} or large numbers of {{citation needed}}, {{clarify}}, or similar tags
- to list {{AI-generated}} as an example of a template that would merit a quickfail, since AI articles can already be quickfailed under that criterion. 13:01, 27 October 2025 (UTC)) Epicgenius (talk) 20:18, 26 October 2025 (UTC)- Wrong. I don't use AI detectors, but the best ones achieve ~99% accuracy. The Declaration of Independence is one of the worst possible counterexamples -- no shit, a famous English-language public domain text is all over the training data? Gnomingstuff (talk) 02:24, 27 October 2025 (UTC)
- They have high numbers of both false positives and false negatives. See, for instance, this study:
Looking at the GPT 3.5 results, the OpenAI Classifier displayed the highest sensitivity, with a score of 100%, implying that it correctly identified all AI-generated content. However, its specificity and NPV were the lowest, at 0%, indicating a limitation in correctly identifying human-generated content and giving pessimistic predictions when it was genuinely human-generated. GPTZero exhibited a balanced performance, with a sensitivity of 93% and specificity of 80%, while Writer and Copyleaks struggled with sensitivity. The results for GPT 4 were generally lower, with Copyleaks having the highest sensitivity, 93%, and CrossPlag maintaining 100% specificity. The OpenAI Classifier demonstrated substantial sensitivity and NPV but no specificity.
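For context: sensitivity, specificity, and NPV are standard ratios over a detector's confusion matrix. As a general reminder (these are the textbook definitions, not anything specific to the study quoted above):
\( \text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP}, \qquad \text{NPV} = \frac{TN}{TN + FN} \)
where TP counts AI-generated texts correctly flagged as AI and TN counts human-written texts correctly classified as human. A specificity of 0% therefore means the classifier flagged every human-written sample in the test set as AI-generated.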
- The link you provided says annotators who frequently use LLMs for writing tasks excel at detecting AI-generated text
. This is about human writers detecting AI, not AI detectors detecting AI. That is not what I am talking about. Other studies like this one state that human reviewers have significant numbers of false positives and false negatives when detecting AI: In Gao et al.’s study, blind human reviewers correctly identified 68% of the AI-generated abstracts as generated and 86% of the original abstracts as genuine. However, they misclassified 32% of generated abstracts as real and 14% of original abstracts as generated.
– Epicgenius (talk) 02:57, 27 October 2025 (UTC)- The study also contains a chart comparing the performance of automatic AI detectors such as Pangram, GPTZero, and Binoculars. As you would have noticed if you read it fully. Gnomingstuff (talk) 16:03, 27 October 2025 (UTC)
- If you'd read to the conclusion you'd see
While AI-output detectors may serve as supplementary tools in peer review or abstract evaluation, they often misclassify texts and require improvement.
The limitations section also notes that paraphrasing the AI output significantly decreases the detection rate. This clearly indicates they are not fit for the purpose they would be used for here - especially when the false positive rate is sometimes over 30%. We absolutely cannot afford to tell a third of users that their submission was rejected because they used AI when they didn't actually use AI. Thryduulf (talk) 17:42, 27 October 2025 (UTC)- Seconding what Thryduulf said. I've said my piece, though, so I won't belabor it any further. – Epicgenius (talk) 01:38, 28 October 2025 (UTC)
- Support It's not fair to submit GA checkers to the noxious task of checking everything in a long detailed article for AI problems. Even without a rule, if you see evidence of AI, say so in the review so that everyone can see the AI rabbit hole has been found. Nobody is obligated to go down that warren; note it and pass it by. Heck, make some warning templates or essays so future reviewers understand their obligation. It would take 10+ hours to correctly verify an AI article, since that requires reading all the sources and understanding the topic in depth. -- GreenC 20:52, 26 October 2025 (UTC)
It's not fair to submit GA checkers to the noxious task of checking everything in a long detailed article for AI problems
they don't have to at the moment. If there are problems the review is already failed regardless of whether or not the problems result from AI use. If there are no problems then whether AI was used is irrelevant. Thryduulf (talk) 20:57, 26 October 2025 (UTC)- Also, I should note that if a reviewer finds so many issues that the article requires 10+ hours to fix, it is already acceptable to quickfail based on these other issues. GA is supposed to be a lightweight process; reviewers already can fail articles if they find things like failed verification or issues needing maintenance banners, and determine that the issues can't be reasonably fixed within a week or so. The proposed GA criterion is well-intentioned, but I think focusing on the means of writing the articles, rather than the ends, is not the correct way to go about it. Epicgenius (talk) 22:40, 26 October 2025 (UTC)
- With AI you don't even know errors exist. It took me 7 days once to find all the problems in an AI-generated article. It turned out to have a reasonable-sounding but nationalistic bent supported by errors of omission. How do you know this without research on the topic? This is why so many are against AI: it's incredibly difficult to debug. Normally a nationalistic writer is easy to spot, but AI is such a good liar that not even the operators realize what it is doing. Not to say AI is impossible to use correctly, with a skilled, disciplined, and intellectually honest operator. — GreenC 23:41, 26 October 2025 (UTC)
- I agree, and based on some known AI model biases, any controversial topic (designated or just by common sense) should probably have AI use banned completely. Kingsif (talk) 23:50, 26 October 2025 (UTC)
- I do see, and agree with, the point that you would have to very carefully examine all claims in an article that is suspected of containing AI content. However, WP:GAQF criterion 3 (
It has, or needs, cleanup banners that are unquestionably still valid. These include {{cleanup}}, {{POV}}, {{unreferenced}} or large numbers of {{citation needed}}, {{clarify}}, or similar tags
) already covers this. If an article is suspected of containing AI, and thus deserves (or has) {{AI-generated}}, it is already eligible for a quick fail under QF criterion 3. – Epicgenius (talk) 02:53, 27 October 2025 (UTC)- Epicgenius, yeah that sounds right. Maybe somewhere in the GA rules there could be a reminder about adding
{{AI-generated}} if AI is discovered during the vetting process. Then QF#3 takes effect. — GreenC 06:27, 27 October 2025 (UTC)- That's fair, and I can agree with adding it to QF#3. – Epicgenius (talk) 13:02, 27 October 2025 (UTC)
- Weak oppose While I am against the use of AI in GA and the GAN review process, I think this is a somewhat misguided proposal, as it covers things that would already fall under the quickfail criteria and does not actually identify the scope of the issues (i.e. what is considered obvious evidence of AI use?). I would be able to support a non-redundant and more detailed proposal, but it would need to be more fleshed out than this. IntentionallyDense (Contribs) 02:18, 27 October 2025 (UTC)
- The proposal clearly identifies 'obvious evidence of AI use' as AI-generated references, such as those that can be detected by Headbomb's script, and remnants of AI prompt. Yours, &c. RGloucester — ☎ 04:03, 27 October 2025 (UTC)
- So you want people to quick-fail a nomination on the basis of @Headbomb's script, about which the documentation for the script says it "is not necessarily an issue ("AI, find me 10 reliable sources about Pakistani painter Sadequain Naqqash")". That sounds like a bad idea to me. WhatamIdoing (talk) 06:20, 27 October 2025 (UTC)
- I think I have made my stance on LLM use in relation to good articles very clear. You are free to object as you see fit. Yours, &c. RGloucester — ☎ 07:12, 27 October 2025 (UTC)
- your wording says
6. It contains obvious evidence of LLM use, such as AI-generated references or remnants of AI prompt.
this tells me that you can’t have AI prompts in your writing and no AI-generated references. Okay… so both of those would be covered by the current criteria. It didn’t mention the Headbomb script. And I definitely would not support any quickfail criterion that relies on a user script, especially when the script states “This is not a tool to be mindlessly used.” Also, on what basis from the HB script are we quickfailing? - The current proposal tells me nothing about what is considered suspicious for AI usage outside of the existing quickfail criteria. It gives me no guidance as a reviewer as to what may be AI unless it is blatantly obvious. IntentionallyDense (Contribs) 13:18, 27 October 2025 (UTC)
- WP:AISIGNS is a good start. Gnomingstuff (talk) 20:15, 28 October 2025 (UTC)
- I agree; we do have some pretty solid parameters around what counts as a red flag for AI, but I believe any proposal around policy/guidelines for AI needs to incorporate those and lay out what that looks like before we take action. I would just like an open conversation on what editors think the signs of AI use are; if we can gain some consensus around indications of AI, it will be a lot easier to implement policy on how to deal with those indications.
- My main issue with this proposal is that it completely skipped that first step of gaining consensus about what the scope of the problem is and jumped to implementing measures to resolve said problem that we have not properly reached consensus on. IntentionallyDense (Contribs) 20:26, 28 October 2025 (UTC)
- Oppose. If a GAN is poorly written, it fails the first criterion. If references are made up, it fails the second criterion. If the associated prose does not conform with the references, then it fails the second criterion. We shouldn't be adding redundant instructions to the good article criteria. I don't want new reviewers to be further intimidated by a long set of instructions. Steelkamp (talk) 04:53, 27 October 2025 (UTC)
- Oppose, as I pointed out in other discussions, and as many have already pointed out here, GA criteria should be focused on the result, not the process. But besides that, I see a high anti-AI sentiment in these discussions and fear that if these proposals are approved, they will be abused. Cambalachero (talk) 13:16, 27 October 2025 (UTC)
- Support. Playing whack-a-mole with AI is a problematic time sink for good-faith Wikipedia editors because of the disparity in how much time it takes an AI-using editor to make a mess and how much time it takes the good-faith editors to figure it out and clean it up. This is especially problematic in GA where even in the non-AI cases making a review can be very time consuming with little reward. The proposal helps reduce this time disparity and by doing so helps head off AI users from gaming the system and clogging up the nomination queue, already a problem. —David Eppstein (talk) 17:29, 27 October 2025 (UTC)
- Please rewrite that. By writing "good-faith Wikipedia editors" meaning editors who do not use AI, you are implying that those who do are acting in bad faith. Cambalachero (talk) 17:37, 27 October 2025 (UTC)
- "Consensus-abiding Wikipedia editors" and "editors who either do not know of or choose to disrespect the emerging consensus against AI content" would be too unwieldy. But I agree that many new editors have not yet understood the community's distaste for AI and are using it in good faith. Many other editors have heard the message but have chosen to disregard it, often while using AI tools to craft discussion contributions that insist falsely that they are not using AI. I suspect that the ones who have reached the stage of editing where they are making GA nominations may skew more towards the latter than the broader set of AI-using editors. AGF means extending an assumption of good faith towards every individual editor unless they clearly demonstrate that assumption to be unwarranted. It does not mean falsely pretending the other kind of editor does not exist, especially in a discussion of policies and procedures intended to head off problematic editing. —David Eppstein (talk) 18:31, 27 October 2025 (UTC)
- Support. People arguing that any article containing such things would ultimately fail otherwise are missing the point. The point is to make it an instant failure so further time doesn't need to be wasted on it - otherwise, people would argue e.g. "oh, that trace of a prompt / single hallucinated reference is easily fixed, it doesn't mean the article as a whole isn't well-written or passes WP:V. There, I fixed it, now continue the GA review." One bad sentence or one bad ref isn't normally an instant failure; but in a case where it indicates that the article was poorly generated via AI, it should be, since it means the entire article must be carefully reviewed and, possibly, rewritten before GA could be a serious consideration. Without that requirement, large amounts of time could be wasted verifying that an article is AI slop. This is especially true because the purpose of existing generative AI is to create stuff that looks plausible at a glance - it will often not be easy to demonstrate that it is
a long way from meeting any one of the six good article criteria
, wasting editor time and energy digging into material that had little time and effort put into it in the first place. That's not a tenable situation; once there is evidence that an article was badly-generated with AI, the correct procedure is to immediately terminate the GA assessment to avoid wasting further time, and only allow a new one once there is substantial evidence that the problem has been addressed by in-depth examination and improvement. Determining whether an articleshould pass or fail based only and strictly only on the quality of the article
is a laborious, time-intensive process; it is absolutely not appropriate to demand that an article be given that full assessment once there's a credible reason to believe that it's AI slop. That's the entire point of the quickfail criteria - to avoid wasting everyone's time in situations where a particular easily-determined criterion makes it glaringly obvious that the article won't pass. --Aquillion (talk) 19:46, 27 October 2025 (UTC)- Bravo, Aquillion! You explained my rationale for this proposal better than I could have done. I am much obliged. Yours, &c. RGloucester — ☎ 23:55, 27 October 2025 (UTC)
- Support in principle, although perhaps I'd prefer such obvious tells in the same criterion as the copyvio one. Like copyvio, the problems might not be immediately apparent, and like copyvio, the problems can be a headache to fix. LLM problems are possibly even much more of a timesink; checking through and potentially cleaning up LLM stuff is not a good use of reviewer time. This QF as proposed will only affect the most blatant signals that LLM text was not checked, which has its positives and negatives but is worth noting when thinking about the proposal. CMD (talk) 01:34, 28 October 2025 (UTC)
- Support. Deciding on the accuracy and relevance of every LLM's output is not sustainable on article talkpages or in articles. Sure, it could produce something passable, but there is no way to be sure without unduly wasting reviewer time. They're designed to generate text faster than any human being can produce or review it and designed in such a way as to make fake sources or distorted information seem plausible.--MattMauler (talk) 19:34, 28 October 2025 (UTC)
- Support: per Aquillion, whose reasoning matches my own thoughts exactly. fifteen thousand two hundred twenty four (talk) 21:12, 28 October 2025 (UTC)
- Support AI is killing our planet (to an even worse extent than other technologies) and we need to strongly discourage its use. JuxtaposedJacob (talk) | :) | he/him | 00:46, 29 October 2025 (UTC)
- Oppose This proposal is far too broad. This would mean that an article with a single potentially hallucinated reference (that may not have even been added by the nominator) would be quickfailed. Nope. voorts (talk/contributions) 01:19, 29 October 2025 (UTC)
- If an article has even a single hallucinated reference, it should be quickfailed, as that means the nominator has failed to do the bare minimum of due diligence. CaptainEek Edits Ho Cap'n!⚓ 19:28, 2 November 2025 (UTC)
- That's why I said potentially hallucinated. I'm worried this will be interpreted broadly by some and result in real, but hard-to-find, sources being deemed hallucinated. Also, sometimes editors other than the nominator edit an article in the months between nomination and review. We shouldn't penalize such editors with a quickfail over just one reference that they may not have added. voorts (talk/contributions) 19:34, 2 November 2025 (UTC)
- Comment Here's a list of editors who have completed a GAN review. voorts (talk/contributions) 01:29, 29 October 2025 (UTC)
- "Whoever wants to know a thing has no way of doing so except by coming into contact with it, that is, by living (practicing) in its environment. ... If you want knowledge, you must take part in the practice of changing reality. If you want to know the taste of a pear, you must change the pear by eating it yourself.... If you want to know the theory and methods of revolution, you must take part in revolution. All genuine knowledge originates in direct experience." – Mao Zedong
- Editors who have never done a GA review or who have done very few should consider that they may not have adequate knowledge to know what GAN reviewers want/need as tools. It seems to me like a lot of support for this is a gut reaction against any AI/LLM use, and I don't think that's a good way to make rules. voorts (talk/contributions) 15:48, 29 October 2025 (UTC)
- I like the way you’ve worded this, as it is my general concern as well. While I’m not too high up on that list, I’ve done 75-ish reviews and have never encountered AI usage. I know it exists and do see it as a problem; however, I don’t feel it deserves such a hurried reaction to create hard-and-fast rules. I would much prefer we take the time to properly flesh out a plan to deal with these issues, one that involves community input from a range of experiences and reviewers on the scope of the problem, how we should deal with it, and to what extent. IntentionallyDense (Contribs) 20:32, 29 October 2025 (UTC)
- Yes. Recently, I've noticed a lot of editors rushing to push through new PAGs without much discussion or consideration of the issues beforehand. It's not conducive to good policymaking. voorts (talk/contributions) 20:35, 29 October 2025 (UTC)
- I echo this sentiment. In my 100+ reviews done in the last year I have only had a few instances where I suspected AI use, and I can't think of any that had deep rooted issues clearly caused by AI. IAWW (talk) 22:10, 29 October 2025 (UTC)
- This rule would’ve been useful years ago, when we had users who really wanted to contribute but couldn’t write well enough: their primitive chatbot text was poor, they were unable to fix it, and keeping a review open to go through everything was the response, because they didn’t want to close it and insisted it just needed work. As gen-AI use is only increasing, addressing the situation before it gets that bad is a good thing. Kingsif (talk) 19:13, 30 October 2025 (UTC)
- Cool, but I am easily the highest up that list (which doesn’t count the reviews I did before it, or took over after) of everyone in this discussion, so your premise is faulty. Kingsif (talk) 19:07, 30 October 2025 (UTC)
- I don't think my premise is faulty. I never said everyone who does GAN reviews needs to think the same way, nor do I believe that, and I see that you and other experienced GAN reviewers disagree with me. My point was that editors who have never done one should consider whether they have enough knowledge to make an informed opinion one way or the other. voorts (talk/contributions) 19:12, 30 October 2025 (UTC)
- While you didn’t speak in absolutes, your premise was based in suggesting the people who disagree with you aren’t aware enough. Besides being wrong, you must know it was unnecessary and rather unseemly to bring it up in the first place: this is a venue for everyone to contribute. Kingsif (talk) 19:19, 30 October 2025 (UTC)
- That wasn't my premise. I just told you what my premise is and I stand by it. I felt like it needed to be said in this discussion because AI/LLM use is a hot button issue and we should be deliberative about how we handle it on wiki. If editors who have never handled a GAN review want to ignore me, they can. As you said, anyone can participate here. voorts (talk/contributions) 19:51, 30 October 2025 (UTC)
- Forgive me for disagreeing with your point, then, but I don’t think it even really requires editing experience in general to have an opinion on “should we make people waste time explaining why gen AI content doesn’t get a Good stamp or just let them say it doesn’t” Kingsif (talk) 20:11, 30 October 2025 (UTC)
- Fair enough. No need to apologize. I'm always open to disagreement. voorts (talk/contributions) 20:39, 30 October 2025 (UTC)
- Oppose. The proposal lacks clarity in definitions and implementation, and the solution is ill-targeted to the problems raised in this and the preceding discussion. Editors have stated that the rationale for new quick fail criteria is to save time. On the other hand, editors have said it takes hours to verify hallucinated references, and editors disagree vehemently about the reliability of subjective determinations of AI writing or use of AI detectors. Others have stated that it is already within the reviewer's purview to quick fail an article if they determine that too much time is required to properly vet the article. It is not clear how reviewers will determine that an article meets the proposed AI quick fail criterion, how long this will take, or that a new criterion is needed to fail such articles. Editors disagree about which signs of AI writing are "obvious" and as to whether all obvious examples are problematic. The worst examples would fail anyway, and seemingly without requiring hours to complete the review, so again it is unclear that this new criterion addresses the stated problem. Editors provided examples of articles with problematic, (allegedly) AI-generated content that have passed GA. New quick fail criteria would not address these situations, where the reviewer apparently did not find the article problematic while another felt the problems were "obvious". Reviewers who are bad at detecting AI writing, or who don't verify sources, or whatever the underlying deficit is, won't invoke the new quick fail criterion and won't stop AI slop from attaining GA status.—Myceteae🍄🟫 (talk) 01:42, 29 October 2025 (UTC)
- Support in the strongest possible terms. This is half practical, and half principle: the principle being that LLM/AI has no place on Wikipedia. Yes, there may be some, few, edge cases where AI is useful on Wikipedia. But one good apple in a barrel of bad apples does not magically make the place that shipped you a barrel of bad apples a good supplier. For people who want an LLM-driven encyclopedia, Grokipedia is thataway →. For people who want an encyclopedia actually written by and for human usage, the line must be drawn here. - The Bushranger One ping only 01:53, 29 October 2025 (UTC)
- Oppose per IntentionallyDense and because detecting AI generation isn't always "obvious", and because the nom's proposed method for detecting LLM use to generate the article's contents will also flag people who use (e.g.) ChatGPT as a web search engine without AI generating even a single word in the whole article. Also: if you want to make any article look very suspicious, then spam
?utm_source=chatgpt.com at the end of every URL. The "AI detecting" script will light up every source on the page as being suspicious, because it's not actually detecting AI use; it's detecting URLs with some referral codes (see the sketch below). I might support adding {{AI generated}} to the list of other QF-worthy tags. WhatamIdoing (talk) 02:09, 29 October 2025 (UTC) - Oppose. If it's WP:G15-level, G15 it (no need to quickfail). Otherwise, we shouldn't go down the rabbit hole of unprovable editor behaviour and should focus on the actual quality of the article in front of us. If it has patently non-neutral language or several things fail verification, it can already be quick-failed as being a long way from the criteria. ~ L 🌸 (talk) 07:01, 29 October 2025 (UTC)
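As a minimal illustration of the referral-code point WhatamIdoing makes above, the sketch below (hypothetical Python, not Headbomb's actual script, which is a JavaScript user script) shows how a check of this kind works: it flags a chatbot UTM tag in a cited URL, which proves only that someone copied a link out of a chatbot, not that any article prose was AI-generated.

# Hypothetical sketch, not Headbomb's actual script: flag citation URLs that
# carry a chatbot referral parameter such as utm_source=chatgpt.com.
from urllib.parse import urlparse, parse_qs

def has_chatbot_referral(url: str) -> bool:
    """Return True if the URL's utm_source parameter points at a chatbot."""
    params = parse_qs(urlparse(url).query)
    return any("chatgpt.com" in value for value in params.get("utm_source", []))

print(has_chatbot_referral("https://example.org/paper?utm_source=chatgpt.com"))  # True
print(has_chatbot_referral("https://example.org/paper"))                         # False

As the spam example above shows, such a parameter can be added or stripped by anyone, so its presence or absence says nothing reliable about how the prose itself was produced.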
- Oppose per MCE89. If you use an LLM to generate text and then use it as the basis for creating good, properly verified content, who cares? It's not as if a reviewer has to check every single citation — if you find one that's nonexistent, that alone should be sufficient to reject the article. Stating "X is Y"<ref>something</ref>, when "something" doesn't say so or doesn't even exist, is a hoax, and any hoax means that the article is a long way from meeting the "verifiable with no original research" criterion. And if we encounter "low effort usage of AI", that's certainly not going to pass a GA review. And why should something be instantly failed just because you believe that it's LLM-generated? Solidly verifying that something is automatically written — not just a high suspicion, but solidly demonstrating it — will take more work than checking some references, and as Whatamidoing notes, it's very difficult to identify LLM usage conclusively; we shouldn't quick-fail otherwise good content just because someone incorrectly thinks that it was automatically written. I understand that LLMs tend to use em dashes extensively. I've always used them a lot more than the average editor does; this was the case even when I joined Wikipedia 19 years ago, long before LLMs were a problem this way. Nyttend (talk) 10:44, 29 October 2025 (UTC)
- Support per PMC, David Eppstein and Aquillion. A lot of the opposes look to me like they are completely missing the point of a useful practical measure over irrelevant theoretical concerns. I also do find it absolutely insulting to not give reviewers every possible tool to deal with this trash, making them waste precious time and effort to needlessly satisfy another of the existing criteria. Choucas0 🐦⬛⋅💬⋅📋 15:25, 29 October 2025 (UTC)
I also do find it absolutely insulting to not give reviewers every possible tool to deal with this trash, making them waste precious time and effort to needlessly satisfy another of the existing criteria.
I've reviewed a lot of GAs and oppose this because it's vague and a solution in search of a problem. I see that you've completed zero GAN reviews. voorts (talk/contributions) 15:44, 29 October 2025 (UTC)- You are entitled to your opinion, but so am I, and I honestly do not see what such a needlessly acrimonious answer is meant to achieve here. The closer will be free to weigh your opposition higher than my support based on experience, but in the meantime that does not entitle you to gate-keep and belittle views you disagree with because you personally judge them illegitimate. Choucas0 🐦⬛⋅💬⋅📋 15:58, 29 October 2025 (UTC)
- You are entitled to your opinion. But when your opinion is based on the fact that something is insulting to a group to which I belong, I am entitled to point out that you're not part of that group and that you're not speaking on my behalf. I don't see how it's
acrimonious
orgate-keep[ing]
orbelittl[ing]
to point out that fact. voorts (talk/contributions) 16:06, 29 October 2025 (UTC)- That is not what my opinion is based on (the first half of my comment pretty clearly is), and I did not mean to speak on anyone's behalf; I apologize if it was not clearer, since it is something that I aim to never do. I consider being exposed to raw LLM output insulting to anyone on this site, so I hope what I meant is clearer now. On another hand, your comment quoting Mao Zedong immediately after your first answer to me clearly shows that you do intend to gate-keep this discussion at large, so you will forgive me for being somewhat skeptical and not engaging further. Choucas0 🐦⬛⋅💬⋅📋 16:31, 29 October 2025 (UTC)
- I'm not sure how pointing out that editors should think before they opine on something with which they have little to no experience is a form of gatekeeping. That's why I didn't say "those editors can't comment" in this discussion. It's a suggestion that people stop and think about whether they actually know enough to have an informed opinion. voorts (talk/contributions) 16:44, 29 October 2025 (UTC)
- Oppose per LEvalyn. Like WhatamIdoing, I would rather treat {{AI generated}} as reason to quick-fail under GA Criteria 1 and 2. ViridianPenguin🐧 (💬) 15:35, 29 October 2025 (UTC)
- I've seen a couple people suggest this, and... I don't really get how this is different at all? Anything under the proposed criterion can be tagged as AI-generated already, this would just be adding an extra step. Gnomingstuff (talk) 20:25, 31 October 2025 (UTC)
- Oppose. If it has remnants of a prompt, that's already WP:G15. If the references are fake, that's already WP:G15. If it's not that bad, further review is needed and it shouldn't be QF'd. If AI-generated articles are being promoted to GA status without sufficient review, that means the reviewer has failed to do their job. Telling them their job is now also to QF articles that have signs of AI use won't help them do their job any better - they already didn't notice it was AI-generated. -- asilvering (talk) 15:56, 29 October 2025 (UTC)
- Oppose. The article should be judged on its merits and its quality, not the manner or methods of its creation. The judgment should be based only on its quality. Any AI-generated references will fail criterion 2. If the AI-generated text is a copyright violation, it would be an instant failure as well. We don't need to write up new rules for things that are forbidden in the first place anyway. Another concern for me is the term "obvious". While there may be universal agreement that some AI slop is obviously AI ("This article is written for your request...", "Here is the article..."), some might not be obvious to other people. The use of em dashes might not be an obvious sign of AI use, as some ESL writers use them as well. The term "obvious" is vague and will create problems. Obvious AI slop can be dealt with via G15 as well. ✠ SunDawn ✠ Contact me! 02:55, 30 October 2025 (UTC)
- Support - too much junk at this point to be worthwhile. Readers come here exactly because it is written by people and not Grokipedia garbage. We shouldn't stoop to that level. FunkMonk (talk) 13:38, 30 October 2025 (UTC)
- Support If there is an obvious trace of LLM use in the article and you are the creator, then you have no business being anywhere near article creation. If you are the nominator, then you have failed to apply a basic level of due diligence. Either way the article will have to be gone over with a fine-tooth comb, and should be removed from consideration. --Elmidae (talk · contribs) 13:54, 30 October 2025 (UTC)
- Support Per nom. LLM-generated text has no place on Wikipedia. The Morrison Man (talk) 13:59, 30 October 2025 (UTC)
- Support GAN is not just a quality assessment – it also serves as a training ground for editors. LLM use undermines this; using LLMs just will not lead to better editors. As a reviewer, I refrain from reading anything that is potentially AI generated, as it is simply not worth my time. I want to help actual humans with improving their writing; I am not going to pointlessly correct the same LLM mistakes again and again, which is entirely meaningless. LLM use should be banned from Wikipedia entirely. --Jens Lallensack (talk) 15:59, 30 October 2025 (UTC)
- Oppose. The Venn-diagram overlap of "editors who use LLMs" and "editors who are responsible enough to be trusted to use LLMs responsibly" is incredibly narrow. It would not surprise me if 95% of LLM usage shouldn't merely be quickfailed, but actively rolled back. That said, just because most editors cannot be trusted to use it properly does not mean it is completely off the table - using an LLM to create a table in source markup from given input, say, is fine. Additionally, AI accusations can prove a "witch hunt" where, just because an editor's writing style includes em dashes or bold, it gets an AI accusation - even though real textbooks may often also use bolding and em dashes and everything too! If a problematic LLM article is found, it can still be quick-failed on criterion 1 (if the user wrote LLM-style rather than Wikipedia-style) or criterion 2 (if the user used the LLM for content without triple-verifying everything against real sources they had access to). We don't need a separate criterion for those cases. SnowFire (talk) 18:40, 30 October 2025 (UTC)
- Oppose – either the issues caused by AI make an article
a long way from meeting any one of the six good article criteria
, in which case QF1 would apply, or they do not, in which case I believe a full review should be done. With the current state of LLMs, any article in the latter category will be one that a human has put significant work into. Some editors would dislike reviewing these nominations, but others are willing; I think making WP:LLMDISCLOSE mandatory would be a better solution. jlwoodwa (talk) 04:20, 31 October 2025 (UTC)- I would also fully support mandatory LLM disclosure IAWW (talk) 08:58, 31 October 2025 (UTC)
- But wouldn't those reviewers that are possibly willing to review an LLM generated article be primarily those that use LLMs themselves, have more trust in them, and probably even use them for their review? A situation where most LLM-generated GAs are reviewed by LLMs does not sound healthy. --Jens Lallensack (talk) 12:00, 31 October 2025 (UTC)
- I think that's a stretch. I've used an LLM to create two articles, but wouldn't trust it to review an article against GAN criteria. ScottishFinnishRadish (talk) 12:15, 31 October 2025 (UTC)
- LLM usage is a scale. It is not as black-and-white as those who use LLMs vs those who don't. I am of the opinion that LLMs should only be used in areas where their error rate is less than humans. In my opinion LLMs pretty much never write adequate articles or reviews, yet they can be used as tools effectively in both. IAWW (talk) 13:22, 31 October 2025 (UTC)
- Oppose - Redundant. Stikkyy t/c 11:53, 31 October 2025 (UTC)
- Oppose. GA is about assessing the quality of the article, not about dealing with prejudice toward any individual or individuals. If the article is bad (poorly written, biased, based on rumour rather than fact, with few cites to reliable sources), it doesn't matter who has written it. Equally, if an article is good (well written, balanced, factual, and well cited to reliable sources), it doesn't matter who has written it, nor what aid(s) they used. Let's assess the content, not the contributor. SilkTork (talk) 12:15, 31 October 2025 (UTC)
- Oppose. Focuses too much on the process rather than on the end result. Also, the vagueness of 'obvious' lays the ground for after-the-event arguments on such things as "I already know this editor uses LLMs in the background; the expression 'stands as a ..' appears, and that's an obvious LLM marker". MichaelMaggs (talk) 18:20, 31 October 2025 (UTC)
I already know this editor uses LLMs in the background
- How is this not a solid argument? Gnomingstuff (talk) 00:57, 3 November 2025 (UTC)
- Support per Aquillion. Nikkimaria (talk) 18:50, 1 November 2025 (UTC)
- Support GA is a mark of quality. If you read something and you can obviously tell it is AI, that does not meet our standards of quality. Florid language, made-up citations, obvious formatting errors a human wouldn't make - whatever it is that indicates clear AI use, that doesn't meet our standards. Could we chalk that up to failing another criterion? Maybe. But it's nice to have a straightforward box to check to toss piss-poor AI work out - and to discourage the poor use of AI. CaptainEek Edits Ho Cap'n!⚓ 19:34, 2 November 2025 (UTC)
- Support In my view, if someone is so lazy that they generate an entire article to nominate without actually checking if it complies with the relevant policies and guidelines, then their nomination is not worth considering. Reviews are already a demanding process, especially nowadays. Why should I or anyone else put in the effort if the nominator is not willing to also put in the effort? Lazman321 (talk) 03:10, 3 November 2025 (UTC)
- This proposal would impact those people, but it would also speedily fail submissions by people who do (or are suspected of) using LLMs but who do put in the effort to check that the LLM-output complies with all the relevant policies and guidelines. For example:
- Editor A uses an LLM to find a source, verifies that that source exists, is reliable, and supports the statement it is intended to support but doesn't remove the associated LLM metadata from the URL. This nomination is speedily failed, despite being unproblematic.
- Editor B uses an LLM to find a source, verifies that that source exists, is reliable, and supports the statement it is intended to support, and removes the associated LLM metadata from the URL. This nomination is speedily failed if someone knows or suspects that an LLM was used, and accepted if someone doesn't know or suspect LLM use, despite the content being identical and unproblematic.
- Editor D finds a source without using an LLM, verifies that that source exists, is reliable, and supports the statement it is intended to support. This nomination is accepted, even though the content is identical in all respects to the preceding two nominations.
- Editor D adds a source, based on a reference in an article they don't know is a hoax without verifying anything about the source. The reviewer AGFs that the offline source exists and does verify the content (no LLMs were used so there is no need to suspect otherwise) and so the article gets promoted.
- Please explain how this benefits readers and/or editors. Thryduulf (talk) 04:36, 3 November 2025 (UTC)
- I am nitpicking, but you got two "Editor D" there. ✠ SunDawn ✠ Contact me! 00:55, 4 November 2025 (UTC)
- Whoops, the second should obviously be Editor E (I changed the order of the examples several times while writing it, obviously I missed correcting that). Thryduulf (talk) 01:23, 4 November 2025 (UTC)
- Oppose "Obvious" is subjective, especially if AI chatbots become more advanced than they are now and are able to speak in less stilted language. Furthermore, either we ban all AI-generated content on Wikipedia, or we allow it anywhere, this is just a confusing half-measure. (I am personally in support of a total ban, since someone skilled enough to proofread the AI and remove all hallucinations/signs of AI writing would likely just write it from scratch, it doesn't save much time). Or if it did, they'd still avoid it out of fear of besmirching their reputation given the sheer amount of times AI is abused. ᴢxᴄᴠʙɴᴍ (ᴛ) 11:31, 3 November 2025 (UTC)
- A total ban of AI has not gained consensus, in part because there are few 'half-measures' in place that would be indicative that there is a widespread problem. The AI image ban came only after a BLP image ban, for example. CMD (talk) 11:53, 3 November 2025 (UTC)
- Adding hypocritical half-measures just to push towards a full ban would be "disrupting Wikipedia to make a point". As long as it's allowed, blocking it in GAs would make no sense. It's also likely that unedited AI trash will be caught by reviewers anyway because it's incoherent, even before we get to the AI criterion. ᴢxᴄᴠʙɴᴍ (ᴛ) 15:46, 4 November 2025 (UTC)
- I'm not sure where the hypocrisy is in the proposal. Whether reviewers will catch unedited AI trash is also not affected by the proposal, the proposal provides a route for action following the catch of said text. CMD (talk) 16:01, 4 November 2025 (UTC)
- Support - I think at some point LLMs like to cite Wikipedia whenever they spit out an essay or any kind of info on a given topic. Then an editor will paste this info into the article, which the AI will cite again, and Wikipedia articles will basically end up ouroboros'd. User:shawtybaespade (talk) 12:01, 3 November 2025 (UTC)
- Support Aquillion's comment above expresses my view very well. Stepwise Continuous Dysfunction (talk) 20:11, 3 November 2025 (UTC)
- Support. Obviously a sensible idea. Stifle (talk) 21:15, 3 November 2025 (UTC)
- There is extensive explanation above of why this is not a good proposal, so this comment just indicates you haven't read anything of the discussion, which is something for the closer to take note of. Thryduulf (talk) 21:25, 3 November 2025 (UTC)
- I urge everyone not to make inferences about what others have read. The wide diversity of opinions makes it clear that different editors find different arguments compelling, even after reading all of them. isaacl (talk) 23:58, 3 November 2025 (UTC)
- If Stifle had read and thought about any of the comments on this page it would be extremely clear that it is not "obviously" a sensible idea. Something that is "obviously" a sensible proposal does not get paragraphs of detailed explanation about why it isn't sensible from people who think it goes too far and from those who think it doesn't go far enough. Thryduulf (talk) 01:27, 4 November 2025 (UTC)
- Oppose I feel the scope that this criterion would cover is already made redundant by the other criteria (see Thryduulf's !vote). Additionally, I am concerned that this will raise false positives for those whose writing style is too close to what an LLM could generate. Gramix13 (talk) 23:02, 3 November 2025 (UTC)
- Support, per the views expressed by Aquillion. Furthermore, I would rather not see Wikipedia become a Grokipedia. Lf8u2 (talk) 01:41, 4 November 2025 (UTC)
- Grokipedia is uncontrolled AI slop where no one can control the content (except for Elon Musk and his engineers). Wikipedia's current rules are enough to stop such a travesty without adding this quickfail category. GA criteria #1 and #2 are more than enough to stop the AI slop. G15 is still there as well. No need to put rules on top of other rules. ✠ SunDawn ✠ Contact me! 04:37, 4 November 2025 (UTC)
- Oppose For the statements made above and in the discussion that the failures of AI (hallucination) are easily covered by criteria 1 and 2. But, additionally, because I am not confident that AI is easily detected. AI-detector tools are huge failures, and my own original works on other sites have been labeled AI in the past when they're not. So I personally have experience being accused of using AI when I know my work is original, all because I use em dashes. And since AI is only going to improve and become even harder to detect, this criterion is most likely going to be used to give false confidence to over-eager reviewers ready to quick-fail based on a hunch. Terrible idea.--v/r - TP 01:40, 4 November 2025 (UTC)
- Support after consideration. I do not love how the guideline is currently written - I think all criteria for establishing "obvious" LLM use should be defined. However, I would rather support and revise than oppose. Seeing multiple frequent GA reviewers !vote support also suggests there is a gap with the current QF criteria. NicheSports (talk) 04:58, 4 November 2025 (UTC)
- Support, the fact that people are starting to write like AI/bots/LLMs means that "false positives" will be detecting (in some cases) users who are too easily influenced by what they are reading. Let's throw those babies out with the bathwater. Abductive (reasoning) 05:16, 4 November 2025 (UTC)
- It's literally the opposite of that. LLMs and GenAI are trained on human writing; they mimic human writing, not the other way around. And are you suggesting banning users for writing in similar prose to the highly skilled published authors that LLMs are trained on? What the absolute fuck?!?--v/r - TP 15:21, 4 November 2025 (UTC)
- The Washington Post says "It’s happening: People are starting to talk like ChatGPT", with the subheading "Unnervingly, words overrepresented in chatbot responses are turning up more in human conversation." Abductive (reasoning) 06:20, 5 November 2025 (UTC)
- LLMs are trained on highly skilled published authors? Pull the other one, it's got bells on. I didn't know highly skilled published authors liked to delve into things with quite so many emojis. Cremastra (talk · contribs) 15:26, 4 November 2025 (UTC)
- Yes, LLMs are trained on published works. Duh.--v/r - TP 00:47, 5 November 2025 (UTC)
- Yeah, I know that. Dial down the condescension. But they're trained on all published works, including plenty of junk scraped from the internet. Most published works aren't exactly Terry Pratchett-quality either. Cremastra (talk · contribs) 00:56, 5 November 2025 (UTC)
- You want me to dial down the condescension on a request that anyone whose prose is similar to that of the material the AI is trained on, including published works, be banned? Did you read the top level comment that I'm being snarky to?--v/r - TP 00:59, 5 November 2025 (UTC)
- I did, and it isn't relevant here. What's relevant is your misleading claim that AI writing represents the best-quality writing humanity has to offer and is acceptable to be imitated. In practice, it can range from poor to decent, but rarely stellar. Cremastra (talk · contribs) 01:07, 5 November 2025 (UTC)
- First off - I made no such claim. I said AI is trained on some of the best-quality writing humanity has to offer. Don't put words in my mouth. Second off - even if I did, calling for a ban on users who contribute positively because their writing resembles AI is outrageous. Get your priorities straight or don't talk to me.--v/r - TP 22:17, 5 November 2025 (UTC)
- Modern LLMs are trained on very large corpora, which include everything from high-quality to low-quality writing. And even if one were trained exclusively on high-quality writing, that wouldn't necessarily mean its output is also high-quality. But I agree that humans picking up speech patterns from LLMs doesn't make them incompetent to write an encyclopedia. jlwoodwa (talk) 22:30, 5 November 2025 (UTC)
- Support per Aquillion and Lf8u2, most editors I assume would not want Wikipedia to become Grokipedia, a platform of AI slop. LLMs such as Grok and ChatGPT write unencyclopedically or unnaturally and cite unreliable sources such as Reddit. Alexeyevitch(talk) 07:55, 4 November 2025 (UTC)
- Users !opposed to this proposal are not supportive of AI slop or a 'pedia overrun by AIs. It's just a bad proposal.--v/r - TP 15:22, 4 November 2025 (UTC)
- We can either wait for the 'perfect' proposal, which may never come, or try something like this, so as to have some recourse. It has been years since ChatGPT arrived. If there are some problems that arise with this criterion in actual practice, they can be dealt with by modifying the criterion through the usual Wikipedia process of trial and error. The point is that there is value merely in expressing Wikipedia's stance on AI in relation to good articles. I hope you can understand that users who support this proposal think something is better than nothing, which is the current state of affairs. Yours, &c. RGloucester — ☎ 22:02, 4 November 2025 (UTC)
- There is already something. 1) That sources in a GA review are verified to support the content, and 2) That it follows the style guide. What does this new criterion add that isn't already captured by the first two?--v/r - TP 00:48, 5 November 2025 (UTC)
- Adding this criterion will make clear what is already expected in practice. Namely, that editors should not waste reviewer time by submitting unreviewed LLM-generated content to the good articles process, as Aquillion wrote above. It is true that the other criteria may be able to be used to quick-fail LLM-generated content. This is also true of articles with copyright violations, however, which could logically be failed under 1 or 3, but have their own quick-fail criterion, 2. I would argue that the purpose of criterion 2 is equivalent to the purpose of this new, proposed criterion: namely, to draw a line in the sand. The heart of the matter is this: what is the definition of a good article on Wikipedia? What does the community mean when it adds a good article tag to any given article? Adding this criterion makes clear that, just as we do not accept copyright violations, even those that are difficult to identify, like close paraphrasing, we brook no slapdash use of LLMs. Yours, &c. RGloucester — ☎ 01:34, 5 November 2025 (UTC)
- I disagree that the quickfail criterion, as proposed, would make that clear. Not all
obvious evidence of LLM use
is evidence of unreviewed LLM use. jlwoodwa (talk) 01:43, 5 November 2025 (UTC)
- Any 'successful' use of LLMs, if there can be such a thing, should leave no trace behind in the finished text. If the specified bits of prompt or AI-generated references are present, that is evidence that whatever 'review' may have been conducted was insufficient to meet the expected standard. Yours, &c. RGloucester — ☎ 07:25, 5 November 2025 (UTC)
- If someone verifies that an article's references exist and support the claims they're cited for, I would call that a sufficient review of those references, whether or not there are UTM parameters remaining in the citation URLs. jlwoodwa (talk) 07:47, 5 November 2025 (UTC)
- No, because not only the references would need to be checked. If such 'obvious evidence' of slapdash AI use is in evidence, the whole article will need to be checked for hallucinations, line-by-line. Yours, &c. RGloucester — ☎ 07:53, 5 November 2025 (UTC)
- Shit flow diagram and Malacca dilemma have obvious evidence of LLM use, in that my edit summaries make it clear I used an LLM, and I've disclosed that on the talk page. Is that sufficient for a quickfail? ScottishFinnishRadish (talk) 11:45, 5 November 2025 (UTC)
- Unfortunately I think it might be. Malacca dilemma, for example, claims a pipeline has been "operational since 2013" using a source published in 2010 (and that discusses October 2009 in the future tense); in fact most of that paragraph seems made up around the bare bones of the potential for these future pipelines being mentioned. I assume the LLM is drawing from other mentions of the pipelines somewhere in its training data, or just spinning out something plausible. Another LLM issue is that when actually maintaining the transfer of source content into article prose, such as the first paragraph of background, it can be quite CLOPpy. CMD (talk) 11:59, 5 November 2025 (UTC)
- That's actually found in Chen, but with 2013 given as the planned operational date.
China-Myanmar | 2000 | Kunming | 20 | for 2013 | 1.5 (fragment of a table row quoted from Chen)
I've wikilinked the article on the pipeline and added another source for becoming operational in 2013. That was definitely an issue, but would you call that a quick fail? ScottishFinnishRadish (talk) 12:12, 5 November 2025 (UTC)
Oil Pipeline 30 years... Construction of the two pipelines will begin soon and is expected to be completed by 2013. China National Petroleum Corporation (CNPC), the largest oil and gas company in China, holds 50.9 per cent stake in the project, with the rest owned by the Myanmar Oil and Gas Enterprise (MOGE). (Sudha, 2009)
- What's found in Chen are parts of the text, the bare bones I mention above. The rest of the issues with that paragraph remain, and it is the presence of many of these issues, especially with the way LLMs work by producing words that sound right whether actual information or not, that is the problem. CMD (talk) 12:35, 5 November 2025 (UTC)
- Oppose as WP:CREEP—bad GA noms should be failed for being bad, not specifically for using AI. – Closed Limelike Curves (talk) 01:23, 6 November 2025 (UTC)
- Oppose as discussed above by Myceteae. Adumbrativus (talk) 04:03, 6 November 2025 (UTC)
- Support: Yes, this is redundant, but it also saves time, and discourages editors who have used AI from submitting their (really, the LLM's) articles at GAN. --not-cheesewhisk3rs ≽^•⩊•^≼ ∫ (pester) 10:08, 9 November 2025 (UTC)
- Support a change in the status quo, not necessarily this proposal but a similar one. FaviFake (talk) 17:17, 11 November 2025 (UTC)
Discussion (GA quick fail)
[edit]
- I sometimes use AI as a search engine and link remnants are automatically generated. I'd rather not face quickfail for that. I'm also not seeing how the existing criteria are not sufficient; if links are fake or clearly don't match the text, that is already covered under a quickfail as being a long way from demonstrated verifiability. Can a proponent of this proposal give an example of an article they would be able to quickfail under this that they can't under the current criteria? Rollinginhisgrave (talk | contributions) 10:47, 26 October 2025 (UTC)
- The purpose of this proposal is to draw a line in the sand, to preserve the integrity of the label 'good article', and make clear where the encyclopaedia stands. Yours, &c. RGloucester — ☎ 12:55, 26 October 2025 (UTC)
- In a nutshell the difference is that with AI-generated text, every single claim and source must be carefully checked, and not just for the source's existence; GA only requires spot-checking a handful. The example I gave above was a FA, not GA, but it's basically the same thing. Gnomingstuff (talk) 17:56, 26 October 2025 (UTC)
- Thank you for this example, although I'm not sure how it's applicable here as it wouldn't fall under "obvious evidence of LLM use". At what point in seeing edits like this are you invoking the QF? Rollinginhisgrave (talk | contributions) 21:23, 26 October 2025 (UTC)
- The combination of "clear signs of text having been written by AI..." plus "...and there are multiple factual inaccuracies in that text." Or in other words, obvious evidence (#1) plus problems that suggest that the output wasn't reviewed well/at all (#2). Gnomingstuff (talk) 02:20, 27 October 2025 (UTC)
- I've spent some time thinking about this. Some thoughts:
- What you describe as obviously AI is very different to what RGloucester describes here, which makes me concerned about reading any consensus for what proponents are supporting.
- I would describe what you encountered at La Isla Bonita as "possible/probable AI use" not "obvious", and your description of it as "clear" is unconvincing, especially when put against cases where prompts are left in etc.
- If I encountered multiple substantial TSI issues like that and suspected AI use, I would be more willing to quickfail as I would have less trust in the text's verifiability. I would want other reviewers to feel emboldened to make the same assessment, and I think it's a problem if they are not currently willing to do so because of how the QF criteria are laid out.
- I see no evidence that this is actually occurring.
- I think that the QF criteria would have to be made broader than proposed ("likely AI use") to capture such occurrences, and I would like to see wording which would empower reviewers in that scenario but would avoid quickfails where AI use is suspected but only regular TSI issues exist (for those who do not review regularly, almost all spot checks will turn up issues with TSI).
- Rollinginhisgrave (talk | contributions) 17:49, 29 October 2025 (UTC)
- Not a fan of RGloucester's criteria tbh, I don't feel like references become quickfail-worthy just because someone used ChatGPT search, especially given that AI browsers now exist.
- As for the rest, this is why I !voted weak support and not full support -- I'm not opposed to quickfail but it's not my preference. My preference is closer to "don't promote until a lot more review/rewriting than usual is done." Gnomingstuff (talk) 05:00, 1 November 2025 (UTC)
- It's correct that every single claim and source needs to be carefully checked, but it needs to be checked by the author, not the GA reviewer. The spot check is there to verify the author did their part in checking. – Closed Limelike Curves (talk) 01:23, 6 November 2025 (UTC)
- What's "obvious evidence of AI-generated references"? For example, I often use the automatic generation feature of the visual editor to create a citation template. Or I might use a script to organise the references into the reflist. The proposal seems to invite prejudice against particular AI tells but these include things like using an m-dash, and so are unreliable. Andrew🐉(talk) 10:53, 26 October 2025 (UTC)
- Yeah, it's poorly written. "Obvious evidence of AI-generated references" in this context means a hallucination of a reference that doesn't exist. Viriditas (talk) 02:43, 28 October 2025 (UTC)
- If the article cites a nonexistent reference, that should be grounds for failure by itself. – Closed Limelike Curves (talk) 01:16, 6 November 2025 (UTC)
- What about something similar to WP:G15, for example
6. It contains content that could only plausibly have been generated by large language models and would have been removed by any reasonable human review.
Kovcszaln6 (talk) 10:59, 26 October 2025 (UTC)
- This would be the first GA criterion that regulates the workflow people use to write articles rather than the finished product, which doesn't make much sense because the finished product is all that matters. Gen AI as a tool is also extremely useful for certain tasks, for example I use it to search for sources I may have missed (it is particularly good at finding multilingual sources), to add rowscopes to tables to comply with MOS:DTAB, to double check table data matches with the source, and to check for any clear typos/grammar errors in finished prose. IAWW (talk) 11:05, 26 October 2025 (UTC)
- It’s irrelevant to this discussion but I don’t think it’s right to call something “extremely useful” when the tasks are layout formatting, and source-finding and copy editing skills you can and should develop for yourself. You will get better the more you try, and when even just pretty good, you will be better than a chatbot. You also really don’t need gen AI to edit tables, there are completely non-AI tools to extract datasets and add fixed content in fixed places, tools that you know won’t throw in curveballs at random. Kingsif (talk) 14:24, 26 October 2025 (UTC)
- Well, "extremely useful" is subjective, and in my opinion it probably saves me about 30 mins per small article I write, which in my opinion justifies the adjective. I still do develop all the relevant skills myself, but I normally make some small mistakes (like for example putting a comma instead of a full stop), which AI is very good at detecting. IAWW (talk) 14:55, 26 October 2025 (UTC)
- You still don’t need overconfident error-prone gen AI for spellcheck. Microsoft has been doing it with pop ups that explain why your text may or may not have a mistake for almost my whole life. Kingsif (talk) 15:02, 26 October 2025 (UTC)
- GenAI is just faster and easier to use for me. IAWW (talk) 16:15, 26 October 2025 (UTC)
- Well yes, if you consider speed and ease of use to be more important than accuracy, generative AI is probably the way to go... AndyTheGrump (talk) 21:09, 26 October 2025 (UTC)
- @AndyTheGrump is there any evidence that the uses to which IAWW puts generative AI result in a less accurate output than doing it manually? Thryduulf (talk) 21:11, 26 October 2025 (UTC)
- I have no idea how accurately IAWW can manually check spelling, grammar etc. That wasn't the alternative offered however, which was to use existing specialist tools to do the job. They can get things wrong too, but rarely in the making-shit-up tell-them-what-they-want-to-hear way that generative AI does. AndyTheGrump (talk) 21:40, 26 October 2025 (UTC)
- Generative AI can do that in certain situations, but something like checking syntax doesn't seem like one of those situations. Anyway, if the edits IAWW makes to Wikipedia are accurate and free of neutrality issues, fake references, etc. why does it matter how that content was arrived at? Thryduulf (talk) 21:48, 26 October 2025 (UTC)
- 'If' is doing a fair bit of work in that question, but ignoring that, it wouldn't, except in as much as IAWW would be better off learning to use the appropriate tools, rather than using gen AI for a purpose other than that it was designed for. I'd find the advocates of the use of such software more convincing if they didn't treat it as if it was some sort of omniscient and omnipotent entity capable of doing everything, and instead showed a little understanding of what its inherent limitations are. AndyTheGrump (talk) 23:02, 26 October 2025 (UTC)
- I clearly don't treat it as an omniscient and omnipotent entity, and I welcome any criticism of my work. IAWW (talk) 08:05, 27 October 2025 (UTC)
- To me - and look, as much as it's a frivolous planet-killer, I am not going to go after any individual user for non-content AI use, but I will encourage them against it - if we assume there are no issues with IAWW's output, my main concern would be the potential regression in IAWW's own capabilities for the various tasks they use an AI for, and how this could affect their ability to contribute to the areas of Wikipedia they frequent. E.g. if you are never reviewing your own writing and letting AI clean it up, will your ability to recognise in/correct grammar and spelling deteriorate, and therefore your ability to review others' writing. That, however, would be a personal concern, and something I would not address unless such an outcome became serious. As I said, with this side point, I just want to encourage people to develop and use these skills themselves. Kingsif (talk) 23:21, 26 October 2025 (UTC)
why does it matter how that content was arrived at?
Value? Morality? If someone wants ChatGPT, it's over this way. We're an encyclopedia. We have articles with value written by people who care about the articles. LLM-generated articles make a mockery of that. Why would you deny our readers this? I genuinely can't understand why you're so pro-AI. Do you not see how AI tools, while they have some uses, are completely incompatible with our mission of writing good articles? Cremastra (talk · contribs) 01:57, 28 October 2025 (UTC)
- Once again, Wikipedia is not a vehicle for you to impose your views on the morality of AI on the world. Wikipedia is a place to write neutral, factual encyclopaedia articles free of value judgements - and that includes value judgements about tools other people use to write factual, neutral articles. Thryduulf (talk) 02:17, 28 October 2025 (UTC)
- Your refusal to take any stance on a tool that threatens the value of our articles is starting to look silly. As I say here, we take moral stances on issues all the time, and LLMs are right up our alley. Cremastra (talk · contribs) 02:28, 28 October 2025 (UTC)
- That LLM is
a tool that threatens the value of our articles
is your opinion, seemingly based on your dislike of LLMs and/or machine learning. You are entitled to that opinion, but that does not make it factual.
- If an article is neutral and factual then it is neutral and factual regardless of what tools were or were not used in its creation.
- If an article is not neutral and factual then it is not neutral and factual regardless of what tools were or were not used in its creation. Thryduulf (talk) 02:52, 28 October 2025 (UTC)
- You missed two: If an article is not neutral and factual and was written by a person, you can ask that person to retrace their steps in content creation (if not scan edit-by-edit to see yourself) so everyone can easily identify where the inaccuracies originated and fix them. If an article is not neutral and factual and you cannot easily trace its writing process, it is hard to have confidence in any content at all when trying to fix it. Kingsif (talk) 03:01, 28 October 2025 (UTC)
- It’s irrelevant to this discussion but I don’t think it’s right to call a calculator “extremely useful” when the tasks are division, exponentiation, and root-finding skills you can and should develop for yourself. – Closed Limelike Curves (talk) 01:18, 6 November 2025 (UTC)
- Well, "extremely useful" is subjective, and in my opinion it probably saves me about 30 mins per small article I write, which in my opinion justifies the adjective. I still do develop all the relevant skills myself, but I normally make some small mistakes (like for example putting a comma instead of a full stop), which AI is very good at detecting. IAWW (talk) 14:55, 26 October 2025 (UTC)
- It’s irrelevant to this discussion but I don’t think it’s right to call something “extremely useful” when the tasks are layout formatting, and source-finding and copy editing skills you can and should develop for yourself. You will get better the more you try, and when even just pretty good, you will be better than a chatbot. You also really don’t need gen AI to edit tables, there are completely non-AI tools to extract datasets and add fixed content in fixed places, tools that you know won’t throw in curveballs at random. Kingsif (talk) 14:24, 26 October 2025 (UTC)
Nonconstructive. FaviFake (talk) 05:26, 12 November 2025 (UTC)
The following discussion has been closed. Please do not modify it.
- Now, relevantly, this proposal clearly does not regulate workflow, only the end product. It only refers to the article itself having evidence of obvious AI generation in its actual state. Clean up after your LLMs and you won’t get caught and charged 😉 Kingsif (talk) 14:28, 26 October 2025 (UTC)
- The "evidence" in the end product is being used to infer things about the workflow, and the stuff in the workflow is what the proposal is targeting. IAWW (talk) 14:50, 26 October 2025 (UTC)
- Y’all know I think gen AI is incompatible with Wikipedia and would want to target it, but I don’t think this proposal does that. If there’s AI leftovers, that content at least needs human cleanup, and that shouldn’t be put on a reviewer. That’s no different to identifying copyvio and quickfailing saying a nominator needs to work on it rather than sink time in a full review. Kingsif (talk) 14:59, 26 October 2025 (UTC)
- The "evidence" in the end product is being used to infer things about the workflow, and the stuff in the workflow is what the proposal is targeting. IAWW (talk) 14:50, 26 October 2025 (UTC)
- Now, relevantly, this proposal clearly does not regulate workflow, only the end product. It only refers to the article itself having evidence of obvious AI generation in its actual state. Clean up after your LLMs and you won’t get caught and charged 😉 Kingsif (talk) 14:28, 26 October 2025 (UTC)
- Regarding "fake references", I can see the attraction in this being changed from a slow fail to a quick fail, but before it can be a quick fail there needs to be a reliable way to distinguish between references that are completely made up, references that exist but are inaccessible to (some) editors (e.g. offline, geoblocked, paywalled), references that used to be accessible but no longer are (e.g. linkrot), and references with incorrect details (e.g. typos in URIs/dois/ISBNs/titles/etc). Thryduulf (talk) 12:56, 26 October 2025 (UTC)
- If you cannot determine if a reference that doesn’t work is AI or not, then it’s not obvious AI and this wouldn’t apply… Kingsif (talk) 14:09, 26 October 2025 (UTC)
- I think this is the problem: The proposal doesn't say "a reference that doesn’t work". It says "AI-generated references". Now maybe @RGloucester meant the kind of ref that's completely fictional, rather than real sources that someone found by using ChatGPT as a type of cumbersome web search engine, but that's not clear from what's written in the proposal.
- This is a bit concerning, because there have been problems with citations that people can't check since before Wikipedia's creation – for example:
- Proof by reference to inaccessible literature: The author cites a simple corollary of a theorem to be found in a privately circulated memoir of the Slovenian Philological Society, 1883.
- Proof by ghost reference: Nothing even remotely resembling the cited theorem appears in the reference given.
- Proof by forward reference: Reference is usually to a forthcoming paper of the author, which is often not as forthcoming as at first.
- – and AI is adding to the traditional list the outright fabrication of sources: "Proof by non-existent source: A paper is alleged to exist, except that no such paper ever existed, and sometimes the alleged author and the alleged journal are made-up names, too". These are all problems, but they need different responses in the GA process. Made-up sources should be WP:QF #1: "It is a long way from meeting any one of the six good article criteria" (specifically, the requirement to cite real sources). A ghost reference is a real source but what's in the Wikipedia article {{failed verification}}; depending on the scale, that's a surmountable problem. A forward reference is an unreliable source, but if the scale is small enough, that's also a surmountable problem. Inaccessible literature is not grounds for failing a GA nom.
- If this is meant to be "most or all of the references are to sources that actually don't exist (not merely offline, not merely inconvenient, etc.)", then it can be quick-failed right now. But if it means (or gets interpreted as) "the URL says ?utm=chatgpt", then that's not an appropriate reason to quick-fail the nomination. WhatamIdoing (talk) 06:10, 27 October 2025 (UTC)
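For what it's worth, the tracking-parameter tell is mechanical to detect and to remove. Here is a minimal sketch using only the standard library (the function name is invented for illustration); note that stripping the parameter says nothing about whether the source actually verifies the text.
```python
# Hedged sketch: strip utm_* tracking parameters from a citation URL and
# report whether utm_source pointed at chatgpt.com.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_tracking(url):
    parts = urlsplit(url)
    params = parse_qsl(parts.query, keep_blank_values=True)
    from_chatgpt = any(k == "utm_source" and "chatgpt" in v for k, v in params)
    kept = [(k, v) for k, v in params if not k.startswith("utm_")]
    return urlunsplit(parts._replace(query=urlencode(kept))), from_chatgpt

# Example:
# strip_tracking("https://example.org/paper?utm_source=chatgpt.com")
# -> ("https://example.org/paper", True)
```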
- Perhaps a corollary added to existing crit, saying that such AI source invention is a QF, would be more specific and helpful. I had thought this proposal was good because it wasn’t explicitly directing reviewers to “this exact thing you should QF”, but if there are reasonable concerns (not just the ‘but I like AI’ crowd) that the openness could instead confuse reviewers, then adding explicit AI notes to existing crit may be a better route. Kingsif (talk) 16:05, 27 October 2025 (UTC)
- Suggestion: change the fail criterion to read "obvious evidence of undisclosed LLM use". There are legitimate uses of LLMs, but if LLM use is undisclosed then it likely hasn't been handled properly and shouldn't be wasting reviewers' time, since more than a spot-check is required as explained by Gnomingstuff. lp0 on fire () 09:17, 27 October 2025 (UTC)
- My concern here is that these in practice are basically the same thing: WP:LLMDISCLOSE is not mandatory, so almost all LLM use is undisclosed, even when people are doing review. Gnomingstuff (talk) 16:08, 27 October 2025 (UTC)
- It would also be so hard to implement making it mandatory, in practice. Heavy rollout means some users may not even know when they’ve used it. Left google on AI mode (or didn’t turn it off…)? Congrats, when you searched for a synonym you “used” an LLM. Kingsif (talk) 16:12, 27 October 2025 (UTC)
- Any evidence of LLM use? Does that include disclosing LLM use used in article development/creation? See Shit flow diagram and Malacca dilemma for examples. Should both of those quick fail based only on LLM use? ScottishFinnishRadish (talk) 11:10, 27 October 2025 (UTC)
- I took evidence to mean things in the article. I hope no reviewer would extend the GA crit to things not reviewed in the GAN process - like an edit reason or other disclosure. I can see the concern that this wording could allow or encourage them to, now that you bring it up. Kingsif (talk) 15:56, 27 October 2025 (UTC)
- A difficult part of workshopping any sort of rule like this is you have to remember not everyone who uses it will think the same way you do, or even the way the average person does. What I'd hate to see happen is we pass something like this and then have to come back multiple times to edit it because of people using it as license to go open season on anything they deem AI, evidence or no evidence. I don't mean to suggest you would do anything like that, Kingsif, but someone out there probably will. Trainsandotherthings (talk) 01:52, 28 October 2025 (UTC)
- I didn't think you were suggesting so ;) As noted, I agree. As much as obvious should mean obvious and evidence should be tangible evidence, and the spirit of the proposal should be clear... I still support it, as certainly less harmful than not having something like it, but I can see how even well-intentioned reviewers trying to apply it could go beyond this limited proposal's intention. Kingsif (talk) 01:59, 28 October 2025 (UTC)
- I mentioned this above in my !vote, but isn't this already covered by WP:GAQF #3 (
# It has, or needs, cleanup banners that are unquestionably still valid. These include {{cleanup}}, {{POV}}, {{unreferenced}} or large numbers of {{citation needed}}, {{clarify}}, or similar tags
)? Any blatant use of AI means that the article deserves {{AI-generated}} and, as such, already is QF-able. All that has to be done is to modify the existing QF criterion 3 to make it explicit that AI generation is a rationale that would cause QF criterion 3 to be triggered. – Epicgenius (talk) 01:44, 28 October 2025 (UTC)
- To keep it short, isn't QF3 just a catch-all for "any clean-up issues that might not completely come under 1 & 2" and theoretically both those quickfail conditions come under it and they're unnecessary? But they're important enough to get their own coverage? Then we ask is unmonitored gen AI more or less significant than GA crit and copyvio. Kingsif (talk) 02:06, 28 October 2025 (UTC)
- Suggestion: combining the
obvious use of AI
with
evidence that the submission falls short of any of the other six GA criteria
(particularly criterion 2). Many of the current Opposes reflect a sentiment that this policy would encapsulate too much: instead of reflecting the state of the article, it punishes those who use AI in their workflow. This suggestion would cover a quickfail of articles with AI-hallucinated references (so, for instance, if a reviewer notes a source with a ?utm_source=chatgpt.com tag and determines that the sentence is not verifiable, they can quickfail it); however, this suggestion limits the quickfail potential for people who use AI, review its outputs, and put work into making sure it meets the guidelines for a Wikipedia article. Staraction (talk | contribs) 07:41, 30 October 2025 (UTC)
- We already have this: it's WP:QF #1. Kovcszaln6 (talk) 08:23, 30 October 2025 (UTC)
- Sorry, I don't think I worded the tqi part well. I mean that, if there is
obvious use of AI
and any evidence at all of a hallucinated source, unverified citation, etc., then the reviewer is allowed to quickfail.
- If this still is WP:QF #1, then I sincerely apologize for wasting everybody's time. Staraction (talk | contribs) 08:39, 30 October 2025 (UTC)
- I see. I support this. Kovcszaln6 (talk) 08:45, 30 October 2025 (UTC)
RfC: Should edit filter managers be allowed to use the "revoke autoconfirmed" action in edit filters?
[edit]
|
An edit filter can perform certain actions when triggered, such as warning the user, disallowing the edit, or applying a change tag to the revision. However, there are lesser-known actions that aren't currently used on the English Wikipedia, such as blocking the user for a specified amount of time, desysopping them, and something called "revoke autoconfirmed". Contrary to its name, this action doesn't actually revoke anything; it instead prevents the user from being "autopromoted", or automatically becoming auto- or extended-confirmed. This restriction can be undone by any EFM at any time, and automatically expires in five days provided the user doesn't trigger that action again. Unlike block and desysop (called "degroup" in the code), this option is enabled for use on enwiki, but has seemingly never been used at all.
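The behaviour described above can be modelled roughly as follows; this is a simplified sketch for illustration, not AbuseFilter's actual code, and the class and method names are invented.
```python
# Simplified model of the "revoke autoconfirmed" (blockautopromote) action:
# nothing is taken away from the user; gaining auto-/extended-confirmed is
# suppressed for five days, and each new filter hit restarts the clock.
import time

FIVE_DAYS = 5 * 24 * 60 * 60  # seconds

class AutopromoteBlocks:
    def __init__(self):
        self._until = {}  # username -> expiry timestamp

    def trigger(self, username):
        # Triggering the filter action again restarts the five-day window.
        self._until[username] = time.time() + FIVE_DAYS

    def clear(self, username):
        # Any edit filter manager can undo the restriction at any time.
        self._until.pop(username, None)

    def is_active(self, username):
        return self._until.get(username, 0) > time.time()
```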
Fast forward to today, and we have multiple abusers and vandalbots gaming extended confirmed in order to vandalize or edit contentious topics. One abuser in particular has caused an edit filter to be created for them, which is reasonably effective in slowing them down, but it still lets them succeed if left unchecked. As far as I'm aware, the only false positive for this filter was triggered by PaulHSAndrews, who has since been community-banned. In theory, setting this filter to "revoke autoconfirmed" should effectively stop them from being able to become extended confirmed. Some technical changes were recently made to allow non-admin EFMs to use this action, but since it has never been used, I was told to request community consensus here.
So, should edit filter managers be allowed to use the "revoke autoconfirmed" action in edit filters? Children Will Listen (🐄 talk, 🫘 contribs) 05:04, 28 October 2025 (UTC)
Survey (edit filters)
[edit]
Discussion (edit filters)
[edit]
- @ChildrenWillListen: Does current policy/guideline prohibit edit filter managers from using the "block autopromote" setting? I am looking at WP:EF#Basics of usage and WP:EF#Recommended uses and it makes several references to "block autopromote" as an available option. It seems to me that edit filter managers already have discretion to use that setting under the current guidelines. Is there a particular edit filter for which you feel the setting would be useful? Mz7 (talk) 05:08, 28 October 2025 (UTC)
- See Wikipedia:Edit filter/Requested/Archive_21#Set 807 to revoke autoconfirmed. Children Will Listen (🐄 talk, 🫘 contribs) 05:10, 28 October 2025 (UTC)
- Got it, thanks for that context. Unless I am missing something, there does not seem to be any written rule that prevents edit filter managers from using the "block autopromote" setting. However, it seems the reason why edit filter managers are hesitant to use it is because it is rarely actually helpful. Looking at 807 in particular, I see it listed at Template:DatBot filters, meaning a bot will automatically report filter hits to WP:AIV—maybe that takes care of the need to use "block autopromote"? Mz7 (talk) 05:33, 28 October 2025 (UTC)
- They're generally active in a time where most admins are asleep, so it isn't effective in all cases. See, for example, whatever happened at Erika Kirk. Children Will Listen (🐄 talk, 🫘 contribs) 05:37, 28 October 2025 (UTC)
- For example, see Utube2 who is active right now and has triggered this filter. If we set it to prevent them from becoming extended confirmed, we wouldn't have to worry about this. This has been happening every single day for the last three months. Children Will Listen (🐄 talk, 🫘 contribs) 06:11, 28 October 2025 (UTC)
- See Wikipedia talk:Protection policy#Revised proposal to improve extended confirmed grants, which is still open and asks a mostly overlapping question. There doesn't look (to me, but I'm not neutral) to be a clear consensus in that discussion. Thryduulf (talk) 09:57, 28 October 2025 (UTC)
- @Daniel Quinlan: "Revoke autoconfirmed" doesn't actually... revoke autoconfirmed (or anything else), as strange as it sounds. See [1], [2], and [3]. The name is extremely misleading and I'll look to get that changed.
- Also, I wasn't aware of
$wgAutopromoteOnce until now, and since that code path doesn't call into HookRunner::onGetAutoPromoteGroups, this might not even block the extended-confirmed autopromotion. Children Will Listen (🐄 talk, 🫘 contribs) 13:22, 28 October 2025 (UTC)
- We can change the label locally by editing MediaWiki:abusefilter-edit-action-blockautopromote. Getting it changed for everyone would require a code patch. Anomie⚔ 14:08, 28 October 2025 (UTC)
- It would be easier to revise the documentation on https://www.mediawiki.org. Daniel Quinlan (talk) 16:54, 28 October 2025 (UTC)
- Also, it looks like you're right that it won't block the promotion to extendedconfirmed. Anomie⚔ 14:26, 28 October 2025 (UTC)
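To illustrate the concern raised here: if the ordinary autopromotion path consults the AbuseFilter hook but the $wgAutopromoteOnce path does not, a blockautopromote entry only suppresses the former. Below is a simplified model of that control flow; the names and thresholds are shown for illustration only and this is not MediaWiki's actual code.
```python
# Simplified model of two autopromotion code paths. Only the first consults
# hooks, which is where AbuseFilter's blockautopromote can veto groups.
from dataclasses import dataclass

@dataclass
class User:
    edits: int
    age_days: int

def groups_via_hooks(user, hooks):
    groups = set()
    if user.edits >= 10 and user.age_days >= 4:
        groups.add("autoconfirmed")
    if user.edits >= 500 and user.age_days >= 30:
        groups.add("extendedconfirmed")
    for hook in hooks:  # a blockautopromote entry would act here
        groups = hook(user, groups)
    return groups

def groups_via_promote_once(user):
    # Hypothetical hook-free path: if a group were granted this way,
    # blockautopromote would never get a chance to suppress it.
    ok = user.edits >= 500 and user.age_days >= 30
    return {"extendedconfirmed"} if ok else set()
```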
- I've put in a request at WP:CR for someone to close the discussion at WT:PP, but looking at the backlog there, it doesn't seem likely to happen quickly. – PharyngealImplosive7 (talk) 13:49, 28 October 2025 (UTC)
- The problem is we can't implement that solution since revoke autoconfirmed doesn't revoke anything, and it looks like my idea isn't going to work either. Children Will Listen (🐄 talk, 🫘 contribs) 19:44, 28 October 2025 (UTC)
- If (you think) the proposal there can't be implemented for technical reasons then you should note that in the discussion so participants and the closer are aware. Thryduulf (talk) 21:11, 28 October 2025 (UTC)
- The blockautopromote filter action works fine (after the above discussion, I tested it on test.wikipedia.org) although it's worth clarifying that it doesn't technically "revoke" permissions in the way people are used to thinking about permissions. It helps to understand that autopromotion doesn't actually add users to a group. MediaWiki dynamically checks whether a user meets the conditions for autopromotion when it's checking a user's rights or groups. The blockautopromote action prevents those conditions from being met for a temporary period of time (five days). The rights are effectively revoked during that period, but once the period ends, autopromotion works normally and the "revoked" rights return to the user. The confirmed permission can also be granted manually during that period and edit filter managers also have an interface to undo blockautopromote mistakes.
- I will also mention that enabling blockautopromote for one or two filters as proposed in the WT:PP RFC will have another immediate effect (i.e., without any additional configuration changes): it will lower the edit rate throttle for the five day period from the user edit rate limit to the newbie edit rate limit. Based on the current settings, that would shift the rate limit from 90 edits per 60 seconds to 8 edits per 60 seconds.
- Caveat: I've done my best to explain the technical details in a straightforward way, but the code is complex and corrections are welcome! Daniel Quinlan (talk) 23:08, 28 October 2025 (UTC)
- I investigated this and you are indeed correct (except for some caching issues which probably won't matter). I think this RfC can be closed in preference to the WP:PP RfC. Children Will Listen (🐄 talk, 🫘 contribs) 00:54, 29 October 2025 (UTC)
- Is the use of "revoke autoconfirmed" prohibited somewhere? Is an RFC to start using "revoke autoconfirmed" in edit filters necessary, or would a WP:EFN discussion be sufficient to get this started? (RFCs are expensive in terms of editor time.) Does this perhaps overlap too much with Wikipedia talk:Protection policy#Revised proposal to improve extended confirmed grants? Did you notify WP:EFN using subst:Please see or a similar template? –Novem Linguae (talk) 22:36, 28 October 2025 (UTC)
- If the discussion at WT:PP reaches a consensus against that proposal (or does not reach a consensus), then I'd say that a discussion at that or similarly prominent venue (such as a village pump) would be required to start using the option. If that discussion is concluded with a consensus in favour then anything similar would probably be fine with just an EFN discussion, but anything significantly different would probably benefit from a more prominent discussion.
Does this perhaps overlap too much with [the discussion at WT:PP]?
I'm inclined to say yes, but others might disagree. Certainly I can't see the utility in this discussion before that one is closed.
- WP:EFN has not been notified of this discussion. Thryduulf (talk) 22:47, 28 October 2025 (UTC)
- It has been notified now. – PharyngealImplosive7 (talk) 01:06, 29 October 2025 (UTC)
- I don't think this is really prohibited so much as it's not something anyone particularly wants to use without a heavy level of specific community consensus. It certainly could be used, but in the modern age unless it's an emergency it's almost certain to go through a long discussion first. I don't think there's a lot of EFMs who are eager to use that option without a lot of code review and being confident of consensus in favor first. EggRoll97 (talk) 01:30, 29 October 2025 (UTC)
In general, this proposal seems highly dangerous, and policy shouldn't change. Just go to WP:STOCKS and you'll find some instances in which misconfigured filters prevented edits by everyone; imagine that these filters also included provisions to block or revoke rights from affected editors. However, the proposal seems to be talking about a filter for one particularly problematic user; I could support a proposal to make an exemption for egregious cases, but I think such an exemption should always be discussed by the community, so the suggested reconfiguration is the result of community consensus. Nyttend (talk) 10:51, 29 October 2025 (UTC)
Should we delete outlines?
[edit]
I started an AfD with an "article" I stumbled across:
Come to find out there are a whole slew of Wikipedia:Outlines that, I guess, are supposed to be some sort of cross between cliff notes and DMOZ. They are classified as "lists", but they aren't lists. They are, I guess, the private project of a few people who seem to be operating parallel articles to main topics but without any narrative structure.
Why do these exist? How are they controlled editorially? Should they all be merged into other articles?
Could I just propose deleting all of them?
jps (talk) 00:24, 30 October 2025 (UTC)
- IMHO outlines are what categories should ideally be. They have their purpose. Szmenderowiecki (talk · contribs) 00:59, 30 October 2025 (UTC)
- How so? Categories are a structured and hierarchical data type. Outline articles seem to be attempts to force an article into something like an article without narrative or prose. jps (talk) 01:36, 30 October 2025 (UTC)
- I look at it from another perspective - it's an enhanced version of categories where you very briefly describe what's going on and what the reader is likely to see of relevance to the outlined topic when they land on any given page (The comparison I like is .txt file and HTML). We don't give sources in categories, and outlines are supposed to do the same. I think we can obviate this in a way by forcing categories to show short descriptions for each article; but for example Greta Thunberg is mentioned in Outline of autism but this would be inappropriate info for a short description, at least at the current understanding of what short descriptions should be ("autistic climate activist" sounds denigrating)
- Maybe outline should be its own namespace. Szmenderowiecki (talk · contribs) 02:02, 30 October 2025 (UTC)
- I would be more comfortable with them in their own namespace since the ostensible subject of the article is not the "outline of Wikipedia" for example which, I gather, has no notability outside of Wikipedia's invention of the outline structure. jps (talk) 02:56, 30 October 2025 (UTC)
- I've always thought categories should have a more user-oriented view by default with the shortdescs and thumbnail, like the Vector 2022 search suggestions. The current view with the plain links would be accessible with a quick toggle, and it'd remember your preference so it wouldn't be a burden on existing editors that prefer the current layout. novov talk edits 03:30, 30 October 2025 (UTC)
- Wikipedia:Why do we have outlines in addition to...? addresses at least one reason why outlines are in article space, in describing how they differ from Wikipedia:Portals, which among other things are in their own namespace:
Outlines show up when you search Wikipedia, which is important because we want people to be able to find them easily. Portals don't show up in searches by default, and when they are included, their subpage entries make the search results very hard to read (because their many subpages clutter the results).
I think it's fair to ask whether individual outlines, or outlines as a whole, are serving their purpose, but they clearly are intended to live alongside and complement lists and articles. —Myceteae🍄🟫 (talk) 04:47, 30 October 2025 (UTC)
- Why do "we want people to be able to find them easily" but we don't want them to be able to find portals easily? jps (talk) 17:39, 30 October 2025 (UTC)
- Fair question about portals. I've seen other editors say they think they should be in article space. The quote above mentions the navigation issues with portal subpages cluttering search results. I don't know the history of portals or what went into their creation and placement into a dedicated namespace. —Myceteae🍄🟫 (talk) 18:28, 30 October 2025 (UTC)
- Pretty much because the community decided it likes outlines more than portals. Dege31 (talk) 15:17, 1 November 2025 (UTC)
- Why do "we want people to be able to find them easily" but we don't want them to be able to find portals easily? jps (talk) 17:39, 30 October 2025 (UTC)
- How so? Categories are a structured and hierarchical data type. Outline articles seem to be attempts to force an article into something like an article without narrative or prose. jps (talk) 01:36, 30 October 2025 (UTC)
- I like these articles. I don't see why they should be deleted. Aaron Liu (talk) 01:27, 30 October 2025 (UTC)
- How do you maintain editorial control over them? Who decides how Outline of Wikipedia should be different from Wikipedia? What is the source upon which we are basing this artform? jps (talk) 01:35, 30 October 2025 (UTC)
- The same way as every other article? —Myceteae🍄🟫 (talk) 02:29, 30 October 2025 (UTC)
- Doubtful. I see no examples on which to draw. Outlines are the precursor to writing. The ones I'm looking at look like they are stubs of the main articles. jps (talk) 02:55, 30 October 2025 (UTC)
Outlines are the precursor to writing.
I mean that is one type of outline but that's not what these are. —Myceteae🍄🟫 (talk) 04:37, 30 October 2025 (UTC)
- I have no idea what these are. As far as I can tell, they are the wholecloth invention of this website. jps (talk) 17:40, 30 October 2025 (UTC)
- You can outline something before expanding that outline into a full article, or you can summarize existing information with an outline. This is the latter. Aaron Liu (talk) 23:57, 30 October 2025 (UTC)
- As mentioned below, the example you're looking for is disambiguation pages. Aaron Liu (talk) 11:33, 30 October 2025 (UTC)
- I'm vaguely aware of outlines but have rarely ever looked at them. I don't think we should delete them and I'm struggling to see the problem. Of course, individual outlines can be nominated for deletion when they have issues. I anticipate that deletion will be a hard sell when the topic being outlined is notable and the outline contains relevant links to many notable articles, similar in a way to how the notability of standalone lists is assessed. —Myceteae🍄🟫 (talk) 02:46, 30 October 2025 (UTC)
- I don't see an issue with them, though I don't use them personally. They're in mainspace for the same reason disambs are in mainspace, both being non-article content... they are still a useful navigational aid for the encyclopedia. PARAKANYAA (talk) 06:23, 30 October 2025 (UTC)
- At least disambigs serve a navigational purpose. Outlines are just a bunch of links some Wikipedians think belong together, as far as I can tell. How does one decide what does or does not belong in an outline? I see no means to adjudicate the content whereas with disambiguation, one can refer to the outside world or the spelling of the term as a means to decide what belongs on the page. I dunno, I am just having a really hard time wrapping my head around the use case. jps (talk) 17:43, 30 October 2025 (UTC)
How does one decide what does or does not belong in an outline?
Consensus, i.e. how the content of every page on Wikipedia is decided. Thryduulf (talk) 17:51, 30 October 2025 (UTC)
- Sure, but we sometimes labor under the illusion that there are certain principles worked out that we attempt to adhere to. I'm just not clear what the principles are for writing outlines, and, yes, I read WP:OUTLINES. Still clear as mud to me, but apparently there are a buncha others who get it even as I might not understand what they're saying. jps (talk) 18:00, 30 October 2025 (UTC)
- It's basically a hierarchical overview ("outline") of a subject. They don't appear to get much attention so it's quite possible that individual outlines or the concept as a whole is poorly developed. If I thought a specific outline needed work I would edit it myself or start a discussion on talk. If that wasn't satisfactory I would reach out to Wikipedia:WikiProject Outlines or a more specific WikiProject or notice board related to the topic or type of issue. Or post here, as you've done. —Myceteae🍄🟫 (talk) 18:42, 30 October 2025 (UTC)
- Sure, but we sometimes labor under the illusion that there are certain principles worked out that we attempt to adhere to. I'm just not clear what the principles are for writing outlines, and, yes, I read WP:OUTLINES. Still clear as mud to me, but apparently there are a buncha others who get it even as I might not understand what they're saying. jps (talk) 18:00, 30 October 2025 (UTC)
- Outlines do obviously serve a navigational purpose. And yes, consensus, like everything else. How do you decide what goes in a category? The same way. PARAKANYAA (talk) 18:00, 30 October 2025 (UTC)
- "Obviously" is a pretty strong word. They look to me like study guides or something, but I struggle to understand how they are part of an encyclopedia intead of, say, Wikiversity or something. jps (talk) 18:06, 30 October 2025 (UTC)
- In what way do they look like a study guide more so than any of our mainspace pages, categories, or navboxes? Aaron Liu (talk) 23:58, 30 October 2025 (UTC)
- In the sense that they look like crib notes. jps (talk) 00:05, 31 October 2025 (UTC)
- I can say the same about our mainspace nav pages, categories, and navboxes. Outlines help navigate like a set-index article, therefore they are part of the encyclopedia. Aaron Liu (talk) 01:51, 31 October 2025 (UTC)
- You can say whatever you want. I'm not sure that this makes much sense, however. jps (talk) 18:39, 31 October 2025 (UTC)
- They are intended to give an outline of a topic so the fact that they are reminiscent of a study guide or crib sheet doesn't seem off. —Myceteae🍄🟫 (talk) 02:12, 31 October 2025 (UTC)
- No, they're not visible but they're really enjoyable to read. They surface pages I never knew about before, and also topics I wouldn't have known to look for. The only real issue is that anyone who isn't a Wikipedia editor is probably never going to find one, and that won't be solved by deleting them. Mrfoogles (talk) 17:28, 5 November 2025 (UTC)
- I strongly agree with this. Maybe the solution would be to integrate them with their much more visible cousin, the sidebar, somehow? Maybe we could automatically generate sidebars from outlines. – Closed Limelike Curves (talk) 01:55, 12 November 2025 (UTC)
- I find them very useful as a reader too, for quickly finding all sorts of interconnected articles around a given topic. Seeing the hierarchy of a subject has value. — Very Polite Person (talk/contribs) 19:56, 5 November 2025 (UTC)
- I think these articles could, in theory, be some of the most important ones on Wikipedia, if done properly: outlines like these are an important part of any encyclopedia (though they're usually called an "index"). The issue is that they really need better visibility and that, in terms of information, they're often redundant with sidebars—maybe we could automatically generate sidebars from outlines, so they're kept up-to-date and in sync? – Closed Limelike Curves (talk) 01:53, 12 November 2025 (UTC)
- Those sidebars would be way too long. Sidebars need to be short. Aaron Liu (talk) 02:01, 12 November 2025 (UTC)
- Yes, we shouldn't include everything (just excerpt some based on a rule like depth). – Closed Limelike Curves (talk) 02:03, 12 November 2025 (UTC)
- I think User:Closed Limelike Curves is on to something. There's really not a lot of these introduction articles today. And there wouldn't be, long term relative to the whole project, because one could service a hundred or more other articles. But having a properly built out sidebar (at sidebar scale) that can then branch to "Introduction" pages, while carrying all the deeper/myriad other articles, could be a great connective tissue for readers.
- I made this: Sentient (intelligence analysis system)
- And it's got that Intelligence template on the right side: Template:Intelligence.
- That's got a lot of articles in the template--expand/show all of the fields.
- Now stick a great Introduction to intelligence article directly under the little graphic of the spy man. That writ large could even be a Wikipedia:Did You Know type creation arms race to highlight an Introduction page on the front page, for like a week at a time.
- Shark week? Introduction to sharks! — Very Polite Person (talk/contribs) 02:09, 12 November 2025 (UTC)
Surely if they are articles they should be properly sourced? But looking at Outline of the United Kingdom and Outline of political science they are not. It's tempting to just strip those of anything without a source, which would leave hardly anything. Doug Weller talk 14:22, 1 November 2025 (UTC)
- Often categories are added to an article or articles to a list without a source because the grouping is not disputed and sourcing it would be hard. As long as you can add Category:Protected areas of the United Kingdom to Environmentally sensitive area, you can add that article to "List of protected areas of the United Kingdom" or the relevant outline section. Though I agree with you on the facts mentioned, like the claim that there are 33 shires of Scotland—though most likely un-WP:Likely to be challenged, they would do well with a source. Aaron Liu (talk) 15:53, 1 November 2025 (UTC)
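(Illustratively, the parallel drawn above looks like this in wikitext; this is just a sketch of the two forms of the same unsourced grouping, using the pages named in the comment:)
On the article Environmentally sensitive area:
[[Category:Protected areas of the United Kingdom]]
In the corresponding list or outline section:
* [[Environmentally sensitive area]]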
- The guidance at Wikipedia:Outlines says outlines are a type of list article and should follow the guidance at MOS:LIST for reference citations (and other sections). The MOS:LIST guidance in the section MOS:SOURCELIST says that inline citations are required for
any of the four kinds of material absolutely required to have citations
. At first glance, the near-total lack of references is surprising for these large pages, but having a source to support inclusion of every single entry under, say, Outline of political science § Political issues and policies seems unnecessary. —Myceteae🍄🟫 (talk) 17:23, 1 November 2025 (UTC)
- They're entirely valid navigation articles. See Outline of lichens and Outline of the Marvel Cinematic Universe for ones that have been classified as featured lists. Personally, I think they're neat features that could be interesting for our many rabbit-hole-oriented readers if we improved them and made them more visible. Thebiguglyalien (talk) 🛸 02:34, 2 November 2025 (UTC)
- I do not use outlines often, but I find them very useful as a reader. I don't think they need sources: Ideally they're more of a collection of links rather than the quasi-prose Doug linked above. Toadspike [Talk] 20:11, 4 November 2025 (UTC)
- But why do they need to be sourced? Nothing should be in them that doesn't tie into the parent topic(s) by category or obvious sourcing on the blue page. — Very Polite Person (talk/contribs) 19:57, 5 November 2025 (UTC)
- Outlines are a very bizarre shadow Wikipedia that basically exists because of one very prolific contributor duplicating the idea of portals and pushing them. There's fundamentally no scope to outlines—you could create an "Outline of" on any article despite there not being any notability criteria to use. The fact that they are decoupled from the actual state of the articles they shadow, that they rarely have required citations, and that they are a manual, brute-force way of doing things—like the aforementioned 'hierarchies of a subject' idea—means they would be better as a dynamically-generated product. Most of the arguments about them devolve into WP:ITSUSEFUL; it's telling that there are no keep arguments in Wikipedia:Articles for deletion/Outline of Wikipedia that actually cite any guideline or policy for why it should be kept. As far as I know, outlines were never actually formally discussed and ratified as A Thing by the community. Der Wohltemperierte Fuchs talk 20:37, 5 November 2025 (UTC)
- Do they need to be ratified, or is the community consensus of allowing them to build and evolve over ~15 years sufficient endorsement? It's just a visual sorted/tree-based view of articles around a subject. — Very Polite Person (talk/contribs) 20:50, 5 November 2025 (UTC)
- Well, we're discussing them now. And crucially, there wasn't much in the way of a coherent delete rationale presented at AfD. —Myceteae🍄🟫 (talk) 21:42, 5 November 2025 (UTC)
- The keep argument was that there was no deletion argument. What's needed is consensus to delete. Outlines were originally created as "List of basic topics of...", which being lists needed no additional affirmation. Besides the mass move discussion to "Outline of...", which I remember finding once but cannot find right now, consensus that outlines should be a thing was also established at Wikipedia:Village pump (proposals)/Archive 78#Alternative Outline Articles Proposal, whose parent proposal (closing down outlines) was {{CENT}}-listed.
that they rarely have required citations
I don't see how this is a problem inherent to outlines. Aaron Liu (talk) 22:55, 5 November 2025 (UTC)
RfC: Increase the frequency of Today's Featured Lists
[edit]
|
Increase the frequency of Today's Featured Lists from 2 per week to 3 or 4 per week, either on a trial basis, with the option to expand further if sustainable, or without a trial at all. Vanderwaalforces (talk) 07:02, 2 November 2025 (UTC)
- Background
Right now, Today's Featured List only runs twice a week; that is, Mondays and Fridays. The problem is that we've built up a huge (and happy?) backlog: there are currently over 3,400 Featured Lists that have never appeared on the Main Page (see category). On top of that, according to our Featured list statistics, we're adding about 20 new Featured Lists every month, which works out to around 4 to 5 a week. At the current pace of just 2 per week, it would take forever to get through what we already have, and the backlog will only keep growing.
Based on prior discussion at WT:FL, I can say we could comfortably increase the number of TFLs per week without running out of material. Even if we went up to 3 or 4 a week, the rate at which new lists are promoted would keep things stable and sustainable. Featured Lists are among our highest-quality content, yet they get less exposure compared to WP:TFAs or WP:POTDs, so trust me, this isn't about numbers, and neither is it about FL contributors being jealous (we could just be :p). Giving them more space would better showcase the work that goes into them. We could run a 6‑month pilot, then review the backlog impact, scheduling workload, community satisfaction, etc.
Of course, there are practical considerations. Scheduling is currently handled by Giants2008 the FL director, and increasing the frequency would mean more work, which I think could be handled by having one of the FL delegates (PresN and Hey man im josh) OR another experienced editor help with scheduling duties. Vanderwaalforces (talk) 07:03, 2 November 2025 (UTC)
- Options
- Option 1: Three TFLs per week (Mon/Wed/Fri)
- Option 2: Four TFLs per week (e.g., Mon/Wed/Fri/Sun)
- Option 3: Every other day, with each TFL staying up for two days (This came up at the WT:FL discussion, although it might cause imbalance if comparing other featured content durations.)
- Option 4: Three TFLs per week (Mon/Wed/Fri) as a 6‑month pilot and come back to review backlog impact, scheduling workload, community satisfaction, etc.
- Option 5: Four TFLs per week (e.g., Mon/Wed/Fri/Sun) as a 6‑month pilot and come back to review backlog impact, scheduling workload, community satisfaction, etc.
- Option 6: Retain the status quo
Discussion (TFLs)
[edit]
- Generally supportive of an increase, if the increase has the support of Giants2008, PresN, and Hey man im josh. Could there be an elaboration on the potential main page balance? TFL seems to slot below the rest of the page, without the columnar restrictions. CMD (talk) 10:01, 2 November 2025 (UTC)
- @Chipmunkdavis Per the former, yeah, I totally agree, which is why I suggested earlier that one of the FLC delegates could help share the load; alternatively, an experienced FLC editor or someone familiar with how FL scheduling works could assist. Per the latter, nothing changes actually; the slot for TFL remains the same, viewers only get to see more FLs than under the status quo. It might fascinate you that some editors do not even know we have TFLs (just like TFAs) on English Wikipedia, either because they have never viewed the Main Page on a Monday/Friday or for some other reason. Vanderwaalforces (talk) 17:06, 2 November 2025 (UTC)
- Support Option 2 with the Monday list also showing on Tuesday, the Wednesday list also showing on Thursday and the Friday list also showing on Saturday — Preceding unsigned comment added by Easternsahara (talk • contribs) 16:28, 2 November 2025 (UTC)
- Option 1, for two main reasons: (1) there is no reason to rush into larger changes (we can always make further changes later), and (2) FL topics tend to be more limited and I think it's better to space out similar lists (e.g., having a "List of accolades received by <insert movie/show/actor>" every other week just to keep filling slots would get repetitive). Strongly oppose any option that results in a TFL being displayed for 2 days; this would permanently push POTD further down, break the patterns of the main page (no other featured content is up for more than 1 day), and possibly cause technical issues for templates meant to change every day. RunningTiger123 (talk) 18:08, 2 November 2025 (UTC)
- Option 1 – Seeing the notification for this discussion pop up on my talk page really made me take a step back and ponder how long I've been active in the FL process (and my mortality in general, but let's not go there). I can't believe I'm typing this, but I've been scheduling lists at TFL for 13 years now. That's a long time to be involved in any one process, as this old graphic makes even more clear. Where did the time go? Anyway, I agree with RunningTiger that immediately pushing for 4+ TFLs per week, when we may not have enough topic diversity to support that amount, would do more harm than good, but I think enough lists are being promoted through the FL process to support an increase to three TFLs weekly. In addition, I agree with RT that we don't need to be running lists over multiple days when none of the other featured processes do. While I'm here, I do want to address potential workload issues. My suggestion is that, presuming the delegates have the spare time to take this on, each of us do one blurb per week. With the exception of the odd replaced blurb once in a blue moon, I've been carrying TFL by myself for the vast majority of the time I've been scheduling TFLs (over a decade at this point). If I take a step back and ignore the fact that I'm proud to have had this responsibility for the site for this many years (and that the train has been kept on the tracks fairly well IMO), it really isn't a great idea for the entire process to have been dependent on the efforts of a single editor for that long. I just think it would be a good sign of the strength of the TFL process for a rotation of schedulers to be introduced. Also, in the event of an emergency we would have a much better chance of keeping TFL running smoothly with a rotation. Of course, this part can be more thoroughly hammered out at TFL, but I did want to bring it up in case the wider community has any thoughts. Giants2008 (Talk) 01:42, 4 November 2025 (UTC)
- Option 1, and I'd be willing to do some TFL scheduling. --PresN 15:59, 4 November 2025 (UTC)
- Option 1, though I would support any permanent increase to the frequency of TFLs as long as the coords or other volunteers have the capacity for that. Toadspike [Talk] 20:13, 4 November 2025 (UTC)
- Option 4, let's see if some backlog can be cleared and evaluate the workload. Blue Riband► 01:00, 5 November 2025 (UTC)
Change the banner on the main page
[edit]
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Make it look like this instead: (see source code)
|
- I'm in support of such a change, but not until general navigation pages are revamped and updated to be useful to the readers who like to go down Wikipedia rabbit holes. For portals, see my proposal at User:Thebiguglyalien/Portal sample and User:Thebiguglyalien/Portal sample 2. The only reason I haven't formally moved toward their adoption is because I lack the technical abilities to make automation work. Thebiguglyalien (talk) 🛸 18:24, 2 November 2025 (UTC)
- Wikipedia:Village_pump_(idea_lab)/Archive_54#Portals on the main page. Cremastra (talk · contribs) 19:48, 2 November 2025 (UTC)
- @Vita69, you should probably look at the 2024 discussion that Cremastra linked, as well as the 2022 RFC that led to the current state. WhatamIdoing (talk) 03:58, 3 November 2025 (UTC)
- Looks pretty ugly to me: odd bullets beside the counts, and content strangely squashed to the sides. Checked on Monobook and Vector 2022 in a normal-sized browser window. Anomie⚔ 21:59, 2 November 2025 (UTC)
- Because it's a table layout. @Vita69: can you make this banner responsive/use div? sapphaline (talk) 13:50, 3 November 2025 (UTC)
- I disagree with the prominence given to portals, most of which are rarely updated and nearly dead. (t · c) buidhe 03:38, 3 November 2025 (UTC)
- Note that I've blocked the OP as a Confirmed sock. -- Ponyobons mots 20:59, 3 November 2025 (UTC)
Paraphrasing allowed for species descriptions about 'obscure' and 'newly described' species
[edit]
Hello all, I am running into a problem. I am adding articles about beetles. Many beetle species are very poorly studied. Hence, often there are only a few sources, or even just one, that give a description of the species (i.e. its appearance). Another editor stated that I am not allowed to use close paraphrasing when adding an article about a species to Wikipedia, and stated he intends to remove all of these paraphrased statements. I do not agree with his stance in this matter, because there is literally no other way to add these species descriptions. To make it clear: I do not just copy-paste the species description, and only use one or two sentences (a typical modern species description is about 1 page long). I will give two examples of what the other editor thinks is not acceptable, but I think is:
- I changed this: Original - "The head (except black mandibles and labrum), antennae (except antennomeres 8-11 black), and legs chestnut-brown; eyes and scutellum black; pronotum shiny reddish-brown with medial 3 black (with bluish reflections) longitudinal vittae- 1 medial and 2 lateral;elytra shiny reddish-brown with 3 shining black oblique vittae from lateral to sutural margins; venter and legs reddish." into this: Wikipedia entry - "The head, antennae and legs are chestnut-brown, while the pronotum is shiny reddish-brown with three black vittae with bluish reflections. The elytra are shiny reddish-brown with three shining black vittae." That is not a copyvio in my mind. How else should I ever get a species description on Wikipedia?
- This second source is in German and I translated and changed it: Original - "Beschreibung. Länge 7,4-7,7 mm, Elytrenlänge 5,4-5,7 mm, Breite 4,7-4,8 mm. Körper eiförmig oval, dunkel kastanienbraun, Oberfläche mit matter Beschichtung, Labroclypeus, Tarsen und Schienen glänzend, bis auf laterale Bewimperung und einige Borsten auf dem Kopf kahl." Wikipedia entry - "Adults reach a length of about 7.4-7.7 mm. They have a dark chestnut brown, oval body. The dorsal surface is dull and glabrous, except for the lateral cilia and some setae on the head."
I think that this would be ok, IF it is a species for which only very few sources are available to work with (for a lot of these species there is one source with an actual description, and some listings in checklists and databases, but nothing else). By the way, I am not the only one who feels that species descriptions should be free of restrictions. The database/website Plazi.org gives the following reasoning about the legality of using species descriptions published in copyrighted journals: [4]. I searched for any (legal) challenges to Plazi (could not find one). Did find this: Scientific names of organisms: attribution, rights, and licensing | BMC Research Notes | Full Text; it is mainly about databases and checklists, but also states this: "Taxonomic treatments are not copyrightable: Taxonomic treatments and descriptions of species are not copyrightable because they lack creativity of form. Rather, they are presented with a standardized form of expression for better comprehension." They also drafted a 'blue list', which includes components of names and taxonomy that are not subject to copyright:
- - A hierarchical organization (= classification), in which, as examples, species are nested in genera, genera in families, families in orders, and so on.
- - Alphabetical, chronological, phylogenetic, palaeontological, geographical, ecological, host-based, or feature-based (e.g. life-form) ordering of taxa.
- - Scientific names of genera or other uninomial taxa, species epithets of species names, binomial combinations as species names, or names of infraspecific taxa; with or without the author of the name and the date when it was first introduced. An analysis and/or reasoning as to the nomenclatural and taxonomic status of the name is a familiar component of a treatment.
- - Information about the etymology of the name; statements as to the correct, alternate or erroneous spellings; reference or citation to the literature where the name was introduced or changed.
- - Rank, composition and/or apomorphy of taxon.
- - For species and subordinate taxa that have been placed in different genera, the author (with or without date) of the basionym of the name or the author (with or without date) of the combination or replacement name.
- - Lists of synonyms and/or chresonyms or concepts, including analyses and/or reasoning as to the status or validity of each.
- - Citations of publications that include taxonomic and nomenclatural acts, including typifications.
- - Reference to the type species of a genus or to other type taxa.
- - References to type material, including current or previous location of type material, collection name or abbreviation thereof, specimen codes, and status of type.
- - Data about materials examined.
- - References to image(s) or other media with information about the taxon.
- - Information on overall distribution and ecology, perhaps with a map.
- - Known uses, common names, and conservation status (including Red List status recommendation).
- - Description and/or circumscription of the taxon (features or traits together with the applicable values), diagnostic characters of taxon, possibly with the means (such as a key) by which the taxon can be distinguished from relatives.
- - General information including but not limited to: taxonomic history, morphology and anatomy, reproductive biology, ecology and habitat, biogeography, conservation status, systematic position and phylogenetic relationships of and within the taxon, and references to relevant literature.
- - It would appear that no copyright law is infringed if a user extracts elements of the blue list from material that lacks legitimate user agreements.
They argue all of the above is not copyrightable. I can imagine Wikipedia would not just want to accept that as truth; however, I do feel this supports the argument that we could at the very least paraphrase these copyrighted sources, if we stick to one or two sentences, rewrite them, and only do it for 'obscure' species (so not for species like a kangaroo, a duck, etc., where countless sources are available, but for species like a mosquito that is endemic to one forest in Sumatra, or a mollusk described last year, etc.). B33tleMania12 (talk) 18:23, 2 November 2025 (UTC)
- Pinging some people: Moneytrees, Sennecaster, The Knowledge Pirate, Myceteae, WhatamIdoing. — Preceding unsigned comment added by B33tleMania12 (talk • contribs) 18:28, 2 November 2025 (UTC)
- As the edit was unsigned those pings will not have worked, so pinging on B33tleMania12's behalf: @Moneytrees, Sennecaster, The Knowledge Pirate, Myceteae, and WhatamIdoing:. Thryduulf (talk) 18:39, 2 November 2025 (UTC)
- Background: Wikipedia:Village pump (miscellaneous)#Question about Plazi.org and copyright. That discussion is still active-ish and includes links to other relevant discussions: Wikipedia talk:WikiProject Biology/Archive 3#Direct copies of species descriptions from external website and the related User:Moonriddengirl/copyright FAQ#Taxonomic descriptions; descriptions of facts. I'm just posting this for visibility; I think B33tleMania12 has done a decent job re-presenting the issue based on the most recent discussion and, with much appreciated assistance from Thryduulf, alerting other participants. —Myceteae🍄🟫 (talk) 19:15, 2 November 2025 (UTC)
- It'd probably be more pointful to ping people who know something about copyright, like Diannaa.
- Facts are not copyrightable, so (e.g.,) a fact "about the etymology of the name" is not copyrightable. But the expression of a fact can be (= is not always) copyrightable. Editors should write in their own words and sentences. However, if the expression is simple enough ("E. expertia was named after Alice Expert"), then even though Wikipedia wants you to write in your own words, that sentence wouldn't constitute a copyvio. WhatamIdoing (talk) 03:53, 3 November 2025 (UTC)
- I agree, something of de minimis originality is not copyrightable. Andre🚐 03:56, 3 November 2025 (UTC)
- We shouldn't have a wp article if there is only one source with significant coverage. (t · c) buidhe 05:22, 3 November 2025 (UTC)
- WP:NSPECIES does not have that rule.
- Long-term, if you'd like that to be a rule for all articles, then I suggest getting an actionable definition of "significant coverage" into the GNG. We still have disagreements about whether SIGCOV is about importance or volume, or if it is determined by the number of words in a source or the number of facts that could be used in an encyclopedia article. To give you an idea of how this matters, see User:WhatamIdoing/Database article, where I've written a 225-word-long Wikipedia article from a source that does not contain a single complete sentence about the subject of the article. Some editors say that source is SIGCOV, because obviously it covered enough facts for me to write a Start-class article about the subject, easily meeting the goal of SIGCOV as explained in WP:WHYN. And others say that it's not, because it's obviously impossible to have SIGCOV if the source presents the information about the subject of the article in any form other than multiple consecutive sentences of prose. WhatamIdoing (talk) 05:41, 3 November 2025 (UTC)
- Yes, I know it's against NSPECIES. My view is that it's a bad guideline because it leads to mass generation of low quality articles that are poorly watched and maintained. (t · c) buidhe 05:43, 3 November 2025 (UTC)
- Well the idea was to make the article longer than just one sentence saying where it lives, but I cannot if I am not allowed to use anything else. There is enough to write an article that is actually saying something about the species in the original description, but if there is no way to use it, the article will indeed stay a stubby sub-stub until someone else writes something about it. B33tleMania12 (talk) 07:41, 3 November 2025 (UTC)
- It's a good and much needed guideline in my opinion, and I do not really see the problems you mention. --Jens Lallensack (talk) 08:43, 3 November 2025 (UTC)
- Regarding "We shouldn't have a wp article if there is only one source with significant coverage", which you argue "lead to mass generation of low quality articles that are poorly watched and maintained." This article is what you can do with a single source: Maladera cardamomensis (luckily it is CC-BY, so no issues with using the species description). I think this is substantial enough to deserve an article. In essence, this could be done for every species, because there will always be a species description. But then again: we must be allowed to use it (hence this discussion) B33tleMania12 (talk) 11:34, 3 November 2025 (UTC)
- It is probably worth finding something more than a stub for future "This article is what you can do with a single source" arguments. Much more is possible, with a good enough source. CMD (talk) 16:15, 3 November 2025 (UTC)
- @B33tleMania12: The important thing is to not copy-paste anything that could be remotely considered to be a creative choice. In your first example, I would replace "shiny" with the synonym "glossy" (or "reflective"). In your second example, I would not copy-paste "chestnut-brown", but instead say "reddish-brown" (and pipe-link that to "chestnut (color)"), which is also more accessible to lay readers, and this is the term you use in your first example. More importantly, try to reduce/explain technical language (see WP:MTAU). The goal is to rephrase this to make it as understandable as possible. For example, three shining black vittae need to be explained; something like "On the elytra there are [your explanation], called vittae, that are black" and you will have a very different sentence. You could also change the structure by first describing general features rather than going section by section. For example, you could write something like "Both the pronotum ([explanation of term]) and the elytra ([explanation of term]) are red-brown with a reflective surface", followed by the details of these parts, and that would be very different from the source and easier to understand than the highly technical and formalized way the source puts it. --Jens Lallensack (talk) 08:43, 3 November 2025 (UTC)
- Thanks, that is valuable feedback! If that would be acceptable, I could definitely work with that. B33tleMania12 (talk) 08:49, 3 November 2025 (UTC)
- Could I formally request that @The Knowledge Pirate: hold off on trimming any content he deems copyvios until this discussion is done? When I started, I did not always add CC-BY and PD US Government tags. Adding these is of course no issue, but there are also many articles I made using sources that are not under a 'free' licence. Following his reasoning, these would be copyvios, and thus be removed. However, if the conclusion of this discussion is that they are not, they would have been removed and rev-del'd for nothing. B33tleMania12 (talk) 14:26, 3 November 2025 (UTC)
- Making the descriptions more accessible to a general audience is an added benefit here. —Myceteae🍄🟫 (talk) 15:44, 3 November 2025 (UTC)
- I agree it's better to have it rephrased in a more accessible format, but overall I think it should be ok to just copy-paste the defining species description, as long as this is legal of course -- it's much better to have than not, and rephrasing/explaining is a lot of work that can be done slowly after the description has been copied in: there are a lot of beetle articles. Mrfoogles (talk) 17:47, 5 November 2025 (UTC)
Proposal to speed COI edit requests
[edit]
When a new COI edit request is posted, it appears on Category:Wikipedia conflict of interest edit requests. When a volunteer starts to address the request, it can be tagged with the {{started}} template. But we still have to click on each request to go to the request on the talk page to see if it's been tagged with "started" yet. It would save time if the presence of the started template triggered some kind of visual alert on the category page. Currently, a lot of real estate and color coding goes to show that an article is edit protected, but that has very little impact on most editors handling these requests. Instead, if a field could be used to simply say "started" or "new" (default), it would make it easier for volunteers to clear the queue by highlighting new requests that aren't already being worked on by someone else. STEMinfo (talk) 23:46, 4 November 2025 (UTC)
- You're talking about User:AnomieBOT/COIREQTable, which is transcluded on the category page, right? jlwoodwa (talk) 01:40, 5 November 2025 (UTC)
- @Jlwoodwa: Yes - I didn't know there was another location for the queue. On the link you shared, there's even more empty space, so it seems there would be room to put in a "started" icon, or the word "started" in a started column, to help the volunteers. STEMinfo (talk) 00:07, 8 November 2025 (UTC)
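(As a sketch of the tagging step described above, assuming the usual reply convention for the {{started}} template; exact reply formats vary:)
Under the open request on the article's talk page, a volunteer replies:
:{{started}} Reviewing this request now. ~~~~
The proposal is then for the bot-maintained table to surface that marker, e.g. in a "Started" column, so others can skip requests already being handled.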
I do not believe this message, which appears when a temporary account attempts to exit its session, is necessary. The wikilinks in the message are currently broken due to T409630, and no good faith user would believe that it is ok to "disrupt Wikipedia, evade a block or ban, or to avoid detection or sanctions". The exit session dialogue is already cluttered enough, and the message can come across as assuming bad faith. Ca talk to me! 13:15, 8 November 2025 (UTC)
- Pinging translator @K6ka Ca talk to me! 13:19, 8 November 2025 (UTC)
- You can do such a thing? We should just get rid of that "feature", which has probably already been abused by vandals. Children Will Listen (🐄 talk, 🫘 contribs) 13:32, 8 November 2025 (UTC)
- We have disabled system messages before; simply replacing them with a "-" is usually enough to hide them. As for the message itself, I'm all for simplifying interface messages (as long as they're still informative enough) so I have no major issues with this message being hidden for us. —k6ka 🍁 (Talk · Contributions) 14:05, 8 November 2025 (UTC)
- I'm talking about the logout button offered for temp accounts. Children Will Listen (🐄 talk, 🫘 contribs) 14:07, 8 November 2025 (UTC)
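(A sketch of the message-blanking convention k6ka describes; the message name here is deliberately hypothetical, and the real key for this dialog would need to be identified, for example by viewing the dialog with ?uselang=qqx:)
MediaWiki:Hypothetical-end-session-warning, with the entire page content replaced by:
-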
- Ah yes, that feature wasn't too well documented. Yes, users of temporary accounts can use the "End Session" button to essentially log out of their temporary account (forever), no cookie-clearing required. I suppose there is a concern that it could be used for abuse, but it's not like a warning message would stop determined malice anyway. —k6ka 🍁 (Talk · Contributions) 16:48, 8 November 2025 (UTC)
- At a minimum, I support disabling the "Exit session" feature for blocked temporary accounts. Even if this only stops less determined vandals, removing the feature would still reduce the anti-vandalism workload. — Newslinger talk 16:15, 10 November 2025 (UTC)
- The only qualm I have with disabling the feature is that when using a TA, it adds an obnoxious gray bar at the top. Ca talk to me! 23:43, 10 November 2025 (UTC)
- I agree that being "logged in" to a temporary account offers a worse visual experience than being logged out. As someone who spends a lot more time reading than editing, I'll log out of a temporary account after making an edit to get back to normal. ~2025-32801-03 (talk) 11:24, 11 November 2025 (UTC)
- Support – makes no sense FaviFake (talk) 17:20, 11 November 2025 (UTC)
Royal family templates
[edit]
We have a number of "royal family" templates like Template:British royal family, Template:Danish royal family, Template:Monegasque princely family and so on (see Category:Royal and noble family templates and its subcats) which tend to use formal titles instead of recognisable names / article titles, making them IMO more obscure and unnecessarily deferential than is the standard on Wikipedia. I tried to correct this on the British one but was reverted[5].
I don't think it is reader-friendly or useful if a template e.g. has a link to "The Princess of Hanover" when we actually mean Princess Caroline of Monaco, to "The Dowager Princess of Sayn-Wittgenstein-Berleburg" when we mean Princess Benedikte of Denmark, "The Emperor Emeritus" when we mean Akihito, or to "The Duke of Sussex" when we mean Prince Harry, Duke of Sussex. I would propose as a rule that these templates should use the article titles they link to (minus unnecessary disambiguation if applicable) instead of the formal titles. Thoughts? Fram (talk) 09:45, 10 November 2025 (UTC)
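(In wikitext terms, the proposed rule would swap piped formal titles for direct article-title links, for example:)
Current: [[Prince Harry, Duke of Sussex|The Duke of Sussex]]
Proposed: [[Prince Harry, Duke of Sussex]]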
- One benefit of the formal titles is that they are resilient to change in who is holding the position; i.e. "The Queen" at Template:British royal family is always covered by 'The Queen' even if the person with that title changes. Katzrockso (talk) 11:55, 10 November 2025 (UTC)
- Hardly a good reason to keep these, as their position in the tree will often change anyway when the title holder changes (e.g. switching of King and Queen in the UK a few years ago). Fram (talk) 14:10, 10 November 2025 (UTC)
Should we adopt the new "protection padlock" feature?
[edit]The current practice when protecting a page is to:
- change the status to protected via the protection form, and then
- edit the page to insert a protection template at the top of the page
However, this:
- Requires extra editor attention. On the English Wikipedia, bots and scripts are used to add the {{Protection padlock}} template.
- Clutters the wikicode of the page, especially since it is placed at the top.
- Adds two extra edits to a page's history (one when the page is protected to add the template, and a second one, after the protection expires, to remove it) in addition to the protection revision history lines
- Inconsistent behavior across wikis causes confusion. For admins on the English Wikipedia a common pattern is: a page is protected with Twinkle, automatically adding the {{Protection padlock}} template, but the page needs to be reverted to remove vandalism, requiring another edit to re-add the template again.
There's a new MediaWiki feature that aims to fix this, announced in the newest tech news issue and mw:Help:Protection indicators:
MediaWiki can now display a page indicator automatically while a page is protected. This feature is disabled by default. It can be enabled by community request.
Starting with MediaWiki 1.43, protection indicators that are small lock icons on the top of a page might show up when a page is protected. This feature can be enabled using the setting
$wgEnableProtectionIndicators.
So, should we switch from using the {{Protection padlock}} template to MediaWiki's new automatic protection indicator? FaviFake (talk) 17:44, 11 November 2025 (UTC)
- The English Wikipedia's {{pp}} template has some features that the new built-in "protection indicators" lack (a brief usage sketch follows this list):
- The ability to function either as a small icon (the most common usage) or a large banner (as currently visible on Kajal Aggarwal).
- The ability to specify the reason for protection, which changes the icon's alt-text or the banner message. For example,
{{pp|dispute}} and {{pp|vandalism}}, which add Category:Wikipedia pages semi-protected due to dispute and Category:Wikipedia pages semi-protected against vandalism (resp.).
- The ability to distinguish between edit protection and move protection. The "protection indicator" feature seems to allow customization by protection level (full, extended, semi) and duration (finite vs. indefinite), but not by edit vs. move protection.
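(A sketch of the two template forms being compared, assuming the usual |small=yes switch; parameter behavior should be verified against the template documentation:)
{{pp|vandalism|small=yes}} (small padlock icon; categorizes the page under "semi-protected against vandalism")
{{pp|dispute}} (large banner; categorizes the page under "semi-protected due to dispute")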
- How much do these features actually matter? As a lowly <s>IP editor</s> temporary account holder, no idea. Admins would presumably have a better idea of that. Technically, some of these features could probably be added to the new "protection indicators" with additional CSS and templates, but the new system would end up no simpler than the present one. ~2025-32085-07 (talk) 18:44, 11 November 2025 (UTC)
- My first thought is that there would be no harm in adding a large banner in addition to the automatic indicator if one is justified, so the change would be positive for those pages which only have a small icon (most) and neutral for those which have a banner. However, that doesn't account for the categorisation and tooltip issues brought up by 32085-07. Thryduulf (talk) 19:48, 11 November 2025 (UTC)
- If I read the documentation correctly, the icons can still be overridden with templates that follow the correct structure. novov talk edits 22:28, 11 November 2025 (UTC)
$wgEnableProtectionIndicatorsis currently set to 'true' on az.wikipedia and sr.wikipedia so I checked those sites to get a better feel for what this could look like. Here are some examples of protected pages from the Azerbaijani and Serbian Wikipedias:- You can also set
?uselang=qqxto check the MediaWiki interface messages. - Summary of what I figured out from reading the documentation and looking at its use in production:
- The feature appears as a page indicator icon near the page title.
- By default, the icon is a black padlock which does not vary by the level of protection (as on sr.wikipedia).
- Using CSS it is possible to set up different icons for semi-protected pages, extended protected pages, fully protected pages, etc. (as on az.wikipedia).
- The icon's tooltip text is configured by MediaWiki:Protection-indicator-title for temporarily protected pages, or MediaWiki:Protection-indicator-title-infinity for pages that are indef protected.
- The icon by default links to mw:Special:MyLanguage/Help:Protection. However, the link target can be customized by editing MediaWiki:protection-autoconfirmed-helppage, MediaWiki:protection-sysop-helppage, etc. (see the sketch below).
- It only seems to deal with edit protection. A fully move-protected page without edit protection has no indicator icon whatsoever. Perhaps that feature will be added in a future version.
- ~2025-32085-07 (talk) 23:31, 11 November 2025 (UTC)
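(A sketch of the link-target customization from the list above; this assumes the content of the help-page message is simply the target page name, which should be verified at mw:Help:Protection indicators:)
MediaWiki:Protection-sysop-helppage, with its content set to:
Wikipedia:Protection policy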
- I find the different colors and letters on the protection padlock icons very useful. According to the anon's research above, this new feature somehow doesn't provide these basic distinctions. I don't think the advantages outweigh the missing features yet, but we should keep an eye on it to see if this is a feature that the WMF continues to develop or abandons half-built. If there are phab tasks for this new feature, someone could add a link to our protection icons so that they can catch up with what our volunteers have developed here at en.WP. – Jonesey95 (talk) 01:33, 12 November 2025 (UTC)
- With appropriate global CSS, it looks like the feature could handle applying the level-specific padlock icons (semi, extended-confirmed, full, etc.) for generic protections. You can see this on azwiki, where they've done some of that. Anything else, including where we want the title-text on the icon to talk about BLP or the like (e.g. {{pp-blp}}) instead of a generic message or when we want custom categorization, would still need a template overriding the feature. Anomie⚔ 02:51, 12 November 2025 (UTC)