
Wikipedia:Village pump (proposals)

From Wikipedia, the free encyclopedia

The proposals section of the village pump is used to offer specific changes for discussion.

Discussions are automatically archived after remaining inactive for 7 days.

Removing extended autoconfirmed time for WP:Tor exit node users


The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Currently, we block all WP:Tor exit nodes, such that any user wanting to edit through a Tor exit node first needs to contact an administrator and obtain the WP:IPBE user right before making any edits (i.e. convince an admin that you will edit constructively and not sock, which is a much higher bar than typical autoconfirmed). However, MediaWiki currently artificially extends the period a user needs to edit to become autoconfirmed to at least 90 days, with an edit threshold of 100 edits. Given the way we currently handle Tor IPs, this extended time period seems counterintuitive. Would it be a good idea to remove the 90-day, 100-edit barrier for WP:AC users (especially those who have been granted IPBE)? (cc @W00zles and @Stwalkerster, who were involved in the request, and @Risker, who I know has a lot of institutional knowledge of why certain features were implemented vs weren't) -- If the community is okay/open to the idea of reducing/removing the WP:AC time/edit extension, I'll file a phabricator task to reduce the period! Sohom (talk) 20:07, 26 September 2025 (UTC)[reply]

I'm generally supportive, but a couple of points. Global IPBE (torunblocked) can be granted by stewards, even for accounts which don't exist on this wiki. In those cases, and in some cases on this wiki, the bar can be very low - often we'll just take a look at whether we believe someone is in China, or Iran, or wherever. No edits required. That said, having a high bar for AC doesn't really add much in today's environment. -- zzuuzz (talk) 20:34, 26 September 2025 (UTC)[reply]
I support this motion. I will admit that this directly benefits me as a Tor editor. Failing the acceptance of this proposal, I would ask that the tools that check for autoconfirmed status be unified. A lot of sources don't agree on the autoconfirmed status of Tor users: the MediaWiki API returns that I am confirmed, but internal tools such as Special:UserRights indicate to me that I only have IPBE. My Preferences page also disagrees with the API. W00zles (talk) 21:58, 26 September 2025 (UTC)[reply]
Some groups are actually assigned in the database, while others are implicitly assigned at runtime. It appears that TorBlock changes the implicit assignment based on the source of the incoming request, so when using Tor you'll see the higher requirements being applied even when you look at the implicit groups of other users who never use Tor. OTOH, if you're making the API request via Tor and it's not applying the higher requirements, that's probably a bug. Anomie 22:18, 26 September 2025 (UTC)[reply]
@Anomie, I think the problem here is that they see very different results compared to me: if I make a query through the API, I see them as autoconfirmed, whereas when they try to edit a page that has an autoconfirmed restriction, they can't. If the answer is "keep the extended period", then we need better tools as administrators to know that this restriction is being applied. Sohom (talk) 13:37, 27 September 2025 (UTC)[reply]
I'm not certain what your proposal is here, so I'll give two answers. I think I'm opposed to reducing Tor autoconfirmed status below non-Tor status especially if GIPBE allows Tor editing, but I don't have any reasons why we shouldn't equalise it between Tor/non-Tor while a blanket ban on non-(G)IPBE Tor users exists so I'm weakly supportive of that. stwalkerster (talk) 22:28, 26 September 2025 (UTC)[reply]
To clarify, I want it back to normal levels that any other non-auto-confirmed user would face. Sohom (talk) 13:32, 27 September 2025 (UTC)[reply]
I support. Tor requires IPBE, which isn't exactly an easy feat for a new user. I don't see much of a reason to have the higher bar these days, with the global ban on Tor. EggRoll97 (talk) 15:20, 27 September 2025 (UTC)[reply]
  • Responding to ping. Tor is a form of open proxy, which is generally banned throughout the Wikimedia world for a lot of complex reasons. There are attribution issues, there is the long-term, dramatically higher rate of vandalism, and there is the simple fact that Tor historically was used regularly by two groups: vandals and activists of various stripes. I genuinely do not know if the extended period for autoconfirmed applies only to those using Tor, or if it applies to all users with IP block exemption. (Logically, it should apply to all accounts with IPBE, because there is no "special" permission above that to enable access via Tor.) All known Tor exit nodes are blocked globally, and pretty much have been since the "no open proxies" global policy and philosophy was introduced. In response to EggRoll97, I can honestly say that probably half of IPBE requests coming in through the central checkuser VRT queue are from comparative newcomers; in about 5-10% of cases, we actually have to create their accounts for them. It's really not hard to find us or to ask for it.
    I'd suggest that the real debate here is whether the no open proxies policy needs to be revisited, in light of (often justified) concerns about personal internet security throughout the world, not just regions with authoritarian governments. I realize that doesn't really answer the subject of this thread, but on reading things through, I can't tell if this is a local issue or a global one; and given my personal opinion on autoconfirmed is that I'd rather increase the requirements for everyone, I'm probably not the best person to weigh in. Risker (talk) 15:43, 27 September 2025 (UTC)[reply]
    My recollection of the discussion is that there was seeming consensus for 7 days/20 edits, but (as I recall), the devs went with 4/10, as a preliminary test. Perhaps enough time has gone by to revisit this. - jc37 04:24, 14 October 2025 (UTC)[reply]
  • I should clarify: my main reason for calling it not an "easy feat" is the specific case you mentioned, which is that they would need to go through the VRT queue. That's already a lot just to get an account created, involving manually contacting CheckUsers (I don't believe there's really an automated way to just submit a request to the CU VRT queue). I know it probably doesn't sound like a lot of work, but compared to just creating an account as normal, that's a much more involved process.
    Side note, I do think it probably would be good to start discussing autoconfirmed being bumped a bit, the low standard may be intentional, but it feels far too easy to game autoconfirmed socks with the current requirements. EggRoll97 (talk) 16:21, 27 September 2025 (UTC)[reply]
    I can say in my personal experience attempting to get my account usable was a real challenge. I had to go to an admin in real life and ask them to confirm me. I never got a response from the stewards when I made the IPBE request after getting my account made for me over Tor, leaving me with little ability to do anything.
    I am not opposed to a raised autoconfirm requirement due to vandals but I do think, as mentioned in the other reply thread, it should be equal at a minimum between IPBE and not. W00zles (talk) 23:10, 29 September 2025 (UTC)[reply]
  • Per the generally positive outlook, I've filed a RFC below. -- Sohom (talk) 14:56, 3 October 2025 (UTC)[reply]


RFC: Should we equalize/remove the difference in autoconfirmed time between non-Tor and Tor users


Background: currently, we block all WP:Tor exit nodes, such that any user wanting to edit through a Tor exit node first needs to contact an administrator and obtain the WP:IPBE user right before making any edits (i.e. convince an admin that you will edit constructively and not sock, which is a much higher bar than typical autoconfirmed). However, MediaWiki currently artificially extends the period a user needs to edit to become autoconfirmed to at least 90 days, with an edit threshold of 100 edits. This is enforced by the TorBlock extension, which was added some time in 2008. Since then, our policies have shifted: in the current day, due to our No open proxies rules, editing through Tor exit nodes is typically always blocked locally (and many times globally). Due to this, the bar for editing through Tor proxies has become "request the IPBE user right" plus the aforementioned extended autoconfirmed requirement. Given this, I would like to propose that we remove the special extended time period to get autoconfirmed for Tor users, and instead equalize the bar for receiving the autoconfirmed user right for both Tor and non-Tor users. -- 14:56, 3 October 2025 (UTC)

!Votes (autoconfirmed for Tor users)


Discussion (autoconfirmed for Tor users)


Some clarification is needed here. The extension involved is a core Wikimedia MediaWiki extension and is deployed on all Wikimedia projects. I'm not seeing any discussion about whether this specific parameter is variable project by project, or if changing this parameter would affect all Wikimedia projects equally. I am also very concerned that the cart appears to be being put before the horse here. A lot of the comments above have made it clear that there are serious concerns about the current auto-confirm parameters being too low, and it seems rather absurd to reduce the parameters for Tor users to current levels if it is fairly likely that a similar discussion would increase the levels for non-Tor users. How about we settle what the right parameters are for auto-confirm, before we go monkeying around with a parameter that affects perhaps 25 people throughout the entire project? Risker (talk) 17:41, 21 October 2025 (UTC)[reply]

@Risker, with regard to your first point, Extension:TorBlock has two parameters, $wgTorAutoConfirmAge and $wgTorAutoConfirmCount, both of which need to be met (over and above the normal autoconfirmed threshold) for a user to be autopromoted. These two variables can be configured to different values on a per-wiki basis (as with all other configuration values); in this case, the change would only be made on enwiki. By setting them to zero, the normal autoconfirmed threshold takes over, and if and when we decide on an elevated threshold, the newer elevated threshold will apply to Tor users as well by default (i.e. with zero additional configuration changes). Your assertion above that After we increase the requirements for autoconfirmed for non-Tor accounts, then ironically Tor accounts will hit autoconfirmed before anyone else. is just incorrect: based on my reading of the code here, if the Tor configuration value is lowered below the normal autoconfirmed threshold, the normal autoconfirmed threshold will prevail. I think you are conflating two very different things here: this RFC is "should Tor and non-Tor users have the same autoconfirmed threshold, regardless of what it is", not "Tor users will be forever assigned the lower current threshold". -- Sohom (talk) 22:19, 21 October 2025 (UTC)[reply]
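As a rough sketch, the per-wiki override described above might look something like this in a wiki's configuration. The variable names $wgTorAutoConfirmAge and $wgTorAutoConfirmCount are from Extension:TorBlock as cited in the thread; $wgAutoConfirmAge and $wgAutoConfirmCount are the standard core MediaWiki autoconfirmed settings; the values shown are illustrative (the 4 days/10 edits figure comes from the comment above), not Wikimedia's actual configuration:

```php
<?php
// Illustrative LocalSettings.php-style sketch, not actual Wikimedia config.
// Standard (core) autoconfirmed thresholds:
$wgAutoConfirmAge = 4 * 86400; // 4 days, in seconds
$wgAutoConfirmCount = 10;      // 10 edits

// TorBlock's extra thresholds for users editing via Tor exit nodes.
// Setting both to zero removes the Tor-specific extension of the
// requirements, so the core thresholds above apply to Tor and
// non-Tor users alike.
$wgTorAutoConfirmAge = 0;
$wgTorAutoConfirmCount = 0;
```

With both TorBlock values at zero, any future increase to the core thresholds would automatically apply to Tor users as well, with no further configuration changes.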
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Permission to create beetle articles at a pace that exceeds the normal limit


The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Hello everyone. I have been creating stubs and expanding existing articles on beetle species for a while. They range from very short to start-class (I guess). If they are very short, they always include: a taxobox (with all relevant info, including all synonyms), the distribution of the species, and its host plant/prey (if known). It seems I create articles at a pace that has raised some eyebrows among fellow Wikipedians; see User_talk:B33tleMania12 and Wikipedia_talk:Notability_(species) to read all about it. What I am requesting is permission to add beetle articles like I am doing (which may include a range of these very short articles). I was told that the normal daily limit is between 20-50 articles per day. I guess that if I could get a waiver to create about 100 per day, that would at least make clear to everyone that what I am doing is OK. That being said: I am now working on a source that allows me to create these very short basic stubs. I do intend to get back to making more sizeable articles after that; see for example all the species articles I created for the genus Cephaloleia (but I might in the future like to work on a list of these basic stubs again if I find a good source). B33tleMania12 (talk) 22:26, 26 September 2025 (UTC)[reply]

Oh, I don't make automated or semi-automated articles. I use a template in Notepad which includes the static info. That is why the request is not a pure MASSCREATE request. B33tleMania12 (talk) 22:29, 26 September 2025 (UTC)[reply]
And to add some more: this is the type of article people are opposed to: Draft:Agoniella_horsfieldi B33tleMania12 (talk) 22:37, 26 September 2025 (UTC)[reply]
I think this is okay. The subjects are all notable per Wikipedia:Notability (species), and there is basically no chance of them being deleted. A spot check shows that he's citing two sources in each stub. The {{Taxonbar}} links indicate that most of these have another half-dozen reliable sources available. Unlike other species, there doesn't seem to be an authoritative, comprehensive database for beetles, so I'm glad that he's checking them individually. Looking at articles such as Brontispa cyperaceae, even the shortest ones provide basic information (name, type, family, location, what it eats), plus the usual {{Speciesbox}}. Articles such as Gonophora whitei provide even more (that one even qualifies for WP:DYK).
"The normal rate", or at least the normal maximum under WP:MASSCREATE, is 25–50 articles per day. I think B33tlemania12 recently created about 100 or so articles in one day. That's a lot, but reviewing these is so much quicker and easier than for, e.g., BLPs, that it's IMO not unmanageable on our end. I have seen discussions about the creations, and none of the comments are about an unfair load on page patrollers.
Based on prior discussions, I believe that any complaints will take one of two forms:
  • People who want everyone to Wikipedia:Beef up that first revision will be unhappy that some of these are only a few sentences long, even if the creator goes back to expand them later – which the creator seems to do when sources are available.
  • People who (incorrectly) think that systematic article creation (as opposed to hopscotching through whatever catches your eye) turns Wikipedia into a database. WP:NOTDATABASE is about not putting unexplained raw data into an article; it's not about writing an ordinary sentence about whether we know what a given beetle eats.
As someone has unilaterally draftified a few of these, I want to say that there is no point in sending these through AFC; species articles are evaluated only for notability, which means that species articles get kicked right back out into the mainspace. (AFC's job is not "protect my delicate eyes from all these WP:UGLY little stubs"; its job is to determine the notability of the subject, rather than the beauty of the current revision, and to get pages moved to the mainspace as soon as the AFC reviewer is confident that the article is unlikely to be deleted if it's sent to AFD.)
I don't think that either of these are valid reasons to prevent or slow down the creation of these articles. IMO these are decent, if usually brief, articles on obviously notable subjects, and the creator should be encouraged to continue creating articles with a minimum of two cited reliable sources. If someone wanted to create, say, 500 articles a day, then I think we should have another discussion, but for anything in the 500 a week range, I think we're just fine. WhatamIdoing (talk) 23:22, 26 September 2025 (UTC)[reply]
" The links indicate that most of these have another half-dozen reliable sources available. " Nope. E.g. Aspidispa maai has 8 links in the taxonbar, but at least two of them (Wikidata and iNaturalist) are not reliable sources, others are not independent of each other: Open Tree Taxonomy is an automatic representation of data from GBIF, also included in the taxonbar, similarly Catalogue of Life is a rehash of ITIS, and ITIS is already a reference in the article anyway. I wasn't able to access BioLib (thanks to AI scrapers overload), but the others are at best interdependent databases, not a series of reliable independent sources. Fram (talk) 13:45, 30 September 2025 (UTC)[reply]
Sources don't have to be independent of each other to be reliable. WhatamIdoing (talk) 19:14, 30 September 2025 (UTC)[reply]
No, but we don't count them as multiple sources (e.g. multiple newspapers reposting the same Reuters report = one source). The Taxonbar doesn't indicate what you claimed it did. Fram (talk) 08:50, 1 October 2025 (UTC)[reply]
The notability of a species doesn't depend on how many sources exist.
Those links overlap, but I think most of those sources also contain some unique information. For example, in Aspidispa maai, the first link in the taxonbar has very little information, but gives the family's common name of 'leaf beetles'; the second link says that it's an accepted species, which isn't in the first, and doesn't say anything about the common name, which is; the third link says it has bilateral symmetry, which isn't in either of the prior ones, and so on. Unlike the example of a Reuters or other wire service article, they are not simply duplicates of each other. WhatamIdoing (talk) 15:30, 1 October 2025 (UTC)[reply]
Your first sentence is a reply to nothing. The "first link in the taxonbar" is to Wikidata, which is not a reliable source, like I said already. The second link is another wiki, again not a reliable source; the third one is one I had no issue with; and then you "and so on" the ones I actually did discuss, to indicate that your "another half dozen reliable sources" was incorrect (three wikis, one other already used as a reference, so immediately you are down to 4 anyway; and then something like the "Catalogue of Life" entry[1] is just a copy of ITIS, the exact same data presented in a different layout; this is not "another" reliable source, this is two instances of the same database, just like Open Tree is a GBIF copy, not a new source). So the taxonbar on this article gives us two new databases. Fram (talk) 15:57, 1 October 2025 (UTC)[reply]
Well, for this subfamily, there will always be at least three reliable sources (if you include ITIS and its 'copies', if you want to classify them as such). That is: 1. the World Catalogue of Hispines, 2. the original description of the species (this must exist for it to be an accepted species) and 3. ITIS. For most, more will exist, but I think these three would be enough to get an article at least to the level of Aspidispa maai (which could be longer if I had included more detail about the species description, but I think that would make it too technical, and I don't want to add words just to meet a certain imaginary threshold). I do copy the full species description from old sources sometimes, because they are written in such a way that makes them hard for me to rewrite (I do try to make proper sentences, though, which is often not the case in the original). All of that being said: I won't be able to get access to point 2 sources (the original description) if they are behind a paywall or in a language I don't understand (and if Google Translate isn't able to make me understand). However, even if that part of the article were missing (for now, or worst case forever), as I understood it, these would still be acceptable articles. So the debate is not IF I am allowed to make them, but at which pace. At least, that was the intention of the discussion. To add to this (I won't use any user names, to spare them): I have seen other articles being added constantly that are lower quality (bare URLs, only one sentence, etc.), and I have looked at the talk pages for these users, and there is no discussion (sometimes suggestions to improve that are not followed by that user, at which point nothing happens), so I am wondering what that is about. Besides not following the suggestion to merge species into genus pages, and making too many in a short amount of time, I have (I think) taken to heart every comment made by others to improve the articles.
Anyway: I think there will be no end to this discussion, and I don't know how this works now... The question was if I could make more than the normal daily limit. So far, there is no clear answer, I think? B33tleMania12 (talk) 17:37, 1 October 2025 (UTC)[reply]
I was not commenting on the request you made (although I would prefer in general that much more often similar short articles would be combined into lists, by all editors), I was commenting on typical clearly incorrect statements by WhatAmIDoing. Requests like yours should be discussed on their actual merits, not on misconceptions from other editors. Fram (talk) 08:21, 2 October 2025 (UTC)[reply]
My first sentence ("The notability of a species doesn't depend on how many sources exist") is a reply to your comment that we don't count them as multiple sources – as if that could be a problem, given that WP:NSPECIES does not require multiple sources.
I ignored Wikidata in counting the links, because obviously we don't use wikis as reliable sources. I'm not sure where BioLib falls in the WP:UGC spectrum. The website says "Unlike other systems such as Wikipedia users are not directly changing content of BioLib but the added data is added as "unconfirmed" and our administrators review it first before accepting it among verified data". This constitutes "editorial oversight", to use the wording in WP:V and WP:RS, but is it good oversight, or rather perfunctory? Someone would have to know more about that site than I do. At first glance, though, I wouldn't describe it as a wiki myself. WhatamIdoing (talk) 23:41, 2 October 2025 (UTC)[reply]
I agree with the above that this seems OK. Being efficient and systematic are not problems. It sounds like enough care is being taken to avoid bad information being included in the encyclopedia, which is the most important thing. Stepwise Continuous Dysfunction (talk) 15:36, 27 September 2025 (UTC)[reply]
I likewise agree. People may prefer longer articles, but these are longer than the alternative, which, unless you are prepared to write about the subject yourself, is of zero length. Phil Bridger (talk) 16:41, 27 September 2025 (UTC)[reply]
Ok, thanks for your input all. I do not know how much time must pass to come to a conclusion, but I am going to at least move the drafted stuff back to Live. B33tleMania12 (talk) 12:04, 28 September 2025 (UTC)[reply]
I think this is fine. Especially with species articles, stubs are much better compared to the longer articles of AI slop that we have been getting en masse in this topic area lately. Gnomingstuff (talk) 17:09, 27 September 2025 (UTC)[reply]
  • I do question whether having separate articles on every species of beetle is the right approach (I would probably go with a list), but that is more a quibble with the notability guideline itself and not with the rate at which the articles are created. THAT is more an issue of not overwhelming new article reviewers. If they are comfortable with this, I don’t see a problem. Blueboar (talk) 17:12, 27 September 2025 (UTC)[reply]
  • Oppose. If all the content you have for an article is this then please do not create a new article. Consider adding it to a list instead, as I have suggested to you multiple times. I do not understand why editors want to create so many low quality articles. You should get more satisfaction from writing a featured list and this would serve readers much better. In my opinion 25 new articles in a day is plenty, and allowing more means that editors will not be giving due attention to the new articles they create — Martin (MSGJ · talk) 08:26, 30 September 2025 (UTC)[reply]
    It is not the only thing I have, but like I said numerous times also: it is easier to first make the page at the right place (i.e. get the taxonomy right) before adding more detail. If I add more info right away, I have to do a dive into taxonomic changes for each individual species. The source to expand the article you just mentioned would be this: Wayback Machine, but I would like to get the species pages added first, so I don't waste time adding species that have now been placed in synonymy, moved to other genera, etc. B33tleMania12 (talk) 08:40, 30 September 2025 (UTC)[reply]
    Do we have any statistics that show how likely a separate article is to be expanded, as compared to an entry on a list? Intuitively it feels to me that a separate article is much more likely to be expanded. Phil Bridger (talk) 10:51, 30 September 2025 (UTC)[reply]
    The point of comparison would be what articles are created from a redlink on a list (or unredlinked item?), rather than expansion on the list itself. CMD (talk) 13:01, 30 September 2025 (UTC)[reply]
    Depends on the list? A table-formatted list, or one with descriptions for each item, might see expansion. A list of bare names (see Aspidispa#Species, for the example Martin gives) is unlikely to be expanded in the list. I guess the answer to your question is: B33tleMania12 made that list of redlinked species, and is now turning the red links blue, and Martin recommends that they should go make the list...that they already made. WhatamIdoing (talk) 15:25, 30 September 2025 (UTC)[reply]
    What was my question? CMD (talk) 17:22, 30 September 2025 (UTC)[reply]
    I don't think it's a good idea to tell editors what kind of contribution "should" be satisfying to them.
    Also, Martin, the time to say that people should make lists instead of short articles was in Wikipedia talk:Notability (species)/Archive 2#Proposal to adopt this guideline (where that view was rejected). These aren't "low-quality articles". These are "short" articles. 16% of Wikipedia's articles have three sentences or less. 38% of Wikipedia's articles have two refs or less. These are not weirdly short outliers compared to the rest of Wikipedia's articles. WhatamIdoing (talk) 15:15, 30 September 2025 (UTC)[reply]
    Yeah, and I'd rather that admins didn't restore potential copyright violations[2] added by an editor with a history of source fraud, socking, and nationalist POV stuff, after being told they were likely copyright violations[3], and I'd also really like it if their most recent article creation didn't have a bunch of close paraphrasing of a tourism site (ad) that they split from the main article, but I guess those were more satisfying? Beetle guy's article contributions are higher quality than that, anyway. If they want to make these articles, then I say let them. GreenLipstickLesbian💌🦋 09:39, 1 October 2025 (UTC)[reply]
I'm with Martin, I don't support this but I also don't necessarily oppose it. It's worth noting that you've been denied autopatrolled twice, most recently in August, so I don't know if mass-creating would be the best idea. EF5 13:11, 30 September 2025 (UTC)[reply]
To be fair, one of those denials was down to the fact that the editor creates articles that 'are easy reviews' (which did strike me as an odd rationale!), and the other was that the editor hadn't created any long articles (when they clearly create very short ones). It appears to me that many, many of these species-type stubs are out there, and the bar for creation (i.e. not applying WP:GNG to the letter) appears to have long been set low for species stubs, no? Best Alexandermcnabb (talk) 14:20, 30 September 2025 (UTC)[reply]
The GNG doesn't apply, because Wikipedia:Notability (species) is the relevant guideline. WhatamIdoing (talk) 15:16, 30 September 2025 (UTC)[reply]
It's not that odd a rationale if you consider why autopatrol exists. It is a means of reducing the work involved in new page patrolling, rather than a hat to be collected. Phil Bridger (talk) 15:47, 30 September 2025 (UTC)[reply]
Fair enough, but an editor creating hundreds of valid species articles (and I couldn't find one that NPP had rejected/tagged/draftified) could - and I believe should - be autopatrolled, even if each article isn't a huge overhead to review. But, hey, that's just me... Best Alexandermcnabb (talk) 16:10, 30 September 2025 (UTC)[reply]
I did not ask for autopatrol myself, so I was not aware that I was 'denied'. I do not care for myself either, but I guess it would help others. Anyway: it seems reasonable not to give autopatrol too soon. You never know how that might turn out in the end. B33tleMania12 (talk) 16:32, 30 September 2025 (UTC)[reply]
Oppose this is a WP:PAGEDECIDE issue and here the answer is often "no". Instead of aiming for completion, we should ask: do these articles help the reader? And the answer is no, because the reader wants an encyclopedia article, which is not what they're getting. Wikispecies is a worthy project, but it's not this project. Microstubs which have very little hope of expansion aren't helpful; they're an annoyance. Cremastra (talk · contribs) 21:45, 30 September 2025 (UTC)[reply]

Support On condition that you make the new articles as fleshy as possible. A meaty stub of no less than 1 kB of readable prose would be ideal. Given that I've been running contests to reduce the number of stubs, I would prefer it if you were able to generate start-class ones over 1.5 kB if possible. Rate of creation is of no issue to me as long as they are accurate and error-free. No short database stubs please.♦ Dr. Blofeld 20:19, 5 October 2025 (UTC)[reply]

I looked at some of the articles they made. It seems like the articles he makes are 200 bytes or under. Mikeycdiamond (talk) 20:42, 5 October 2025 (UTC)[reply]
  • Support and I say this as a reviewer who has accepted hundreds of their creations already. I don't see any policy-based reason for these species to not be deserving of articles. Lynch44 00:20, 17 October 2025 (UTC)[reply]
  • Oppose Pages like Prionispa vethi are incredibly low-quality. Articles should not be created if you aren't going to include more than a statement of existence and a country. Improvement of genus articles like Prionispa would be more beneficial than countless species pages. Reywas92Talk 14:35, 17 October 2025 (UTC)[reply]
    Well, the idea is to expand them to something like this: Oncocephala camachoi, or this: Oncocephala cuneata. However, especially for species that were described long ago, it would be highly beneficial to have them at their current name and with synonyms. If they are, you can just go and insert the original descriptions from that old source into the stubs already in place. That, at least, was the whole idea. To get those basic articles on Wikipedia, however, I would need to add them. I try to expand the ones I happen to have a good source for right away; however, other sources might be behind paywalls or scattered all over the internet. All that aside, though: the discussion is about the pace, not about the articles themselves, since I now know enough to be confident that they are allowed. B33tleMania12 (talk) 15:56, 18 October 2025 (UTC)[reply]
    Just noting that the first two sections of the first article you cited cite no sources. Mikeycdiamond (talk) 16:53, 18 October 2025 (UTC)[reply]
    Ok, Uhm.. all of the above is the source at the bottom. How else should I do it? B33tleMania12 (talk) 19:41, 18 October 2025 (UTC)[reply]
    Adding the citation at the end of each paragraph is usual. Cremastra (talk · contribs) 19:46, 18 October 2025 (UTC)[reply]
    @B33tleMania12, open it in the visual editor (https://en.wikipedia.org/wiki/Oncocephala_camachoi?veaction=edit should work). Put your cursor at the end of a paragraph, where you want the little blue clicky number to end up. Click the "Cite" button in the middle of the toolbar, and choose the "Reuse" tab. Click the big blue button to publish your changes. The software will handle everything automatically for you.
    This can also be done in wikitext with WP:NAMEDREFS. WhatamIdoing (talk) 19:51, 18 October 2025 (UTC)[reply]
    Ah ok, thanks, I will try that! B33tleMania12 (talk) 23:06, 18 October 2025 (UTC)[reply]
    Wow, that works like a charm, thnx! B33tleMania12 (talk) 23:11, 18 October 2025 (UTC)[reply]
    You're welcome. WhatamIdoing (talk) 23:45, 18 October 2025 (UTC)[reply]
  • Oppose - We should not be generating possibly hundreds of thousands of articles that will never be expanded. No objection to redirecting to lists. FOARP (talk) 12:58, 18 October 2025 (UTC)[reply]
    So... just to be clear here, the question isn't whether we are "generating possibly hundreds of thousands of articles that will never be expanded". Notable subjects are notable even if the article is WP:UGLY. The only question here is whether they'll be created at a rate of, say, 20 a day vs 60 a day. (Also, we have evidence that this editor actually does expand stubs later.) WhatamIdoing (talk) 18:53, 18 October 2025 (UTC)[reply]
  • Support - we have in this case a relatively recent notability guideline (WP:NSPECIES) with community consensus, supported by an active project that includes subject-matter experts. We have an editor who is following that guideline and producing results that conform to our policies and guidelines. That editor would like flexibility in creating up to 500 articles per week rather than 50 per day - I see no reason to deny them.
  • I've seen a few !oppose votes by editors who are using this request as a pretext to object to NSPECIES. I don't think those arguments should carry any weight in the decision about the request, since they are in effect arguing that these policy- and guideline- compliant articles should not be created at all. Such an argument is off-piste in a discussion of whether 200 or 500 is a better number of articles to create in a week, IMO. Newimpartial (talk) 20:09, 18 October 2025 (UTC)[reply]
  • Comment It seems to me that sometimes there is genuinely very little to say about an individual beetle. In other words, we cannot expect these articles to be expanded much because there is literally no more information to add. Is that a fair take? Or is the intent to create the articles first and expand them down the line? Pinguinn 🐧 23:02, 21 October 2025 (UTC)[reply]
    No, I don't think that's true. Look at an article like Callispa cumingii (created by the OP in August and expanded by the OP in September). It's 12 sentences/more than 200 words long with two sections after the lead. For reference, the median English Wikipedia article has just 13 sentences, so this is definitely a statistically normal size of article for the English Wikipedia. Since we spend more time at bigger articles, I think editors forget what an average article looks like. We have never had a rule requiring a minimum size for articles. I've just added a table to WP:SIZERULE that should let editors compare the size of an article against the distribution. For example, 20% of our articles have ≤100 words, a third of them cite ≤2 sources, and so on. Whenever you're feeling like an article is unusually short, it's probably worth checking the table, because "feels too short" often falls into the statistically normal range for article length.
    It might be possible to expand articles like Callispa cumingii even further (e.g., in this case, to say that the first published scientific description was by Joseph Sugar Baly in 1858, whether the beetle causes agricultural/economic concerns, what the binomial name means...). There will be some for which this much isn't easy to do, but only because interested editors have trouble getting their hands on the right sources, not because it isn't possible in principle. WhatamIdoing (talk) 01:44, 22 October 2025 (UTC)[reply]
    This assumes that most Wikipedia articles are already the size they should be, rather than also being too short. Fully 36% of our non-redirect, non-disambig mainspace pages are explicitly marked as in need of expansion, either with a stub template (the overwhelming majority), one of the article- or section-scope templates listed at Template:Expand, or both. Being as long as the median article, when it's been skewed low by that many articles that are known to be too short, doesn't mean it's good enough. —Cryptic 02:28, 22 October 2025 (UTC)[reply]
    This assumes that (nearly) all non-deleted Wikipedia articles are a size that the community actually does accept, which is IMO a valid definition of "good enough".
    If we want to have a rule that says "Articles must have at least X words in them within 24 hours of their creation", then let's make that rule, but let's not pretend that there already is such a rule and that therefore these articles are somehow non-compliant with the non-existent rule. (Though if you want to make new rules, I'd suggest trying to get consensus for a rule saying that "All articles must cite at least one source, even if the article is so short and boring that it contains no information for which an inline citation would normally be required" first. Because we don't actually have that rule on the books either, no matter how much we like to pretend otherwise.) WhatamIdoing (talk) 02:48, 22 October 2025 (UTC)[reply]
    Just commenting on the initial question: yes, the idea is to expand if I have access to a source that lets me. Every species has at least a description. Problems are: sometimes I don't have access to that description (pay-walled stuff) or (especially in the case of the group of species I am working on now [but am almost done with]) they were originally described in German/French/Italian or even just in Latin. I have found a lot of these original descriptions, but I am not confident in my ability to properly translate these descriptions. For a lot of the species in that category, there are also descriptions in English available, but these are often also pay-walled. B33tleMania12 (talk) 07:44, 22 October 2025 (UTC)[reply]
    For an example of a species described in German, with a later source giving a description in English, see Acentroptera nevermanni. B33tleMania12 (talk) 07:46, 22 October 2025 (UTC)[reply]
    If you have the original descriptions but are not confident in translating them, just include them as an external link. CMD (talk) 08:22, 22 October 2025 (UTC)[reply]
    Ok, that is a good idea indeed. I will try to do that from now on! B33tleMania12 (talk) 08:29, 22 October 2025 (UTC)[reply]
    Historic sources can also be listed in a Wikipedia:Further reading section. That's helpful even if the source is not available online. Wikipedia:External links must be readable by most people online. WhatamIdoing (talk) 19:36, 22 October 2025 (UTC)[reply]
    Ok check, will try to include these as well B33tleMania12 (talk) 22:44, 22 October 2025 (UTC)[reply]
  • I support what this editor is doing. It's a little unorthodox and understandably raises some eyebrows. In addition to filling content gaps, this editor exhibits many desirable attributes by seeking input here and acknowledging the value of new page patrol. Thank you, B33tleMania12, for your contribution to the encyclopedia! —Myceteae🍄‍🟫(talk) 16:02, 25 October 2025 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

RFC: Should slogans be removed from ALL infoboxes?

[edit]

There have been some perennial discussions about removal of |slogan= from various infoboxes, but I could not find a case that discussed making WP:SLOGAN essentially policy.

In recent years, the slogan parameter has been removed from {{Infobox bus company}}, {{Infobox airline}} and the widely used {{Infobox company}} (see the MANY discussions about removing it from Infobox company).

Now WP:SLOGAN is just an essay, which I know many people object to; hence this RFC. I encourage everyone to read the essay but here are the key points (this is copied from WP:SLOGAN)

Mission statements generally suffer from some fundamental problems that are incompatible with Wikipedia style guidelines:

Per this search there are at least 37 infoboxes that have some form of slogan in them. The question is: should all of those be removed? This does not mean that slogans cannot be mentioned in the body of an article; that is another conversation about whether they meet notability and are encyclopedic. My question is purely: do they belong in the infobox?

In addition to this, what about mottos? It seems as though they are used rather interchangeably in Infoboxes... This search shows at least 72 infoboxes with a motto type parameter. Should some of those be removed? Personally I'd say keep it for settlement type infoboxes, but the way it is used on {{Infobox laboratory}} or {{Infobox ambulance company}}, it is performing the same functionality as a slogan and has the same issues.

Look forward to everyone's thoughts! - Zackmann (Talk to me/What I been doing) 22:29, 20 October 2025 (UTC)[reply]

Discussion (removal slogans)

[edit]
  • No (and doubly no for mottos). But I do think editors should use some discretion when deciding whether to include one. Nike can have Just Do It. Apple Inc. can have Think different. Disneyland can have "Happiest Place on Earth". M&M's should have "Melts in Your Mouth, Not in Your Hands". But slogans that almost nobody recognizes should be excluded through editorial judgement, not through removing the option entirely from the infobox. WhatamIdoing (talk) 02:40, 21 October 2025 (UTC)[reply]
    Also, a Slogan is not the same thing as a Mission statement. Mission statements are internal facing and IMO should normally be excluded. Slogans are about marketing and branding; they exist for an external audience. WhatamIdoing (talk) 02:44, 21 October 2025 (UTC)[reply]
  • No. Mottos are absolutely often promotional, but oftentimes so are names/logos/etc. They can still be essential pieces of information about an organization. I'd rather we encourage tight editorial discretion about which mottos are notable enough to warrant inclusion than ban them outright by removing the fields for them. Perhaps a good minimum standard would be secondary coverage (i.e. a source explicitly noting that they have a particular motto). Sdkbtalk 04:51, 21 October 2025 (UTC)[reply]
  • No each use should be determined on a case by case basis. If it is a famous slogan (finger licking good, or the fish others reject) then we may as well include it. But if it is excessive or ridiculous, then omit it. Graeme Bartlett (talk) 08:25, 21 October 2025 (UTC)[reply]
  • Comment the RFC question is not neutral -- it has a deletionist bias. If the arguments given in the No-votes above gain consensus, the slogan parameter should be restored in the infoboxes it was removed from. This would be a new global consensus overriding the local consensus at the infobox talk page archive. Joe vom Titan (talk) 14:14, 21 October 2025 (UTC)[reply]
    Agree with both. FaviFake (talk) 16:10, 21 October 2025 (UTC)[reply]
  • Mixed opinion - WP:SPS and WP:About self both caution against promotional material… so at minimum we would need the slogan to be mentioned by an independent secondary source. Blueboar (talk) 18:25, 21 October 2025 (UTC)[reply]
  • No per Sdkb and Isaac Rabinovitch, and restore any that have been removed without a specific consensus discussion per Joe von Titan. Thryduulf (talk) 21:28, 21 October 2025 (UTC)[reply]
  • I'm with Blueboar on this, if secondary sources are mentioning it then we should too. I'd also add that we are a global site, writing for a global audience, I doubt all of these slogans are global, or even consistent across the Anglosphere. ϢereSpielChequers 09:55, 23 October 2025 (UTC)[reply]
  • No. Slogans, mission statements, etc. are a basic piece of information about a company. They are reasonable to include and inclusion is not really promotional. We include logos and mention marketing stylization like all-caps but don't consider these promotional. A primary/self-published source is fine for this. Readers know what a slogan is and seeing one reproduced in an infobox is not going to be interpreted as Wikipedia declaring its accuracy. Secondary sources should be used to resolve any discrepancies or doubt, I suppose. All that said, I don't know that every company article needs to have the slogan included. Individual cases should be discussed on talk. —Myceteae🍄‍🟫(talk) 01:01, 25 October 2025 (UTC)[reply]
  • Yes per MOS:INFOBOXPURPOSE. The infobox is not for every true fact about a topic, it is about basic, uncontroversial facts, and ideally the kind that change rarely. If a slogan is relevant, great, cover it in prose. It doesn't have to be in the infobox. SnowFire (talk) 18:58, 30 October 2025 (UTC)[reply]
  • Slogans are clearly notable, we have articles on some of them (Category:Slogans), and clearly where appropriate it would be part of our prime directive to include mention of them, as indicated in the WP:MISSION essay which prompted this discussion: "Slogans may be worth mentioning briefly as part of a description of the organization's marketing approach." As such we shouldn't set about forbidding editors to include these details. A company's mission statement may also be worth noting, even though most are not - it comes down to judgement and consensus of the editors working on the article. I feel the Mission/Slogan essay is a useful guideline, leaving decision to editors working on the articles. I don't think we should be about imposing restrictions which may limit or restrict appropriate, notable, and useful encyclopedic knowledge. Blind, sweeping restrictions are rarely useful. So, of course, No. SilkTork (talk) 14:53, 31 October 2025 (UTC)[reply]

;notes -> boldface

[edit]

Per mos:fakeheading, pseudo-headings created with description term markup (;) are discouraged; the same section of the MoS recommends using boldface markup (''' ''') for them.

Should a bot replace all cases of ;Notes with '''Notes''' when this pseudo-heading is immediately followed by a {{notelist}} or {{reflist}} template? See the discussion at wp:bot requests for more context (ping @Qwerfjkl:, @Anomie: and @Jonesey95: as its participants). sapphaline (talk) 14:05, 21 October 2025 (UTC)[reply]
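For concreteness, the substitution being proposed could be sketched roughly as follows. This is a hypothetical Python illustration only (the request does not specify an implementation, and the function and pattern names here are invented); it replaces a ;Notes pseudo-heading with bold markup only when the very next line is a {{notelist}} or {{reflist}} template, as proposed.

```python
import re

# Hypothetical sketch of the proposed bot edit: bold a ";Notes"
# pseudo-heading, but only when the immediately following line is a
# {{notelist}} or {{reflist}} template.
PSEUDO_HEADING = re.compile(
    r"^;(Notes)[ \t]*\n(?=\{\{(?:notelist|reflist)\b)",
    flags=re.IGNORECASE | re.MULTILINE,
)

def fix_pseudo_headings(wikitext: str) -> str:
    """Return wikitext with qualifying ;Notes pseudo-headings bolded."""
    return PSEUDO_HEADING.sub(r"'''\1'''\n", wikitext)
```

A real bot task would of course run under the usual bot-approval constraints and would need to skip pages where the markup is part of a genuine description list.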

Support – makes sense, more accessible FaviFake (talk) 16:11, 21 October 2025 (UTC)[reply]
I agree with the discussion on the bot request page that the choice of replacement can depend on context. I think more investigation should be done to survey the actual instances in current articles to learn more about what is the best replacement in each case, and the relative frequency of each preferred option. isaacl (talk) 16:34, 21 October 2025 (UTC)[reply]
That suggested replacement is never the correct one, it's just a "do enough" one. I am not sold that this would be a context bot, given the requirement that it be followed by a note or ref list. Izno (talk) 17:17, 21 October 2025 (UTC)[reply]
"the MoS recommends using boldface markup (''' ''') for them." ... euh no it doesn't... it says use the heading level that is appropriate. Except for people too stubborn to use headings because they F with their mental idea of how many things should be in the ToC. Those people who don't want to use html properly because they are super stubborn but who we unfortunately cannot get to follow a proper system, those people can use bold sometimes when these 'headings' are below level 4. —TheDJ (talk • contribs) 19:07, 21 October 2025 (UTC)[reply]
This is context-dependent - sometimes it should be boldface, sometimes it should be a proper heading, maybe there are times it should be something else instead and it's not impossible that there are instances where this markup is actually correct. This isn't suitable for an unsupervised bot. Thryduulf (talk) 21:48, 21 October 2025 (UTC)[reply]
There are basically no places where this is actually context dependent. Please provide examples since you believe so. Izno (talk) 22:24, 22 October 2025 (UTC)[reply]
I don't know how to find examples (and search isn't helping) but I literally explained in my comment why it is context dependent: sometimes it should be boldface, sometimes it should be a proper heading. I've seen examples of both in my time on Wikipedia (although I've not seen any examples of this markup in article space recently, so I can't remember where I saw them). In project space of course we do have examples of this markup being used appropriately (e.g. at Wikipedia:Requests for review of admin actions). Thryduulf (talk) 22:54, 22 October 2025 (UTC)[reply]
Maybe you missed the expected requirements, e.g. one of which is ;thing(newline){{reflist}}. There are of course appropriate uses of the semicolon in the context of definition lists, but that limitation immediately cuts that off as uninteresting.
Here is a search of the kind of use that should be cleaned. Most of those notes could go either way, but it's quite clear that any that appear in a section dedicated to our usual appendices should not have this form. (Funnily enough, I made a bad search at [4] and those just simply need removal.) But these are cases that can be worked through as part of bot trials. Izno (talk) 01:12, 23 October 2025 (UTC)[reply]
I would have thought that the relevant removal situation is when the next line doesn't start with a :. Aren't these the possible cases?
  • Line starting with ; followed immediately by line starting with : – correct; leave it alone
  • Line starting with ; followed by a blank line and then a line starting with : – WP:LISTGAP; fix it
  • Line starting with ; followed by anything else (including *) – incorrect; change formatting
(Though any script would have to check for both separate lines and putting the formatting on a single line. ;Word:Definition is a case of "followed immediately by :".)
WhatamIdoing (talk) 18:31, 24 October 2025 (UTC)[reply]
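The case analysis above can be sketched as a small classifier. This is a hypothetical Python illustration (names invented here, not actual bot code); a real script would also have to handle templates, tables, and other wikitext edge cases.

```python
# Hypothetical sketch of the three cases enumerated above, including the
# single-line ";Word:Definition" form, which counts as "followed
# immediately by :".
def classify(line: str, following: list[str]) -> str:
    """Classify a wikitext line that may be a description-term pseudo-heading."""
    if not line.startswith(";"):
        return "not a description term"
    if ":" in line:
        # ;Word:Definition on one line - a valid term/definition pair
        return "leave alone"
    nxt = next((l for l in following if l.strip()), "")
    if nxt.startswith(":"):
        if following and not following[0].strip():
            # blank line(s) between ; and : break the list (WP:LISTGAP)
            return "fix LISTGAP"
        return "leave alone"
    return "change formatting"
```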
The "change formatting" step seems to be the sticking point: '''bold''' or == heading ==, and which level of heading for the latter? The proposal here tried to limit it to a very specific case to avoid that, but still ran into it. Anomie 19:42, 24 October 2025 (UTC)[reply]
No. Although outside of mainspace, editors misuse the colon syntax for generic unbulleted lists, in mainspace the resulting HTML syntax should conform with the indicated semantics from the W3C, for best accessibility. Description lists should be used for pairs of terms and definitions. {{Unbulleted list}} should be used for unbulleted lists, if deemed necessary. The only place where description lists are appropriate that I've come across are glossary articles. isaacl (talk) 16:40, 25 October 2025 (UTC)[reply]
I don't understand how that contradicts what WAID said. jlwoodwa (talk) 18:35, 27 October 2025 (UTC)[reply]
I was answering the question "Aren't these the possible cases?" No, because although a description term (starting with ;) followed by a description details item (starting with :) is syntactically correct, it is probably semantically wrong in mainspace. (I noted the one valid use case I've seen in mainspace.) isaacl (talk) 01:25, 28 October 2025 (UTC)[reply]
Is Disease#Terminology semantically wrong? WhatamIdoing (talk) 01:29, 29 October 2025 (UTC)[reply]
It's a glossary, which is an appropriate use for description lists. isaacl (talk) 01:34, 29 October 2025 (UTC)[reply]

RFC: What should be done about unknown birth/death dates

[edit]

With the implementation of Module:Person date, all |birth_date= and |death_date= values in Infoboxes (except for deities and fictional characters) are now parsed and age automatically calculated when possible.

With this implementation, it was found that there are a large number of cases (currently 4450) where the birth/death date is set to Unk, Unknown, ? or ##?? (such as 19??). Full disclosure, Module:Person date was created by me and because of an issue early on I added a number of instances of |death_date=Unknown in articles a few weeks ago. (I had not yet been informed about the MOS I link to below, that's my bad).

Per MOS:INFOBOX: If a parameter is not applicable, or no information is available, it should be left blank, and the template coded to selectively hide information or provide default values for parameters that are not defined.

There is also the essay WP:UNKNOWN which says, in short, Don't say something is unknown just because you don't know.

So the question is what to do about these values? Currently Module:Person date is simply tracking them and placing those pages in Category:Pages with invalid birth or death dates (4,450). It has been growing by the minute since I added that tracking. Now I am NOT proposing that this sort of tracking be done for every parameter in every infobox... There are plenty of cases of |some_param=Unknown, but with this module we have a unique opportunity to address one of them.

I tried to find a good case where the |death_date= truly is Unknown, but all the cases I could think of use |disappeared_date= instead. (See Amelia Earhart for example).

The way I see it there are a few options
  • Option A - Essentially do nothing. Keep the tracking category but make no actual changes to the pages.
  • Option B - Implement a {{preview warning}} that would say This value "VALUE" is invalid per MOS:INFOBOX & WP:UNKNOWN. (Obviously open to suggestions on better language).
  • Option C - Take B one step further and actually suppress the value. Display a preview warning that says This value "VALUE" is invalid per MOS:INFOBOX & WP:UNKNOWN. It will not be displayed when saved. then display nothing on the page. In other words treat |death_date=Unknown the same as |death_date=. (Again open to suggestions on better language for the preview warning).
  • Option D - Some other solution, please explain.

Thanks in advance! --Zackmann (Talk to me/What I been doing) 23:43, 21 October 2025 (UTC)[reply]

Discussion (birth/death unknown)

[edit]
  • We definitely shouldn't be using things like "Unk" or "?" - if we want to say this is not known we should explicitly say "Unknown". Should we ever say "unknown" though? Yes, but for births only when we have reliable sources that explicitly say the date is unknown to a degree that makes values like "circa" or "before" unhelpful - even "early 20th Century" is more useful imo than "unknown". "Unknown" is better than leaving it blank when we have a known date of birth but no known date of death (e.g. Chick Albion). I'm not sure how this fits into your options. Thryduulf (talk) 00:24, 22 October 2025 (UTC)[reply]
    Agreed. There are cases where no exact date is given but MOS:INFOBOX and WP:UNKNOWN do not apply because the lack of known date can be sourced reliably. If the module cannot account for this, I really think only option A is acceptable. —Rutebega (talk) 18:15, 22 October 2025 (UTC)[reply]
    @Rutebega and Thryduulf: So I can very easily make it so that |..._date=Unknown<ref>... is allowed but just plain |..._date=Unknown is not. That is just a matter of tweaking the regular expression. Not hard to do at all. That being said (mostly for curiosity's sake) can you give me an example of a page where the lack of known date can be sourced reliably? Every case I could think of (and I really did try to find one) either has a relevant |disappeared_date= (so you don't need to specify that |death_date=Unknown) or you can at least provide approximate dates (i.e. {{circa|1910}}, 1620s or 12th century). Zackmann (Talk to me/What I been doing) 18:23, 22 October 2025 (UTC)
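The tweak described here might look something like the following. This is illustrative Python only: the real Module:Person date is a Lua module, and both the pattern and function below are assumptions for the sake of the example, not its actual code.

```python
import re

# Assumed set of "invalid" placeholder values mentioned in the RfC:
# Unk, Unknown, ?, and partial years like 19??.
INVALID = re.compile(r"^\s*(?:unk(?:nown)?|\?+|\d{1,2}\?\?)\s*$", re.IGNORECASE)

def is_invalid_date(value: str) -> bool:
    """Flag bare placeholder date values; a cited Unknown<ref>...</ref> passes."""
    if re.search(r"<ref[^>]*>.*</ref>\s*$", value, flags=re.DOTALL):
        # A trailing citation makes the value acceptable per the tweak above.
        return False
    return bool(INVALID.match(value))
```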
    Metrodora isn't quite date unknown, but the only fixed date we have is the manuscript which preserves her text (c.1100 AD), and her floruit has been variously estimated between the first and sixth centuries AD. Of course, so little is known for certain about Metrodora that every single infobox field would be "unknown" were it filled in, and therefore there's little point having an infobox at all.
    Corinna's dates are disputed: she was traditionally a contemporary of Pindar (thus born late 6th century and active in the fifth century BC) but some modern scholars argue for a third-century date. If the article had an infobox, a case could be made either for listing her floruit as either "unknown", "disputed", "5th–3rd century BC", "before 1st century BC" (the date of the first source to mention her) or simply omit it entirely.
    I'm open to convincing about how these cases should be handled; my inclination is that any historical figure where the date fields alone need this much nuance are probably a bad fit for an infobox, but the size of Category:Pages with invalid birth or death dates suggests that not everybody agrees with me! Caeciliusinhorto-public (talk) 08:56, 23 October 2025 (UTC)[reply]
    @Caeciliusinhorto-public: thanks for some real examples. I think your point that so little is known that Infoboxes don't make sense is a good one... If there were other info that made sense to have in an Infobox I think the dates could still be estimated (even if the range is hundreds of years). You could still put |birth_date=5th-3rd century BC or, of course, just leave it blank! Leaving it blank to me implies that it is Unknown, though it does leave ambiguous whether it is Unknown because no editor has taken the time to figure it out or whether it is Unknown because the person lived some 2,200 years ago and we have no real way of knowing when they were born... Zackmann (Talk to me/What I been doing) 09:05, 23 October 2025 (UTC)[reply]
  • This is above my pay grade but can you give us an idea of how much "It has been growing by the minute". The scale of those additions may inform our view as to how best to deal with it. Lukewarmbeer (talk) 16:34, 22 October 2025 (UTC)[reply]
    @Lukewarmbeer: so this is mostly a caching issue. I don't think very many new instances of this are being created each day, it just takes a while for the code to propagate. I really don't have an objective way of saying how many new instances are being created daily... Zackmann (Talk to me/What I been doing) 17:13, 22 October 2025 (UTC)[reply]
    FWIW, about 15% of our biographies of living people have unknown birthdates (based on a count by category I did in 2023). I would assume that deceased biographies are perhaps more likely to miss this data, so we're looking at a number in the low hundreds of thousands? Not all of those will have infoboxes, of course. Andrew Gray (talk) 20:39, 22 October 2025 (UTC)[reply]
    @Andrew Gray: when you say have unknown birthdates do you mean "no birthdates are given"? Because that is NOT what we are talking about here... We are talking about |birth_date=Unknown, where someone has specifically stated that the date is Unknown, not just left it blank. Zackmann (Talk to me/What I been doing) 20:42, 22 October 2025 (UTC)[reply]
    @Zackmann08 ah, right - I think I misunderstood, apologies. If the module does nothing when the birthdate field is blank or missing, that sounds good.
    I think the simple tracking category for non-date values sounds fine for now. Andrew Gray (talk) 20:52, 22 October 2025 (UTC)[reply]
  • Perhaps the problem is the multiple meanings of "Unknown". Some may have filled it meaning "nobody knows about the early life of this historical guy, only that he became relevant during the X events, already an adult", and others "unknown because I don't know". We may make it so that "Unknown" has the same effect as an empty field, and require a special input for people with truly unknown dates. And note that any biography after whatever point birth and death certificates became ubiquitous should be treated as the second case. Cambalachero (talk) 14:09, 23 October 2025 (UTC)[reply]
  • Option D The variant on option C where it's permitted iff there's a citation seems like a good solution to me. By a similar argument to WP:ALWAYSCITELEAD, I think a citation should always be required to assert that someone's date of death is outside the scope of human knowledge. From WP:V we should always cite material that is likely to be challenged, and I think the assertion that someone's date of death is "unknown" falls well within that scope; in particular I myself will always challenge it if unsourced. lp0 on fire () 16:32, 23 October 2025 (UTC)[reply]
    I think whether someone's date of birth or death being unknown falls into the category of material that is likely to be challenged is partly a factor of when and where they were born and the time, place and manner of their death and how much we know about them generally. It is not at all surprising to me that we don't know the date of birth or death of a 3rd century saint or 18th century enslaved person, or when a Peruvian athlete who competed in the 1930s died; we do need a citation to say that we only know the approximate date of death for Dennis Ritchie and Gene Hackman. Thryduulf (talk) 16:50, 23 October 2025 (UTC)[reply]
    Do you think the citation always needs to be inside the infobox? Our article about Metrodora has a couple of paragraphs about which century she might have lived in. There's no infobox at the moment, but if we added one, would you insist that the citations be duplicated into the infobox? WhatamIdoing (talk) 18:40, 24 October 2025 (UTC)[reply]
  • Option D Allow Unknown but not other abbreviations. Require citations for dates. Rationale: Looking at the Sven Aggesen article, it’s easy to see that “Unknown” is helpful because it’s communicating that the person is dead. In my opinion it’s still stating a fact. So Unknown should be allowed. “?” should not. It seems like dates of birth and death should always be cited. Thanks for your work on this!! Dw31415 (talk) 17:54, 23 October 2025 (UTC)[reply]
    • In the case of Sven Aggesen I think we could reasonably expect a reader to infer from "born: 1140? or 1150?" that he is probably dead! In the case of people born recently enough that there might be confusion, I can't imagine there are many cases where both (a) they are known to be dead and (b) their date of death is known so imprecisely that we don't have a more useful value than "unknown" for the infobox. Caeciliusinhorto (talk) 20:35, 23 October 2025 (UTC)[reply]

RfC: Aligning community CTOPs with ArbCom CTOPs

[edit]

Should the community harmonize the rules that govern community-designated contentious topics (which are general sanctions authorized by the community) with WP:CTOP? If so, how? 19:55, 22 October 2025 (UTC)

Background

Before 2022, the contentious topics process (CTOP) was instead known as "discretionary sanctions" (DS). Discretionary sanctions were authorized in a number of topic areas, first by the Arbitration Committee and then by the community (under its general sanctions authority).

In 2022, ArbCom made a number of significant changes to the DS process, including by renaming it to contentious topics and by changing the set of sanctions that can be issued, awareness requirements, and other procedural requirements (see WP:CTVSDS for a comparison). But because the community's general sanctions are independent of ArbCom, these changes did not automatically apply to community-authorized discretionary sanctions enacted before that date.[a]

In an April 2024 RfC, the community decided that there should be clarity and consistency regarding general sanctions language and decided to rename community-authorized discretionary sanctions to "contentious topics". However, the community did not reach consensus on several implementation details, most prominently whether the enforcement of community CTOPs should occur at the arbitration enforcement noticeboard (AE) instead of the administrators' noticeboard (AN), as is now allowed (but not required) by ArbCom's contentious topics procedure.[b]

Because of the lack of consensus, no changes were made to the community-designated contentious topics other than the naming. As a result, there currently exist 24 ArbCom-designated contentious topics and 7 community-designated contentious topics, and the rules between the two systems differ as documented primarily at WP:OLDDS.

Questions:
  • Question 1: Should the community align the rules that currently apply in community-designated contentious topics with WP:CTOP, mutatis mutandis (making the necessary changes) for their community-designated nature?
  • Question 2: Should the community authorize enforcement of community contentious topics at AE (in addition to AN, where appeals and enforcement requests currently go)?
Implementation details:

In either case above, all existing community CTOPs would be amended by linking to the new information page to document the applicable provisions.

If question 1 fails, no changes would be made.

Notes

  1. ^ WP:GS/SCW&ISIL, WP:GS/UKU, WP:GS/Crypto, WP:GS/PW, WP:GS/MJ, and WP:GS/UYGHUR follow WP:OLDDS. WP:GS/ACAS was enacted after December 2022 and therefore follows the current ArbCom contentious topics procedure.
  2. ^ Specifically, AE may consider "requests or appeals pursuant to community-imposed remedies which match the contentious topics procedure, if those requests or appeals are assigned to the arbitration enforcement noticeboard by the community." – Wikipedia:Arbitration Committee/Procedures § Noticeboard scope 2

Survey (Q1&Q2)

[edit]
The following discussion is an archived record of a request for comment. Please do not modify it. No further edits should be made to this discussion. A summary of the conclusions reached follows.

  • Yes to both questions. For almost three years now, we have had two different systems called "contentious topics" but with different rules around awareness, enforcement, allowable restrictions, etc. In fact, because WP:GS/ACAS follows the new CTOP procedure but without AE enforcement, we actually have three different systems. We should take this chance to make the process meaningfully less confusing. There is no substantive reason why the enforcement of, for example, WP:GS/UYGHUR and WP:CT/AI should differ in subtle but important ways.
    As for using AE, AE is designed for and specialized around CTOP enforcement requests and appeals. AE admins are used to maintaining appropriate order and have the benefit of standard templates, word limits, etc., while AN or ANI are not specialized around this purpose. As a result of WP:CT2022, ArbCom now specifically allows AE to hear requests or appeals pursuant to community-imposed remedies which match the contentious topics procedure, if those requests or appeals are assigned to the arbitration enforcement noticeboard by the community. We should take them up on the offer as Barkeep49 first suggested at the previous RfC.
    FYI, I am notifying all participants in the previous RfC, as this RfC is focused on the same topic. Best, KevinL (aka L235 · t · c) 19:57, 22 October 2025 (UTC)[reply]
  • Yes to both - I don't see a downside to this standardization, and it would appear to both make the system as a whole easier to understand, and allow admins to take advantage of the automated protection logging bot for the currently-GS topics. signed, Rosguill talk 20:01, 22 October 2025 (UTC)[reply]
  • Yes to both. The CTOP system is complicated even without these three different regimes and confuses almost everyone involved. AE can be a great option for reducing noise in discussions, compared to AN. —Femke 🐦 (talk) 20:20, 22 October 2025 (UTC)[reply]
  • Yes to both as standardization can help clarify confusion especially among newcomers about contentious topics. Aasim (話すはなす) 20:29, 22 October 2025 (UTC)[reply]
  • Yes to both but as I said in the previous RFC, if we're going to go in this direction, we should also be moving towards a process where the community eventually takes over older ArbCom-imposed CTOPs, especially in areas where the immediate on-wiki disruption that required ArbCom intervention has mostly settled down but the topic itself remains indefinitely contentious for off-wiki reasons. ArbCom was intended as the court of last resort for things the community failed to handle; it's not supposed to create policy. Yet currently, huge swaths of our most heavily-trafficked articles are under perpetual ArbCom sanctions, which can only be modified via appeal to ArbCom itself, and which are functionally the same as policy across much of the wiki. This isn't desirable; when ArbCom creates long-term systems like this, we need a way for the community to eventually assume control of them. We need to go back to treating ArbCom as a court of last resort, not as an eternal dumping ground for everything controversial, and unifying ArbCom and community sanctions creates an opportunity to do so by asking ArbCom to agree to (with the community's agreement to endorse them) convert some of the older existing ArbCom CTOPs into community ones. --Aquillion (talk) 20:51, 22 October 2025 (UTC)[reply]
  • Yes to both per nom. Consistency is great, and eliminating the byzantine awareness system (where you need an alert every 12 months) is essential. WP:AE is a miracle of a noticeboard (how is the noticeboard with the contentious issues the relatively tame one?), and we as a community should take advantage of ArbCom's offer to let us use it. Best, HouseBlaster (talk • he/they) 22:10, 22 October 2025 (UTC)[reply]
  • Yes to both. This is a huge step in the right direction. Toadspike [Talk] 22:16, 22 October 2025 (UTC)[reply]
  • Yes to both, and a full-throated "yes" for using AE in particular. The other noticeboards are not fit for purpose with respect to handling CTOP disruption. Vanamonde93 (talk) 22:24, 22 October 2025 (UTC)[reply]
  • Yes to both – This has been a mess for more than a decade. Harmonising the community and ArbCom general sanctions regimes will cut red tape, and eliminate confusion over which rules apply in any given case. I am also strongly in favour of allowing community sanctions to be enforced at WP:AE. Previously, there were numerous proposals to create a separate board for community enforcement, such as User:Callanecc/Essay/Community discretionary sanctions, but all failed to go anywhere. In my opinion, the most important aspect of community sanctions (as opposed to ArbCom sanctions) is that the community authorises them, and retains control over their governance. Enforcement at AE does nothing to reduce the community's power to enact sanctions; if anything, it will ensure that these regimes are enforced with the same rapidity as ArbCom sanctions. It would be foolish to not take advantage of ArbCom's offer to allow us to use their existing infrastructure. Yours, &c. RGloucester 23:54, 22 October 2025 (UTC)[reply]
  • Yes to both. I was in favor of this during the March 2024 rfc but was reluctant to push it too hard since I was then on arbcom. I am no longer on arbcom and thus can freely and fully support this thoughtful and wise proposal for the same reasons I hinted at in the previous discussion. Best, Barkeep49 (talk) 02:00, 23 October 2025 (UTC)[reply]
  • Yes to both, and future changes to either sanction procedure should be considered for both. Not to be unduly repetitive of others above, but the system is more complex than it needs to be. AE as an additional option is a positive. CMD (talk) 04:38, 23 October 2025 (UTC)[reply]
  • Yes to both and thank you to L235 for working on this. Like RGloucester, I'd worked on this previously so am definitely supportive. Callanecc (talkcontribslogs) 07:11, 23 October 2025 (UTC)[reply]
  • Yes to both, per my comment in the 2024 RfC. It is not reasonable to expect new editors to familiarize themselves with multiple slightly different sanctions systems that emphasize procedural compliance. — Newslinger talk 08:20, 23 October 2025 (UTC)[reply]
  • Yes to both. Let's not make the CT system more complicated and impenetrable than it needs to be already; consistency can only be good here. Caeciliusinhorto-public (talk) 09:07, 23 October 2025 (UTC)[reply]
  • Yes to both, long overdue. ~ Jenson (SilverLocust 💬) 16:38, 23 October 2025 (UTC)[reply]
  • Yes to both, with the same caveats as Aquillion lp0 on fire () 16:39, 23 October 2025 (UTC)[reply]
  • Yes to both for consistency. Chaotic Enby (talk · contribs) 16:59, 23 October 2025 (UTC)[reply]
  • Yes to both. We already have overlapping CSes (Arbcom-imposed) and GSes (community-imposed) - A-A and KURD, at least, where the community chose to impose stricter sanctions on a topic area than ArbCom mandated (in both of those cases, the community chose to ECR the topic area). This has caused confusion for me as an admin a few times, for a regular user it can only be more so. Harmonizing the restrictions, with the only difference being who imposed them, can only make sense. - The Bushranger One ping only 20:02, 23 October 2025 (UTC)[reply]
  • Yes and Yes - The same procedures should apply to topics that the ArbCom has found to be contentious as to topics which the community has found to be contentious. The differences have only caused confusion. Robert McClenon (talk) 20:56, 23 October 2025 (UTC)[reply]
  • Yes to both CTs (whether issued by Arbcom or the community) should be treated the same regardless of whoever issued it. JuniperChill (talk) 11:04, 24 October 2025 (UTC)[reply]
  • Yes to both: A long time coming. This centralization will clean up so much unnecessary red tape. — EarthDude (Talk) 13:26, 24 October 2025 (UTC)[reply]
  • No I understand what Arbcom is per WP:ARBCOM and it seems to be a reasonably well-organised body with good legitimacy due to it being elected. But what's the community? Per WP:COMMUNITY and Wikipedia community, it seems to be any and all Wikipedians and this seems quite amorphous and uncertain. Asking such a vague community to do something is not sensible. In practice, I suppose the sanctions were cooked up at places like WP:ANI which is a notoriously dysfunctional and toxic forum. That's not a sensible place to get anything done.
I looked at one of these community sanctions as an example, and it was some special measure for conflict about units of measurement in the UK: WP:GS/UKU. Now I'm in the UK and so might easily run afoul of this but this is the first I heard of this being an especially hot topic. And I've been actively editing for nigh on 20 years. Our general policies about edit-warring, disruption and tendentious editing seem quite adequate for such an issue and so WP:CREEP applies. That sanction was created over 10 years ago and so should be expired rather than harmonised. The other general sanctions concern such topics as Michael Jackson, who died 16 years ago, and that too seems quite dated.
So, I suggest that all the general sanctions be retired. If problems with those topics then recur, fresh sanctions can be established using the new WP:CTOP process and so we'll then all be on the same page.
Andrew🐉(talk) 16:24, 25 October 2025 (UTC)[reply]
I will note that policy assigns to the community the primary responsibility to resolve disputes, and allows ArbCom to intervene in serious conduct disputes the community has been unable to resolve (Wikipedia:Arbitration/Policy § Scope and responsibilities) (emphasis added). That is to say, ArbCom's role is to supplement the community when the community's efforts are unsuccessful. I think that's why there should be some harmonized community CTOP process that can be applied for all extant community CTOPs. I understand that it may be time to revisit some of the community-designated CTOPs, which I support – when I was on ArbCom, I was a drafter for the WP:DS2021 initiative which among other things rescinded old remedies from over half a dozen old cases. But that seems to be a different question than whether to harmonize the community structure with ArbCom's. Best, KevinL (aka L235 · t · c) 14:32, 29 October 2025 (UTC)[reply]

A motion to revoke authorisation for this sanctions regime was filed at the administrators' noticeboard on 17 April 2020. The motion did not gain community consensus. 09:47, 22 April 2020 (UTC)
— Wikipedia:General sanctions/Units in the United Kingdom#Motion

At this time there is no consensus to lift these sanctions, with a majority opposed. People are concerned that disputes might flare up again if sanctions are removed: Give them an inch and they will take a kilometer ...
— User:Sandstein 00:00, 4 November 2020 (UTC)

Aaron Liu (talk) 02:16, 31 October 2025 (UTC)[reply]
  • Yes to both - By having two systems with the same name, we should then avoid differences in the rules. I say this because if the rules are different, then a user will need to be aware of who designated an area as a contentious topic before reporting or handling reports. For example, if we had the two systems use the same rules but different reporting pages (with no overlap in which pages can be used), then I expect that users will by mistake post to the wrong pages. Dreamy Jazz talk to me | my contributions 20:56, 1 November 2025 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Question 3. How should we handle logging of community contentious topics?

[edit]
  1. Use Arbitration Enforcement Log (WP:AELOG)
  2. Create a new page such as Wikipedia:Contentious topics/Log which can be separated into two sections, one for community, and one that transcludes WP:AELOG
  3. Create a new page such as Wikipedia:General sanctions/Log which would only log enforcement actions for community contentious topics (subpages would be years)
  4. Continue logging at each relevant page describing the community contentious topics (Wikipedia:General sanctions/Topic area), and if 2 or 3 are chosen, the page would transclude these relevant pages.

— Preceding unsigned comment added by Awesome Aasim (talkcontribs) 20:42, 22 October 2025 (UTC)[reply]

  • 2+3+4 as proposer, one of the problems I notice is that WP:AELOG takes a long time to load because the page has a lot of enforcement actions. The advantage of 2 is having a single page that can be quickly searched. Aasim (話すはなす) 20:42, 22 October 2025 (UTC)[reply]
    BTW, except for 1, the other options are not mutually exclusive. If option 1 is chosen, options 2-4 are irrelevant. I am not asking people to pick one and be done; people can choose any combination. Aasim (話すはなす) 21:32, 23 October 2025 (UTC)[reply]
  • 2 > 1 – Both ArbCom and community CT are forms of general sanctions (see my incomplete essay on the subject); the only distinction is who authorises them. For this reason, '3' does not make sense. Eliminating the sprawling log pages that currently exist for community-authorised regimes should be a priority if our goal is to eliminate red tape, therefore '4' does not make sense either. That leaves me with 2, which allows for a centralised log for both forms of sanctions. I am perfectly fine with creating subpages as needed, but centralisation is paramount in my mind. Yours, &c. RGloucester 00:02, 23 October 2025 (UTC)[reply]
  • I support option 4. I think it continues to make sense to log individual actions for a given topic area to the corresponding subpage of Wikipedia:General sanctions. isaacl (talk) 01:12, 23 October 2025 (UTC)[reply]
    In the past, there have been concerns raised about making it clear whether the enacting authority is the arbitration committee or the community. Thus I do not feel option 1 is the best choice.
    Regarding searching: I feel the typical use case is to search for actions performed within a specific topic area. If necessary, Wikipedia search with a page prefix criterion can be used to search multiple subpages. isaacl (talk) 16:19, 23 October 2025 (UTC)[reply]
    I will note that having the logpages as subpages of Wikipedia:General sanctions (rather than a more tailored page) makes searchability much harder, which is why scripts like WP:SUPERLINKS don't surface community CTOP enforcement entries even though it does surface WP:AELOG entries. Best, KevinL (aka L235 · t · c) 16:21, 23 October 2025 (UTC)[reply]
  • 2 I am in favor of fewer, larger pages because they are easier to find and to search. If a searcher needs to confirm that something isn't there, for example, fewer pages, even if very large, are much easier to work with. Darkfrog24 (talk) 13:33, 23 October 2025 (UTC)[reply]
  • 1 - in keeping with the spirit for Q1 and Q2, the whole point here is to merge everything into a single system that is simpler to follow. We already have a practice of splitting off subpages when specific sections in the log get too large. signed, Rosguill talk 13:52, 23 October 2025 (UTC)[reply]
  • 2 as a first choice, as centralization is helpful, but the current WP:AELOG is ultimately an ArbCom page and shouldn't have jurisdiction over community sanctions. I agree with Rosguill's point about splitting off subpages, and I presume this would be encouraged to a greater extent here. I could also be convinced by 1 (to avoid an unnecessary transclusion, although it should be made clear that it isn't an ArbCom-only page anymore) or by a temporary 3 (to avoid a lag spike until the main subpages are sorted out). Chaotic Enby (talk · contribs) 17:04, 23 October 2025 (UTC)[reply]
    Actually, I'm realizing that 2 doesn't help with centralization compared to 3, and creates a bit of an inconsistency between some topics being directly logged there and others being transcluded. Count 3 as my first choice, with the possibility of a combined log transcluding both for reference. Chaotic Enby (talk · contribs) 19:34, 23 October 2025 (UTC)[reply]
  • 1 > 3 > 4, but my actual preference is to delegate this to a local consensus of those who are involved in implementing this. 1 is my preference, like Rosguill, because centralizing where the existing logs live promotes simplicity and would avoid the need for admins to check which types of CTOPs are which (one goal I have is for the community CTOPs and ArbCom CTOPs to feel almost identical). Not to mention, it would preserve compatibility with tools like WP:SUPERLINKS that check AELOG but not other pages. The biggest hurdle in my mind is that #1 would require ArbCom approval, which I think is likely but not certain (given that ArbCom allows AE for community CTOPs, why not AELOG?). Best, KevinL (aka L235 · t · c) 19:29, 23 October 2025 (UTC)[reply]
  • 3, but there is a nuance: include the recently-changed bit about protections being automatically logged, as part of a unified page at Wikipedia:Arbitration enforcement log/Protections. Protections for the "overlapping" CT/GS regions (A-A and KURD) are already logged there (as, technically, they fall under both) so this would make, and keep, things simple. - The Bushranger One ping only 20:04, 23 October 2025 (UTC)[reply]
  • 5 There should be one system, not two. As noted above, the community is too amorphous and uncertain to be the basis for this. Andrew🐉(talk) 17:28, 25 October 2025 (UTC)[reply]

Discussion (CTOP)

[edit]
  • Comment I understand the functional difference between an AE sanction and AN sanction is that an AE sanction can be removed only by a) the exact same admin who placed it, called the "enforcing admin", or b) a clearly-more-than-half balance of AE admins at an AE appeal, while a sanction placed at AN can be removed by c) any sufficiently convinced admin acting alone. To give an example of how this would change things, I found myself in a situation in which I was indefinitely blocked at AE and then the enforcing admin left Wikipedia, which removed one of my options for lifting a sanction. Some of our fellow Wikipedians will think making it easier to get a sanction lifted is a good thing and others will think it's a bad thing, but we should be clear about that so we can all make our decision. Am I correct about how these changes would affect those seeking to have sanctions removed? Darkfrog24 (talk) 13:31, 23 October 2025 (UTC)[reply]
    @Darkfrog24: I think this is incorrect. As it stands now, restrictions imposed under community CTOPs are only appealable to the enforcing administrator or to AN (see, e.g., WP:GS/Crypto, which says Sanctions imposed may be appealed to the imposing administrator or at the appropriate administrators' noticeboard.). Q1 is about aligning the more subtle but still important differences between community CTOPs and ArbCom CTOPs, while Q2 is about adding AE as a place (but not changing the substantive amount of agreement needed) for enforcement requests and appeals. Best, KevinL (aka L235 · t · c) 13:43, 23 October 2025 (UTC)[reply]
    Thanks, KevinL. I will ponder this and make my decision. Darkfrog24 (talk) 13:50, 23 October 2025 (UTC)[reply]
  • Comment: Is there any way that we could implement the semi-automated logging process that is used for page protection of CTOPS here? Is there any expectation that if any of these options were chosen, that process would revert to manual? SWATJester Shoot Blues, Tell VileRat! 18:17, 23 October 2025 (UTC)[reply]
    Pinging @L235 whose bot is in charge of that – for the Twinkle integration of the CTOP logging, I'm currently working on a pull request that would work for both. Chaotic Enby (talk · contribs) 19:08, 23 October 2025 (UTC)[reply]
    I bet the bot could be adapted to whichever option the community opts for! KevinL (aka L235 · t · c) 19:24, 23 October 2025 (UTC)[reply]
  • Comment – If we are to create a separate log for community-authorised contentious topics as in alternative 3, it should not be a subpage of Wikipedia:General sanctions. 'General sanctions' is a broad category that includes ArbCom sanctions, and also non-contentious-topics remedies such as the extended confirmed restriction. This is a recipe for confusion. Please consider an alternative naming scheme. Yours, &c. RGloucester 00:19, 24 October 2025 (UTC)[reply]
    The title can always be different. The title I named was just an example title to explain the purpose of the question. Aasim (話すはなす) 01:30, 24 October 2025 (UTC)[reply]

RFC: New GA quick fail criterion for AI

[edit]

Should the following be added to the 'Immediate failures' section of the good article criteria?

6. It contains obvious evidence of LLM use, such as AI-generated references or remnants of AI prompts.

Proposed after discussion at Wikipedia talk:Good articles#AI. Yours, &c. RGloucester 10:08, 26 October 2025 (UTC)[reply]

Survey (GA quick fail)

[edit]
  • Support – Articles that contain obvious evidence of unreviewed AI use are evidence of a competence issue on the part of their creator that is not compatible with the GA process. Having reviewers perform a spot check for obvious signs of AI use will help militate against the recent problem whereby AI-generated articles are being promoted to GA status without sufficient review. Yours, &c. RGloucester 10:08, 26 October 2025 (UTC)[reply]
  • Support. Hardly needs saying. The use of AI is fundamentally contrary to the process of encyclopaedic writing. AndyTheGrump (talk) 10:14, 26 October 2025 (UTC)[reply]
  • Support This is an excellent proposal to help stop Wikipedia falling into absolute disrepute Billsmith60 (talk) 10:21, 26 October 2025 (UTC)[reply]
  • Support Billsmith60 (talk) 10:22, 26 October 2025 (UTC)[reply]
    Billsmith60, presumably you didn't mean to enter two supports? Mike Christie (talk - contribs - library) 12:41, 26 October 2025 (UTC)[reply]
    Sorry about that Mike. Was on my phone for this and it is always temperamental Billsmith60 (talk) 01:02, 27 October 2025 (UTC)[reply]
  • Support Per nomination. This would not prohibit AI use per se, but would rule out promoting any low effort usage of AI. AI use in this manner could be argued to be a failure of GA criteria 1 and 2 as well, but explicitly stating as such will give a bit more weight to reviewers' decisions. --Grnrchst (talk) 10:45, 26 October 2025 (UTC)[reply]
  • Oppose Per comment in discussion. Rollinginhisgrave (talk | contributions) 10:47, 26 October 2025 (UTC)[reply]
  • Support Per nom. Vacant0 (talkcontribs) 10:54, 26 October 2025 (UTC)[reply]
  • Oppose per comment in discussion IAWW (talk) 11:05, 26 October 2025 (UTC)[reply]
  • Oppose. GAs should pass or fail based only and strictly only on the quality of the article. If there are AI-generated references then they either support the article text or they don't; if they don't, then the article already fails criterion 2 and the proposal is redundant. If the reference does verify the text it supports then there is no problem. If there are left-over prompts then it already fails criterion 1 and so this proposal is redundant. If the AI-generated text is a copyright violation, then it's already an immediate failure and so the proposal is redundant. If the generated text is rambly, non-neutral, veers off topic, or has similar issues then it already fails one or more criteria and so this proposal is redundant. Thryduulf (talk) 12:20, 26 October 2025 (UTC)[reply]
    As I see it, this proposal as-written is actually quite limited in scope and is not doing anything beyond saving resources. Obvious unreviewed AI use will not meet all criteria, but at the moment a reviewer of the GAN is still expected to do a full review. This proposal if passed would effectively codify that obvious AI is considered (by consensus of users of the GA process) to mean the article has insurmountable issues in its current state and should be worked on first before a full review. Kingsif (talk) 14:07, 26 October 2025 (UTC)[reply]
  • Support, we can't afford wasting precious reviewer time (a very scarce resource) on stuff with fake references. —Kusma (talk) 12:29, 26 October 2025 (UTC)[reply]
    If there are fake references then it's already a fail for verifiability. This proposal does not save any additional reviewer time. Thryduulf (talk) 12:42, 26 October 2025 (UTC)[reply]
    It turns a fail into a quick fail, which of course saves reviewer time. —Kusma (talk) 12:49, 26 October 2025 (UTC)[reply]
    See my comment below in the discussion section. Thryduulf (talk) 12:57, 26 October 2025 (UTC)[reply]
  • Oppose per IAWW and Thryduulf. All issues arising from AI use are already covered by other criteria, and there are legitimate uses of AI, which should not be prohibited. Kovcszaln6 (talk) 12:33, 26 October 2025 (UTC)[reply]
  • Oppose. I sympathize with the intent of this RfC but it's the state of the article, not the process by which it got there, that GA criteria should address. Mike Christie (talk - contribs - library) 12:39, 26 October 2025 (UTC)[reply]
  • Oppose. I agree with this in spirit, but I don't think it would be a useful addition. If a reviewer spots blatant and problematic AI usage (e.g. AI-generated references), almost all would quickfail the article immediately anyway. I can't imagine this proposal saving any additional reviewer time or reducing the handful of flawed articles that slip through that process. But if a nominator used AI for something entirely unproblematic and left an edit summary saying something like "used ChatGPT to change table formatting" or "fixed typos identified by ChatGPT", that would be obvious evidence of LLM usage and yet clearly doesn’t warrant a quickfail. MCE89 (talk) 12:52, 26 October 2025 (UTC)[reply]
    Hmm, if “content” was in the proposed text somewhere, would that assuage your legitimate use thoughts? Kingsif (talk) 14:15, 26 October 2025 (UTC)[reply]
    I think that would be slightly better, but I still don't really see what actual problem this proposal is trying to solve. If an article consists of unreviewed or obviously problematic LLM output and contains things like fake references, reviewers aren't going to hesitate to quickfail it (and potentially G15 it) already. I don't see any signs that GAN is currently overwhelmed by AI-generated articles that reviewers just don't have the tools to deal with. And given that lack of a clear benefit, I'm more worried about the potential for endless arguments about process rather than content in the marginal cases (e.g. Can an article be quickfailed if the creator discloses that they used ChatGPT to help copyedit? What if they say they've manually verified and rewritten the LLM output? What is the burden of proof to say that LLM usage is "obvious", e.g. could I quickfail an article solely based on GPTZero?) MCE89 (talk) 15:11, 26 October 2025 (UTC)[reply]
    About the problems, I have a lot of thoughts and happy to discuss, perhaps we should move it to the section below? I also assume and hope people take obvious to mean obvious: if it’s marginal, it’s not obvious. Genuine text/code leftovers from copypasting LLM output is obvious, having to ask a different AI isn’t. Kingsif (talk) 15:29, 26 October 2025 (UTC)[reply]
  • Oppose largely per Thryydulf, except that I don't believe that AI content necessarily violates criterion 1. AI style is often recognisable but if it's well-written then I wouldn't care and we should investigate if the sources were not hallucinated. Fake references (as opposed to incomplete/obscure/not readily available references) should be an instafail reason. Szmenderowiecki (talk) 13:13, 26 October 2025 (UTC)[reply]
  • Support Per my comments in discussion and here. I also see no objection that couldn’t be quelled by the proposed text already having the qualifier “obvious”: the proposal includes benefit of the doubt, even if I personally would take it much further. Kingsif (talk) 14:12, 26 October 2025 (UTC)[reply]
  • Support per Kingsif and following the spirit of the guidance contained in WP:HATGPT and adjacently WP:G15. Fortuna, imperatrix 14:36, 26 October 2025 (UTC)[reply]
  • Support. On a volunteer-led project, it is an insult to expect a reviewer to engage with the extruded output of a syntax generator and not the work of a human volunteer. I am not interested in debating this; please don't ping me to explain that I'm being a Luddite in holding this view. ♠PMC(talk) 14:52, 26 October 2025 (UTC)[reply]
  • Oppose per MCE89. LLM use isn't necessarily problematic (even if it often is), and the proposed wording would discourage people from disclosing LLM use in their edit summaries. Anne drew (talk · contribs) 15:28, 26 October 2025 (UTC)[reply]
  • Weak support -- Did not realize this discussion had been ongoing, I noped out because I was frankly way too exhausted to sisypheanly re-explain things I had already tried to explain. Anyway, I don't object to these criteria per se but this is a really low bar. What I would really support is mandatory disclosure of any AI use, because if AI was used then the spot-checking that is required in GA review is not going to be nearly enough. Nor is the problem really fake sources anymore, the problem is "interpretations" of sources that might not seem worth checking if you don't know what AI text sounds like, but if you do know what AI text sounds like, are huge blaring alarms that the text is probably wrong. Here's an example (albeit for a Featured Article and not a Good Article). All the sources were real, but the text describing the sources was fabricated. And it took me about 15 minutes to zero in on the references that were likely to have issues because I know how LLMs word things; without AI disclosure, reviewers are likely to spot-check the wrong things (as happened here). Gnomingstuff (talk) 17:34, 26 October 2025 (UTC)[reply]
  • Weak oppose While I fully agree with the intent of this proposal, in practice I am concerned that this is subject to misuse by labeling anything as "AI". I agree with Thryduulf and others that any sort of poorly done AI use (which is almost all of it) will already be failable per the existing GA criteria. I share others' concern about the proliferation of AI generated articles and reviews but I'm not convinced this is the solution. Trainsandotherthings (talk) 18:37, 26 October 2025 (UTC)[reply]
  • Weak oppose per my comments at WT:GAN. I also agree that LLM-generated articles are problematic, but the existing criteria already cover most of what's proposed - for instance, evidence of persistent failed verification is a valid reason to quickfail already. I'm concerned that a reviewer would use an LLM detector to check an article, the detector incorrectly says that the article is AI, and the reviewer fails based on that basis. AI detectors are notoriously unreliable - you can run a really old document, like the United States Declaration of Independence, through an AI detector to see what I'm talking about. (Edit - I would support changing WP:GACR criterion 3 - It has, or needs, cleanup banners that are unquestionably still valid. These include {{cleanup}}, {{POV}}, {{unreferenced}} or large numbers of {{citation needed}}, {{clarify}}, or similar tags  - to list {{AI-generated}} as an example of a template that would merit a quickfail, since AI articles can already be quickfailed under that criterion. 13:01, 27 October 2025 (UTC)) Epicgenius (talk) 20:18, 26 October 2025 (UTC)[reply]
    Wrong. I don't use AI detectors, but the best ones achieve ~99% accuracy. The Declaration of Independence is one of the worst possible counterexamples -- no shit, a famous English-language public domain text is all over the training data? Gnomingstuff (talk) 02:24, 27 October 2025 (UTC)[reply]
    They have high numbers of both false positives and false negatives. See, for instance, this study: Looking at the GPT 3.5 results, the OpenAI Classifier displayed the highest sensitivity, with a score of 100%, implying that it correctly identified all AI-generated content. However, its specificity and NPV were the lowest, at 0%, indicating a limitation in correctly identifying human-generated content and giving pessimistic predictions when it was genuinely human-generated. GPTZero exhibited a balanced performance, with a sensitivity of 93% and specificity of 80%, while Writer and Copyleaks struggled with sensitivity. The results for GPT 4 were generally lower, with Copyleaks having the highest sensitivity, 93%, and CrossPlag maintaining 100% specificity. The OpenAI Classifier demonstrated substantial sensitivity and NPV but no specificity.
    The link you provided says annotators who frequently use LLMs for writing tasks excel at detecting AI-generated text. This is about human writers detecting AI, not AI detectors detecting AI. That is not what I am talking about. Other studies like this one state that human reviewers have significant numbers of false positives and false negatives when detecting AI: In Gao et al.’s study, blind human reviewers correctly identified 68% of the AI-generated abstracts as generated and 86% of the original abstracts as genuine. However, they misclassified 32% of generated abstracts as real and 14% of original abstracts as generated.Epicgenius (talk) 02:57, 27 October 2025 (UTC)[reply]
    The study also contains a chart comparing the performance of automatic AI detectors such as Pangram, GPTZero, and Binoculars. As you would have noticed if you read it fully. Gnomingstuff (talk) 16:03, 27 October 2025 (UTC)[reply]
    If you'd read to the conclusion you'd see While AI-output detectors may serve as supplementary tools in peer review or abstract evaluation, they often misclassify texts and require improvement. The limitations section also notes that paraphrasing the AI output significantly decreases the detection rate. This clearly indicates they are not fit for the purpose they would be used for here - especially when the false positive rate is sometimes over 30%. We absolutely cannot afford to tell a third of users that their submission was rejected because they used AI when they didn't actually use AI. Thryduulf (talk) 17:42, 27 October 2025 (UTC)[reply]
    Seconding what Thryduulf said. I've said my piece, though, so I won't belabor it any further. – Epicgenius (talk) 01:38, 28 October 2025 (UTC)[reply]
  • Support It's not fair to submit GA checkers to the noxious task of checking everything in a long detailed article for AI problems. Even without a rule, if you see evidence of AI, say so in the review, so that everyone can see the AI rabbit hole has been found. Nobody is obligated to go down that warren; note it and pass it by. Heck, make some warning templates or essays, so future reviewers understand their obligation. It should take 10+ hours to correctly verify an AI article, as it requires reading all the sources and understanding the topic in depth. -- GreenC 20:52, 26 October 2025 (UTC)[reply]
    It's not fair to submit GA checkers to the noxious task of checking everything in a long detailed article for AI problems they don't have to at the moment. If there are problems the review is already failed regardless of whether or not the problems result from AI use. If there are no problems then whether AI was used is irrelevant. Thryduulf (talk) 20:57, 26 October 2025 (UTC)[reply]
    Also, I should note that if a reviewer finds so many issues that the article requires 10+ hours to fix, it is already acceptable to quickfail based on these other issues. GA is supposed to be a lightweight process; reviewers already can fail articles if they find things like failed verification or issues needing maintenance banners, and determine that the issues can't be reasonably fixed within a week or so. The proposed GA criterion is well-intentioned, but I think focusing on the means of writing the articles, rather than the ends, is not the correct way to go about it. Epicgenius (talk) 22:40, 26 October 2025 (UTC)[reply]
    With AI you don't even know errors exist. It took me 7 days once to find all the problems in an AI-generated article. It turned out to have a reasonable-sounding but nationalistic bent, supported by errors of omission. How do you know this without research on the topic? This is why so many are against AI: it's incredibly difficult to debug. Normally a nationalistic writer is easy to spot, but AI is such a good liar that not even the operators realize what it is doing. Not to say AI is impossible to use correctly, with a skilled, disciplined, and intellectually honest operator. — GreenC 23:41, 26 October 2025 (UTC)[reply]
    I agree, and based on some known AI model biases, any controversial topic (designated or just by common sense) should probably have AI use banned completely. Kingsif (talk) 23:50, 26 October 2025 (UTC)[reply]
    I do see, and agree with, the point that you would have to very carefully examine all claims in an article that is suspected of containing AI content. However, WP:GAQF criterion 3 (It has, or needs, cleanup banners that are unquestionably still valid. These include {{cleanup}}, {{POV}}, {{unreferenced}} or large numbers of {{citation needed}}, {{clarify}}, or similar tags ) already covers this. If an article is suspected of containing AI, and thus deserves (or has) {{AI-generated}}, it is already eligible for a quick fail under QF criterion 3. – Epicgenius (talk) 02:53, 27 October 2025 (UTC)[reply]
    Epicgenius, yeah that sounds right. Maybe somewhere in the GA rules there could be a reminder about adding {{AI-generated}} if AI is discovered during the vetting process. Then QF#3 takes effect. — GreenC 06:27, 27 October 2025 (UTC)[reply]
    That's fair, and I can agree with adding it to QF#3. – Epicgenius (talk) 13:02, 27 October 2025 (UTC)[reply]
  • Weak oppose While I am against the use of AI in GA and the GANR process, I think this is a somewhat misguided proposal, as it covers things that would already fall under the quickfail criteria and does not actually identify the scope of the issues (i.e. what is considered obvious evidence of AI use?). I would be able to support a non-redundant and more detailed proposal, but it would need to be more fleshed out than this. IntentionallyDense (Contribs) 02:18, 27 October 2025 (UTC)[reply]
    The proposal clearly identifies 'obvious evidence of AI use' as AI-generated references, such as those that can be detected by Headbomb's script, and remnants of AI prompt. Yours, &c. RGloucester 04:03, 27 October 2025 (UTC)[reply]
    So you want people to quick-fail a nomination on the basis of @Headbomb's script, about which the documentation for the script says it "is not necessarily an issue ("AI, find me 10 reliable sources about Pakistani painter Sadequain Naqqash")". That sounds like a bad idea to me. WhatamIdoing (talk) 06:20, 27 October 2025 (UTC)[reply]
    I think I have made my stance on LLM use in relation to good articles very clear. You are free to object as you see fit. Yours, &c. RGloucester 07:12, 27 October 2025 (UTC)[reply]
    Your wording says 6. It contains obvious evidence of LLM use, such as AI-generated references or remnants of AI prompt. This tells me that you can't have AI prompts in your writing and no AI-generated references. Okay… so both of those would be covered by the current criteria. It didn't mention the Headbomb script, and I definitely would not support any quickfail criterion that relies on a user script, especially when the script itself states "This is not a tool to be mindlessly used." Also, on what basis of the HB script are we quickfailing?
    The current proposal tells me nothing about what is considered suspicious for AI usage outside of the existing quickfail criteria. It gives me no guidance as a reviewer as to what may be AI unless it is blatantly obvious. IntentionallyDense (Contribs) 13:18, 27 October 2025 (UTC)[reply]
    WP:AISIGNS is a good start. Gnomingstuff (talk) 20:15, 28 October 2025 (UTC)[reply]
    I agree, we do have some pretty solid parameters around what is a red flag for AI, but I believe any proposal around policy/guidelines for AI needs to incorporate those and lay out what that looks like before we take action on it. I would just like an open conversation on what editors think the signs of AI use are; once we gain some consensus around indications of AI, it will be a lot easier to implement policy on how to deal with those indications.
    My main issue with this proposal is that it completely skipped that first step of gaining consensus about what the scope of the problem is and jumped to implementing measures to resolve said problem that we have not properly reached consensus on. IntentionallyDense (Contribs) 20:26, 28 October 2025 (UTC)[reply]
  • Oppose. If a GAN is poorly written, it fails the first criterion. If references are made up, it fails the second criterion. If the associated prose does not conform with the references, then it fails the second criterion. We shouldn't be adding redundant instructions to the good article criteria. I don't want new reviewers to be further intimidated by a long set of instructions. Steelkamp (talk) 04:53, 27 October 2025 (UTC)[reply]
  • Oppose, as I pointed out in other discussions, and as many have already pointed out here, GA criteria should be focused on the result, not the process. But besides that, I see a high anti-AI sentiment in these discussions and fear that, if these proposals are approved, they will be abused. Cambalachero (talk) 13:16, 27 October 2025 (UTC)[reply]
  • Support. Playing whack-a-mole with AI is a problematic time sink for good-faith Wikipedia editors because of the disparity in how much time it takes an AI-using editor to make a mess and how much time it takes the good-faith editors to figure it out and clean it up. This is especially problematic in GA where even in the non-AI cases making a review can be very time consuming with little reward. The proposal helps reduce this time disparity and by doing so helps head off AI users from gaming the system and clogging up the nomination queue, already a problem. —David Eppstein (talk) 17:29, 27 October 2025 (UTC)[reply]
    Please rewrite that. By writing "good-faith Wikipedia editors" meaning editors who do not use AI, you are implying that those who do are acting in bad faith. Cambalachero (talk) 17:37, 27 October 2025 (UTC)[reply]
    "Consensus-abiding Wikipedia editors" and "editors who either do not know of or choose to disrespect the emerging consensus against AI content" would be too unwieldy. But I agree that many new editors have not yet understood the community's distaste for AI and are using it in good faith. Many other editors have heard the message but have chosen to disregard it, often while using AI tools to craft discussion contributions that insist falsely that they are not using AI. I suspect that the ones who have reached the stage of editing where they are making GA nominations may skew more towards the latter than the broader set of AI-using editors. AGF means extending an assumption of good faith towards every individual editor unless they clearly demonstrate that assumption to be unwarranted. It does not mean falsely pretending the other kind of editor does not exist, especially in a discussion of policies and procedures intended to head off problematic editing. —David Eppstein (talk) 18:31, 27 October 2025 (UTC)[reply]
  • Support. People arguing that any article containing such things would ultimately fail otherwise are missing the point. The point is to make it an instant failure so further time doesn't need to be wasted on it - otherwise, people would argue, e.g., "oh that trace of a prompt / single hallucinated reference is easily fixed, it doesn't mean the article as a whole isn't well-written or passes WP:V. There, I fixed it, now continue the GA review." One bad sentence or one bad ref isn't normally an instant failure; but in a case where it indicates that the article was poorly generated via AI, it should be, since it means the entire article must be carefully reviewed and, possibly, rewritten before GA could be a serious consideration. Without that requirement, large amounts of time could be wasted verifying that an article is AI slop. This is especially true because the purpose of existing generative AI is to create stuff that looks plausible at a glance - it will often not be easy to demonstrate that it is a long way from meeting any one of the six good article criteria, wasting editor time and energy digging into material that had little time and effort put into it in the first place. That's not a tenable situation; once there is evidence that an article was badly generated with AI, the correct procedure is to immediately terminate the GA assessment to avoid wasting further time, and only allow a new one once there is substantial evidence that the problem has been addressed by in-depth examination and improvement. Determining whether an article should pass or fail based only and strictly only on the quality of the article is a laborious, time-intensive process; it is absolutely not appropriate to demand that an article be given that full assessment once there's a credible reason to believe that it's AI slop. 
That's the entire point of the quickfail criteria - to avoid wasting everyone's time in situations where a particular easily determined criterion means it is glaringly obvious that the article won't pass. --Aquillion (talk) 19:46, 27 October 2025 (UTC)[reply]
    Bravo, Aquillion! You explained my rationale for this proposal better than I could have done. I am much obliged. Yours, &c. RGloucester 23:55, 27 October 2025 (UTC)[reply]
  • Support in principle, although perhaps I'd prefer such obvious tells in the same criteria as the copyvio one. Like copyvio, the problems might not be immediately apparent, and like copyvio, the problems can be a headache to fix. Llm problems are possibly even much more of a timesink, checking through and potentially cleaning up llm stuff is not a good use of reviewer time. This QF as proposed will only affect the most blatant signals that llm text was not checked, which has its positives and negatives but worth noting when thinking about the proposal. CMD (talk) 01:34, 28 October 2025 (UTC)[reply]
  • Support. Deciding on the accuracy and relevance of every LLM's output is not sustainable on article talkpages or in articles. Sure, it could produce something passable, but there is no way to be sure without unduly wasting reviewer time. They're designed to generate text faster than any human being can produce or review it and designed in such a way as to make fake sources or distorted information seem plausible.--MattMauler (talk) 19:34, 28 October 2025 (UTC)[reply]
  • Support: per Aquillion, whose reasoning matches my own thoughts exactly. fifteen thousand two hundred twenty four (talk) 21:12, 28 October 2025 (UTC)[reply]
  • Support AI is killing our planet (to an even worse extent than other technologies) and we need to strongly discourage its use. JuxtaposedJacob (talk) | :) | he/him | 00:46, 29 October 2025 (UTC)[reply]
  • Oppose This proposal is far too broad. This would mean that an article with a single potentially hallucinated reference (that may not have even been added by the nominator) would be quickfailed. Nope. voorts (talk/contributions) 01:19, 29 October 2025 (UTC)[reply]
    If an article has even a single hallucinated reference, it should be quickfailed, as that means the nominator has failed to do the bare minimum of due diligence. CaptainEek Edits Ho Cap'n! 19:28, 2 November 2025 (UTC)[reply]
    That's why I said potentially hallucinated. I'm worried this will be interpreted by some broadly and result in real, but hard to find, sources being deemed hallucinated. Also, sometimes editors other than the nominator edit an article in the months between nomination and review. We shouldn't penalize such editors with a quickfail over just one reference that they may not have added. voorts (talk/contributions) 19:34, 2 November 2025 (UTC)[reply]
  • Comment Here's a list of editors who have completed a GAN review. voorts (talk/contributions) 01:29, 29 October 2025 (UTC)[reply]
    "Whoever wants to know a thing has no way of doing so except by coming into contact with it, that is, by living (practicing) in its environment. ... If you want knowledge, you must take part in the practice of changing reality. If you want to know the taste of a pear, you must change the pear by eating it yourself.... If you want to know the theory and methods of revolution, you must take part in revolution. All genuine knowledge originates in direct experience." – Mao Zedong
    Editors who have never done a GA review or who have done very few should consider that they may not have adequate knowledge to know what GAN reviewers want/need as tools. It seems to me like a lot of support for this is a gut reaction against any AI/LLM use, and I don't think that's a good way to make rules. voorts (talk/contributions) 15:48, 29 October 2025 (UTC)[reply]
    I like the way you’ve worded this, as it is my general concern as well. While I’m not too high up on that list, I’ve done 75-ish reviews and have never encountered AI usage. I know it exists and do see it as a problem; however, I don’t feel it deserves such a hurried reaction to create hard-and-fast rules. I would much prefer we take the time to properly flesh out a plan to deal with these issues, with community input from reviewers with a range of experience on the scope of the problem, how we should deal with it, and to what extent. IntentionallyDense (Contribs) 20:32, 29 October 2025 (UTC)[reply]
    Yes. Recently, I've noticed a lot of editors rushing to push through new PAGs without much discussion or consideration of the issues beforehand. It's not conducive to good policymaking. voorts (talk/contributions) 20:35, 29 October 2025 (UTC)[reply]
    I echo this sentiment. In my 100+ reviews done in the last year I have only had a few instances where I suspected AI use, and I can't think of any that had deep rooted issues clearly caused by AI. IAWW (talk) 22:10, 29 October 2025 (UTC)[reply]
    This rule would’ve been useful years ago, when we had users who really wanted to contribute but couldn’t write well enough; their primitive chatbot text was poor and they were unable to fix it, and keeping a review open to go through everything was the response, because they didn’t want to close it and insisted it just needed work. As gen AI use is only increasing, addressing the situation before it gets that bad is a good thing. Kingsif (talk) 19:13, 30 October 2025 (UTC)[reply]
    Cool, but I am easily highest up that list (which doesn’t count the reviews I did before it, or took over after) of everyone in this discussion, so your premise is faulty. Kingsif (talk) 19:07, 30 October 2025 (UTC)[reply]
    I don't think my premise is faulty. I never said everyone who does GAN reviews needs to think the same way, nor do I believe that, and I see that you and other experienced GAN reviewers disagree with me. My point was that editors who have never done one should consider whether they have enough knowledge to make an informed opinion one way or the other. voorts (talk/contributions) 19:12, 30 October 2025 (UTC)[reply]
    While you didn’t speak in absolutes, your premise was based in suggesting the people who disagree with you aren’t aware enough. Besides being wrong, you must know it was unnecessary and rather unseemly to bring it up in the first place: this is a venue for everyone to contribute. Kingsif (talk) 19:19, 30 October 2025 (UTC)[reply]
    That wasn't my premise. I just told you what my premise is and I stand by it. I felt like it needed to be said in this discussion because AI/LLM use is a hot button issue and we should be deliberative about how we handle it on wiki. If editors who have never handled a GAN review want to ignore me, they can. As you said, anyone can participate here. voorts (talk/contributions) 19:51, 30 October 2025 (UTC)[reply]
    Forgive me for disagreeing with your point, then, but I don’t think it even really requires editing experience in general to have an opinion on “should we make people waste time explaining why gen AI content doesn’t get a Good stamp or just let them say it doesn’t” Kingsif (talk) 20:11, 30 October 2025 (UTC)[reply]
    Fair enough. No need to apologize. I'm always open to disagreement. voorts (talk/contributions) 20:39, 30 October 2025 (UTC)[reply]
  • Oppose. The proposal lacks clarity in definitions and implementation and the solution is ill-targeted to the problems raised in this and the preceding discussion. Editors have stated that the rationale for new quick fail criteria is to save time. On the other hand, editors have said it takes hours to verify hallucinated references and editors disagree vehemently about the reliability of subjective determinations of AI writing or use of AI detectors. Others have stated that it is already within the reviewer's purview to quick fail an article if they determine that too much time is required to properly vet the article. It is not clear how reviewers will determine that an article meets the proposed AI quick fail criterion, how long this will take, or that a new criterion is needed to fail such articles. Editors disagree about which signs of AI writing are "obvious" and as to whether all obvious examples are problematic. The worst examples would fail, anyway, and seemingly without requiring hours to complete the review so again it is unclear that this new criterion addresses the stated problem. Editors provided examples of articles with problematic, (allegedly) AI-generated content that have passed GA. New quick fail criteria would not address these situations where the reviewer apparently did not find the article problematic while another felt the problems were "obvious". Reviewers who are bad at detecting AI writing or don't verify sources or whatever the underlying deficit is won't invoke the new quick fail criterion and won't stop AI slop from attaining GA status.—Myceteae🍄‍🟫 (talk) 01:42, 29 October 2025 (UTC)[reply]
  • Support in the strongest possible terms. This is half practical, and half principle: the principle being that LLM/AI has no place on Wikipedia. Yes, there may be some, few, edge cases where AI is useful on Wikipedia. But one good apple in a barrel of bad apples does not magically make the place that shipped you a barrel of bad apples a good supplier. For people who want an LLM-driven encyclopedia, Grokipedia is thataway →. For people who want an encyclopedia actually written by and for human usage, the line must be drawn here. - The Bushranger One ping only 01:53, 29 October 2025 (UTC)[reply]
  • Oppose per IntentionallyDense and because detecting AI generation isn't always "obvious", and because the nom's proposed method for detecting LLM use to generate the article's contents will also flag people who use (e.g.) ChatGPT as a web search engine without AI generating even a single word in the whole article. Also: if you want to make any article look very suspicious, then spam ?utm_source=chatgpt.com at the end of every URL. The "AI detecting" script will light up every source on the page as being suspicious, because it's not actually detecting AI use; it's detecting URLs with some referral codes. I might support adding {{AI generated}} to the list of other QF-worthy tags. WhatamIdoing (talk) 02:09, 29 October 2025 (UTC)[reply]
  • Oppose. If it's WP:G15-level, G15 it (no need to quickfail). Otherwise, we shouldn't go down the rabbit hole of unprovable editor behaviour and should focus on the actual quality of the article in front of us. If it has patently non-neutral language or several things fail verification, it can already be quick-failed as being a long way from the criteria. ~ L 🌸 (talk) 07:01, 29 October 2025 (UTC)[reply]
  • Oppose per MCE89. If you use an LLM to generate text and then use it as the basis for creating good, properly verified content, who cares? It's not as if a reviewer has to check every single citation — if you find one that's nonexistent, that alone should be sufficient to reject the article. Stating "X is Y"<ref>something</ref>, when "something" doesn't say so or doesn't even exist, is a hoax, and any hoax means that the article is a long way from meeting the "verifiable with no original research" criterion. And if we encounter "low effort usage of AI", that's certainly not going to pass a GA review. And why should something be instantly failed just because you believe that it's LLM-generated? Solidly verifying that something is automatically written — not just a high suspicion, but solidly demonstrating — will take more work than checking some references, and as Whatamidoing notes, it's very difficult to identify LLM usage conclusively; we shouldn't quick-fail otherwise good content just because someone incorrectly thinks that it was automatically written. I understand that LLMs tend to use M-dashes extensively. I've always used them a lot more than the average editor does; this was the case even when I joined Wikipedia 19 years ago, long before LLMs were a problem this way. Nyttend (talk) 10:44, 29 October 2025 (UTC)[reply]
  • Support per PMC, David Eppstein and Aquillion. A lot of the opposes look to me like they are completely missing the point of a useful practical measure over irrelevant theoretical concerns. I also do find it absolutely insulting to not give reviewers every possible tool to deal with this trash, making them waste precious time and effort to needlessly satisfy another of the existing criteria. Choucas0 🐦‍⬛💬📋 15:25, 29 October 2025 (UTC)[reply]
    I also do find it absolutely insulting to not give reviewers every possible tool to deal with this trash, making them waste precious time and effort to needlessly satisfy another of the existing criteria. I've reviewed a lot of GAs and oppose this because it's vague and a solution in search of a problem. I see that you've completed zero GAN reviews. voorts (talk/contributions) 15:44, 29 October 2025 (UTC)[reply]
    You are entitled to your opinion, but so am I, and I honestly do not see what such a needlessly acrimonious answer is meant to achieve here. The closer will be free to weigh your opposition higher than my support based on experience, but in the meantime that does not entitle you to gate-keep and belittle views you disagree with because you personally judge them illegitimate. Choucas0 🐦‍⬛💬📋 15:58, 29 October 2025 (UTC)[reply]
    You are entitled to your opinion. But when your opinion is based on the fact that something is insulting to a group to which I belong, I am entitled to point out that you're not part of that group and that you're not speaking on my behalf. I don't see how it's acrimonious or gate-keep[ing] or belittl[ing] to point out that fact. voorts (talk/contributions) 16:06, 29 October 2025 (UTC)[reply]
    That is not what my opinion is based on (the first half of my comment pretty clearly is), and I did not mean to speak on anyone's behalf; I apologize if it was not clearer, since it is something that I aim to never do. I consider being exposed to raw LLM output insulting to anyone on this site, so I hope what I meant is clearer now. On another hand, your comment quoting Mao Zedong immediately after your first answer to me clearly shows that you do intend to gate-keep this discussion at large, so you will forgive me for being somewhat skeptical and not engaging further. Choucas0 🐦‍⬛💬📋 16:31, 29 October 2025 (UTC)[reply]
    I'm not sure how pointing out that editors should think before they opine on something with which they have little to no experience is a form of gatekeeping. That's why I didn't say "those editors can't comment" in this discussion. It's a suggestion that people stop and think about whether they actually know enough to have an informed opinion. voorts (talk/contributions) 16:44, 29 October 2025 (UTC)[reply]
  • Oppose per LEvalyn. Like WhatamIdoing, I would rather treat {{AI generated}} as reason to quick-fail under GA Criteria 1 and 2. ViridianPenguin🐧 (💬) 15:35, 29 October 2025 (UTC)[reply]
    I've seen a couple people suggest this, and... I don't really get how this is different at all? Anything under the proposed criterion can be tagged as AI-generated already, this would just be adding an extra step. Gnomingstuff (talk) 20:25, 31 October 2025 (UTC)[reply]
  • Oppose. If it has remnants of a prompt, that's already WP:G15. If the references are fake, that's already WP:G15. If it's not that bad, further review is needed and it shouldn't be QF'd. If AI-generated articles are being promoted to GA status without sufficient review, that means the reviewer has failed to do their job. Telling them their job is now also to QF articles that have signs of AI use won't help them do their job any better - they already didn't notice it was AI-generated. -- asilvering (talk) 15:56, 29 October 2025 (UTC)[reply]
  • Oppose. The article should be judged on its merits and its quality, not the manner or methods of its creation. Any AI-generated references will fail criterion 2. If the AI-generated text is a copyright violation, it would be an instant failure as well. We don't need to write up new rules for things that are forbidden in the first place anyway. Another concern for me is the term "obvious". While there may be universal agreement that some AI slop is obviously AI ("This article is written for your request...", "Here is the article..."), some might not be obvious to other people. The use of em-dashes might not be an obvious sign of AI use, as some ESL writers use them as well. The term "obvious" is vague and will create problems. Obvious AI slop can be dealt with via G15 as well. SunDawn Contact me! 02:55, 30 October 2025 (UTC)[reply]
  • Support - too much junk at this point to be worthwhile. Readers come here exactly because it is written by people and not Grokipedia garbage. We shouldn't stoop to that level. FunkMonk (talk) 13:38, 30 October 2025 (UTC)[reply]
  • Support If there is an obvious trace of LLM use in the article and you are the creator, then you have no business being anywhere near article creation. If you are the nominator, then you have failed to apply a basic level of due diligence. Either way the article will have to be gone over with a fine comb, and should be removed from consideration. --Elmidae (talk · contribs) 13:54, 30 October 2025 (UTC)[reply]
  • Support Per nom. LLM-generated text has no place on Wikipedia. The Morrison Man (talk) 13:59, 30 October 2025 (UTC)[reply]
  • Support GAN is not just a quality assessment – it also serves as a training ground for editors. LLM use undermines this; using LLMs just will not lead to better editors. As a reviewer, I refrain from reading anything that is potentially AI generated, as it is simply not worth my time. I want to help actual humans with improving their writing; I am not going to pointlessly correct the same LLM mistakes again and again, which is entirely meaningless. LLM use should be banned from Wikipedia entirely. --Jens Lallensack (talk) 15:59, 30 October 2025 (UTC)[reply]
  • Oppose. The Venn diagram crossover of "editors who use LLMs" and "editors who are responsible enough to be trusted to use LLMs responsibly" is incredibly narrow. It would not surprise me if 95% of LLM usage shouldn't merely be quickfailed, but actively rolled back. That said, just because most editors cannot be trusted to use it properly does not mean it is completely off the table - using an LLM to create a table in source markup from given input is fine, say. Additionally, AI accusations can prove a "witch hunt" where just because an editor's writing style includes m-dashes or bold, it gets an AI accusation - even though real textbooks may often also use bolding and m-dashes and everything too! If a problematic LLM article is found, it can still be quick-failed on criterion 1 (if the user wrote LLM-style rather than Wikipedia-style) or criterion 2 (if the user used the LLM for content without triple-verifying everything against real sources they had access to). We don't need a separate criterion for those cases. SnowFire (talk) 18:40, 30 October 2025 (UTC)[reply]
  • Oppose – either the issues caused by AI make an article a long way from meeting any one of the six good article criteria, in which case QF1 would apply, or they do not, in which case I believe a full review should be done. With the current state of LLMs, any article in the latter category will be one that a human has put significant work into. Some editors would dislike reviewing these nominations, but others are willing; I think making WP:LLMDISCLOSE mandatory would be a better solution. jlwoodwa (talk) 04:20, 31 October 2025 (UTC)[reply]
    I would also fully support mandatory LLM disclosure IAWW (talk) 08:58, 31 October 2025 (UTC)[reply]
    But wouldn't those reviewers that are possibly willing to review an LLM generated article be primarily those that use LLMs themselves, have more trust in them, and probably even use them for their review? A situation where most LLM-generated GAs are reviewed by LLMs does not sound healthy. --Jens Lallensack (talk) 12:00, 31 October 2025 (UTC)[reply]
    I think that's a stretch. I've used an LLM to create two articles, but wouldn't trust it to review an article against GAN criteria. ScottishFinnishRadish (talk) 12:15, 31 October 2025 (UTC)[reply]
    LLM usage is a scale. It is not as black-and-white as those who use LLMs vs those who don't. I am of the opinion that LLMs should only be used in areas where their error rate is less than humans. In my opinion LLMs pretty much never write adequate articles or reviews, yet they can be used as tools effectively in both. IAWW (talk) 13:22, 31 October 2025 (UTC)[reply]
  • Oppose - Redundant. Stikkyy t/c 11:53, 31 October 2025 (UTC)[reply]
  • Oppose. GA is about assessing the quality of the article, not about dealing with prejudice toward any individual or individuals. If the article is bad (poorly written, biased, based on rumour rather than fact, with few cites to reliable sources), it doesn't matter who has written it. Equally, if an article is good (well written, balanced, factual, and well cited to reliable sources), it doesn't matter who has written it, nor what aid(s) they used. Let's assess the content, not the contributor. SilkTork (talk) 12:15, 31 October 2025 (UTC)[reply]
  • Oppose. Focuses too much on the process rather than on the end result. Also, the vagueness of 'obvious' lays the ground for after-the-event arguments on such things as "I already know this editor uses LLMs in the background; the expression 'stands as a ..' appears, and that's an obvious LLM marker". MichaelMaggs (talk) 18:20, 31 October 2025 (UTC)[reply]
    I already know this editor uses LLMs in the background
    How is this not a solid argument? Gnomingstuff (talk) 00:57, 3 November 2025 (UTC)[reply]
  • Support per Aquillion. Nikkimaria (talk) 18:50, 1 November 2025 (UTC)[reply]
  • Support GA is a mark of quality. If you read something and you can obviously tell it is AI, that does not meet our standards of quality. Florid language, made-up citations, obvious formatting errors a human wouldn't make, whatever it is that indicates clear AI use, that doesn't meet our standards. Could we chalk that up to failing another criterion? Maybe. But it's nice to have a straightforward box to check to toss piss-poor AI work out--and to discourage the poor use of AI. CaptainEek Edits Ho Cap'n! 19:34, 2 November 2025 (UTC)[reply]
  • Support In my view, if someone is so lazy that they generate an entire article to nominate without actually checking whether it complies with the relevant policies and guidelines, then their nomination is not worth considering. Reviews are already a demanding process, especially nowadays. Why should I or anyone else put in the effort if the nominator is not willing to also put in the effort? Lazman321 (talk) 03:10, 3 November 2025 (UTC)[reply]
    This proposal would impact those people, but it would also speedily fail submissions by people who do (or are suspected of) using LLMs but who do put in the effort to check that the LLM-output complies with all the relevant policies and guidelines. For example:
    • Editor A uses an LLM to find a source, verifies that that source exists, is reliable, and supports the statement it is intended to support but doesn't remove the associated LLM metadata from the URL. This nomination is speedily failed, despite being unproblematic.
    • Editor B uses an LLM to find a source, verifies that that source exists, is reliable, and supports the statement it is intended to support, and removes the associated LLM metadata from the URL. This nomination is speedily failed if someone knows or suspects that an LLM was used, it is accepted if someone doesn't know or suspect LLM use, despite the content being identical and unproblematic.
    • Editor D finds a source without using an LLM, verifies that that source exists, is reliable, and supports the statement it is intended to support. This nomination is accepted, even though the content is identical in all respects to the preceding two nominations.
    • Editor D adds a source, based on a reference in an article they don't know is a hoax without verifying anything about the source. The reviewer AGFs that the offline source exists and does verify the content (no LLMs were used so there is no need to suspect otherwise) and so the article gets promoted.
    Please explain how this benefits readers and/or editors. Thryduulf (talk) 04:36, 3 November 2025 (UTC)[reply]
    I am nitpicking, but you got two "Editor D" there. SunDawn Contact me! 00:55, 4 November 2025 (UTC)[reply]
    Whoops, the second should obviously be Editor E (I changed the order of the examples several times while writing it, obviously I missed correcting that). Thryduulf (talk) 01:23, 4 November 2025 (UTC)[reply]
  • Oppose "Obvious" is subjective, especially if AI chatbots become more advanced than they are now and are able to speak in less stilted language. Furthermore, either we ban all AI-generated content on Wikipedia or we allow it anywhere; this is just a confusing half-measure. (I am personally in support of a total ban, since someone skilled enough to proofread the AI and remove all hallucinations/signs of AI writing would likely just write it from scratch; it doesn't save much time. Or if it did, they'd still avoid it out of fear of besmirching their reputation, given the sheer number of times AI is abused.) ᴢxᴄᴠʙɴᴍ () 11:31, 3 November 2025 (UTC)[reply]
    A total ban of AI has not gained consensus, in part because there are few 'half-measures' in place that would be indicative that there is a widespread problem. The AI image ban came only after a BLP image ban, for example. CMD (talk) 11:53, 3 November 2025 (UTC)[reply]
    Adding hypocritical half-measures just to push towards a full ban would be "disrupting Wikipedia to make a point". As long as it's allowed, blocking it in GAs would make no sense. It's also likely that unedited AI trash will be caught by reviewers anyway because it's incoherent, even before we get to the AI criterion. ᴢxᴄᴠʙɴᴍ () 15:46, 4 November 2025 (UTC)[reply]
    I'm not sure where the hypocrisy is in the proposal. Whether reviewers will catch unedited AI trash is also not affected by the proposal, the proposal provides a route for action following the catch of said text. CMD (talk) 16:01, 4 November 2025 (UTC)[reply]
  • Support - I think at some point LLMs like to cite Wikipedia whenever they spit out an essay or any kind of info on a given topic. Then an editor will paste this info into the article, which the AI will cite again, and Wikipedia articles will basically end up ouroboros'd. User:shawtybaespade (talk) 12:01, 3 November 2025 (UTC)[reply]
  • Support Aquillion's comment above expresses my view very well. Stepwise Continuous Dysfunction (talk) 20:11, 3 November 2025 (UTC)[reply]
  • Support. Obviously a sensible idea. Stifle (talk) 21:15, 3 November 2025 (UTC)[reply]
    There is extensive explanation above of why this is not a good proposal, so this comment just indicates you haven't read anything of the discussion, which is something for the closer to take note of. Thryduulf (talk) 21:25, 3 November 2025 (UTC)[reply]
    I urge everyone not to make inferences about what others have read. The wide diversity of opinions makes it clear that different editors find different arguments compelling, even after reading all of them. isaacl (talk) 23:58, 3 November 2025 (UTC)[reply]
    If Stifle had read and thought about any of the comments on this page it would be extremely clear that it is not "obviously" a sensible idea. Something that is "obviously" a sensible proposal does not get paragraphs of detailed explanation about why it isn't sensible from people who think it goes too far and from those who think it doesn't go far enough. Thryduulf (talk) 01:27, 4 November 2025 (UTC)[reply]
  • Oppose I feel the scope that this criterion would cover is already made redundant by the other criteria (see Thryduulf's !vote). Additionally, I am concerned that this will raise false positives for those whose writing style is too close to what an LLM could generate. Gramix13 (talk) 23:02, 3 November 2025 (UTC)[reply]
  • Support, as expressed by Aquillion. Furthermore, I would rather not see Wikipedia become a Grokipedia. Lf8u2 (talk) 01:41, 4 November 2025 (UTC)[reply]
    Grokipedia is uncontrollable AI slop where no one can control the content (except for Elon Musk and his engineers). Wikipedia's current rules are enough to stop such a travesty without adding this quickfail category. GA criteria #1 and #2 are more than enough to stop the AI slop. G15 is still there as well. No need to put rules on top of other rules. SunDawn Contact me! 04:37, 4 November 2025 (UTC)[reply]
  • Oppose Per the statements made above and in the discussion that the failures of AI (hallucination) are easily covered by criteria 1 and 2. But, additionally, because I am not confident that AI is easily detected. AI-detector tools are huge failures, and my own original work on other sites has been labeled AI in the past when it's not. So I personally have experience being accused of using AI when I know my work is original, all because I use em-dashes. And since AI is only going to improve and become even harder to detect, this criterion is most likely going to be used to give false confidence to over-eager reviewers ready to quick-fail based on a hunch. Terrible idea.--v/r - TP 01:40, 4 November 2025 (UTC)[reply]
  • Support after consideration. I do not love how the guideline is currently written - I think all criteria for establishing "obvious" LLM use should be defined. However, I would rather support and revise than oppose. Seeing multiple frequent GA reviewers !vote support also suggests there is a gap with the current QF criteria. NicheSports (talk) 04:58, 4 November 2025 (UTC)[reply]
  • Support, the fact that people are starting to write like AI/bots/LLMs means that "false positives" will be detecting (in some cases) users who are too easily influenced by what they are reading. Let's throw those babies out with the bathwater. Abductive (reasoning) 05:16, 4 November 2025 (UTC)[reply]
    It's literally the opposite of that. LLMs and GenAI are trained on human writing. They mimic human writing, not the other way around. And are you suggesting banning users for writing in similar prose to the highly skilled published authors that LLMs are trained on? What the absolute fuck?!?--v/r - TP 15:21, 4 November 2025 (UTC)[reply]
    LLMs are trained on highly skilled published authors? Pull the other one, it's got bells on. I didn't know highly skilled published authors liked to delve into things with quite so many emojis. Cremastra (talk · contribs) 15:26, 4 November 2025 (UTC)[reply]
  • Support per Aquillion and Lf8u2, most editors I assume would not want Wikipedia to become Grokipedia, a platform of AI slop. LLMs such as Grok and ChatGPT write unencyclopedically or unnaturally and cite unreliable sources such as Reddit. Alexeyevitch(talk) 07:55, 4 November 2025 (UTC)[reply]
    Users !opposed to this proposal are not supportive of AI slop or a 'pedia overrun by AIs. It's just a bad proposal.--v/r - TP 15:22, 4 November 2025 (UTC)[reply]

Discussion (GA quick fail)

  • I sometimes use AI as a search engine and link remnants are automatically generated. I'd rather not face quickfail for that. I'm also not seeing how the existing criteria are not sufficient; if links are fake or clearly don't match the text, that is already covered under a quickfail as being a long way from demonstrated verifiability. Can a proponent of this proposal give an example of an article they would be able to quickfail under this that they can't under the current criteria? Rollinginhisgrave (talk | contributions) 10:47, 26 October 2025 (UTC)[reply]
    The purpose of this proposal is to draw a line in the sand, to preserve the integrity of the label 'good article', and make clear where the encyclopaedia stands. Yours, &c. RGloucester 12:55, 26 October 2025 (UTC)[reply]
    In a nutshell the difference is that with AI-generated text, every single claim and source must be carefully checked, and not just for the source's existence; GA only requires spot-checking a handful. The example I gave above was a FA, not GA, but it's basically the same thing. Gnomingstuff (talk) 17:56, 26 October 2025 (UTC)[reply]
    Thankyou for this example, although I'm not sure how it's applicable here as it wouldn't fall under "obvious evidence of LLM use". At what point in seeing edits like this are you invoking the QF? Rollinginhisgrave (talk | contributions) 21:23, 26 October 2025 (UTC)[reply]
    The combination of "clear signs of text having been written by AI..." plus "...and there are multiple factual inaccuracies in that text." Or in other words, obvious evidence (#1) plus problems that suggest that the output wasn't reviewed well/at all (#2). Gnomingstuff (talk) 02:20, 27 October 2025 (UTC)[reply]
    I've spent some time thinking about this. Some thoughts:
    • What you describe as obviously AI is very different to what RGloucester describes here, which makes me concerned about reading any consensus for what proponents are supporting.
    • I would describe what you encountered at La Isla Bonita as "possible/probable AI use" not "obvious", and your description of it as "clear" is unconvincing, especially when put against cases where prompts are left in etc.
    • If I encountered multiple substantial TSI issues like that and suspected AI use, I would be more willing to quickfail, as I would have less trust in the text's verifiability. I would want other reviewers to feel emboldened to make the same assessment, and I think it's a problem if they are not currently willing to do so because of how the QF criteria are laid out.
    • I see no evidence that this is actually occurring.
    • I think that the QF criterion would have to be made broader than proposed ("likely AI use") to capture such occurrences, and I would like to see wording which would empower reviewers in that scenario but would avoid quickfails where AI use is suspected but only regular TSI issues exist (for those who do not review regularly, almost all spot checks turn up issues with TSI).
    Rollinginhisgrave (talk | contributions) 17:49, 29 October 2025 (UTC)[reply]
    Not a fan of RGloucester's criteria tbh, I don't feel like references become quickfail-worthy just because someone used ChatGPT search, especially given that AI browsers now exist.
    As far as the rest this is why I !voted weak support and not full support -- I'm not opposed to quickfail but it's not my preference. My preference is closer to "don't promote until a lot more review/rewriting than usual is done." Gnomingstuff (talk) 05:00, 1 November 2025 (UTC)[reply]
  • What's "obvious evidence of AI-generated references"? For example, I often use the automatic generation feature of the visual editor to create a citation template. Or I might use a script to organise the references into the reflist. The proposal seems to invite prejudice against particular AI tells but these include things like using an m-dash, and so are unreliable. Andrew🐉(talk) 10:53, 26 October 2025 (UTC)[reply]
    Yeah, it's poorly written. "Obvious evidence of AI-generated references" in this context means a hallucination of a reference that doesn't exist. Viriditas (talk) 02:43, 28 October 2025 (UTC)[reply]
  • What about something similar to WP:G15, for example 6. It contains content that could only plausibly have been generated by large language models and would have been removed by any reasonable human review. Kovcszaln6 (talk) 10:59, 26 October 2025 (UTC)[reply]
  • This would be the first GA criterion that regulates the workflow people use to write articles rather than the finished product, which doesn't make much sense because the finished product is all that matters. Gen AI as a tool is also extremely useful for certain tasks, for example I use it to search for sources I may have missed (it is particularly good at finding multilingual sources), to add rowscopes to tables to comply with MOS:DTAB, to double check table data matches with the source, and to check for any clear typos/grammar errors in finished prose. IAWW (talk) 11:05, 26 October 2025 (UTC)[reply]
    It’s irrelevant to this discussion but I don’t think it’s right to call something “extremely useful” when the tasks are layout formatting, and source-finding and copy editing skills you can and should develop for yourself. You will get better the more you try, and when even just pretty good, you will be better than a chatbot. You also really don’t need gen AI to edit tables, there are completely non-AI tools to extract datasets and add fixed content in fixed places, tools that you know won’t throw in curveballs at random. Kingsif (talk) 14:24, 26 October 2025 (UTC)[reply]
    Well, "extremely useful" is subjective, and in my opinion it probably saves me about 30 mins per small article I write, which in my opinion justifies the adjective. I still do develop all the relevant skills myself, but I normally make some small mistakes (like for example putting a comma instead of a full stop), which AI is very good at detecting. IAWW (talk) 14:55, 26 October 2025 (UTC)[reply]
    You still don’t need overconfident error-prone gen AI for spellcheck. Microsoft has been doing it with pop ups that explain why your text may or may not have a mistake for almost my whole life. Kingsif (talk) 15:02, 26 October 2025 (UTC)[reply]
    GenAI is just faster and easier to use for me. IAWW (talk) 16:15, 26 October 2025 (UTC)[reply]
    Well yes, if you consider speed and ease of use to be more important than accuracy, generative AI is probably the way to go... AndyTheGrump (talk) 21:09, 26 October 2025 (UTC)[reply]
    @AndyTheGrump is there any evidence that the uses to which IAWW puts generative AI result in a less accurate output than doing it manually? Thryduulf (talk) 21:11, 26 October 2025 (UTC)[reply]
    I have no idea how accurately IAWW can manually check spelling, grammar etc. That wasn't the alternative offered however, which was to use existing specialist tools to do the job. They can get things wrong too, but rarely in the making-shit-up tell-them-what-they-want-to-hear way that generative AI does. AndyTheGrump (talk) 21:40, 26 October 2025 (UTC)[reply]
    Generative AI can do that in certain situations, but things like checking syntax doesn't seem like one of those situations. Anyway, if the edits IAWW makes to Wikipedia are accurate and free of neutrality issues, fake references, etc. why does it matter how that content was arrived at? Thryduulf (talk) 21:48, 26 October 2025 (UTC)[reply]
    'If' is doing a fair bit of work in that question, but ignoring that, it wouldn't, except in as much as IAWW would be better off learning to use the appropriate tools, rather than using gen AI for a purpose other than that it was designed for. I'd find the advocates of the use of such software more convincing if they didn't treat it as if it was some sort of omniscient and omnipotent entity capable of doing everything, and instead showed a little understanding of what its inherent limitations are. AndyTheGrump (talk) 23:02, 26 October 2025 (UTC)[reply]
    I clearly don't treat it as an omniscient and omnipotent entity, and I welcome any criticism of my work. IAWW (talk) 08:05, 27 October 2025 (UTC)[reply]
    To me - and look, as much as it's a frivolous planet-killer, I am not going to go after any individual user for non-content AI use, but I will encourage them against it - if we assume there are no issues with IAWW's output, my main concern would be the potential regression in IAWW's own capabilities for the various tasks they use an AI for, and how this could affect their ability to contribute to the areas of Wikipedia they frequent. E.g. if you never review your own writing and let AI clean it up, will your ability to recognise incorrect grammar and spelling deteriorate, and with it your ability to review others' writing? That, however, would be a personal concern, and something I would not address unless such an outcome became serious. As I said, with this side point, I just want to encourage people to develop and use these skills themselves. Kingsif (talk) 23:21, 26 October 2025 (UTC)[reply]
    why does it matter how that content was arrived at? Value? Morality? If someone wants ChatGPT, it's over this way. We're an encyclopedia. We have articles with value written by people who care about the articles. LLM-generated articles make a mockery of that. Why would you deny our readers this? I genuinely can't understand why you're so pro-AI. Do you not see how AI tools, while they have some uses, are completely incompatible with our mission of writing good articles? Cremastra (talk · contribs) 01:57, 28 October 2025 (UTC)[reply]
    Once again, Wikipedia is not a vehicle for you to impose your views on the morality of AI on the world. Wikipedia is a place to write neutral, factual encyclopaedia articles free of value judgements - and that includes value judgements about tools other people use to write factual, neutral articles. Thryduulf (talk) 02:17, 28 October 2025 (UTC)[reply]
    Your refusal to take any stance on a tool that threatens the value of our articles is starting to look silly. As I say here, we take moral stances on issues all the time, and LLMs are right up our alley. Cremastra (talk · contribs) 02:28, 28 October 2025 (UTC)[reply]
    That LLM is a tool that threatens the value of our articles is your opinion, seemingly based on your dislike of LLMs and/or machine learning. You are entitled to that opinion, but that does not make it factual.
    If an article is neutral and factual then it is neutral and factual regardless of what tools were or were not used in its creation.
    If an article is not neutral and factual then it is not neutral and factual regardless of what tools were or were not used in its creation. Thryduulf (talk) 02:52, 28 October 2025 (UTC)[reply]
    You missed two: If an article is not neutral and factual and was written by a person, you can ask that person to retrace their steps in content creation (if not scan edit-by-edit to see yourself) so everyone can easily identify where the inaccuracies originated and fix them. If an article is not neutral and factual and you cannot easily trace its writing process, it is hard to have confidence in any content at all when trying to fix it. Kingsif (talk) 03:01, 28 October 2025 (UTC)[reply]
    Now, relevantly, this proposal clearly does not regulate workflow, only the end product. It only refers to the article itself having evidence of obvious AI generation in its actual state. Clean up after your LLMs and you won’t get caught and charged 😉 Kingsif (talk) 14:28, 26 October 2025 (UTC)[reply]
    The "evidence" in the end product is being used to infer things about the workflow, and the stuff in the workflow is what the proposal is targeting. IAWW (talk) 14:50, 26 October 2025 (UTC)[reply]
    Y’all know I think gen AI is incompatible with Wikipedia and would want to target it, but I don’t think this proposal does that. If there’s AI leftovers, that content at least needs human cleanup, and that shouldn’t be put on a reviewer. That’s no different to identifying copyvio and quickfailing saying a nominator needs to work on it rather than sink time in a full review. Kingsif (talk) 14:59, 26 October 2025 (UTC)[reply]
  • Regarding "fake references", I can see the attraction in this being changed from a slow fail to a quick fail, but before it can be a quick fail there needs to be a reliable way to distinguish between references that are completely made up, references that exist but are inaccessible to (some) editors (e.g. offline, geoblocked, paywalled), references that used to be accessible but no longer are (e.g. linkrot), and references with incorrect details (e.g. typos in URIs/DOIs/ISBNs/titles/etc.). Thryduulf (talk) 12:56, 26 October 2025 (UTC)[reply]
    If you cannot determine if a reference that doesn’t work is AI or not, then it’s not obvious AI and this wouldn’t apply… Kingsif (talk) 14:09, 26 October 2025 (UTC)[reply]
    I think this is the problem: The proposal doesn't say "a reference that doesn’t work". It says "AI-generated references". Now maybe @RGloucester meant the kind of ref that's completely fictional, rather than real sources that someone found by using ChatGPT as a type of cumbersome web search engine, but that's not clear from what's written in the proposal.
    This is a bit concerning, because there have been problems with citations that people can't check since before Wikipedia's creation – for example:
    • Proof by reference to inaccessible literature: The author cites a simple corollary of a theorem to be found in a privately circulated memoir of the Slovenian Philological Society, 1883.
    • Proof by ghost reference: Nothing even remotely resembling the cited theorem appears in the reference given.
    • Proof by forward reference: Reference is usually to a forthcoming paper of the author, which is often not as forthcoming as at first.
    – and AI is adding to the traditional list the outright fabrication of sources: "Proof by non-existent source: A paper is alleged to exist, except that no such paper ever existed, and sometimes the alleged author and the alleged journal are made-up names, too". These are all problems, but they need different responses in the GA process. Made-up sources should be WP:QF #1: "It is a long way from meeting any one of the six good article criteria" (specifically, the requirement to cite real sources). A ghost reference is a real source but what's in the Wikipedia article {{failed verification}}; depending on the scale, that's a surmountable problem. A forward reference is an unreliable source, but if the scale is small enough, that's also a surmountable problem. Inaccessible literature is not grounds for failing a GA nom.
    If this is meant to be "most or all of the references are to sources that don't actually exist (not merely offline, not merely inconvenient, etc.)", then it can be quick-failed right now. But if it means (or gets interpreted as) "the URL says ?utm=chatgpt", then that's not an appropriate reason to quick-fail the nomination. WhatamIdoing (talk) 06:10, 27 October 2025 (UTC)[reply]
    Perhaps a corollary added to existing crit, saying that such AI source invention is a QF, would be more specific and helpful. I had thought this proposal was good because it wasn’t explicitly directing reviewers to “this exact thing you should QF”, but if there are reasonable concerns (not just the ‘but I like AI’ crowd) that the openness could instead confuse reviewers, then adding explicit AI notes to existing crit may be a better route. Kingsif (talk) 16:05, 27 October 2025 (UTC)[reply]
  • Suggestion: change the fail criterion to read "obvious evidence of undisclosed LLM use". There are legitimate uses of LLMs, but if LLM use is undisclosed then it likely hasn't been handled properly and shouldn't be wasting reviewers' time, since more than a spot-check is required as explained by Gnomingstuff. lp0 on fire () 09:17, 27 October 2025 (UTC)[reply]
    My concern here is that these in practice are basically the same thing: WP:LLMDISCLOSE is not mandatory, so almost all LLM use is undisclosed, even when people are doing review. Gnomingstuff (talk) 16:08, 27 October 2025 (UTC)[reply]
    It would also be so hard to implement making it mandatory, in practice. Heavy rollout means some users may not even know when they’ve used it. Left google on AI mode (or didn’t turn it off…)? Congrats, when you searched for a synonym you “used” an LLM. Kingsif (talk) 16:12, 27 October 2025 (UTC)[reply]
  • Any evidence of LLM use? Does that include disclosed LLM use in article development/creation? See Shit flow diagram and Malacca dilemma for examples. Should both of those quick fail based only on LLM use? ScottishFinnishRadish (talk) 11:10, 27 October 2025 (UTC)[reply]
    I took evidence to mean things in the article. I hope no reviewer would extend the GA crit to things not reviewed in the GAN process - like an edit reason or other disclosure. I can see the concern that this wording could allow or encourage them to, now that you bring it up. Kingsif (talk) 15:56, 27 October 2025 (UTC)[reply]
    A difficult part of workshopping any sort of rule like this is you have to remember not everyone who uses it will think the same way you do, or even the way the average person does. What I'd hate to see happen is we pass something like this and then have to come back multiple times to edit it because of people using it as license to go open season on anything they deem AI, evidence or no evidence. I don't mean to suggest you would do anything like that, Kingsif, but someone out there probably will. Trainsandotherthings (talk) 01:52, 28 October 2025 (UTC)[reply]
    I didn't think you were suggesting so ;) As noted, I agree. As much as obvious should mean obvious and evidence should be tangible evidence, and the spirit of the proposal should be clear... I still support it, as certainly less harmful than not having something like it, but I can see how even well-intentioned reviewers trying to apply it could go beyond this limited proposal's intention. Kingsif (talk) 01:59, 28 October 2025 (UTC)[reply]
  • I mentioned this above in my !vote, but isn't this already covered by WP:GAQF #3 (# It has, or needs, cleanup banners that are unquestionably still valid. These include {{cleanup}}, {{POV}}, {{unreferenced}} or large numbers of {{citation needed}}, {{clarify}}, or similar tags)? Any blatant use of AI means that the article deserves {{AI-generated}} and, as such, already is QF-able. All that has to be done is to modify the existing QF criterion 3 to make it explicit that AI generation is a rationale that would cause QF criterion 3 to be triggered. – Epicgenius (talk) 01:44, 28 October 2025 (UTC)[reply]
    To keep it short, isn't QF3 just a catch-all for "any clean-up issues that might not completely come under 1 & 2" and theoretically both those quickfail conditions come under it and they're unnecessary? But they're important enough to get their own coverage? Then we ask is unmonitored gen AI more or less significant than GA crit and copyvio. Kingsif (talk) 02:06, 28 October 2025 (UTC)[reply]
  • Suggestion: combining the obvious use of AI with evidence that the submission falls short of any of the other six GA criteria (particularly criteria 2). Many of the current Opposes reflect a sentiment that this policy would encapsulate too much: instead of reflecting the state of the article, it punishes those who use AI in their workflow. This suggestion would cover a quickfail of articles with AI-hallucinated references (so, for instance, if a reviewer notes a source with a ?utm_source=chatgpt.com tag and determines that the sentence is not verifiable, they can quickfail it); however, this suggestion limits the quickfail potential for people who use AI, review its outputs, and put work into making sure it meets the guidelines for a Wikipedia article. Staraction (talk | contribs) 07:41, 30 October 2025 (UTC)[reply]
    We already have this: it's WP:QF #1. Kovcszaln6 (talk) 08:23, 30 October 2025 (UTC)[reply]
    Sorry, I don't think I worded the quoted part well. I mean that, if there is obvious use of AI and any evidence at all of a hallucinated source, unverified citation, etc., then the reviewer is allowed to quickfail.
    If this still is WP:QF #1, then I sincerely apologize for wasting everybody's time. Staraction (talk | contribs) 08:39, 30 October 2025 (UTC)[reply]
    I see. I support this. Kovcszaln6 (talk) 08:45, 30 October 2025 (UTC)[reply]

RfC: Should edit filter managers be allowed to use the "revoke autoconfirmed" action in edit filters?

[edit]

An edit filter can perform certain actions when triggered, such as warning the user, disallowing the edit, or applying a change tag to the revision. However, there are lesser-known actions that aren't currently used in the English Wikipedia, such as blocking the user for a specified amount of time, desysopping them, and something called "revoke autoconfirmed". Contrary to its name, this action doesn't actually revoke anything; it instead prevents the user from being "autopromoted", or automatically becoming auto- or extended-confirmed. This restriction can be undone by any EFM at any time, and automatically expires in five days provided the user doesn't trigger that action again. Unlike block and desysop (called "degroup" in the code), this option is enabled for use on enwiki, but has seemingly never been used at all.

Fast forward to today, and we have multiple abusers and vandalbots gaming extended confirmed in order to vandalize or edit contentious topics. One abuser in particular has caused an edit filter to be created for them, which is reasonably effective in slowing them down, but it still lets them succeed if left unchecked. As far as I'm aware, the only false positive for this filter was triggered by PaulHSAndrews, who has since been community-banned. In theory, setting this filter to "revoke autoconfirmed" should effectively stop them from being able to become extended confirmed. Some technical changes were recently made to allow non-admin EFMs to use this action, but since it has never been used, I was told to request community consensus here.

So, should edit filter managers be allowed to use the "revoke autoconfirmed" action in edit filters? Children Will Listen (🐄 talk, 🫘 contribs) 05:04, 28 October 2025 (UTC)[reply]

Survey (edit filters)

[edit]

Discussion (edit filters)

[edit]

In general, this proposal seems highly dangerous, and policy shouldn't change. Just go to WP:STOCKS and you'll find some instances in which misconfigured filters prevented edits by everyone; imagine that these filters also included provisions to block or revoke rights from affected editors. However, the proposal seems to be talking about a filter for one particularly problematic user; I could support a proposal to make an exemption for egregious cases, but I think such an exemption should always be discussed by the community, so the suggested reconfiguration is the result of community consensus. Nyttend (talk) 10:51, 29 October 2025 (UTC)[reply]

Ireland categories

[edit]

Please see Wikipedia talk:WikiProject Ireland/Ireland Category Norms#RFC on moving or removing this guideline. WhatamIdoing (talk) 00:00, 29 October 2025 (UTC)[reply]

Add [[Special:Contributions/%Username%|contribs]] to default signatures

[edit]

Often I find when dealing with other editors - for example to see if they are active, or which areas they may be most active in - that I need to type Special:Contributions and then the username. I'm not active in SPI or much of the admin areas, partly because I'm not one, but particularly in SPIs you need to investigate patterns of contributions and initial work needs again to be done through Special:Contributions.

This is sort of tedious, so what I propose is adding a link to contributions just like in my signature. No, it's not because I have it, but because it's going to be a small improvement that is going to make cooperation easier. Note I'm not saying that we need to add logs because those are mostly for administrative purposes and typical users won't benefit from including it in the signature. And anyway Special:Contributions has links to all logs you may need if you really need them. Szmenderowiecki (talk · contribs) 14:35, 29 October 2025 (UTC)[reply]

I don't think we should change the default signature. As far as the user story above goes, you certainly do not need to manually type in Special:Contributions. If you follow any user: link to the user page, you can just click the "user contributions" link. Also, there are plenty of scripts that can add a popup for this. — xaosflux Talk 14:57, 29 October 2025 (UTC)[reply]
Your comment actually proves that we need it.
If there are plenty of scripts that do the thing, maybe this behaviour is requested enough that it should be by default.
As for the first part, by this measure you don't need the link to the talk page either, because it's easily accessible through the user page with just two clicks.
The way to access user contribs without manually typing it isn't intuitive.
  • On desktop, you go to the user page, click "Tools" and choose User contributions from the dropdown; so three clicks away and hidden behind two curtains
  • On mobile, it's hidden behind the middle symbol on the top bar of the user menu, but the symbol (a hamburger button with a schematic outline of a person, presumably a user) does not scream "user contributions" - my first thought would probably be "More user preferences" with a dropdown. The contributions page does look like lines on desktop, I'll give them that, but on mobile it's more like boxes/containers. These instructions are actually not contained in the how-to guide for user contributions
As opposed to IP addresses, for which you just click on the IP address. Szmenderowiecki (talk · contribs) 15:28, 29 October 2025 (UTC)[reply]
  • Manually typing in 'Special:Contribution' is certainly not intuitive
  • The scripts I mentioned use client-side javascript to make an on-hover popup, we generally don't force that sort of thing on others
  • I think you didn't mean to link to Help:Using colours above, but if Help:User contributions is outdated, feel free to boldly correct it - you can do that immediately
xaosflux Talk 16:21, 29 October 2025 (UTC)[reply]
  • I fixed the wikilink, and I added relevant information to the manual. Not that it resolves the situation because we shouldn't assume anyone reads the manual, even if they maybe should.
  • Manually typing in 'Special:Contributions' is certainly not intuitive - I agree, and this is confirmed by the fact that you missed an s when typing this, which is why I'm floating this proposal. The change will just make it one click away, regardless of the skin you use.
  • I never mentioned anything about shoving javascript down any editor's throat; what I was saying is that the abundance of implementations for this functionality indicates that such functionality is in fact needed in some form. It's about changing MediaWiki:Signature to add contributions after talk using just wikisyntax, nothing more. Szmenderowiecki (talk · contribs) 19:15, 29 October 2025 (UTC)
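For concreteness, the change being discussed amounts to a one-line edit to the MediaWiki:Signature interface message. The following is only an illustrative sketch: the `$1`/`$2` placeholders follow MediaWiki's signature-message convention (username and display name), and the exact wording of the current default message is not reproduced here.

```wikitext
<!-- $1 = username, $2 = display name; the default links only the user page and talk page -->
[[User:$1|$2]] ([[User talk:$1|talk]] · [[Special:Contributions/$1|contribs]])
```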
@Szmenderowiecki, have you tried enabling the gadget Wikipedia:Tools/Navigation popups in your preferences? With that gadget enabled, just hover over a username/signature and you'll see special user groups, total edits, how long they've been an editor, when they made their most recent edit, and then nested hover-menus include a link to their contributions. Schazjmd (talk) 14:46, 30 October 2025 (UTC)[reply]
I have enabled it and tested it. It sort of works, but I hate that it breaks the default "enable page previews" function, which is so much better (combined with shifting the default Wikipedia font to Bahnschrift/DIN 1451, as Arial is an eyesore). A mouseover would obviously also not work on mobile.
I enabled the "user info" functionality, which looks so much better. I think it is a great feature that should be made default once WMF decides that it is mature enough (it appeared just this year) and that it isn't for just "patrollers and moderators". Contributions are hidden behind two clicks and a hover on the username icon, which actually makes sense; but there's so much more. However, because I learned of the "no client-side javascript by default" attitude I don't think this resolves the issue. It is even worse because it only works in article history and not at each instance that a username is mentioned (e.g. if it detected [[User:%Username%]] or {{u|%Username%}} or {{ping|%Username%}} etc. in-text). Szmenderowiecki (talk · contribs) 20:51, 30 October 2025 (UTC)

Should we delete outlines?

[edit]

I started an AfD with an "article" I stumbled across:

Outline of Wikipedia.

Come to find out there are a whole slew of Wikipedia:Outlines that, I guess, are supposed to be some sort of cross between cliff notes and DMOZ. They are classified as "lists", but they aren't lists. They are, I guess, the private project of a few people who seem to be operating parallel articles to main topics but without any narrative structure.

Why do these exist? How are they controlled editorially? Should they all be merged into other articles?

Could I just propose deleting all of them?

jps (talk) 00:24, 30 October 2025 (UTC)[reply]

IMHO outlines are what categories should ideally be. They have their purpose. Szmenderowiecki (talk · contribs) 00:59, 30 October 2025 (UTC)
How so? Categories are a structured and hierarchical data type. Outline articles seem to be attempts to force an article into something like an article without narrative or prose. jps (talk) 01:36, 30 October 2025 (UTC)[reply]
I look at it from another perspective - it's an enhanced version of categories where you very briefly describe what's going on and what the reader is likely to see of relevance to the outlined topic when they land on any given page (the comparison I like is a .txt file versus HTML). We don't give sources in categories, and outlines are supposed to do the same. I think we could approximate this by forcing categories to show short descriptions for each article; but, for example, Greta Thunberg is mentioned in Outline of autism, and this would be inappropriate info for a short description, at least under the current understanding of what short descriptions should be ("autistic climate activist" sounds denigrating)
Maybe outline should be its own namespace. Szmenderowiecki (talk · contribs) 02:02, 30 October 2025 (UTC)[reply]
I would be more comfortable with them in their own namespace, since the ostensible subject of an article like "Outline of Wikipedia", I gather, has no notability outside of Wikipedia's invention of the outline structure. jps (talk) 02:56, 30 October 2025 (UTC)
I've always thought categories should have a more user-oriented view by default with the shortdescs and thumbnail, like the Vector 2022 search suggestions. The current view with the plain links would be accessible with a quick toggle, and it'd remember your preference so it wouldn't be a burden on existing editors that prefer the current layout.  novov talk edits 03:30, 30 October 2025 (UTC)[reply]
Wikipedia:Why do we have outlines in addition to...? addresses at least one reason why outlines are in article space, in describing how they differ from Wikipedia:Portals, which among other things are in their own namespace:

Outlines show up when you search Wikipedia, which is important because we want people to be able to find them easily. Portals don't show up in searches by default, and when they are included, their subpage entries make the search results very hard to read (because their many subpages clutter the results).

I think it's fair to ask whether individual outlines, or outlines as a whole, are serving their purpose, but they clearly are intended to live alongside and complement lists and articles. —Myceteae🍄‍🟫 (talk) 04:47, 30 October 2025 (UTC)[reply]
Why do "we want people to be able to find them easily" but we don't want them to be able to find portals easily? jps (talk) 17:39, 30 October 2025 (UTC)[reply]
Fair question about portals. I've seen other editors say they think they should be in article space. The quote above mentions the navigation issues with portal subpages cluttering search results. I don't know the history of portals or what went into their creation and placement into a dedicated namespace. —Myceteae🍄‍🟫 (talk) 18:28, 30 October 2025 (UTC)
Pretty much because the community decided it likes outlines more than portals. Dege31 (talk) 15:17, 1 November 2025 (UTC)[reply]
I like these articles. I don't see why they should be deleted. Aaron Liu (talk) 01:27, 30 October 2025 (UTC)[reply]
How do you maintain editorial control over them? Who decides how Outline of Wikipedia should be different from Wikipedia? What is the source upon which we are basing this artform? jps (talk) 01:35, 30 October 2025 (UTC)[reply]
The same way as every other article? —Myceteae🍄‍🟫 (talk) 02:29, 30 October 2025 (UTC)[reply]
Doubtful. I see no examples on which to draw. Outlines are the precursor to writing. The ones I'm looking at look like they are stubs of the main articles. jps (talk) 02:55, 30 October 2025 (UTC)[reply]
Outlines are the precursor to writing. I mean that is one type of outline but that's not what these are. —Myceteae🍄‍🟫 (talk) 04:37, 30 October 2025 (UTC)[reply]
I have no idea what these are. As far as I can tell, they are the wholecloth invention of this website. jps (talk) 17:40, 30 October 2025 (UTC)[reply]
You can outline something before expanding that outline into a full article, or you can summarize existing information with an outline. This is the latter. Aaron Liu (talk) 23:57, 30 October 2025 (UTC)
As mentioned below, the example you're looking for is disambiguation pages. Aaron Liu (talk) 11:33, 30 October 2025 (UTC)[reply]
I'm vaguely aware of outlines but have rarely ever looked at them. I don't think we should delete them and I'm struggling to see the problem. Of course, individual outlines can be nominated for deletion when they have issues. I anticipate that deletion will be a hard sell when the topic being outlined is notable and the outline contains relevant links to many notable articles, similar in a way to how the notability of standalone lists is assessed. —Myceteae🍄‍🟫 (talk) 02:46, 30 October 2025 (UTC)[reply]
I don't see an issue with them, though I don't use them personally. They're in mainspace for the same reason disambs are in mainspace, both being non article content... they are still a useful navigational aid for the encyclopedia. PARAKANYAA (talk) 06:23, 30 October 2025 (UTC)[reply]
At least disambigs serve a navigational purpose. Outlines are just a bunch of links some Wikipedians think belong together, as far as I can tell. How does one decide what does or does not belong in an outline? I see no means to adjudicate the content whereas with disambiguation, one can refer to the outside world or the spelling of the term as a means to decide what belongs on the page. I dunno, I am just having a really hard time wrapping my head around the use case. jps (talk) 17:43, 30 October 2025 (UTC)[reply]
How does one decide what does or does not belong in an outline? Consensus. i.e. how the content of every page on Wikipedia is decided. Thryduulf (talk) 17:51, 30 October 2025 (UTC)[reply]
Sure, but we sometimes labor under the illusion that there are certain principles worked out that we attempt to adhere to. I'm just not clear what the principles are for writing outlines, and, yes, I read WP:OUTLINES. Still clear as mud to me, but apparently there are a buncha others who get it even as I might not understand what they're saying. jps (talk) 18:00, 30 October 2025 (UTC)[reply]
It's basically a hierarchical overview ("outline") of a subject. They don't appear to get much attention, so it's quite possible that individual outlines, or the concept as a whole, are poorly developed. If I thought a specific outline needed work I would edit it myself or start a discussion on talk. If that wasn't satisfactory I would reach out to Wikipedia:WikiProject Outlines or a more specific WikiProject or notice board related to the topic or type of issue. Or post here, as you've done. —Myceteae🍄‍🟫 (talk) 18:42, 30 October 2025 (UTC)
Outlines do obviously serve a navigational purpose. And yes, consensus, like everything else. How do you decide what goes in a category? The same way. PARAKANYAA (talk) 18:00, 30 October 2025 (UTC)[reply]
"Obviously" is a pretty strong word. They look to me like study guides or something, but I struggle to understand how they are part of an encyclopedia instead of, say, Wikiversity or something. jps (talk) 18:06, 30 October 2025 (UTC)
In what way do they look like a study guide moreso than any of our mainspace pages, categories, or navboxes? Aaron Liu (talk) 23:58, 30 October 2025 (UTC)[reply]
In the sense that they look like crib notes. jps (talk) 00:05, 31 October 2025 (UTC)[reply]
I can say the same about our mainspace nav pages, categories, and navboxes. Outlines help navigate like a set-index article, therefore it is part of the encyclopedia. Aaron Liu (talk) 01:51, 31 October 2025 (UTC)[reply]
You can say whatever you want. I'm not sure that this makes much sense, however. jps (talk) 18:39, 31 October 2025 (UTC)[reply]
They are intended to give an outline of a topic so the fact that they are reminiscent of a study guide or crib sheet doesn't seem off. —Myceteae🍄‍🟫 (talk) 02:12, 31 October 2025 (UTC)[reply]

Surely if they are articles they should be properly sourced? But looking at Outline of the United Kingdom and Outline of political science they are not. It's tempting to just strip those of anything without a source, which would leave hardly anything. Doug Weller talk 14:22, 1 November 2025 (UTC)[reply]

Often categories are added to an article or articles to a list without a source because the grouping is not disputed and sourcing it would be hard. As long as you can add Category:Protected areas of the United Kingdom to Environmentally sensitive area, you can add that article to "List of protected areas of the United Kingdom" or the relevant outline section. Though I agree with you that facts mentioned like the claim that there are 33 shires of Scotland—though most likely un-WP:Likely to be challenged—would do well with a source. Aaron Liu (talk) 15:53, 1 November 2025 (UTC)
The guidance at Wikipedia:Outlines says outlines are a type of list article and should follow the guidance at MOS:LIST for reference citations (and other sections). The MOS:LIST guidance in the section MOS:SOURCELIST says that inline citations are required for any of the four kinds of material absolutely required to have citations. At first glance, the near-total lack of references is surprising for these large pages but having a source to support inclusion of every single entry under, say, Outline of political science § Political issues and policies seems unnecessary. —Myceteae🍄‍🟫 (talk) 17:23, 1 November 2025 (UTC)[reply]
They're entirely valid navigation articles. See Outline of lichens and Outline of the Marvel Cinematic Universe for ones that have been classified as featured lists. Personally, I think they're neat features that could be interesting for our many rabbit-hole-oriented readers if we improved them and made them more visible. Thebiguglyalien (talk) 🛸 02:34, 2 November 2025 (UTC)[reply]
  • I do not use outlines often, but I find them very useful as a reader. I don't think they need sources: Ideally they're more of a collection of links rather than the quasi-prose Doug linked above. Toadspike [Talk] 20:11, 4 November 2025 (UTC)[reply]

Discussion notice: Removing permissions from users banned by ArbCom

[edit]

I've started a thread about this idea at Wikipedia talk:Banning policy. Please comment there, not here. JuniperChill (talk) 21:43, 28 October 2025 (UTC)[reply]

Reviving Wikipedia:popular pages by updating all the lists up to the start of 2026 (when the time comes)

[edit]

I think it’s a real shame that Wikipedia:popular pages became dormant in 2023 and was eventually abandoned completely. I think it should be revived in 2026 with updated data covering over 3 years of events, from the war/genocide in Gaza to the UK general election and Labour’s Keir Starmer becoming Prime Minister to the re-election of Trump to the 2nd and 3rd seasons of Squid Game on Netflix, and so much in between. Lots has happened since the start of 2023 and I think showing the new all-time top 100 with all that new stuff would be amazing. TrainFan2005 (talk) 19:23, 30 October 2025 (UTC)[reply]

The Wikipedia Signpost includes a traffic report with every issue that basically has what you are looking for. -- Reconrabbit 19:29, 30 October 2025 (UTC)[reply]
I mean a weekly roundup is nice and all, but I want to see a revival of the *all-time* lists. To have a regularly updated page of the various most visited Wikipedia pages of all time lists, both in general and by category. TrainFan2005 (talk) 19:47, 30 October 2025 (UTC)[reply]
There are a couple dozen places at Wikipedia:Statistics#Page views. Wikimedia Statistics lets you look at the most viewed articles for a given month but nothing for "all time" data as far as I can tell. -- Reconrabbit 20:24, 30 October 2025 (UTC)

Proposed guideline on using large language models to write new articles

[edit]

Please see Wikipedia talk:Writing articles with large language models § RfC (and the preceding frequently asked questions section) for a discussion on establishing consensus for a guideline on using large language models to write new articles. isaacl (talk) 22:42, 30 October 2025 (UTC)[reply]

Increase the frequency of Today's Featured Lists

[edit]

Increase the frequency of Today's Featured Lists from 2 per week to 3 or 4 per week, either on a trial basis, with the option to expand further if sustainable, or without a trial at all. Vanderwaalforces (talk) 07:02, 2 November 2025 (UTC)[reply]

Bff ~2025-31256-28 (talk) 21:00, 4 November 2025 (UTC)[reply]
Background

Right now, Today's Featured List only runs twice a week, on Mondays and Fridays. The problem is that we've built up a huge (and happy?) backlog: there are currently over 3,400 Featured Lists that have never appeared on the Main Page (see category). On top of that, according to our Featured list statistics, we're adding about 20 new Featured Lists every month, which works out to around 4 to 5 a week. At the current pace of just 2 per week, it would take forever to get through what we already have, and the backlog will only keep growing.

Based on prior discussion at WT:FL, I can say we could comfortably increase the number of TFLs per week without running out of material. Even if we went up to 3 or 4 a week, the rate at which new lists are promoted would keep things stable and sustainable. Featured Lists are among our highest-quality content, yet they get less exposure compared to WP:TFAs or WP:POTDs, so trust me, this isn't about numbers, and neither is it about FL contributors being jealous (we could just be :p). Giving them more space would better showcase the work that goes into them. We could run a 6‑month pilot, then review the backlog impact, scheduling workload, community satisfaction, etc.

Of course, there are practical considerations. Scheduling is currently handled by Giants2008 the FL director, and increasing the frequency would mean more work, which I think could be handled by having one of the FL delegates (PresN and Hey man im josh) OR another experienced editor to help with scheduling duties. Vanderwaalforces (talk) 07:03, 2 November 2025 (UTC)[reply]

Options
  • Option 1: Three TFLs per week (Mon/Wed/Fri)
  • Option 2: Four TFLs per week (e.g., Mon/Wed/Fri/Sun)
  • Option 3: Every other day, with each TFL staying up for two days (this came up at the WT:FL discussion, although it might cause imbalance compared with the display durations of other featured content.)
  • Option 4: Three TFLs per week (Mon/Wed/Fri) as a 6‑month pilot and come back to review backlog impact, scheduling workload, community satisfaction, etc.
  • Option 5: Four TFLs per week (e.g., Mon/Wed/Fri/Sun) as a 6‑month pilot and come back to review backlog impact, scheduling workload, community satisfaction, etc.
  • Option 6: Retain status-quo

Discussion (TFLs)

[edit]
  • Generally supportive of an increase, if the increase has the support of Giants2008, PresN, and Hey man im josh. Could there be an elaboration on the potential main page balance? TFL seems to slot below the rest of the page, without the columnar restrictions. CMD (talk) 10:01, 2 November 2025 (UTC)[reply]
    @Chipmunkdavis Per the former, yeah, I totally agree, which is why I suggested earlier that one of the FLC delegates could help share the load; alternatively, an experienced FLC editor or someone familiar with how FL scheduling works could assist. Per the latter, nothing changes actually; the slot for TFL remains the same, viewers just get to see more FLs than under the status quo. It might fascinate you that some editors do not know that we have TFLs (just like TFAs) on English Wikipedia, either because they have never viewed the Main Page on a Monday/Friday or something else. Vanderwaalforces (talk) 17:06, 2 November 2025 (UTC)
  • Support Option 2 with the Monday list also showing on Tuesday, the Wednesday list also showing on Thursday and the Friday list also showing on Saturday — Preceding unsigned comment added by Easternsahara (talkcontribs) 16:28, 2 November 2025 (UTC)[reply]
  • Option 1, for two main reasons: (1) there is no reason to rush into larger changes (we can always make further changes later), and (2) FL topics tend to be more limited and I think it's better to space out similar lists (e.g., having a "List of accolades received by <insert movie/show/actor>" every other week just to keep filling slots would get repetitive). Strongly oppose any option that results in a TFL being displayed for 2 days; this would permanently push POTD further down, break the patterns of the main page (no other featured content is up for more than 1 day), and possibly cause technical issues for templates meant to change every day. RunningTiger123 (talk) 18:08, 2 November 2025 (UTC)[reply]
  • Option 1 – Seeing the notification for this discussion pop up on my talk page really made me take a step back and ponder how long I've been active in the FL process (and my mortality in general, but let's not go there). I can't believe I'm typing this, but I've been scheduling lists at TFL for 13 years now. That's a long time to be involved in any one process, as this old graphic makes even more clear. Where did the time go? Anyway, I agree with RunningTiger that immediately pushing for 4+ TFLs per week when we may not have enough topic diversity to probably support that amount would do more harm than good, but I think enough lists are being promoted through the FL process to support an increase to three TFLs weekly. In addition, I agree with RT that we don't need to be running lists over multiple days when none of the other featured processes do.
    While I'm here, I do want to address potential workload issues. My suggestion is that, presuming the delegates have the spare time to take this on, each of us do one blurb per week. With the exception of the odd replaced blurb once in a blue moon, I've been carrying TFL by myself for the vast majority of the time I've been scheduling TFLs (over a decade at this point). If I take a step back and ignore the fact that I'm proud to have had this responsibility for the site for this many years (and that the train has been kept on the tracks fairly well IMO), it really isn't a great idea for the entire process to have been dependent on the efforts of a single editor for that long. I just think it would be a good sign of the strength of the TFL process for a rotation of schedulers to be introduced. Also, in the event of an emergency we would have a much better chance of keeping TFL running smoothly with a rotation. Of course, this part can be more thoroughly hammered out at TFL, but I did want to bring it up in case the wider community has any thoughts. Giants2008 (Talk) 01:42, 4 November 2025 (UTC)[reply]
  • Option 1, and I'd be willing to do some TFL scheduling. --PresN 15:59, 4 November 2025 (UTC)[reply]
  • Option 1, though I would support any permanent increase to the frequency of TFLs as long as the coords or other volunteers have the capacity for that. Toadspike [Talk] 20:13, 4 November 2025 (UTC)[reply]

Change the banner on the main page

[edit]

make it look like this instead: (see source code)

Welcome to Wikipedia,
the free encyclopedia that anyone can edit.

Overview · Searching · Editing · Questions · Help

Categories · Featured content · A–Z index

69 (i crave violence :D) 17:29, 2 November 2025 (UTC)[reply]

I'm in support of such a change, but not until general navigation pages are revamped and updated to be useful to the readers who like to go down Wikipedia rabbit holes. For portals, see my proposal at User:Thebiguglyalien/Portal sample and User:Thebiguglyalien/Portal sample 2. The only reason I haven't formally moved toward their adoption is because I lack the technical abilities to make automation work. Thebiguglyalien (talk) 🛸 18:24, 2 November 2025 (UTC)[reply]
Wikipedia:Village_pump_(idea_lab)/Archive_54#Portals on the main page. Cremastra (talk · contribs) 19:48, 2 November 2025 (UTC)[reply]
@Vita69, you should probably look at the 2024 discussion that Cremastra linked, as well as the 2022 RFC that led to the current state. WhatamIdoing (talk) 03:58, 3 November 2025 (UTC)[reply]
Looks pretty ugly to me: odd bullets beside the counts, and content strangely squashed to the sides. Checked on Monobook and Vector 2022 in a normal-sized browser window. Anomie 21:59, 2 November 2025 (UTC)[reply]
Because it's a table layout. @Vita69: can you make this banner responsive/use div? sapphaline (talk) 13:50, 3 November 2025 (UTC)[reply]
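For illustration, one way a div-based, responsive version of the proposed banner could be sketched (all class names, inline styles, and link targets here are hypothetical, not actual Main Page CSS or templates):

```html
<!-- Hypothetical div-based banner: flexbox centers the content and
     wraps the link rows on narrow screens instead of squashing them. -->
<div style="display: flex; flex-direction: column; align-items: center; text-align: center; padding: 1em;">
  <div>
    <strong>Welcome to <a href="/wiki/Wikipedia">Wikipedia</a>,</strong><br>
    the free encyclopedia that anyone can edit.
  </div>
  <div style="display: flex; flex-wrap: wrap; justify-content: center; column-gap: 1em;">
    <span>Overview</span> <span>Searching</span> <span>Editing</span> <span>Questions</span> <span>Help</span>
  </div>
  <div style="display: flex; flex-wrap: wrap; justify-content: center; column-gap: 1em;">
    <span>Categories</span> <span>Featured content</span> <span>A–Z index</span>
  </div>
</div>
```

Unlike a table, the flex containers reflow their children on narrow viewports, which would address the "squashed to the sides" rendering complaint. sapphaline (talk) 13:50, 3 November 2025 (UTC)[reply]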

Incogni, DeleteMe, etc. main category proposal


Incogni, DeleteMe, Aura, Optery, HelloPrivacy[8] ... They are not in any notable category. A category is needed. Suggestions: Internet data-privacy services or Internet data-removal services or Personal information removal services or Personal data removal services. Setenzatsu.2 (talk) 18:00, 2 November 2025 (UTC)[reply]

Paraphrasing allowed for species descriptions about 'obscure' and 'newly described' species


Hello all, I am running into a problem. I am adding articles about beetles. Many beetle species are very poorly studied, so often there are only a few sources available, or even just one, that give a description of the species (i.e. its appearance). Another editor stated that I am not allowed to use close paraphrasing when adding an article about a species to Wikipedia, and stated he intends to remove all of these paraphrased statements. I do not agree with his stance in this matter, because there is literally no other way to add these species descriptions. To make it clear: I do not just copy-paste the species description, and only use one or two sentences (a typical modern species description is about 1 page long). I will give two examples of what the other editor thinks is not acceptable, but I think is:

I changed this: Original - "The head (except black mandibles and labrum), antennae (except antennomeres 8-11 black), and legs chestnut-brown; eyes and scutellum black; pronotum shiny reddish-brown with medial 3 black (with bluish reflections) longitudinal vittae- 1 medial and 2 lateral;elytra shiny reddish-brown with 3 shining black oblique vittae from lateral to sutural margins; venter and legs reddish." into this: Wikipedia entry - "The head, antennae and legs are chestnut-brown, while the pronotum is shiny reddish-brown with three black vittae with bluish reflections. The elytra are shiny reddish-brown with three shining black vittae." That is not a copyvio in my mind. How else should I ever get a species description on Wikipedia?
This second source is in German and I translated and changed it: Original - "Beschreibung. Länge 7,4-7,7 mm, Elytrenlänge 5,4-5,7 mm, Breite 4,7-4,8 mm. Körper eiförmig oval, dunkel kastanienbraun, Oberfläche mit matter Beschichtung, Labroclypeus, Tarsen und Schienen glänzend, bis auf laterale Bewimperung und einige Borsten auf dem Kopf kahl." Wikipedia entry - "Adults reach a length of about 7.4-7.7 mm. They have a dark chestnut brown, oval body. The dorsal surface is dull and glabrous, except for the lateral cilia and some setae on the head."

I think that this would be ok, IF it is a species for which only very few sources are available to work with (for a lot of these species there is one source with an actual description, and some listings in checklists and databases, but nothing else). By the way, I am not the only one who feels that species descriptions should be free of restrictions. The database/website Plazi.org gives the following reasoning about the legality of using species descriptions published in copyrighted journals: [9]. I searched for any (legal) challenges to Plazi (could not find one), but did find this: Scientific names of organisms: attribution, rights, and licensing | BMC Research Notes | Full Text. It is mainly about databases and checklists, but also states this: "Taxonomic treatments are not copyrightable: Taxonomic treatments and descriptions of species are not copyrightable because they lack creativity of form. Rather, they are presented with a standardized form of expression for better comprehension." They also drafted a 'blue list', which includes components of names and taxonomy that are not subject to copyright:

- A hierarchical organization (= classification), in which, as examples, species are nested in genera, genera in families, families in orders, and so on.
- Alphabetical, chronological, phylogenetic, palaeontological, geographical, ecological, host-based, or feature-based (e.g. life-form) ordering of taxa.
- Scientific names of genera or other uninomial taxa, species epithets of species names, binomial combinations as species names, or names of infraspecific taxa; with or without the author of the name and the date when it was first introduced. An analysis and/or reasoning as to the nomenclatural and taxonomic status of the name is a familiar component of a treatment.
- Information about the etymology of the name; statements as to the correct, alternate or erroneous spellings; reference or citation to the literature where the name was introduced or changed.
- Rank, composition and/or apomorphy of taxon.
- For species and subordinate taxa that have been placed in different genera, the author (with or without date) of the basionym of the name or the author (with or without date) of the combination or replacement name.
- Lists of synonyms and/or chresonyms or concepts, including analyses and/or reasoning as to the status or validity of each.
- Citations of publications that include taxonomic and nomenclatural acts, including typifications.
- Reference to the type species of a genus or to other type taxa.
- References to type material, including current or previous location of type material, collection name or abbreviation thereof, specimen codes, and status of type.
- Data about materials examined.
- References to image(s) or other media with information about the taxon.
- Information on overall distribution and ecology, perhaps with a map.
- Known uses, common names, and conservation status (including Red List status recommendation).
- Description and/or circumscription of the taxon (features or traits together with the applicable values), diagnostic characters of taxon, possibly with the means (such as a key) by which the taxon can be distinguished from relatives.
- General information including but not limited to: taxonomic history, morphology and anatomy, reproductive biology, ecology and habitat, biogeography, conservation status, systematic position and phylogenetic relationships of and within the taxon, and references to relevant literature.
- It would appear that no copyright law is infringed if a user extracts elements of the blue list from material that lacks legitimate user agreements.

They argue all of the above is not copyrightable. I can imagine Wikipedia would not just want to accept that as truth; however, I do feel this supports the argument that we could at the very least paraphrase these copyrighted sources, if we stick to one or two sentences, rewrite them, and only do it for 'obscure' species (so not for species like a kangaroo, a duck, etc., where countless sources are available, but for species like a mosquito that is endemic to one forest in Sumatra, or a mollusk described last year, etc.) B33tleMania12 (talk) 18:23, 2 November 2025 (UTC)[reply]

Pinging some people: Moneytrees, Sennecaster, The Knowledge Pirate, Myceteae, WhatamIdoing. — Preceding unsigned comment added by B33tleMania12 (talkcontribs) 18:28, 2 November 2025 (UTC)[reply]
As the edit was unsigned those pings will not have worked, so pinging on B33tleMania12's behalf: @Moneytrees, Sennecaster, The Knowledge Pirate, Myceteae, and WhatamIdoing:. Thryduulf (talk) 18:39, 2 November 2025 (UTC)[reply]
Background: Wikipedia:Village pump (miscellaneous)#Question about Plazi.org and copyright. That discussion is still active-ish and includes links to other relevant discussions: Wikipedia talk:WikiProject Biology/Archive 3#Direct copies of species descriptions from external website and the related User:Moonriddengirl/copyright FAQ#Taxonomic descriptions; descriptions of facts. I'm just posting this for visibility; I think B33tleMania12 has done a decent job re-presenting the issue based on the most recent discussion and, with much appreciated assistance from Thryduulf, alerting other participants. —Myceteae🍄‍🟫 (talk) 19:15, 2 November 2025 (UTC)[reply]
It'd probably be more pointful to ping people who know something about copyright, like Diannaa.
Facts are not copyrightable, so (e.g.,) a fact "about the etymology of the name" is not copyrightable. But the expression of a fact can be (=is not always) copyrightable. Editors should write in their own words and sentences. However, if the expression is simple enough ("E. expertia was named after Alice Expert"), then even though Wikipedia wants you to write in your own words, that sentence wouldn't constitute a copyvio. WhatamIdoing (talk) 03:53, 3 November 2025 (UTC)[reply]
I agree, something of de minimis originality is not copyrightable. Andre🚐 03:56, 3 November 2025 (UTC)[reply]
  • We shouldn't have a wp article if there is only one source with significant coverage. (t · c) buidhe 05:22, 3 November 2025 (UTC)[reply]
    WP:NSPECIES does not have that rule.
    Long-term, if you'd like that to be a rule for all articles, then I suggest getting an actionable definition of "significant coverage" into the GNG. We still have disagreements about whether SIGCOV is about importance or volume, or if it is determined by the number of words in a source or the number of facts that could be used in an encyclopedia article. To give you an idea of how this matters, see User:WhatamIdoing/Database article, where I've written a 225-word-long Wikipedia article from a source that does not contain a single complete sentence about the subject of the article. Some editors say that source is SIGCOV, because obviously it covered enough facts for me to write a Start-class article about the subject, easily meeting the goal of SIGCOV as explained in WP:WHYN. And others say that it's not, because it's obviously impossible to have SIGCOV if the source presents the information about the subject of the article in any form other than multiple consecutive sentences of prose. WhatamIdoing (talk) 05:41, 3 November 2025 (UTC)[reply]
    Yes, I know it's against NSPECIES. My view is that it's a bad guideline because it leads to mass generation of low quality articles that are poorly watched and maintained. (t · c) buidhe 05:43, 3 November 2025 (UTC)[reply]
    Well the idea was to make the article longer than just one sentence saying where it lives, but I cannot if I am not allowed to use anything else. There is enough to write an article that is actually saying something about the species in the original description, but if there is no way to use it, the article will indeed stay a stubby sub-stub until someone else writes something about it. B33tleMania12 (talk) 07:41, 3 November 2025 (UTC)[reply]
    It's a good and much needed guideline in my opinion, and I do not really see the problems you mention. --Jens Lallensack (talk) 08:43, 3 November 2025 (UTC)[reply]
    Regarding "We shouldn't have a wp article if there is only one source with significant coverage", which you argue "leads to mass generation of low quality articles that are poorly watched and maintained." This article is what you can do with a single source: Maladera cardamomensis (luckily it is CC-BY, so no issues with using the species description). I think this is substantial enough to deserve an article. In essence, this could be done for every species, because there will always be a species description. But then again: we must be allowed to use it (hence this discussion). B33tleMania12 (talk) 11:34, 3 November 2025 (UTC)[reply]
    It is probably worth finding something more than a stub for future "This article is what you can do with a single source" arguments. Much more is possible, with a good enough source. CMD (talk) 16:15, 3 November 2025 (UTC)[reply]
  • @B33tleMania12: The important thing is to not copy-paste anything that could be remotely considered to be a creative choice. In your first example, I would replace "shiny" with the synonym "glossy" (or "reflective"). In your second example, I would not copy-paste "chestnut-brown", but instead say "reddish-brown" (and pipe-link that to "chestnut (color)"), which is also more accessible to lay readers, and is the term you use in your first example. More importantly, try to reduce/explain technical language (see WP:MTAU). The goal is to rephrase this to make it as understandable as possible. For example, three shining black vittae need to be explained; something like "On the elytra there are [your explanation], called vittae, that are black" and you will have a very different sentence. You could also change the structure by first describing general features rather than going section by section. For example, you could write something like "Both the pronotum ([explanation of term]) and the elytra ([explanation of term]) are red-brown with a reflective surface", followed by the details of these parts, and that would be very different from the source and easier to understand than the highly technical and formalized way the source puts it. --Jens Lallensack (talk) 08:43, 3 November 2025 (UTC)[reply]
    Thanks, that is valuable feedback! If that would be acceptable, I could definitely work with that. B33tleMania12 (talk) 08:49, 3 November 2025 (UTC)[reply]
    Could I formally request that @The Knowledge Pirate: hold off on trimming any content he deems copyvios until this discussion is done? When I started, I did not always add CC-BY and PD US Government tags; adding these is of course no issue, but there are also many articles I made using sources that are not under a 'free' licence. Following his reasoning, these would be copyvios, and thus be removed. However, if the conclusion of this discussion is that they are not, they would have been removed and rev-del'd for nothing. B33tleMania12 (talk) 14:26, 3 November 2025 (UTC)[reply]
    Making the descriptions more accessible to a general audience is an added benefit here. —Myceteae🍄‍🟫 (talk) 15:44, 3 November 2025 (UTC)[reply]