
Wikipedia:Bot requests/Archive 87

From Wikipedia, the free encyclopedia

Bot Request to Add Vezina Trophy Winners Navbox to Relevant Player Pages

I would like to request a bot to automatically add the {{Vezina Trophy Winners}} template to all player pages that are currently listed in the Category:Vezina Trophy winners.


The template is already created and can be found at: Template:Vezina Trophy Winners.


Details:


1. The bot should check all pages within Category:Vezina Trophy winners.

2. For each page, if the {{Vezina Trophy Winners}} template is not already present, the bot should add it to the bottom of the page.

3. The template should be placed in the Navboxes section (before any categories or external links) on each player’s page.


Rationale:

This will ensure consistency and streamline the process of displaying the relevant information across all Vezina Trophy winners’ pages without having to manually add the template to each page. This will also make it easier to update the navbox in the future without needing to edit each individual page.


Please let me know if more information or clarification is needed. Thank you! 108.51.96.36 (talk) 23:46, 18 October 2024 (UTC)

altering certain tags on protected pages?

I recently read a comment from someone regarding them seeing the {{expand}} tag in an article, and wanting to go ahead and work on the section, only to discover that it was protected and they could not do so. Their feeling, which is understandable, is that it was discouraging to reply to a request for help, only to find that their help was not welcome. There is of course usually a lock icon on the page but not everyone knows to look for that or what it means.

I note that User:MusikBot II removes templates from pages where protection has just expired, and is an adminbot and can therefore also edit protected pages. I'm curious if it seems feasible/wise to have it (or some other bot) make some sort of modification to the expand template, and perhaps other similar templates, to reflect the current protection level and suggest using the talk page to propose edits? And of course it would undo those edits upon the expiration of protection. (as always with the caveat that I know nothing about bot coding) Pinging @MusikAnimal: as bot maintainer, but any and all feedback is of course welcome. Just Step Sideways from this world ..... today 22:25, 6 October 2024 (UTC)

@Just Step Sideways: Another way would be to have the {{expand}} tag automatically detect the protection level (which I know is possible, but I wouldn't know how to implement it) and alter its message. Rusty 🐈 22:37, 6 October 2024 (UTC)
Yes, altering the template would be better; templates like {{rcat shell}} already have this functionality using magic words. I'll paste the relevant code below if someone wants to sandbox something. Obviously it will be different to make it inline but the general gist of using the #switch will be the same. For example, a #switch in {{Expand section}} could change the text from "You can help by adding to it" to "You can make an edit request to improve it". Primefac (talk) 10:30, 7 October 2024 (UTC)
{{if IP}}, {{if autoconfirmed}}, and {{if extended confirmed}} should also be useful here. jlwoodwa (talk) 03:36, 14 October 2024 (UTC)
PROTECTIONLEVEL code
{{#switch: {{PROTECTIONLEVEL:edit}}
   |sysop={{pp-protected|small=yes}}{{R fully-protected|embed=yes}}
   |templateeditor={{pp-protected|small=yes}}{{R template protected|embed=yes}}
   |extendedconfirmed={{pp-protected|small=yes}}{{R extended-protected|embed=yes}}
   |autoconfirmed={{pp-protected|small=yes}}{{R semi-protected|embed=yes}}
   | <!--Not protected, or only semi-move-protected-->
}}
I agree automating this without the bot is preferable. The bot can and does add protection templates to pages like this, but I believe it didn't here because this template was protected long before the bot was introduced. MusikAnimal talk 15:35, 7 October 2024 (UTC)

Replace merged WikiProject template with parent project + parameter

WikiProject Reference works has been merged as a task force of WikiProject Books. The referencework= parameter has been added to the Template:WikiProject Books banner template to indicate if it applies to the task force, so now all usages of the former Template:WikiProject Reference works need to be replaced with the Books template with the task force parameter. When something similar was done with a previous project, it was done by bot (though I forgot what bot); could this be done again? Or is there another efficient way to do this? Also, there will be some duplicates, since some are tagged with both. The Books project doesn't use importance, and many articles tagged with Reference works don't have it, so the importance parameter on the old banner should be discarded, not transferred. Thanks! PARAKANYAA (talk) 16:58, 16 October 2024 (UTC)

I can help with this. – DreamRimmer (talk) 14:09, 17 October 2024 (UTC)
Same; my bot is set up to handle these, and I thought it was TheSandBot that actually had a specific task for this but maybe it was actually Kiranbot... Looks like ~1500 pages where it will need to be folded into the Books banner, after which it can just be converted into a wrapper and autosubst by AnomieBOT. Primefac (talk) 16:00, 17 October 2024 (UTC)
Thanks for the info. Please feel free to handle this. As there wasn't a response in a reasonable time, I thought I should step in to help. – DreamRimmer (talk) 16:29, 17 October 2024 (UTC)
@Primefac Your bot did the job perfectly, thank you! Sorry for the annoyance, but could you do the same with another merged-into-task force project? WP:TERROR was mostly inactive, and the very few active editors reached a consensus to merge, see this discussion. {{WikiProject Terrorism}} should be folded into the
{{WikiProject Crime and Criminal Biography}} banner (I added the task force and importance parameters to the crime banner). WP:TERROR has importance parameters, but the attention= and infobox= parameters were never maintained and most of the articles tagged with them have had their issues addressed so those can probably be discarded as the crime banner doesn't have them. I promise this is the last one hahaha. PARAKANYAA (talk) 19:46, 19 October 2024 (UTC)
I'll put it on my list. Primefac (talk) 20:58, 19 October 2024 (UTC)
 Done. Primefac (talk) 10:04, 21 October 2024 (UTC)

Request for WP:SCRIPTREQ

I would like to obtain the WP:SCRIPTREQ bot's source code. StefanSurrealsSummon (talk) 18:27, 8 November 2024 (UTC)

Not related to a bot task request. You should reach out to the operator of the bot. MolecularPilot 🧪️✈️ 04:43, 1 January 2025 (UTC)

LLM summary for laypersons to talk pages of overly technical articles?

Today I was baffled by an article in Category:Wikipedia articles that are too technical which I was easily able to figure out after pasting the pertinent paragraphs into ChatGPT and asking it to explain it to me in layman's terms. So, that got me thinking, and looking through the category by popularity there are some pretty important articles getting a lot of views per day in there. So I thought, what about a bot which uses an LLM to create a layperson's summary of the article or tagged section, and posts it to the talk page for human editors to consider adding?

I think I can write it, I just want others' opinions and to find out if someone is trying or has already tried something like this yet. Mesopub (talk) 09:38, 10 November 2024 (UTC)

Considering past discussions of LLMs, and WP:LLMTALK, I doubt the community would go for this. If you really want to try, WP:Village pump (proposals) or WP:Village pump (idea lab) would be better places to seek consensus for the idea. Anomie 14:17, 10 November 2024 (UTC)
I don't think WP:LLMTALK necessarily applies here: that's about using chatbots to participate in discussions, which is utterly pointless and disruptive. The idea here seems to be using an LLM on a talkpage for a totally different purpose, one it's much better suited to. That said, I also doubt people will get on board with this.
Mesopub, having a quick look at your list, I think your target category Category:All articles that are too technical (3,380) is not a great choice: I see articles towards the top like Conor McGregor, Jackson 5, Malaysia, and Miami-Dade County, Florida. All of these are members of the target category due to transclusion of {{technical inline}}, which produces [jargon].
All of these would easily be fixed by a simple rewording or explanation of a single term: none of the examples would benefit from an LLM summary.
I don't necessarily think the basic idea is terrible, which I've bolded for emphasis. We do have a lot of articles that are written at a level most appropriate to grad students or professionals in a niche scientific field. Of course, any LLM summary of these articles would have to be sanity-checked by a human who actually understands the article, to ensure the LLM summarises it without introducing errors.
For that reason I think that if you're convinced of the utility of this process, you should start very slow, select a small number of articles in different fields, post the LLM summaries with proper attribution in your userspace, and notify appropriate WikiProjects to see if anyone is interested in double checking them, or working to incorporate more accessible wording into the summarised articles. If no one has any interest, there's no realistic future for this. Folly Mox (talk) 15:28, 10 November 2024 (UTC)
Isn't the current consensus that we cannot allow AI-written text because of questionable copyright status? Primefac (talk) 17:02, 10 November 2024 (UTC)
There is no ban AFAIK, just that editors need to be careful and check the LLM didn't spit out copyrighted text back at them (or closely paraphrased, etc.). I think this is less of a risk with the proposed use case, which is taking existing Wikipedia text and cutting it down.
I agree with Folly Mox mostly, if you think this is going to be useful, try it on a very small scale and see how it goes. Legoktm (talk) 19:00, 10 November 2024 (UTC)
I'm not sure what formal consensus looks like on the LLM copyright issue. Wikipedia:Large language models § Copyright violations is pretty scant, and of course it's not policy. m:Wikilegal/Copyright Analysis of ChatGPT concludes in part with all possibilities remain open, as key cases about AI and copyright remain unresolved. The heftiest discussion I was able to find lazily is Wikipedia talk:Large language models/Archive 1 § Copyrights (January 2023); there is also this essay. Folly Mox (talk) 20:54, 10 November 2024 (UTC)
I kinda wonder how reliable LLMs are at simplifying content without making it misleading/wrong in the process. Jo-Jo Eumerus (talk) 09:30, 11 November 2024 (UTC)
They aren't. Even setting aside the resources they waste and the exploitative labor on which they rely, they're just not suited for the purpose. Asking editors with subject-matter expertise to "sanity check" their output is just a further demand on the time and energy of volunteers who are already stretched too thin. XOR'easter (talk) 22:37, 11 November 2024 (UTC)
Some examples: User:JPxG/LLM_demonstration#Plot_summary_condensation_(The_Seminar) and Wikipedia:Using_neural_network_language_models_on_Wikipedia/Transcripts#New York City. Legoktm (talk) 17:50, 12 November 2024 (UTC)
I'm sorry, but if you don't know the subject material, then you're not in a position to judge whether ChatGPT did a good job or not. XOR'easter (talk) 22:46, 11 November 2024 (UTC)
Declined Not a good task for a bot. Would be a WP:CONTEXTBOT, and definitely subject to hallucinations. At the very least requires community consensus, this is not the correct place to get it, please see WP:VPT or WP:VPR. MolecularPilot 🧪️✈️ 03:38, 1 January 2025 (UTC)
Note: you may make a bot that posts summaries to its own userspace; this would be allowed per WP:EXEMPTBOT, if it would be helpful to have some demos for the proposal. MolecularPilot 🧪️✈️ 03:40, 1 January 2025 (UTC)

Redirects with curly apostrophes

For every article with an apostrophe in the title (e.g. Piglet's Big Game), it strikes me it would be useful to have a bot create a redirect with a curly apostrophe (e.g. Piglet’s Big Game).

This could also be done for curly quotes.

Once done, this could be repeated on a scheduled basis for new articles. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 12:27, 11 November 2024 (UTC)

The justification for creating redirects with ASCII hyphen-minus to pages titled with en-dashes is that en-dashes are hard for people to type since they aren't on most keyboards. The opposite would be the case with curly quotes: straight quotes and apostrophes are on most people's keyboards while curly versions are not. This seems like another one that would be better proposed at WP:Village pump (proposals) to see if people actually want this.
Further complicating this is that the bot would need a reliable algorithm for deciding when to use ‘ versus ’. The general algorithm may need to be part of the community approval. Anomie 12:50, 11 November 2024 (UTC)
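To illustrate, here is a minimal sketch (not any bot's actual code) of the unambiguous half of that mapping: an apostrophe inside a word is always the right single quotation mark, U+2019. Choosing between ‘ and ’ for quotation marks proper is the part that would need the agreed algorithm, and is not attempted here.

```python
# Sketch of the easy case only: apostrophes within words map to the
# right single quotation mark U+2019. Deciding between U+2018 and
# U+2019 for real quotation marks is the ambiguous part and is skipped.
def curly_apostrophe_title(title: str) -> str:
    return title.replace("'", "\u2019")

print(curly_apostrophe_title("Piglet's Big Game"))  # Piglet’s Big Game
```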
This probably isn't a useful thing to be doing; as Anomie says, the straight ' is what regular typing produces, and curly apostrophes are often Office-related auto-changes. I wouldn't necessarily be opposed to a bot fixing in-text curly apostrophes, but we shouldn't be proactively creating redirects. Primefac (talk) 13:03, 11 November 2024 (UTC)
I think straight versus curly for single and double quotes is mostly an OS thing. When I tap on the redlink Piglet’s Big Game and close out the editor, I get {{Did you mean box}} linking the valid title at the top of the page; if I search for the title with &fulltext=0 I'm redirected to the bluelink. The curly apostrophe also resolves to the straight apostrophe if typed into the search box.
Really, the piece missing here – if any – is automated fixing of redlinks with curly punctuation in the target.
User:Citation bot replaces curly apostrophes and double quotes with ASCII versions within citation template parameters, even though Module:CS1 renders them identically. Some user scripts are capable of doing a gsub over an entire article, like User:Novem Linguae/Scripts/DraftCleaner.js. (I know this isn't directly related to the OP, but tangentially related to the suggestion just above.)
I suppose the genesis of this request was this Help desk request? Folly Mox (talk) 15:15, 11 November 2024 (UTC)
I will also note that a curly quote is on the title blacklist, so it should be the case that we shouldn't even be accidentally creating these in the first place. Primefac (talk) 16:52, 11 November 2024 (UTC)
I added that to the blacklist because I was tired of articles being created with curly quotes and having to move them to the correct title, when that's almost never correct. The intent wasn't to block redirects with curly quotes.
Nevertheless, I oppose this because we already have a {{did you mean box}} warning for the situation, and that's sufficient.
Finally, if this is done, it definitely needs some logic to auto-retarget and G7 any redirects that have diverged from their sources, as AnomieBOT already does for dashes. * Pppery * it has begun... 17:30, 11 November 2024 (UTC)
Not done, consensus against the bot from multiple experienced users raising genuine concerns (I also think due to ‘ vs ’ it is inherently a WP:CONTEXTBOT), and redundant as curly quotes are now on the blacklist. MolecularPilot 🧪️✈️ 04:53, 1 January 2025 (UTC)

Bot for replacing/archiving 13,000 dead citations for New Zealand charts

Dead citations occur due to the website changing the URL format. For example https://nztop40.co.nz/chart/albums?chart=3467 is now https://aotearoamusiccharts.co.nz/archive/albums/1991-08-09.
Case 1: 9,025 pages that are using these URLs found through search. Some may already be archived.
Case 2: 4,133 citations using {{cite certification|region=New Zealand}} and {{Certification Table Entry|region=New Zealand}}, categorized in Category:Cite certification used for New Zealand with missing archive (0).

An ideal transition seems difficult as it would require the following steps:

  1. Find an archived version through the wayback machine, e.g., https://web.archive.org/web/20240713231341/https://nztop40.co.nz/chart/albums?chart=3467 for the above. For case 2 this requires inferring the URL first (https://nztop40.co.nz/chart/{{#switch:{{{type|}}}|album={{#if:{{{domestic|}}}|nzalbums|albums}}|compilation=compilations|single={{#if:{{{domestic|}}}|nzsingles|singles}}}}?chart={{{id|}}})
  2. Harvest the date 11 August 1991 either from the rendered archived page or from the archived page source, <p id="p_calendar_heading">11 August 1991</p>
  3. For case 1, translate the URL accordingly to https://aotearoamusiccharts.co.nz/archive/albums/1991-08-11.
  4. For case 2, add |source=newchart and replace |id=1991-08-11.

Note that for case 1, the word after "/archive/" changed according to the following incomplete table. For case 2 this is handled by the template so no need to worry about it.

Old text → New text
albums → albums
singles → singles
nzalbums → aotearoa-albums
nzsingles → aotearoa-singles
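
As a hedged sketch of the case 1 translation (steps 1–3 above), in Python; the regexes and helper names here are illustrative, not the actual code any bot would run:

```python
import re
from datetime import datetime

# Mapping from the (incomplete) table above: old path word -> new one.
PATH_MAP = {
    "albums": "albums",
    "singles": "singles",
    "nzalbums": "aotearoa-albums",
    "nzsingles": "aotearoa-singles",
}

def harvest_date(archived_html: str) -> str:
    """Step 2: pull the chart date out of the archived page source and
    convert it to the YYYY-MM-DD form the new URLs use."""
    m = re.search(r'<p id="p_calendar_heading">([^<]+)</p>', archived_html)
    return datetime.strptime(m.group(1), "%d %B %Y").strftime("%Y-%m-%d")

def translate_url(old_url: str, iso_date: str):
    """Step 3: map an old nztop40.co.nz chart URL to the new
    aotearoamusiccharts.co.nz archive URL, or None if unrecognised."""
    m = re.match(r"https?://(?:www\.)?nztop40\.co\.nz/chart/(\w+)\?chart=\d+",
                 old_url)
    if not m or m.group(1) not in PATH_MAP:
        return None  # unknown pattern: fall back to plain archiving
    return ("https://aotearoamusiccharts.co.nz/archive/"
            f"{PATH_MAP[m.group(1)]}/{iso_date}")

date = harvest_date('<p id="p_calendar_heading">11 August 1991</p>')
print(date)  # 1991-08-11
print(translate_url("https://nztop40.co.nz/chart/albums?chart=3467", date))
# https://aotearoamusiccharts.co.nz/archive/albums/1991-08-11
```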

If someone is willing to go through the above, at least for simple cases, I think it is the ideal solution, especially for case 2. Failing that, a simpler archiving procedure can be taken.

  • For case 1: add |archive-url= and |archive-date= per usual archiving procedure. Add |url-status=deviated. If no archive exists (which should be a minority), add {{dead link}}
  • For case 2: add |archive-url= and |archive-date= per usual archiving procedure as they are supported by the templates. Add |source=oldchart (even if no archive is found)

I will be happy to provide any technical assistance. Muhandes (talk) 15:08, 14 November 2024 (UTC)

Muhandes, I believe WP:URLREQ is the place for requests like these. — Qwerfjkltalk 16:56, 14 November 2024 (UTC)
I thought case 2 above would require a post here, but I'll repost there. Muhandes (talk) 22:49, 14 November 2024 (UTC)
Deferred to WP:URLREQ and successfully completed there. :) MolecularPilot 🧪️✈️ 05:09, 1 January 2025 (UTC)

Meanings of minor-planet names

Should we move the articles covering numbers 100001–101000 through 500001–501000 to the hyphenated "minor-planet" form instead of the unhyphenated "minor planet", per Talk:Minor-planet designation#Requested move 21 September 2021? Absolutiva (talk) 16:20, 18 November 2024 (UTC)

 Working, the bot is running now completing that task. I will let you know when it is done. :) MolecularPilot 🧪️✈️ 03:34, 1 January 2025 (UTC)
Currently in the 300000s... will be done soon! :) MolecularPilot 🧪️✈️ 03:42, 1 January 2025 (UTC)
 Done, Absolutiva, it has moved all the ones with a long dash (–) between the numbers to use "minor-planet". The short dash (-) ones just redirect to the long dash versions (which then redirect to the correct name). I had it programmed to fix the double redirects created, but there was no need, because another bot already fixed these automatically while my bot was moving the titles! :) MolecularPilot 🧪️✈️ 04:00, 1 January 2025 (UTC)

Reference examination bot

I want a bot to help me. Can anyone please help me with this? Wiki king 100000 (talk) 07:46, 19 November 2024 (UTC)

Could you please elaborate further? – DreamRimmer (talk) 10:44, 19 November 2024 (UTC)
Guessing from the title, I think they want the bot to fact-check references or something similar to that. —usernamekiran (talk) 13:02, 20 November 2024 (UTC)
Yes, the same. Actually, my English spelling is the problem. Wiki king 100000 (talk) 17:00, 25 November 2024 (UTC)
Declined Not a good task for a bot. Would be a WP:CONTEXTBOT. MolecularPilot 🧪️✈️ 03:35, 1 January 2025 (UTC)

VPNGate

Would any admin be interested in setting up a bot that automatically blocks vpngate.net IPs? VPNGate is frequently used by LTAs (notably MidAtlanticBaby) and is very hard to deal with because of the number of rotating IPs available. User:ST47ProxyBot used to do some of this but is no longer active. T354599 should also help but this would be an interim solution to prevent disruption.

I looked into this and it should be pretty simple. VPNGate apparently has a (hidden?) public API available at www.vpngate.net/api/iphone/ which lists all currently-active proxy addresses. You can use regex (e.g. \b(?:\d{1,3}\.){3}\d{1,3}\b) to find the listed IPs from the API endpoint, check if they're already blocked, and then block them for however long as an open proxy. Theoretically if this is run once or twice a day the vast majority of active VPNGate IPs would be blocked. I wrote a quick script and tested it on testwiki and it seems to work. C F A 19:10, 7 December 2024 (UTC)
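For illustration, roughly what that extraction step looks like in Python; the sample response text below is invented, since the real endpoint returns a much longer CSV:

```python
import re

# Hypothetical fragment of the www.vpngate.net/api/iphone/ response;
# the real list is a long CSV with many more columns.
sample = """*vpn_servers
#HostName,IP,Score,Ping,Speed
vpn001,219.100.37.1,212462,8,1000000
vpn002,203.0.113.45,187211,12,900000
"""

# The regex suggested above: any dotted-quad IPv4 address.
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

listed_ips = IP_RE.findall(sample)
print(listed_ips)  # ['219.100.37.1', '203.0.113.45']

# A real bot would then check each IP's block status via the MediaWiki
# API and block the unblocked ones as open proxies (not shown here).
```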

@CFA: Looking near the bottom of vpngate.net/en/ you see the following warning: Using the VPN Server List of VPN Gate Service as the IP Blocking List of your country's Censorship Firewall is prohibited by us. The VPN Server List sometimes contains wrong IP addresses. If you enter the IP address list into your Censorship Firewall, unexpected accidents will occur on the firewall. Therefore you must not use the VPN Server List for managing your Censorship Firewall's IP blocking list. It sounds like they are intentionally putting incorrect IP addresses in those lists to discourage people from using the list for unintended purposes. Polygnotus (talk) 03:15, 8 December 2024 (UTC)
I think that's an empty threat. It's possible, I suppose, but I haven't seen any evidence of it. C F A 03:26, 8 December 2024 (UTC)
Possibly, but it's wise to portscan instead of assume. Polygnotus (talk) 03:30, 8 December 2024 (UTC)
Yes, not a bad idea. C F A 03:31, 8 December 2024 (UTC)
CFA, I'll note that the "ingress" IPs listed on the website are not the IPs that will actually be editing Wikipedia if MAB uses them. For example, if I connect to VPNgate's "210.113.124.81" using OpenVPN, the IP address that requests are made to the outside internet with is 61.75.47.194.
To test this theory, I wrote a pywikibot script that uses the list of IPs and OpenVPN configs on the link CFA provided, connects to them and determines the actual "output" IP connecting to Wikipedia, and then blocks with TPA revoked (I won't say why, but WP:ANI watchers will know). I tested it on a local MediaWiki installation, and blocking the input IPs (i.e. the ones listed on the VPNgate website) had no effect, but blocking the output ones (obtained by connecting to each input IP's VPN and making a request to [1]) effectively disabled VPNgate entirely.
Will this actually be useful and would you like me to submit a WP:BRFA? MolecularPilot 🧪️✈️ 02:38, 17 December 2024 (UTC)
To demonstrate: It's me, User:MolecularPilot, using the VPN listed as "210.113.124.81" (as that's the "input" IP) but actually editing with the output IP of 61.75.47.194. This output IP is not listed anywhere on the VPNgate website or CSV file. Can this IP also be WP:OPP blocked? 61.75.47.194 (talk) 02:48, 17 December 2024 (UTC)
Huh, well that's an interesting find. I suppose that's what they mean by The VPN Server List sometimes contains wrong IP addresses. It would certainly be useful, but the harder part is finding an admin willing to do this (non-admins can't operate adminbots). Maybe a crosspost to AN would help? C F A 02:49, 17 December 2024 (UTC)
CFA, thank you for your very fast reply! Actually, I just examined the config files (they are provided in Base64 format at the CSV you gave; that's what the script uses to connect), and the listed IPs they give are actually blatant lies. For example, the VPN listed as "210.113.124.81" actually has "74.197.133.217:955" set as the input IP that my computer makes requests to, and, as shown above, actually makes requests to the outside internet/edits with "61.75.47.194". In fact, the VPN server list always contains wrong IP addresses; they're not even the correct input IPs. The people behind VPNgate are quite good at trickery/opsec, it seems. MolecularPilot 🧪️✈️ 02:58, 17 December 2024 (UTC)
74.197.133.217 is actually listed in a completely different section of the VPNgate list (and the associated config file for it of course does not actually use 74.197.133.217), so the provided "IPs" do not match the actual input IP in the config file but the IPs for a different config file. Regardless, these input IPs are useless and it's the output IPs (only findable by testing) that we are interested in. MolecularPilot 🧪️✈️ 03:01, 17 December 2024 (UTC)
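To make the mismatch concrete, a small sketch of checking a config's real ingress address; the config text here is invented, but the "remote" directive is where OpenVPN configs actually name the server:

```python
import base64
import re

# Invented miniature OpenVPN config, Base64-encoded the way the VPNGate
# CSV serves them; real configs are far longer.
config_b64 = base64.b64encode(
    b"client\ndev tun\nproto udp\nremote 74.197.133.217 955\n"
).decode()

def real_ingress(b64: str):
    """Decode a config and pull the actual 'remote' host:port, which
    (as noted above) need not match the IP listed in the CSV row."""
    text = base64.b64decode(b64).decode("utf-8", "replace")
    m = re.search(r"^remote\s+(\S+)\s+(\d+)", text, re.M)
    return f"{m.group(1)}:{m.group(2)}" if m else None

print(real_ingress(config_b64))  # 74.197.133.217:955
# The egress IP (the one that actually edits) can only be learned by
# connecting and asking a what's-my-IP service, as described above.
```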
Wikipedia:Bot policy states In particular, bot operators should not use a bot account to respond to messages related to the bot. You mistakenly used bot account to respond below :) – DreamRimmer (talk) 13:40, 21 December 2024 (UTC)
Oh I'm so sorry, I meant to reply with my main account, I didn't realise I was still logged into my bot account (I needed to login to make a manual fix to the JSON file). Thank you for picking up on it and correcting the mistake. :) MolecularPilot 🧪️✈️ 01:39, 22 December 2024 (UTC)
Done. The bot is live and collating data about VPNgate egress IPs at User:MolecularBot/IPData.json. A frontend to look up an IP address (and the number of times it's been seen as a VPNgate egress node, as well as when it was last seen) is available [on Toolforge]. It can also generate statistics from the current list, currently with 146 IPs (only a small drop in the bucket generated during my testing and development; it overly represents more obvious IPs): 42.47% are currently blocked on enwiki and 23.97% are globally blocked. Working on guidelines for an adminbot to block these IPs based on number of sightings (ramping up in length as the IP has more sightings, to not overly punish short-term volunteers) and will then post at WP:AN looking for a botop once ready. Consensus developed for both this bot and a future adminbot at WP:VPT. 06:33, 21 December 2024 (UTC) — Preceding unsigned comment added by MolecularBot (talk · contribs) <diff>

Creation for nano bot

The 'Nano bot' (Natural Auditor and Native Organiser) will be useful for helping users create their user pages based on their recent actions and edits. Prime Siphon (talk) 20:28, 7 December 2024 (UTC)

Since userpages are an expression of individuality, and are not necessary to create an encyclopedia, having a bot create them would not work. Polygnotus (talk) 03:17, 8 December 2024 (UTC)
Declined Not a good task for a bot. Primefac (talk) 16:30, 9 December 2024 (UTC)

Logging AfC drafts resubmitted without progress

This is essentially a request for the implementation of Option 2 of the RfC here. "The bot should add ... submissions [that haven't changed since the last time they were submitted] to a list, similar to the list of possible copyvios." JJPMaster (she/they) 15:29, 29 December 2024 (UTC)

{{Working}}, I expect to be done in around 20 minutes. MolecularPilot 🧪️✈️ 05:34, 31 December 2024 (UTC)
Success, it will flag any re-submitted drafts without changes to User:MolecularBot/AfCResubmissions.json. Working on an accessible frontend on Toolforge so that it's easy to look up whether a draft has been re-submitted. MolecularPilot 🧪️✈️ 06:36, 31 December 2024 (UTC)
Frontend coding done. Deploying this task to run continuously and also host the frontend on Toolforge... MolecularPilot 🧪️✈️ 06:43, 31 December 2024 (UTC)
Frontend is now hosted at [2]. Waiting for a Toolforge task to complete in order to deploy the bot to run continuously. MolecularPilot 🧪️✈️ 07:09, 31 December 2024 (UTC)
Will complete Toolforge deployment tomorrow. MolecularPilot 🧪️✈️ 07:17, 31 December 2024 (UTC)
I think you should use GET method instead of POST so that it can be used in the Template:AfC submission/tools template. – DreamRimmer (talk) 09:23, 31 December 2024 (UTC)
A GET-based JSON API is now available by doing this (but replacing Dummy with the actual name of the draft, excluding the Draft: prefix): https://molecularbot2.toolforge.org/resubAPI.php?pageName=Dummy Thanks for your feedback! :) However, I don't think it should be used in Template:AfC submission/tools, because the RfC closed against commenting on or labelling the actual submission page. MolecularPilot 🧪️✈️ 01:48, 1 January 2025 (UTC)
@DreamRimmer Is it possible for you to publish a wikicode frontend based on the JSON? Something like Wikipedia:AfC sorting? ~/Bunnypranav:<ping> 10:12, 31 December 2024 (UTC)
I think MolecularPilot can help with this. – DreamRimmer (talk) 10:43, 31 December 2024 (UTC)
Oops, wrong ping thanks to the tiny buttons on mobile. Sorry DreamRimmer! @MolecularPilot: the real intended ping. ~/Bunnypranav:<ping> 11:26, 31 December 2024 (UTC)
Done here: Wikipedia:Declined AfC submissions resubmitted without any changes! It uses a template I made ({{AfCResubmissions}}) that uses a module I made (Module:AfCResubmissions) which fetches the data from the bot's user JSON file. :) Just need to finish Toolforge deployment! MolecularPilot 🧪️✈️ 02:40, 1 January 2025 (UTC)

List of schools in the UK

Hello. I'd like to request a list of pages in Category:Schools in England, Category:Schools in Northern Ireland, Category:Schools in Scotland and Category:Schools in Wales, along with whatever file is used in their infobox. Format will be as such: [ PAGE ], [ Link to file ]. If possible, skip pages that are using an SVG file. Pages will go into User:Minorax/Schools in England, User:Minorax/Schools in Scotland, etc. --Min☠︎rax«¦talk¦» 14:58, 30 December 2024 (UTC)

I assume you want to check the categories recursively, to some depth? — Qwerfjkltalk 15:05, 30 December 2024 (UTC)
Yeap, hopefully you can dig deep into the large category (or any level you define as a possible-to-do) whilst skipping pages based on the following criteria where 1) an .svg extension file is used in the infobox, 2) there is no infobox available. --Min☠︎rax«¦talk¦» 16:48, 30 December 2024 (UTC)
Minorax  Done User:Minorax/Schools in England, User:Minorax/Schools in Northern Ireland, User:Minorax/Schools in Scotland and User:Minorax/Schools in Wales. – DreamRimmer (talk) 09:05, 31 December 2024 (UTC)
Many thanks! --Min☠︎rax«¦talk¦» 09:24, 31 December 2024 (UTC)

Category:University and college logos

Good day. I'd like to request for all .svg files in the above category be moved to Category:SVG logos of universities and colleges. --Min☠︎rax«¦talk¦» 02:01, 1 January 2025 (UTC)

{{Working}}, the bot is running and doing the task right now! MolecularPilot 🧪️✈️ 02:17, 1 January 2025 (UTC)
Update: still running, there's a lot of pages hahaha, but it's all working :) MolecularPilot 🧪️✈️ 02:23, 1 January 2025 (UTC)
Seems to be {{Done}} now, thank you for all your work on schools! If it missed some or didn't do something right, please don't hesitate to reach out to me! :) MolecularPilot 🧪️✈️ 02:31, 1 January 2025 (UTC)
Wait, so it turns out some pages don't have the category explicitly set (the bot did move all the ones that explicitly set it), but use {{Non-free school logo}} which sets the category using a parameter. I'm running the bot now to update the template usages to use the new category. MolecularPilot 🧪️✈️ 02:35, 1 January 2025 (UTC)
A LOT of pages have the category implicitly set this way so it is still running... but I can confirm it's definitely working and fixing these. MolecularPilot 🧪️✈️ 02:41, 1 January 2025 (UTC)
Actually  Done now hahaha! As before, if it didn't catch everything please don't hesitate to reach out to me, Minorax. Happy new year! :) MolecularPilot 🧪️✈️ 02:50, 1 January 2025 (UTC)
Thanks! Seems good. Happy new year to you too. --Min☠︎rax«¦talk¦» 04:18, 1 January 2025 (UTC)

"Was" in TV articles

Moved to WP:AWB/TASKS (diff)
Deferred to WP:AWBREQ as such a task would require human confirmation and is not appropriate for a bot to do. I have posted a copy of this discussion to that noticeboard now. MolecularPilot 🧪️✈️ 05:18, 1 January 2025 (UTC)

The domain www.uptheposh.com has been usurped, and all links (including sublinks like http://www.uptheposh.com/people/580/ and http://www.uptheposh.com/seasons/115/transfers/) now redirect to a gambling site. I request that InternetArchiveBot replace all links containing www.uptheposh.com with their corresponding archived versions from the Wayback Machine.

This is my first time doing this - if I need to request somewhere else or anything is better done manually, please let me know! Nina Gulat (talk) 16:36, 4 January 2025 (UTC)
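For the curious, the mechanical shape of such a replacement looks roughly like the sketch below. This is not InternetArchiveBot's code: the bot selects real snapshots, whereas the `2020` below is just a placeholder timestamp that web.archive.org resolves to its closest capture, and the domain default is only there to match this request.

```python
import re

# Illustrative only: InternetArchiveBot picks actual snapshots; "2020" is a
# placeholder timestamp that web.archive.org resolves to the nearest capture.
WAYBACK_PREFIX = "https://web.archive.org/web/2020/"

def rewrite_links(wikitext, domain="www.uptheposh.com"):
    """Wrap every bare link on the usurped domain in a Wayback Machine URL.

    Naive sketch: it does not skip links that are already archived, nor
    handle links inside citation templates specially.
    """
    pattern = re.compile(r"https?://%s[^\s\]|<]*" % re.escape(domain))
    return pattern.sub(lambda m: WAYBACK_PREFIX + m.group(0), wikitext)
```

For example, `rewrite_links("http://www.uptheposh.com/people/580/")` produces `https://web.archive.org/web/2020/http://www.uptheposh.com/people/580/`.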

Nina Gulat, you want WP:URLREQ. Primefac (talk) 16:37, 4 January 2025 (UTC)
Thanks! Nina Gulat (talk) 16:41, 4 January 2025 (UTC)

Hello, I would like to kindly request that all articles, categories, files, etc. within the Category:Cinema of Belgium be tagged with the newly created Belgian cinema task force. This will help streamline efforts to improve the quality and coverage of Belgian cinema-related content on Wikipedia. Earthh (talk) 11:41, 23 November 2024 (UTC)

How far down? I went down 3 levels and found Crouching Tiger, Hidden Dragon, which does not seem likely. Primefac (talk) 13:06, 23 November 2024 (UTC)

Just a quick clarification: please tag all entries in the Category:Cinema of Belgium with the Belgian cinema task force, except for the entries of the following categories, as they may include films that are not necessarily Belgian:

Thanks for your help! --Earthh (talk) 13:39, 23 November 2024 (UTC)

I've cross-posted this to WP:AWB/TASKS as I think it's small enough for manual addition (though I may have miscounted, will check when I get home later). Primefac (talk) 14:35, 23 November 2024 (UTC)
Hey @Earthh, as Primefac suggested, this can be done at WP:AWB/TA. This query with a depth of 2 shows 717 pages, and 86 with depth 1. Could you clarify that? Also, what template changes are you suggesting? Like, what should one tag it with? ~/Bunnypranav:<ping> 14:41, 23 November 2024 (UTC)
Thanks for your patience as I worked through the depth question. For this request:
The template to use is: {{WikiProject Film|Belgian-task-force=yes}}. Let me know if there are any issues or further clarifications needed.--Earthh (talk) 18:35, 23 November 2024 (UTC)
If it's over 700 it does push it a little into the bot territory, but if we can nail down a final number that would be best. Primefac (talk) 20:13, 23 November 2024 (UTC)
This one for Category:Belgian films and this for Category:Cinema of Belgium shows 717+956=1673 total. Is that right? @Earthh Also, all these pages are already tagged with film banner right, only the param is required? ~/Bunnypranav:<ping> 04:45, 24 November 2024 (UTC)
1682 for Category:Belgian films and 1189 for Category:Cinema of Belgium, including articles, files, templates, categories and portals. If these are already tagged with the film banner, |Belgian=yes or |Belgian-task-force=yes parameters will be enough. Earthh (talk) 15:48, 24 November 2024 (UTC)
will file a brfa tomorrow.
Not a big deal, but regarding "If these are already tagged": are they or not? ~/Bunnypranav:<ping> 15:57, 24 November 2024 (UTC)
Just as a note of caution, per WP:Film#Scope, {{WikiProject Film}} should not be added to biographical articles/categories/etc., which should use {{WikiProject Biography|filmbio-work-group=yes}}.   ~ Tom.Reding (talkdgaf)  16:12, 24 November 2024 (UTC)
You're absolutely right. For entries tagged with {{WikiProject Film}}, the parameter |Belgian=yes should be used. For entries tagged with {{WikiProject Biography}}, the parameter |cinema=yes should be added to {{WikiProject Belgium}}. However, it seems that this parameter is not yet supported. I've submitted an edit request on Template talk:WikiProject Belgium to address this. Earthh (talk) 22:35, 24 November 2024 (UTC)
Thanks Tom for the disclaimer.
@Earthh I shall do it like this: replace {{WikiProject Film with {{WikiProject Film|Belgian=yes, and similarly {{WikiProject Biography with {{WikiProject Biography|cinema=yes, for the pages in both PetScan queries above. Is that fine? ~/Bunnypranav:<ping> 10:24, 25 November 2024 (UTC)
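For what it's worth, the prefix replacement proposed above can be sketched as below. This is illustrative only; the actual run used AWB find-and-replace rules, and a real implementation would also have to handle template redirects, case variants, and banners split across lines.

```python
import re

def add_param(wikitext, template, param):
    """Insert |param directly after {{Template unless the banner already
    carries it. Rough sketch of the AWB-style rule discussed above."""
    if param in wikitext:  # crude "already tagged" check
        return wikitext
    pattern = re.compile(r"\{\{\s*%s\b" % re.escape(template))
    return pattern.sub(lambda m: m.group(0) + "|" + param, wikitext)
```

So `add_param("{{WikiProject Film|class=Start}}", "WikiProject Film", "Belgian=yes")` yields `{{WikiProject Film|Belgian=yes|class=Start}}`, and a page already tagged is left untouched.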
Thanks for your availability.
For the entries in both Petscan queries, {{WikiProject Belgium}} should be replaced or added as {{WikiProject Belgium|cinema=yes}} if not already present. Earthh (talk) 18:32, 25 November 2024 (UTC)
BRFA filed. Now time to wait! ~/Bunnypranav:<ping> 12:59, 26 November 2024 (UTC)
@Earthh: All done from the lists I had made. Please tell me if I missed any! ~/Bunnypranav:<ping> 16:21, 18 December 2024 (UTC)
@Earthh: {{WikiProject Film|Belgian=yes}} should be used per Template:WikiProject Film#National and Regional task forces.   ~ Tom.Reding (talkdgaf)  12:38, 24 November 2024 (UTC)
They are both in use at the moment; the one you suggested is definitely shorter :) Earthh (talk) 15:49, 24 November 2024 (UTC)
@Bunnypranav: Thank you for all the work you've done! Many individuals are still missing because we excluded the addition of the {{WikiProject Belgium}} tag and opted to only modify the parameters where it was already present. Is there anything we can do about this? Earthh (talk) 15:52, 21 December 2024 (UTC)
@Earthh Unless we have a clear cut definite list, I'm afraid it can't be a bot run ~/Bunnypranav:<ping> 15:58, 21 December 2024 (UTC)
@Earthh Do you have anything more, or should I mark this as done and archive it? ~/Bunnypranav:<ping> 17:12, 4 January 2025 (UTC)
@Bunnypranav: everything is fine, it's all good to go. Thank you again for your help! Earthh (talk) 15:47, 6 January 2025 (UTC)

There are presumably hundreds of articles about calendar years (for example, 671) that contain the text "link will display the full calendar".

I believe this text violates the spirit of WP:CLICKHERE, specifically:

phrases like "click here" should be avoided [...] In determining what language is most suitable, it may be helpful to imagine writing the article for a print encyclopedia

The text "link will display the full calendar" would of course make no sense in a print encyclopedia, so I think it should be deleted. Given the number of articles that this text appears in, this deletion would best be done by a bot. Stephen Hui (talk) 07:00, 30 November 2024 (UTC)

A total of 1562 pages use this wording. – DreamRimmer (talk) 07:22, 30 November 2024 (UTC)
DreamRimmer, Stephen Hui, this has been discussed before at Wikipedia:Bot requests/Archive 84#(link will display the full calendar). — Qwerfjkltalk 11:08, 30 November 2024 (UTC)
It has also been raised a number of times at WT:YEARS including following the linked BOTREQ (1, 2, 3), all of which were asking to remove it (without reply). Suffice to say, I think per WP:SILENCE there is a general lack of concern about whether this text is removed. Unless there is any significant opposition raised here in the next few days, I would be okay putting in a BRFA to remove the offending text. Primefac (talk) 22:28, 1 December 2024 (UTC)
If it hasn't been done already, I'd also suggest doing a few dozen "by hand" first, to see if that provokes any objection. I don't expect it will. Dicklyon (talk) 04:16, 7 December 2024 (UTC)
Easy enough to mass-rollback, I just forgot I was planning on doing this. Primefac (talk) 16:31, 9 December 2024 (UTC)
Y Done by the way. Primefac (talk) 12:43, 14 January 2025 (UTC)

Bot to simplify "ref name" content

I have come across some very long ref names in ref tags, sometimes automatically generated by incorporating the title of the work being referenced. This is disfavored by WP:REFNAME, which asks that reference names be kept "short and simple" to avoid clutter. A ref name is nothing more than a piece of code by which to identify a reference, and can be as short as "A1" or the like. However, according to this search generously provided by Cryptic, insource:ref insource:/\< *ref *name *= *[^ <>][^<>]{76}/, there are over 1,600 Wikipedia articles containing ref names that are over 75 characters in length, which is ridiculous. I have started hand-fixing these, and that is arduous, and I suspect bot-fixable. I would therefore like a bot that checks each page for ref names over the length of some set number of characters, perhaps something as short as 30 or 40 characters, and where a ref name is excessively long, shorten it to a more reasonable length (that does not match any existing names on the page).

In the course of this search, I have also come across quite a few ref names that contain complete urls, [[bracketed]] terms as would appear in linked text, and various non-standard characters or text usually used for formatting. I would like any instance of brackets, use of "http://" or "https://", or characters outside of the English alphanumeric set of letters, numbers, and basic punctuation to be stripped out or replaced with characters in that set. BD2412 T 03:26, 3 January 2025 (UTC)
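To give a concrete sense of the detection half of this request, here is a rough sketch. The regex, the 40-character threshold, and the `r1`-style replacement names are arbitrary illustrative choices, not a proposed bot design.

```python
import re

# Matches <ref name="..."> and unquoted <ref name=...> forms.
REF_NAME = re.compile(r'<ref\s+name\s*=\s*(?:"([^"<>]+)"|([^\s"<>/]+))')

def long_ref_names(wikitext, limit=40):
    """List ref names longer than `limit` characters."""
    names = {m.group(1) or m.group(2) for m in REF_NAME.finditer(wikitext)}
    return sorted(n for n in names if len(n) > limit)

def short_names(existing, count):
    """Generate short replacement names (r1, r2, ...) that avoid
    collisions with names already used on the page."""
    taken, out, i = set(existing), [], 1
    while len(out) < count:
        cand = "r%d" % i
        if cand not in taken:
            out.append(cand)
            taken.add(cand)
        i += 1
    return out
```

The actual renaming step is where the CONTEXTBOT concerns discussed below come in; detection alone is the easy part.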

I'm not sure I'd support bot-shortening without seeing a demo of it. This seems to be more in the wheelhouse of semi-automated fixes. I'm open to being convinced, though. Headbomb {t · c · p · b} 03:46, 3 January 2025 (UTC)
A ref name is not article text. It is literally just a signal to allow multiple uses of a reference. We could replace every ref name in the encyclopedia with a random string of three or four letters/numbers, so long as each ref name was unique within its article, and it would not change their functionality at all. Granted, there are some projects (like those working on comic book movies) where they craft ref names for more informative purposes, but where this is done, the ref names are never excessively long or made with brackets, URLs, or exotic characters. The only thing an excessively long or convoluted ref name accomplishes is reduce usability and clutter up the Wikitext wherever it is used. BD2412 T 03:56, 3 January 2025 (UTC)
See also User:Nardog/RefRenamer-core.js which is an awesome tool for quickly renaming all refs in a given article. -- GreenC 04:03, 3 January 2025 (UTC)
@GreenC: Does it rename all of them? I am just concerned about the ridiculous ones (see, e.g., this fix). BD2412 T 04:11, 3 January 2025 (UTC)
(edit conflict) Kudos on the fixes you've done already. One point of WP:REFNAME is that reference names should have semantic value (not, for example "A1" or a random string of characters). It doesn't make the slightest difference to the reader, of course, but makes it easier for human editors to cite the correct source. Are you familiar with RefRenamer? I use it to replace reference names like ":0" and "auto1" with reasonably short but meaningful names, and it makes the replacement process a lot easier. It could simplify the task of replacing obscenely long reference names with much shorter ones, although it's still a by-hand tool, not a bot. It only changes the ones you tell it to. --Worldbruce (talk) 04:24, 3 January 2025 (UTC)
I'll check it out, thanks. However, with thousands of articles having excessively long ref names, there is still likely an advantage for having a bot do some of this lifting. BD2412 T 04:32, 3 January 2025 (UTC)
I might be able to do it manually with this tool, but it does not always actually shorten the name, sometimes it just formats it. This should work on the worst cases, though. BD2412 T 04:59, 3 January 2025 (UTC)
I think it should be possible to use the code from RefRenamer in combination with Bandersnatch to run it semi-automatically. — Qwerfjkltalk 14:13, 3 January 2025 (UTC)
Seems like a good example of WP:CONTEXTBOT to me. While "Author year" style, for example, will probably work well for types of sources where an author usually only publishes one per year, it won't work all that well for others. And personally I wish people would make more use of naming conventions that are more likely to be globally unique rather than less, often enough I've seen multiple articles on related topics that all cite different articles from a website and, since each only uses one article, the website's name gets used as the ref name, which has led to accidental bad refs if content is copy-pasted between the articles. Anomie 15:48, 3 January 2025 (UTC)
That's a good point about copying to other articles. "Authorlastname" is usually sufficient, if not then "Authorlastname YYYY-MM". Likewise the citation should have |last= as the first field so it's easy to visually find. I've found these two methods (with RefRenamer) are my current best practice. -- GreenC 16:50, 3 January 2025 (UTC)
Even so, I think we can all agree that it is bad to have ref names like:
  • "Legitimate politics : a source of codification between theory and practice: A fundamental study of the uniting unity between politics and jurisprudence"
  • "urlThe Alkaloids of Tabernanthe iboga. Part IV.1 The Structures of Ibogamine, Ibogaine, Tabernanthine and Voacangine - Journal of the American Chemical Society (ACS Publications)"
  • "Dr. Coburn Stands for Science:Opposes Congressional efforts to honor debunked author linked to failed global malaria control"
That said, I have played around with the ref renamer tool pointed out by GreenC and Worldbruce, and I do think that I can use it to handle this task without needing a bot. BD2412 T 18:38, 3 January 2025 (UTC)
As such,  Request withdrawn by requester. NotAG on AWB (talk) 14:49, 4 January 2025 (UTC) I have amended the template used in this comment because it was causing errors. The BOTREQ template does not have a "withdrawn" option. MolecularPilot 🧪️✈️ 08:12, 6 January 2025 (UTC)
I don't agree those are bad, at least not for the reason you think they are. The second should drop the odd "url" prefix and the third could use a space after the colon if they're going to be titled like that, but if editors of an article find it useful to use the full title of a work as its ref name then 🤷 why should we care? I've seen you insist on doing things that I personally think are stranger. Anomie 15:10, 4 January 2025 (UTC)
Unlike redirects, long ref names sit right in the article's Wikitext, making it difficult to see where the reference even begins and ends. I literally just removed a "ref name=The Edinburgh Gazetteer: Or, Geographical Dictionary: Containing a Description of the Various Countries, Kingdoms, States, Cities, Towns, Mountains, &c. of the World; an Account of the Government, Customs, and Religion of the Inhabitants; the Boundaries and Natural Productions of Each Country, &c. &c. Forming a Complete Body of Geography, Physical, Political, Statistical, and Commercial with Addenda, Containing the Present State of the New Governments in South America..."; and a "ref name="[RECRUES 2020/2021] Arrivée officielle à Sapiac de @Dylan_Sage11 pour la saison prochaine! Médaille de bronze aux Jeux Olympiques de Rio 2016, 134 sélections pour 155 points, il arrive pour renforcer l'effectif de l'USM et faire vibrer la @rugbyprod2 #AllezSapiac". BD2412 T 04:27, 10 January 2025 (UTC)

Requesting for bot help to nominate 156 navboxes for deletion (listed above). These navboxes are for teams that finished lower than third place in the Olympic basketball tournaments. Such templates are subject to WP:TCREEP and were previously deleted per May 31, 2021, April 22, 2020, June 7, 2019, and March 29, 2019 (first, second and third) discussions (to name a few). – sbaio 17:13, 20 November 2024 (UTC)

Maybe one day I'll complete User:Qwerfjkl/scripts/massXFD and things like this will be much easier.— Qwerfjkltalk 17:32, 20 November 2024 (UTC)
It is very easy to list these templates at TFD as a batch. Someone with AWB should be able to help you apply the correct TFD template to all of them. Notification of the templates' creators might also be not-too-hard with AWB, especially if you provide a list of those editors. – Jonesey95 (talk) 22:07, 20 November 2024 (UTC)
I'll tag them; if I could get the editors' usernames, I'll notify them as well.  Working with AWB. Geardona (talk to me?) 23:48, 20 November 2024 (UTC)
Geardona, have you completed this task? Thanks! (I'm trying to sort out all the discussions and either make the bot to do them, or close them). MolecularPilot 🧪️✈️ 04:45, 1 January 2025 (UTC)

Basketball biography infobox request

On the basketball biography infobox, all instances of |HOF_player= and |HOF_coach= should just be |HOF= as there is no actual difference between the two. The issue can be seen on Lenny Wilkens where both parameters are used and link to the same page. ~ Dissident93 (talk) 19:54, 17 November 2024 (UTC)

Bill Sharman appears to not hold to that trend, so it would appear that it is not true to say "all" instances must be changed. I will also note that there are only 79 instances of both parameters even appearing in the same article, so this is too small a task for a bot (try WP:AWBREQ or just tweak things manually). Primefac (talk) 20:13, 17 November 2024 (UTC)
I meant that even single instances of |HOF_player= and |HOF_coach= should be moved to |HOF=, allowing the removal of the first two parameters within the infobox itself. ~ Dissident93 (talk) 20:31, 17 November 2024 (UTC)
If the other two parameters are reasonable alternate parameters, it would make more sense to have them as alternate parameters rather than edit every page using any single parameter. Primefac (talk) 20:36, 17 November 2024 (UTC)
They are redundant as they both link to the same exact page as there is no official designation between being inducted as a player or coach into the Naismith Basketball Hall of Fame. ~ Dissident93 (talk) 20:43, 17 November 2024 (UTC)
And yet, Bill Sharman uses both parameters with different values in them. They might have the same base URL, but clearly the values passed to them can be different. Primefac (talk) 20:44, 17 November 2024 (UTC)
Actually, it does seem like there are separate pages to represent an inductee's playing and coaching accomplishments despite there being no official difference between the two, as per Bill Sharman. In that case, I suppose there must be more of a consensus to merge/remove the parameters before this can be implemented. ~ Dissident93 (talk) 21:04, 18 November 2024 (UTC)
Marking for the table above that this Needs wider discussion. NotAG on AWB (talk) 14:52, 4 January 2025 (UTC)

Lowercasing the word "romanized"

I've never done this before, but I've noticed a rather annoying issue that no human has the time to sort out manually. (Incidentally, my request is very similar to another active one.) There are thousands of stubs on Iranian locations that have the word "romanized" needlessly capitalised mid-sentence (apparently all mass-created by the same now-retired user). I'm not sure exactly how to track all of them down, but see categories like Sarbisheh County geography stubs or Qom province geography stubs. Anonymous 23:22, 11 January 2025 (UTC)

It looks like there may be around 820 pages, though there might be some valid uses in there (beginnings of sentences, Title Case, etc). Primefac (talk) 13:20, 13 January 2025 (UTC)
I don't think this is a comprehensive list (indeed, many of them use "Romanized" in a different sense and most seem correct in doing so). The stub articles on Iranian villages I'm referencing all follow the same basic layout and all appear to be miscapitalising the word in the exact same spot. There have to be tens of thousands — I seem to land on one at least every twenty presses of the "random article" button. Anonymous 18:10, 13 January 2025 (UTC)
That is a case-sensitive regex search; it will only return capital-R-Romanized. If they aren't on that list, then they don't exist. Next time you see examples please post them here. Primefac (talk) 19:50, 13 January 2025 (UTC)
I'm getting this message on my end: A warning has occurred while searching: The regex search timed out, so only partial results are available. Try simplifying your regular expression to get complete results. If there are indeed only 820 instances in all of Wikipedia, then that means that the chance of landing on any article with uppercase "Romanized" when hitting "random article" should be around 0.01%. It happens to me with fair consistency. Does it seem plausible that I'm getting these at several hundred times the normal rate? Anonymous 20:39, 13 January 2025 (UTC)
So you're clicking the random article button, and landing on dozens of Iranian villages (of which you still haven't given any examples)? That seems more unlikely than there being mysteriously thousands of articles not showing up on a search. Primefac (talk) 20:44, 13 January 2025 (UTC)
Are you suggesting that I'm lying about something this mundane and pointless? Here are six examples of articles I have randomly landed on and corrected: Lal-e Tazehabad, Golab-e Pain, Neyneh, Rudbarak, Gilan, and Dahich (check the edit history if you need proof). If your numbers are correct, these six articles represent around 0.7% of all articles that use(d) "Romanized" (for comparison, they're around 0.00008% of all English Wikipedia articles). Anonymous 21:43, 13 January 2025 (UTC)
No, I was not suggesting you were lying, I was trying to discover the mismatch between what you were saying and what I was seeing. I was searching for Romanized, while the text you are seeing is [[Romanize]]d, which are two very different things. That search gives ~41k pages. Primefac (talk) 09:29, 14 January 2025 (UTC)
Probably worth letting {{langx}} handle this like this. Gonnym (talk) 09:35, 14 January 2025 (UTC)
That would probably also reduce or remove a lot of the CONTEXTBOT issues I was envisioning. Primefac (talk) 09:38, 14 January 2025 (UTC)

Bot to block proxy servers and VPNs automatically

I think we should have a bot that blocks proxy servers and VPNs automatically. We have an LTA right now who is very disruptive and uses proxy servers and VPNs to spam over and over, to the point where most help pages (WP:AN, WP:ANI, WP:Help desk, WP:Teahouse, etc.) are protected. A bot that automatically blocks all VPNs and proxy servers could curb this LTA while reducing the disruption that semi-protection causes for normal users. Isla🏳️‍⚧ 22:45, 19 January 2025 (UTC)

Doesn't User:ProcseeBot already do this? Nyttend (talk) 22:54, 19 January 2025 (UTC)
Has not worked since 2020 Isla🏳️‍⚧ 01:16, 20 January 2025 (UTC)
Same here, I thought User:ST47ProxyBot did this but it turned out to have been retired in 2024 Rusty 🐈 01:33, 20 January 2025 (UTC)
There is a Phabricator task for this: T380917. – DreamRimmer (talk) 01:42, 20 January 2025 (UTC)

Serial commas in page titles

Hello, I'm not sure that this request can be completed automatically; please accept my apology if it can't. I just want some lists, without edits to anything except the page where you put the lists, so it's not a CONTEXTBOT issue: just a "good use of time" issue. Could you compile some lists of pages in which serial commas are present or are omitted? I just discovered List of cities, towns and villages in Cyprus and created List of cities, towns, and villages in Cyprus as a redirect to support serial commas. Ideally, whenever a page could have a serial comma in the title, we'd have a redirect for the form not used by the current title, but I assume this isn't always the case.

First off, I'd like a list of all mainspace pages (whether articles, lists, disambiguation pages, anything else except redirects) that use a serial comma. I think the criteria might be:

  • [one or more words]
  • comma
  • [one or more words]
  • comma
  • ["and" or "or"]
  • [one or more words]

I'm unsure whether they're rigid enough, or whether they might return a lot of false positives.

Secondly, I'd like a list of all pages whose titles are identical to the first list, except lacking a serial comma. Redirects would be acceptable here, since if I'm creating serial-comma redirects, it helps to know if it already exists.

Thirdly, I'd like a list of all mainspace pages (whether articles, lists, disambiguation pages, anything else except redirects) that could use a serial comma but don't. I think the criteria would be:

  • [Page is not on first or second list]
  • [one or more words]
  • comma
  • [one or more words]
  • ["and" or "or", but no comma immediately beforehand]
  • [one or more words]

Once the list is complete, the bot checks each page with the following process: "if I inserted a comma immediately before 'and' or 'or', would it appear on the first list?" If the answer is "no", the bot removes it from the list.
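The criteria above translate into something like the following sketch (rough renderings only; titles with commas inside the final list item would still produce false positives, so the lists would need human review as the request anticipates):

```python
import re

# Rough machine versions of the criteria above.
HAS_SERIAL = re.compile(r'^.+, .+, (?:and|or) .+$')
NO_SERIAL = re.compile(r'^.+, .+(?<!,) (?:and|or) .+$')

def classify(title):
    """'serial' = title uses a serial comma; 'missing' = comma-separated
    list with no comma before and/or; 'other' = neither pattern applies."""
    if HAS_SERIAL.match(title):
        return "serial"
    if NO_SERIAL.match(title):
        return "missing"
    return "other"

def with_serial_comma(title):
    """Insert the serial comma before the final and/or, to build the
    redirect title for a 'missing' page."""
    return re.sub(r'(?<!,) (and|or) ', r', \1 ', title, count=1)
```

For instance, `classify("List of cities, towns and villages in Cyprus")` returns `"missing"`, and `with_serial_comma` on that title produces the serial-comma redirect form.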

Fourthly, I'd like a list of all pages whose titles are identical to the third list, except they have a serial comma. Again, redirects are acceptable.


Is this a reasonable request? Please let me know if it's not, so I don't waste your time. Nyttend (talk) 20:14, 19 January 2025 (UTC)

Nyttend, I guess intitle:/[A-Za-z ]+, [A-Za-z ]+, (and|or) [A-Za-z ]+/ would work for the first request and intitle:/[A-Za-z ]+, [A-Za-z ]+ (and|or) [A-Za-z ]+/ would work for the second.
The latter two lists are trickier. I think your best bet is probably WP:QUARRY. — Qwerfjkltalk 20:26, 19 January 2025 (UTC)
Is there a way to download a list of results from a particular search? As far as I know, the only way to get a list of results is to copy/paste the whole thing somewhere and delete everything that's not a page title. (With 11,544 results for the first search, this isn't something I want to do manually.) Also, the first search includes redirects, e.g. Orders, decorations, and medals of the United Nations is result #1. Nyttend (talk) 20:47, 19 January 2025 (UTC)
@Nyttend: If you have AWB installed, you can use its 'Make list' function to generate a list of pages. https://insource.toolforge.org is a tool that can generate a list of pages from search results. Currently, it only supports the insource: parameter, but I plan to add functionality for other parameters like intitle: and incategory: soon. If you are unable to generate the list yourself, feel free to message or ping me, and I will create it for you and add it to your userspace. – DreamRimmer (talk) 05:11, 20 January 2025 (UTC)
Thanks, DreamRimmer, but someone at WP:QUARRY provided me with a list. Now I just need to do the filtering work to create redirects. Nyttend (talk) 08:52, 20 January 2025 (UTC)
Note that search results are limited to the first 10,000 pages. — Qwerfjkltalk 16:10, 20 January 2025 (UTC)

Create redirects from human-curated list

Following User:Qwerfjkl's advice in #Serial commas in page titles, I filed a Quarry request and got the information I wanted. I'm about to begin checking them, and I know I'll have heaps of potential redirects to create. Are there any existing bots approved to create redirects from human-curated lists? It would be convenient if I could just dump a few hundred pairs of links (the title to be created, and the title to be the target) and have a bot create them. Probably there will be many more than a few hundred, so I was envisioning providing lists here and there, without a schedule. Nyttend (talk) 05:34, 20 January 2025 (UTC)

There are not, and you would need a solid consensus to have a bot do so. Primefac (talk) 17:00, 21 January 2025 (UTC)
Doesn't DannyS712 bot III do something like this? JJPMaster (she/they) 17:03, 21 January 2025 (UTC)
No, that's for patrolling them. Primefac (talk) 17:14, 21 January 2025 (UTC)
Oh, sorry, I linked to the wrong one. I meant to link to this AnomieBOT BRFA. JJPMaster (she/they) 17:16, 21 January 2025 (UTC)
Yes, but that is to overcome a technical challenge (as it says in the edit summary, "endashes are hard") so there is precedent but for a different issue. Primefac (talk) 17:25, 21 January 2025 (UTC)
Also that task too was proposed to the community. Looking back I'm a little surprised the community's response was mostly WP:SILENCE, I guess people worried about it less back in 2016. Personally, if you were to run this by Wikipedia:Village pump (proposals) and get WP:SILENCE too, I'd be satisfied. You'll also want to consider details like which Rcat templates should be applied. Anomie 12:38, 22 January 2025 (UTC)

If someone could write a bot that

Compare both lists, and create a report

Province over-capitalization

See WP:AutoWikiBrowser/Tasks#50K articles with over-capitalized "Province" and WT:WikiProject Iran#Fixing widespread over-capitalization of "Province" (just started). There are over 50,000 articles with over-capitalized Province since the Jan 2022 multi-RM that moved all the Iranian province titles to lowercase. Assuming the project discussion doesn't turn up resistance to fixing, many of these would be easily amenable to bot fixing, like the task that User:BsoykaBot did for NFL Draft over-capitalization. Dicklyon (talk) 18:50, 6 December 2024 (UTC)

Actually, there's not much activity at WikiProject Iran, so if anyone wants to see this discussed more, point me at a better place to bring it up. Dicklyon (talk) 04:00, 7 December 2024 (UTC)

I did a bunch of these by hand on Dec. 6 (example). No reaction from anyone. Dicklyon (talk) 04:10, 7 December 2024 (UTC)

@Bsoyka and DreamRimmer: Thank you both for volunteering to help if/when we see clear consensus or a closed discussion on this. Does anyone have a good idea how to provoke more response? All I've got so far is silence. The big RM was similarly quiet, with no opposition and just 2 supports. Dicklyon (talk) 21:45, 11 December 2024 (UTC)

This is why bots go through trials; not only does it allow for the bot operator to demonstrate that their bot operates as intended, it gives users the opportunity to give feedback on the task. If this is a potentially contentious task, we can have the bots not mark the edits as minor during the trial to raise more awareness of it prior to acceptance. Primefac (talk) 22:00, 11 December 2024 (UTC)
Yes, a trial not marked minor is a good idea beyond the bunch I did by hand not marked minor. Are you prepared to approve such a trial? Bsoyka has a bot that's got demonstrated competence at doing such things, while avoiding purely cosmetic edits. Dicklyon (talk) 02:32, 12 December 2024 (UTC)

I've got no response at the discussion I opened at the project. Is it OK to move forward with bot approval process? Dicklyon (talk) 19:20, 21 December 2024 (UTC)

@Bsoyka and DreamRimmer: would either of you be willing to file a BRFA on this now, or should I try to provoke discussion elsewhere? Dicklyon (talk) 23:39, 21 December 2024 (UTC)

Dicklyon, you can file a WP:BRFA yourself if you are seeking approval to use AWB for this. I recommend the easy-brfa.js script linked on the page. :) MolecularPilot 🧪️✈️ 03:50, 1 January 2025 (UTC)
I am not seeking to use AWB for this, but for someone else to do so. Dicklyon (talk) 07:12, 1 January 2025 (UTC)

Replacing FastilyBot

Now that Fastily has retired, FastilyBot is no longer running. Any chance of a replacement? On the bot's deleted userpage, one can see that it ran 17 tasks and updated 31 database reports. plicit 14:44, 19 November 2024 (UTC)

I have written code to update ten database reports. – DreamRimmer (talk) 14:48, 19 November 2024 (UTC)
I have restored the userpage to Special:Permalink/1258404221 for tracking purposes. If anyone needs any of the code from any of the subpages, please let me know. Primefac (talk) 14:53, 19 November 2024 (UTC)
I am taking over the following database reports: 4, 11, 12, 15, 18, 19, 20, 21, 22, 23, 24, 26, 28, 29, 30, and 31. – DreamRimmer (talk) 17:50, 19 November 2024 (UTC)
DreamRimmer, given the availability of the sql queries https://github.com/fastily/fastilybot-toolforge/tree/master/scripts, it might be better to convert the pages to use {{Database report}}. — Qwerfjkltalk 18:14, 19 November 2024 (UTC)
I second this suggestion for database reports. I converted Wikipedia:Database reports/Transclusions of non-existent templates (number 18 on the list) to use {{database report}}, and after fixing a couple of bonehead oversights on my part, it is working well and more functional than the previous report. – Jonesey95 (talk) 22:42, 19 November 2024 (UTC)
I have spent time writing code to update these database reports. There are some config files for certain database reports, so I have included functionality to exclude files from the report that transclude any templates or belong to any category listed in the config file. Database reports cannot do that, but I have no problem if you all want to use SDZeroBot's database reports. – DreamRimmer (talk) 03:25, 20 November 2024 (UTC)
DreamRimmer, I agree that if it's not possible or feasible to use {{Database report}}, it makes sense for you to handle it; but where we can, it's nice to have some kind of standardisation for the reports. — Qwerfjkltalk 17:34, 20 November 2024 (UTC)
I'll look into taking over the deletion discussion notifier. DatGuyTalkContribs 16:26, 19 November 2024 (UTC)
BRFA filed DatGuyTalkContribs 23:18, 20 November 2024 (UTC)

Current progress

Just creating a table below for a quick idea of which tasks (based on Special:Permalink/1258404221) are being handled. Please add ~~~~~ at the bottom if you update things. Primefac (talk) 12:42, 22 November 2024 (UTC)

Original task | Description | New task
1 | Replace {{Copy to Wikimedia Commons}}, for local files which are already on Commons, with {{Now Commons}}. | CanonNiBot 1
2 | Remove {{Copy to Wikimedia Commons}} from ineligible files. | CanonNiBot 1
3 | Report on malformed SPI pages. | MolecularBot 4
4 | Remove {{Orphan image}} from free files which are not orphaned. | DreamRimmer bot 3
5 | Add {{Wrong-license}} to files with conflicting (free & non-free) licensing information. | KiranBOT 15
6 | Leave courtesy notifications for uploaders (who were not notified) when their files are nominated for dated deletion. | DatBot 12
7 | Replace {{Now Commons}}, for local files which are nominated for deletion on Commons, with {{Nominated for deletion on Commons}}. | CanonNiBot 1
8 | Replace {{Nominated for deletion on Commons}}, for local files which have been deleted on Commons, with {{Deleted on Commons}}. | CanonNiBot 1
9 | Remove {{Nominated for deletion on Commons}} from files which are no longer nominated for deletion on Commons. | CanonNiBot 1
10 | Add {{Orphan image}} to orphaned free files. | DreamRimmer bot 2
11 | Fill in missing date parameter for select usages of {{Now Commons}}. |
12 | Leave courtesy notifications for uploaders (who were not notified) when their files are nominated for discussion. | DatBot 12
13 | Post various database reports to Wikipedia:Database reports. |
14 | Leave courtesy notifications for uploaders (who were not notified) when their files are proposed for deletion. | DatBot 12
15 | Remove {{Now Commons}} from file description pages which also transclude {{Keep local}}. | CanonNiBot 1
16 | Leave courtesy notifications for article authors (who were not notified) when their contributions are proposed for deletion. | DatBot 12
17 | Remove instances of {{FFDC}} which reference files that are no longer being discussed at FfD. | KiranBOT 14

12:42, 22 November 2024 (UTC)

Hi. Since nobody's taking them, I'm gonna try working on tasks 5 and 15. '''[[User:CanonNi]]''' (talkcontribs) 02:15, 17 December 2024 (UTC)
Thanks for volunteering :) – DreamRimmer (talk) 03:32, 17 December 2024 (UTC)
updated task 17 with KiranBOT 14. —usernamekiran (talk) 23:37, 26 December 2024 (UTC)
I've claimed the malformed SPI task. My bot will be an WP:EXEMPTBOT for this task and not require BRFA because it will just report malformed SPIs to User:MolecularBot/MalformedSPIs.json, which will then be displayed on the malformed SPIs page in projectspace through a template and Lua module, similar to my AfC bot (botreq by JJPMaster below). MolecularPilot 🧪️✈️ 06:16, 1 January 2025 (UTC)
Completed & running continuously (it watches RecentChanges) on Toolforge :) See Wikipedia:Malformed SPI Cases. MolecularPilot 🧪️✈️ 08:43, 1 January 2025 (UTC)
working on FastilyBot 5 "add {{Wrong-license}} to files", will file BRFA when code is ready. —usernamekiran (talk) 03:59, 3 January 2025 (UTC)

Tagging pages for the Israeli cinema task force

Hello, I would like to kindly request that all articles, categories, files, etc. within Category:Cinema of Israel be tagged with the newly created Israeli cinema task force. This will help streamline efforts to improve the quality and coverage of Israeli cinema-related content on Wikipedia.

Please exclude the following subcategories, as they may include films that are not necessarily Israeli and these categories include biographies (which are not part of WP Film):

I saw the request above on Belgian films and used it as a template for my request. Thank you. LDW5432 (talk) 05:23, 2 January 2025 (UTC)

This query with depth of 5 returns 1003 articles. I have checked a few random ones, and they seem fine. @LDW5432, can you please go through some random articles and let me know if there are any that should be removed from the list? – DreamRimmer (talk) 15:25, 3 January 2025 (UTC)
Hold on with the bot. I need to remove people pages from the category, as they aren't supposed to be in the film task force category. I will list a few that should not be included, or we can manually remove them later:
Tel Aviv University
ICon festival
Jinni (search engine)
Kibbutzim College
- LDW5432 (talk) 20:27, 4 January 2025 (UTC)
I updated the bot request to not include biographies. If you run it now with the new parameters and exclude the following four pages then everything should be good to be tagged. @DreamRimmer
Tel Aviv University
ICon festival
Jinni (search engine)
Kibbutzim College
- LDW5432 (talk) 21:33, 4 January 2025 (UTC)
@LDW5432: This updated query shows 356 pages. If we remove these four, there will be 352 pages left to edit. Can you please confirm, so that I can tag them? – DreamRimmer (talk) 14:55, 5 January 2025 (UTC)
Yes, this is a good list. Please tag them. Thank you. LDW5432 (talk) 16:19, 5 January 2025 (UTC)
Hi, I am finding that the depth isn't high enough to tag many films. Can you run another query on this category? Category:Israeli films
And exclude this category: Category:Israeli–Palestinian conflict films
- LDW5432 (talk) 18:26, 5 January 2025 (UTC)
 Done Tagged 334 pages. – DreamRimmer (talk) 13:43, 6 January 2025 (UTC)
Hi, thank you for tagging those 334 pages.
In my previous comment, I mentioned that the initial query doesn't have a depth which reaches important articles. The query you ran missed important films like Lemon Popsicle and The House on Chelouche Street.
Can you run the query again but on this category: Category:Israeli films
And exclude this category: Category:Israeli–Palestinian conflict films
Thank you.
- LDW5432 (talk) 16:27, 6 January 2025 (UTC)
@LDW5432: Can you please tell me which depth is correct? – DreamRimmer (talk) 16:37, 6 January 2025 (UTC)
This query finds the missing films. LDW5432 (talk) 16:47, 6 January 2025 (UTC)
There are many articles, such as Hounds of War and Gwen Stacy (Spider-Verse), that do not belong to the Cinema of Israel. There are many such articles. If you can provide me with the correct query or a list, I can help. – DreamRimmer (talk) 16:56, 6 January 2025 (UTC)
Hounds of War is made by an Israeli director so it should be included. Some films have Israeli producers. You are correct that the other pages shouldn't be included. I reduced the depth to "2" and it fixes it. Query. - LDW5432 (talk) 17:04, 6 January 2025 (UTC)
@LDW5432: This list contains some articles that do not belong to Israeli cinema, and some users have complained about it. The good thing is that I have only tagged a few from the second query. Please create a correct list; otherwise, I will not be able to assist with this. I am going to revert the last few changes. – DreamRimmer (talk) 10:27, 7 January 2025 (UTC)
Do you not want to include foreign films made by Israeli directors and producers? They don't need to be included. LDW5432 (talk) 20:00, 7 January 2025 (UTC)

Bot to track usage of AI images in articles

There are two similar tasks, both relying on categorisation data from Commons, which are currently only being done manually and occasionally:

I don't know what output would be appropriate, whether it should be a hidden maintenance category or a list page somewhere. Would this be a good job for a bot? Belbury (talk) 10:51, 21 January 2025 (UTC)

Template:MLS

Per the RFD discussion here, the redirect above is to be retargeted, but there are still many cases where the redirect is being used instead of the actual template. A bot is requested to replace the current transclusions of Template:MLS with Template:MLS player, allowing the former to be retargeted per the discussion. oknazevad (talk) 01:55, 1 February 2025 (UTC)

This looks fairly simple and straightforward, and may be better suited for WP:AWBREQ ~ Rusty meow ~ 02:01, 1 February 2025 (UTC)
@Rusty Cat https://linkcount.toolforge.org/?project=en.wikipedia.org&page=Template%3AMLS shows that there are 1500+ transclusions, this is a job better suited for a bot. @Oknazevad: I am ready to do this task, but since this is basically a redirect bypassing job, do you think more consensus is needed apart from 3 people at the RfD, say at some wikiproject? ~/Bunnypranav:<ping> 07:21, 1 February 2025 (UTC)
If you think so, but since it would just be replacing the redirect with the actual template, I don't think there could be much objection. As I said at the RFD, it would allow the redirect to point to the navbox template, consistent with other similar template redirects. oknazevad (talk) 07:44, 1 February 2025 (UTC)
@Oknazevad Gotcha, BRFA filed ~/Bunnypranav:<ping> 07:55, 1 February 2025 (UTC)
For what it's worth, my bot is already approved for this sort of thing, but carry on. Primefac (talk) 07:59, 1 February 2025 (UTC)
If my task gets approved, can I also do such small runs without explicit approval? ~/Bunnypranav:<ping> 08:01, 1 February 2025 (UTC)
No, because your task is specific to this template. Primefac (talk) 08:05, 1 February 2025 (UTC)
@Oknazevad Job is  Done, MLS transclusions show no mainspace usage, I believe this is good to be retargeted. ~/Bunnypranav:<ping> 15:40, 3 February 2025 (UTC)
And retargeting  Done. Thanks all! oknazevad (talk) 15:47, 3 February 2025 (UTC)

IUCN Status Bot

Anyone have a bot to update the conservation statuses of organisms? If not, I can help make it myself (granted, my knowledge is very limited but I am willing to learn). AidenD (talk) 04:44, 21 January 2025 (UTC)

Hey @AidenD, I made a template to link TNC status from Wikidata to organisms. If you want to work on IUCN I would be willing to work with you on doing something similar. Template:TNCStatus Dr vulpes (Talk) 23:43, 2 February 2025 (UTC)
Sure! Are you free to reach out on, say, Discord? AidenD (talk) 06:28, 3 February 2025 (UTC)

Bot to go over pages at Category:Talk pages with comments before the first section

The Category:Talk pages with comments before the first section description says that these pages can cause display issues on mobile. WP:AWB, as part of its general fixes, adds an "Untitled" heading to these comments, which I think is a good enough fix and is much better than just leaving them as is. Having a bot do that would be the easiest solution. Gonnym (talk) 15:52, 1 February 2025 (UTC)
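For illustration only, the AWB-style fix described above could be sketched roughly like this, assuming the bot receives raw talk-page wikitext; the heading placement and signature detection here are simplified assumptions, not AWB's actual implementation:

```python
import re

# A level-2 section heading line such as "== Discussion ==".
HEADING = re.compile(r"^==[^=].*==\s*$", re.MULTILINE)
# A signed comment ends with a UTC timestamp such as "12:34, 5 June 2024 (UTC)".
TIMESTAMP = re.compile(r"\d{2}:\d{2}, \d{1,2} \w+ \d{4} \(UTC\)")

def add_untitled_heading(text: str) -> str:
    """Insert an 'Untitled' heading before a signed comment that sits
    above the first section heading, leaving leading banners alone."""
    m = HEADING.search(text)
    head_at = m.start() if m else len(text)
    sig = TIMESTAMP.search(text, 0, head_at)
    if not sig:
        return text  # no signed comment before the first section
    # Back up to the start of the paragraph holding the signed comment.
    para = text.rfind("\n\n", 0, sig.start())
    at = 0 if para == -1 else para + 2
    return text[:at] + "== Untitled ==\n" + text[at:]
```

A real bot would also need the /todo and /GA exclusions mentioned below, plus handling for comments not separated from banners by a blank line.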

I think the bot should ignore /todo and /GA pages for the first pass as those might need a different fix. Gonnym (talk) 16:14, 1 February 2025 (UTC)
Is that cat accurate? I randomly chose a page (Talk:Abersychan School) which does not fall into that category, and it's been unedited long enough I don't see it as a cache issue. Primefac (talk) 17:14, 1 February 2025 (UTC)
That page has {{WikiProject Schools}}, which uses |info=, which the software sees as a comment (I'm not really sure how relevant that system of comments inside banners is in 2025; I doubt any comment from 2007 in a banner can be helpful). Gonnym (talk) 17:30, 1 February 2025 (UTC)
Yeah, I actually read the cat documentation (shocker!) after I posted; agree that's why. Curious how many other pages like that are technically fine but the system thinks they're messed up... Primefac (talk) 17:32, 1 February 2025 (UTC)

BOT to clean up spaces around non-breaking spaces

Hello, there is a need for a BOT to remove leading and trailing spaces around the non-breaking space entity (&nbsp;). If a space is present, you end up with two spaces in the rendered text, and it negates the purpose of having a non-breaking space, since a break can be made between the space and the non-breaking space. Cases where a non-breaking space is used as a template parameter or a cell entry in a table should be ignored. Keith D (talk) 00:10, 5 February 2025 (UTC)
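A rough sketch of the core replacement, assuming Python regexes over raw wikitext; the character-class guards for |, =, and ! are a crude stand-in for the template-parameter and table-cell exclusions requested above, and real context detection would need much more care:

```python
import re

def tidy_nbsp(text: str) -> str:
    # Drop ordinary spaces before &nbsp;, but not when the entity is a
    # bare template parameter or table cell (preceded by |, =, or !).
    text = re.sub(r"(?<=[^\s|=!]) +(?=&nbsp;)", "", text)
    # Drop ordinary spaces after &nbsp;, except before a closing | or }.
    text = re.sub(r"(?<=&nbsp;) +(?=[^\s|}])", "", text)
    return text
```

So "10 &nbsp; km" becomes "10&nbsp;km", while "| &nbsp;" as a table cell is left alone.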

Keith D, sounds like WP:CONTEXTBOT - try WP:AWBREQ. — Qwerfjkltalk 16:51, 5 February 2025 (UTC)
@Keith D - Would this be a candidate for WP:AWB/Typos? GoingBatty (talk) 18:12, 8 February 2025 (UTC)

Auto URL Access Level

Specific publications (generally newspapers) have global URL access requirements. I believe it would be useful for a bot to crawl citations for specific websites and apply {{registration required}}, {{subscription required}}, or {{limited access}} based on a list somewhere.

Example

www.smh.com.au -> {{limited access}}

www.afr.com -> {{subscription required}}

both of these publications particularly were heavily referenced prior to the subscription model being introduced. A fair chunk of other various rags would find their place in this list, too. :) Losbeth (talk) 15:11, 9 February 2025 (UTC)

@GreenC, does your bot do this? — Qwerfjkltalk 18:21, 9 February 2025 (UTC)
No, I don't. I think this sort of bot could be error-prone. You have to assume nothing. If a website supposedly has a global policy, it almost surely is not a global policy; there will be exceptions. And that policy will change in the future. At best, maybe a bot that detects known page warnings, such as a subscription-required banner, checking each URL one by one and adding |url-access= based on verification. Here is an afr.com page that is not subscription required. -- GreenC 22:26, 9 February 2025 (UTC)

A bot that blocks adblocks.

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


You know the deal here. Companies no likey adblockers. Swede the Great I (talk) 03:51, 12 February 2025 (UTC)

And what does this have to do with Wikipedia? * Pppery * it has begun... 03:53, 12 February 2025 (UTC)
Oh. Swede the Great I (talk) 15:07, 12 February 2025 (UTC)
  1. Wikipedia does not have ads, never has, and never will, so this is unnecessary.
  2. Wikipedia is not owned by a for-profit company. It's owned by the Wikimedia Foundation.
  3. That's not what bots are for. You seem to be suggesting a MediaWiki feature, and feature requests go to Phabricator, not here.
JJPMaster (she/they) 04:07, 12 February 2025 (UTC)
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Bot Request for ChemistBot – Modifying ChEMBL IDs in Drugboxes

Dear Wikipedia Bot Approvals Team,

I am writing to request approval for ChemistBot, a bot that I have created to assist with editing Wikipedia pages. The intended task for ChemistBot is to automate the process of modifying and updating ChEMBL IDs in the drugboxes for relevant drug articles.

Bot Description: ChemistBot will be used to: Search for drug articles with missing or incorrect ChEMBL IDs. Update the drugbox infoboxes with the correct ChEMBL ID. Ensure the data in the drugboxes is accurate and up-to-date.

Bot's Workflow: ChemistBot will focus on articles for pharmaceutical drugs and related compounds. It will use reliable external databases such as ChEMBL to gather accurate IDs. The bot will not make major editorial changes to the article text but will solely focus on modifying the ChEMBL ID within the drugbox template.

Approval Information: The bot will be fully automated and will operate under a clearly identified account named ChemistBot. I have ensured that the bot complies with all relevant Wikipedia guidelines, including those for bot edits and automated tasks.

Please let me know if you require any additional information or if there are further steps I need to follow. I look forward to your approval to help maintain and improve the quality of drug-related information on Wikipedia. Best regards,

ChemistBot (Bot Account) ChemistBot (talk) 23:33, 19 February 2025 (UTC)

A few things:
  1. Please read WP:BOTPOL
  2. Please do not edit using this account again until you
  3. File a BRFA with your personal account
Thank you. Primefac (talk) 23:53, 19 February 2025 (UTC)
Hello,
Thanks for your reply. I've read the WP:BOTPOL and ensured the Bot would comply with Wikipedia's policy. Kindly, consider the request for ChemistBot as outlined above to allow modifications of ChEMBL IDs. Chemist1986 (talk) 00:16, 20 February 2025 (UTC)
Step 3 is not optional. This page is BOTREQ, not BRFA. Primefac (talk) 00:46, 20 February 2025 (UTC)
Even if they do find BRFA, it's likely WP:BOTNOTNOW is going to apply. Anomie 02:10, 20 February 2025 (UTC)
True; I thought we had an essay on the matter but could not find it so I did not mention it. Primefac (talk) 11:50, 20 February 2025 (UTC)

Redlinked web-url categories being spawned by template

Resolved

Due to recent changes to {{non-free promotional}}, the redlinked category report has been hit with dozens and dozens of redlinked nonsense categories named after web urls. The issue is that the template has traditionally allowed the insertion of a web url to link the image source's terms of use — but because the coding of these wasn't always consistent in the past, the recent changes have caused the template to now interpret some of them as category declarations instead of terms of use.

This can easily be fixed by ensuring that any web url in the template is clearly coded as terms=, but there are just so damn many of them to fix that I'm not inclined to go through them all manually.

So is there a bot that can go through all uses of {{non-free promotional}}, to ensure that any iterations of |http or |1=http are replaced with |terms=http? Thanks. Bearcat (talk) 16:54, 24 February 2025 (UTC)
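A minimal sketch of the requested replacement, assuming a Python regex over the page wikitext; it only handles the |http and |1=http forms named above, and redirects of the template or unusual whitespace would need extra patterns:

```python
import re

# Inside {{non-free promotional}}, turn a bare URL parameter
# (|http... or |1=http...) into the named |terms= parameter.
BARE_URL = re.compile(
    r"(\{\{\s*[Nn]on-free promotional[^}]*?\|\s*)(?:1=)?(https?://)"
)

def fix_terms(text: str) -> str:
    return BARE_URL.sub(r"\1terms=\2", text)
```

Transclusions that already use |terms= are left untouched, since the pattern requires the URL to follow a pipe directly.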

Bearcat, 82 results (using a flawed search that will have a few false positives). — Qwerfjkltalk 16:58, 24 February 2025 (UTC)
That's not what I need. Special:WantedCategories already has the resulting redlinked categories on it, so finding the affected pages in the first place wasn't the problem — the issue is that I'm looking for a bot to make the redlinked categories go away (ideally today so that they don't carry over to tomorrow's 72-hour update), so that I don't have to gnome my way through all of those pages for hours and hours fixing it manually. Bearcat (talk) 17:08, 24 February 2025 (UTC)
It looks like most of this was already fixed by others. I AWB-ed about 40 and fixed 4 manually, which should be all of it. * Pppery * it has begun... 17:26, 24 February 2025 (UTC)
Bearcat, ah, I see. I've just purged them all now. — Qwerfjkltalk 17:30, 24 February 2025 (UTC)
... though WP:AWBREQ is probably a better place for requests like these, in the future. — Qwerfjkltalk 17:31, 24 February 2025 (UTC)
  • Cool, thanks. Bearcat (talk) 17:32, 24 February 2025 (UTC)
  • Come to think of it, I should add that in theory I could have just done that in AWB myself, but I have no idea how to take a list of redlinked categories and turn it into a list of the individual pages inside the categories — I genuinely attempted that, but couldn't make heads or tails of how to make a batchable list of the pages that needed to be gone through. So if i do need to do something similar in the future, how would I even do that? Bearcat (talk) 17:57, 24 February 2025 (UTC)
    I used Qwerfjkl's list, which was done by an insource search specific to this scenario. I'm not aware of any good genericized way of doing that. * Pppery * it has begun... 20:07, 24 February 2025 (UTC)
    I suppose you could write a pywikibot script. Something like this:
    import pywikibot
    
    site = pywikibot.Site('wikipedia:en')
    
    categories = [ page for page in site.wantedcategories() if page.title(with_ns=False).startswith("Http") ]
    
    pages_to_edit = []
    
    for category in categories:
        for page in category.members():
            if page not in pages_to_edit: # avoid duplicates
                pages_to_edit.append(page)
                
    print(pages_to_edit)
    
    But there aren't many searching tools that work with special pages. — Qwerfjkltalk 21:38, 24 February 2025 (UTC)
Thanks for fixing, I did an AWB run and thought I made all the replacements after the template change, but apparently not!— TAnthonyTalk 01:12, 25 February 2025 (UTC)

How about this: A bot that keeps a eye on vandalism (more security measures, the better!)

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Just read the title. Swede the Great I (talk) 03:11, 13 February 2025 (UTC)

User:ClueBot NG. You clearly don't have the competence to be a useful contributor to this process and I would strongly suggest disengaging. * Pppery * it has begun... 03:21, 13 February 2025 (UTC)
Note that after a closer look I've blocked Swede the Great I as WP:NOTHERE. * Pppery * it has begun... 03:24, 13 February 2025 (UTC)
Redundant to ClueBot NG, and creator now indef'ed. Closing to update status in the table at the top of the page. MolecularPilot 🧪️✈️ 00:51, 18 February 2025 (UTC)
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Request to Rename and Merge Monthly Talk Page Archives into Yearly Archives

Resolved

I apologize in advance if this is the wrong venue.

Page affected: User talk:Nemov/Archives

Description: I want to update my ClueBot III archive settings to use yearly archives instead of monthly ones. However, my past archives are still stored as User talk:Nemov/Archives/2025/January, User talk:Nemov/Archives/2025/February, etc.

I would like a bot to:

1. Move and merge all my old YYYY/Month archives into a single YYYY archive (e.g., 2025/January → 2025).

2. Update any archive links on my talk page to reflect the new format.

Additional Notes: The content from each month's archive should be appended to the corresponding yearly archive. Redirects can be left behind to avoid breaking old links. Nemov (talk) 16:46, 27 February 2025 (UTC)

@Nemov:  Done I have moved the content and redirected all monthly archive pages to yearly ones. It's getting late here, so I will set up the new archive tomorrow unless someone else wants to jump in and help. – DreamRimmer (talk) 18:11, 27 February 2025 (UTC)
@Nemov:  Done Your settings are now updated to archive new sections by year. The archive links on your talk page are updated to show by year now too, instead of all the year-and-month links. Matthew Yeager (talk) 07:35, 28 February 2025 (UTC)
@Matthew Yeager What should I do with all the old month pages? Request for them to be deleted? Thanks for all your help. Nemov (talk) 13:17, 28 February 2025 (UTC)
Personally speaking, I would just leave them be - in your talk page history there are links to every one of them, and having them be redlinks would confuse people trying to find a specific conversation or where it went. Redirects are cheap and keeping them around can only help people navigate your archives. The archive box can be adjusted to exclude them. Primefac (talk) 13:42, 28 February 2025 (UTC)
ok, thanks. Nemov (talk) 14:05, 28 February 2025 (UTC)

Whitespace fixes needed in stub templates

Sorry in advance for what might be seen as a talk fork, but circumstances have changed. At first, this was affecting only a dozen pages, so I fixed them. Then it was 1,000 pages, so I posted at AWB tasks. Now there are over 6,000 template pages that need whitespace fixes, so I am posting here, because that is probably too many pages for a non-BRFA AWB task. If you take this on as a bot task, I'll make sure to note that at the AWB page. – Jonesey95 (talk) 07:31, 2 March 2025 (UTC)

The solution appears to be a straightforward regex replace that doesn't require manual intervention. If no one else claims it, I'd take it up with User:CX Zoom AWB tomorrow. CX Zoom[he/him] (let's talk • {CX}) 09:29, 2 March 2025 (UTC)
For every instance that I have seen (about 50 of them), that is correct: removing a line break (or any consecutive white space) with a regex should work fine. – Jonesey95 (talk) 14:57, 2 March 2025 (UTC)
Should I file a BRFA then, or is it no longer required? CX Zoom[he/him] (let's talk • {CX}) 08:34, 3 March 2025 (UTC)
See below; we should probably figure out if this is a fixable bug before we spend time and effort editing things. Primefac (talk) 14:54, 3 March 2025 (UTC)
It appears that Anomie implemented a change to the underlying module such that the Linter errors are no longer present. That said, noinclude tags are supposed to be placed immediately after the end of template code for good reason. IMO these minor whitespace errors will cause visual trouble at some point. A quick bot run to tidy them would help reduce the spread, via copy-paste, of this suboptimal syntax. – Jonesey95 (talk) 22:06, 4 March 2025 (UTC)
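The whitespace tidy-up under discussion amounts to something like the following sketch, assuming the stray whitespace sits between the end of the template code and the first <noinclude>:

```python
import re

def tighten_noinclude(text: str) -> str:
    # Remove whitespace (including the stray line break) between the end
    # of the template code and the first <noinclude>, so that it is not
    # transcluded along with the stub template.
    return re.sub(r"\s+(<noinclude>)", r"\1", text, count=1)
```

Restricting the substitution to the first occurrence keeps any later, intentional whitespace inside documentation sections intact.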
Looking at the page you linked over there, it's "misnested tags" complaining about <code>? Where exactly is the misnested <code> tag that gets fixed by this proposed change? Anomie 15:50, 2 March 2025 (UTC)
Looks like this may be a bug of some sort in Parsoid. A Lua module invocation is supposed to return wikitext that already has any templates expanded. Module:Article stub box, when producing the documentation for e.g. Template:1850s-autobio-novel-stub, outputs text including Typing <code>{{1850s-autobio-novel-stub}}</code> produces the message shown at the beginning. But if I make an API query for action=parse&page=Template:1850s-autobio-novel-stub&parsoid=1, it appears that Parsoid is expanding that template-like text anyway. How exactly the proposed whitespace removal fixes the misnesting is unclear to me, but if we have to work around the Parsoid bug it would probably be better to alter the module's output to produce something like Typing <code>&#123;&#123;1850s-autobio-novel-stub}}</code> or the equivalent of Typing <code><nowiki>{{</nowiki>1850s-autobio-novel-stub}}</code> or the like. Anomie 16:22, 2 March 2025 (UTC)

Request to add a template on several pages (+generate a list)

Hello! I would like to add Template:TvN (South Korean TV channel) television dramas on the following pages:

  • 12 Signs of Love
  • Ice Adonis
  • The Wedding Scheme
  • Queen and I (South Korean TV series)
  • I Love Lee Taly
  • I Need Romance 2012
  • Reply 1997
  • Glass Mask (TV series)|Glass Mask
  • The Third Hospital
  • Flower Boys Next Door
  • Nine (TV series)
  • Crazy Love (2013 TV series)
  • Monstar
  • Dating Agency: Cyrano
  • Who Are You? (2013 TV series)|Who Are You?
  • Basketball (TV series)
  • Let's Eat (TV series)
  • I Need Romance 3
  • A Witch's Love
  • High School King of Savvy
  • The Idle Mermaid
  • The Three Musketeers (South Korean TV series)
  • My Secret Hotel
  • Liar Game (2014 TV series)
  • Family Secret (TV series)
  • Righteous Love (TV series)
  • Hogu's Love
  • A Bird That Doesn't Sing
  • Hidden Identity (TV series)
  • Ugly Miss Young-ae
  • Bubble Gum (TV series)
  • Cheese in the Trap (TV series)
  • Pied Piper (TV series)
  • Another Miss Oh
  • Bring It On, Ghost
  • Drinking Solo
  • Introverted Boss
  • The Liar and His Lover (TV series)
  • Circle (TV series)
  • The Bride of Habaek
  • Criminal Minds (South Korean TV series)
  • Argon (TV series)
  • Because This Is My First Life
  • Avengers Social Club
  • Prison Playbook
  • Mother (South Korean TV series)
  • Cross (South Korean TV series)
  • My Mister
  • A Poem a Day
  • About Time (TV series)
  • What's Wrong with Secretary Kim
  • Familiar Wife
  • The Smile Has Left Your Eyes (TV series)
  • 100 Days My Prince
  • Tale of Fairy
  • Top Star U-back
  • Encounter (South Korean TV series)
  • The Crowned Clown
  • Touch Your Heart
  • Ugly Miss Young-ae
  • He Is Psychometric
  • Her Private Life (TV series)
  • Abyss (TV series)
  • Search: WWW
  • Designated Survivor: 60 Days
  • When the Devil Calls Your Name
  • The Great Show
  • Pegasus Market
  • Miss Lee
  • Catch the Ghost
  • Psychopath Diary
  • Black Dog: Being A Teacher
  • Money Game (TV series)
  • The Cursed (TV series)
  • Memorist
  • Hospital Playlist
  • A Piece of Your Mind
  • Oh My Baby
  • My Unfamiliar Family
  • Flower of Evil (South Korean TV series)
  • Record of Youth
  • Tale of the Nine Tailed
  • Birthcare Center
  • Awaken (TV series)|Awaken
  • True Beauty (South Korean TV series)
  • L.U.C.A.: The Beginning
  • Mouse (TV series)
  • Navillera (TV series)
  • Doom at Your Service
  • My Roommate Is a Gumiho
  • You Are My Spring
  • The Road: The Tragedy of One
  • High Class (TV series)
  • Yumi's Cells
  • Hometown (South Korean TV series)
  • Secret Royal Inspector & Joy
  • Melancholia (TV series)
  • Ghost Doctor
  • The Witch's Diner
  • Dr. Park's Clinic
  • Work Later, Drink Now
  • Military Prosecutor Doberman
  • Kill Heel
  • The Killer's Shopping List
  • Eve (South Korean TV series)
  • Link: Eat, Love, Kill
  • Adamas (TV series)|Adamas
  • Poong, the Joseon Psychiatrist
  • Mental Coach Jegal
  • Love in Contract
  • Behind Every Star
  • Missing: The Other Side
  • Our Blooming Youth
  • The Heavenly Idol
  • Stealer: The Treasure Keeper
  • Family: The Unbreakable Bond
  • Delightfully Deceitful
  • My Lovely Liar
  • Twinkling Watermelon
  • A Bloody Lucky Day
  • Maestra: Strings of Truth
  • Marry My Husband
  • Captivating the King
  • Wedding Impossible
  • Queen of Tears
  • Lovely Runner
  • The Midnight Romance in Hagwon
  • The Player 2: Master of Swindlers
  • The Auditors
  • Serendipity's Embrace
  • Love Next Door
  • No Gain No Love
  • Dongjae, the Good or the Bastard
  • Jeongnyeon: The Star Is Born
  • Parole Examiner Lee
  • Love Your Enemy
  • When the Stars Gossip
  • The Queen Who Crowns
  • My Dearest Nemesis
  • The Potato Lab
  • The Divorce Insurance
  • Resident Playbook
  • Unknown Seoul
  • The Tyrant's Chef

Additionally, if possible, could you generate a list of pages that are in Category:TvN (South Korean TV channel) television dramas but do not contain the template?

Can you help, please? Juliepersonne2 (talk) 11:59, 9 March 2025 (UTC)

WP:AWBREQ is probably a better place for requests like this. – DreamRimmer (talk) 12:39, 9 March 2025 (UTC)
Okay, I'll move the request. Thank you for the reply. Juliepersonne2 (talk) 13:14, 9 March 2025 (UTC)

Remove external links from NASCAR entry lists

As seen in pages such as 2024 NASCAR Craftsman Truck Series Championship Race, there are external links listed in the entry lists. This is a violation of WP:ELLIST, and they should be unlinked. A discussion is here: Wikipedia_talk:WikiProject_NASCAR#Should_sponsor_website_links_be_included_in_entry_lists. Not sure how many years this goes back, but they would likely be on at least a hundred pages. I don’t see them on pages pre-2020, but have not gone through it all. I have also not seen anything like it on NASCAR Cup Series race pages. It would be a lot to go through, but the category Category:NASCAR races by track should have all the races in it. Thanks! Yoblyblob (Talk) :) 17:17, 6 March 2025 (UTC)

If you're looking at <250 pages, then WP:AWB/TASKS is the better location for this request. Primefac (talk) 15:35, 10 March 2025 (UTC)
Put a request in there. Thanks! Yoblyblob (Talk) :) 19:55, 14 March 2025 (UTC)

Bot to clean up ISBNs after buggy copy-paste by Visual Editor

The Visual Editor, when it is used to copy and paste an ISBN, turns {{ISBN|1234567890}} into the less friendly [[International Standard Book Number|ISBN]] [[Special:BookSources/1234567890|1234567890]] or similar. There are a few variations, including ones that insert <bdi>...</bdi> tags and &nbsp; entities. See T174303 for more details.

It appears that T174303 is not getting any attention, so it would be helpful for a bot to reformat these ISBNs periodically. The basic task would be to convert ISBNs in the following formats back to {{ISBN}}, like this.

[[International Standard Book Number|ISBN]] [[Special:BookSources/1234567890|1234567890]]
[[ISBN (identifier)|ISBN]]&nbsp;[[Special:BookSources/978-0-85745-565-9|<bdi>978-0-85745-565-9</bdi>]]
[[ISBN (identifier)|ISBN]] [[Special:BookSources/9780521562867|<bdi>9780521562867</bdi>]]
[[ISBN (identifier)|ISBN]] [[Special:BookSources/978-1846098567|978-1846098567]]

There may be a few more variations on the format; I have a set of replacement patterns for this bug, along with other ISBN issues that should probably not be addressed by this proposed bot task, at User:Jonesey95/AutoEd/twoisbnparams.js. You can see a nice sample at Wikipedia:CHECKWIKI/WPC 069 dump, which is updated monthly, or by searching for likely strings in article space.

This bot would need to run periodically, at least once a month. – Jonesey95 (talk) 18:12, 28 January 2025 (UTC)
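For illustration, the core replacement could be sketched in Python as below. This is a minimal sketch covering only the four sample forms listed above; the function name is illustrative, and the real task would use the fuller pattern set in the AutoEd script rather than this single regex.

```python
import re

# Collapse VE's de-templated ISBN link forms back to {{ISBN}}.
# Handles the "International Standard Book Number" and "ISBN (identifier)"
# link targets, optional &nbsp;/whitespace, and optional <bdi>...</bdi> wrappers.
ISBN_PATTERN = re.compile(
    r"\[\[(?:International Standard Book Number|ISBN \(identifier\))\|ISBN\]\]"
    r"(?:&nbsp;|\s)*"
    r"\[\[Special:BookSources/[0-9Xx\-]+\|(?:<bdi>)?([0-9Xx\-]+)(?:</bdi>)?\]\]"
)

def fix_isbns(wikitext: str) -> str:
    """Rewrite linked ISBN forms as {{ISBN|...}} template calls."""
    return ISBN_PATTERN.sub(r"{{ISBN|\1}}", wikitext)
```

A real bot run would of course also need to skip cases inside comments, nowiki tags, and the like.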

Jonesey95, how many pages are affected? — Qwerfjkltalk 16:25, 31 January 2025 (UTC)
It looks like there are about 530 on the current report, though some of those may have been cleaned up since the 15 December report date. New ones are created all the time. I get about 275 from an insource search. Other identifiers (DOI, ISSN, etc.) have the same problem, so there are more pages affected by the bug. In any event, the article count appears to be in the hundreds. – Jonesey95 (talk) 17:03, 31 January 2025 (UTC)
Jonesey95, I think the easiest thing to do here would be running mw:Manual:Pywikibot/replace.py on Toolforge in this case. (Despite not mentioning it in the documentation, it does seem to support multiple replacements.) — Qwerfjkltalk 11:41, 1 February 2025 (UTC)
Don't use the current counts as a sign of how bad it is; I've been fixing a large number of these with AWB or by hand. Naraht (talk) 18:26, 7 February 2025 (UTC)
@Jonesey95: What patterns need to be cleaned up for DOI, ISSN, etc.? Thanks! GoingBatty (talk) 18:10, 8 February 2025 (UTC)
I don't know if they are as consistent, but here are some DOIs in a form that needs to be templated, here are more DOIs, and here are ISSNs. And some JSTOR. And some S2CID. And some PMID. And some PMC. It looks like the article populations overlap quite a bit, so a scripted pass should probably check for all of these IDs and maybe more. – Jonesey95 (talk) 18:23, 8 February 2025 (UTC)
@Jonesey95 - The bot that generates Wikipedia:CHECKWIKI/WPC 069 dump hasn't updated the file since December 15. Is it still operational? GoingBatty (talk) 04:35, 3 March 2025 (UTC)
Reported here, at the bot operator's talk page. In the meantime, Wikipedia:WikiProject Check Wikipedia/ISBN errors is updated more frequently, and can be updated manually by anyone with WPCleaner access (including me). – Jonesey95 (talk) 06:01, 3 March 2025 (UTC)

VE has a long history of producing garbage syntax of infinite variety. It needs a garbage collector. If the garbage collector can't fix the problem, it can at least detect and stop the diff from posting. Patterns can be maintained by editors similar to spam blacklists. -- GreenC 16:29, 1 March 2025 (UTC)

Of course it does. This has been clear from very early in VE's existence, but not much appears to have changed, hence bug reports like T174303, from 2017. How do we go from "this is a problem" to "developers are working on fixing the problem"? – Jonesey95 (talk) 07:29, 2 March 2025 (UTC)
Cleaning out the Augean stables. I admire your long-term commitment. The last post by matmerex sounds a little optimistic. -- GreenC 16:10, 17 March 2025 (UTC)

Text swap

Change all "Cercanías Ferrol" references to "Cercanías Galicia", mostly in these pages: https://es.wikipedia.org/w/index.php?title=Especial:LoQueEnlazaAqu%C3%AD/Cercan%C3%ADas_Galicia&limit=100. The denomination Cercanías Ferrol was never correct, but in the absence of a better one, it was chosen as the standard on Wikipedia. Renfe has now declared it Cercanías Galicia. UnniMan (talk) 09:36, 17 March 2025 (UTC)

Use WP:AWBREQ instead. – DreamRimmer (talk) 09:39, 17 March 2025 (UTC)
Sounds good, thanks. UnniMan (talk) 11:23, 17 March 2025 (UTC)

Drafts in categories

I've asked for this in various places previously, only to face constant runaround and buck-passing, so I thought I'd try again here.

The job of WP:DRAFTNOCAT cleanup inevitably hits a lot of pages that started as mainspace articles and then got draftified as inadequate, but the draftifier overlooked the part of the process where they need to also get the draft version of the page out of mainspace categories. This doesn't account for all categorized drafts, but it does account for a big chunk of the total, and it's a task that could easily be farmed out to a bot to substantially reduce the amount of time that human editors have to invest in that report.

There's a bot that automatically detects that the page move has occurred and tags the moved page with {{Drafts moved from mainspace}}, which could easily be modified to also automatically disable any categories on the page at the same time as it adds the tag — and there's a bot that automatically checks Category:AfC submissions with categories on a regular basis to disable categories on drafts that have an AfC submission template on them (but not all or even most drafts actually do have an AfC submission template on them, which is why this bot isn't fully controlling DRAFTNOCAT problems on its own), and could easily have "go through Category:All content moved from mainspace to draftspace to check for any categories on drafts" added to its task list. But I previously approached both of those bots' maintainers directly, only to have them each decline the request and tell me to approach the other bot's maintainer instead, and when I tried to escalate to VPT, I was just told to talk to the same bot maintainers who had already rejected my request.

So the need still remains for a bot that could disable any mainspace category declarations that are still on pages that have been moved from mainspace to draftspace, in order to substantially reduce the amount of time that human editors have to invest into cleaning up categorized drafts. If the existing bots can't or won't be modified to add this task, then is there a new one that could be created to take it on? Bearcat (talk) 16:33, 15 March 2025 (UTC)

If no one picks this up in the next few days, one of my bots can take care of it. – DreamRimmer (talk) 16:42, 15 March 2025 (UTC)
@Bearcat: This is the current list, but I am not seeing any drafts that can be fixed. Can you please take a look? – DreamRimmer (talk) 10:09, 17 March 2025 (UTC)
On a spot check of a few random titles, I found Draft:2025 Bielefeld mass shooting. But, of course, it's not just a one-time task: there are always new articles being moved into draftspace every day, so it's a regular check that would have to be performed at least once a day — several times a day would be even better, if possible, but certainly no less often than once daily.
I should add that I did also find Draft:Mad Max: The Wasteland, which has Category:Warner Bros. drafts on it — that's not a problem, because that's a category meant for drafts, but because it's directly declared on drafts itself rather than being transcluded by a template it's a complication that any bot would need to account for. So if this does go ahead, the bot needs to skip, and not disable, categories whose names specifically end in "drafts". Bearcat (talk) 14:46, 17 March 2025 (UTC)
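On-wiki, a category declaration is conventionally disabled by prefixing it with a colon, which turns it into an ordinary link. A minimal Python sketch of that step, including the "skip categories whose names end in drafts" rule from the comment above (the function name and regex are illustrative, not any existing bot's code):

```python
import re

# Match [[Category:Name]] and [[Category:Name|sortkey]] declarations.
CAT_RE = re.compile(r"\[\[Category:([^\]|]+)(\|[^\]]*)?\]\]")

def disable_categories(wikitext: str) -> str:
    """Disable mainspace categories on a draft by inserting a leading colon,
    leaving draft-specific categories (names ending in 'drafts') active."""
    def repl(m: re.Match) -> str:
        if m.group(1).strip().lower().endswith("drafts"):
            return m.group(0)  # e.g. Category:Warner Bros. drafts stays active
        return "[[:" + m.group(0)[2:]  # [[Category:X]] -> [[:Category:X]]
    return CAT_RE.sub(repl, wikitext)
```

A production bot would additionally need to avoid touching categories inside comments or nowiki sections.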
@Bearcat: Would it work for you if I skip this list and instead check and disable categories in newly moved drafts daily? I'll also skip any category that ends with 'drafts'. – DreamRimmer (talk) 15:21, 17 March 2025 (UTC)
I'm not sure, but it kind of sounds like you think this is more complicated than it actually needs to be. All of the moved drafts are already tracked in Category:All content moved from mainspace to draftspace, the maintenance category that gets automatically transcluded by the {{Drafts moved from mainspace}} template. So it's really, truly just a matter of having a bot take a run through that category once or twice a day to check for any categories on those pages and disable them if necessary, rather than having to manually generate your own lists. Bearcat (talk) 15:37, 17 March 2025 (UTC)
This is not a manually generated list. These drafts come from Category:All content moved from mainspace to draftspace. This search query removes all drafts that either have no categories or are already disabled. I meant that since there are currently no fixable drafts in this category, it would be best to skip all current members and check only newly added ones. – DreamRimmer (talk) 15:44, 17 March 2025 (UTC)
@Bearcat: Are you okay with it? – DreamRimmer (talk) 11:27, 19 March 2025 (UTC)
I'm "okay" with any solution that gets the job done, so what works for you is the only thing that matters here. Bearcat (talk) 15:38, 19 March 2025 (UTC)
BRFA filedDreamRimmer (talk) 08:52, 21 March 2025 (UTC)

Text swap in refs

Replace interlanguage links for publication names in refs of Korea-related articles with redlinks. See my talk page; I was a dummy and used AWB to add these interlanguage links without realizing I shouldn't have done so. I'm not sure how many pages are affected by this; it's at least several hundred and I think likely over a thousand.

Task:

  1. Find (?<param>work|publisher|website|newspaper|encyclopedia|journal|agency)(?<space1>[ ]*)=(?<space2>[ ]*)\{\{ill\|(?<articlename>[^\|]+)\|lt=(?<ltval>[^\}]+)\|ko\|(?<kowikiname>[^\|\}]+)\}\}
    Replace ${param}${space1}=${space2}[[${articlename}|${ltval}]]
  2. Find (?<param>work|publisher|website|newspaper|encyclopedia|journal|agency)(?<space1>[ ]*)=(?<space2>[ ]*)\{\{ill\|(?<articlename>[^\|]+)\|ko\|(?<kowikiname>[^\|\}]+)\}\}
    Replace ${param}${space1}=${space2}[[${articlename}]]
  3. Find (?<param>work|publisher|website|newspaper|encyclopedia|journal|agency)(?<space1>[ ]*)=(?<space2>[ ]*)\{\{ill\|(?<articlename>[^\|]+)\|ko\|(?<kowikiname>[^\|\}]+)\|lt=(?<ltval>[^\}]+)?\}\}
    Replace ${param}${space1}=${space2}[[${articlename}|${ltval}]]

seefooddiet (talk) 01:36, 20 March 2025 (UTC)
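The three AWB rules above translate directly into Python, which may help whoever picks this up to test them outside AWB. This is a sketch using the request's parameter list and capture names; the optional quantifier on rule 3's lt value is dropped here, since an empty label would produce a broken piped link:

```python
import re

PARAMS = r"(?P<param>work|publisher|website|newspaper|encyclopedia|journal|agency)"
EQ = r"(?P<space1>[ ]*)=(?P<space2>[ ]*)"

RULES = [
    # 1. {{ill|Article|lt=Label|ko|ko-title}} -> [[Article|Label]]
    (re.compile(PARAMS + EQ + r"\{\{ill\|(?P<articlename>[^|]+)\|lt=(?P<ltval>[^}]+)\|ko\|(?P<kowikiname>[^|}]+)\}\}"),
     r"\g<param>\g<space1>=\g<space2>[[\g<articlename>|\g<ltval>]]"),
    # 2. {{ill|Article|ko|ko-title}} -> [[Article]]
    (re.compile(PARAMS + EQ + r"\{\{ill\|(?P<articlename>[^|]+)\|ko\|(?P<kowikiname>[^|}]+)\}\}"),
     r"\g<param>\g<space1>=\g<space2>[[\g<articlename>]]"),
    # 3. {{ill|Article|ko|ko-title|lt=Label}} -> [[Article|Label]]
    (re.compile(PARAMS + EQ + r"\{\{ill\|(?P<articlename>[^|]+)\|ko\|(?P<kowikiname>[^|}]+)\|lt=(?P<ltval>[^}]+)\}\}"),
     r"\g<param>\g<space1>=\g<space2>[[\g<articlename>|\g<ltval>]]"),
]

def delink_ill(wikitext: str) -> str:
    """Apply the three ILL-to-redlink replacement rules in order."""
    for pattern, replacement in RULES:
        wikitext = pattern.sub(replacement, wikitext)
    return wikitext
```

One caveat either way: the parameter alternation is unanchored, so it would also fire on parameters merely ending in one of those words; as Mdann52 notes below, parsing the template properly may be safer than pure regex.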

Category:WikiProject Korea articles has a huge number of pages, and with your total edits at 29,923, processing the entire category may not be the best use of resources. If you can share the timestamps for when you started and stopped adding interlanguage links, we can pinpoint the relevant pages and make the replacements with minimal effort. – DreamRimmer (talk) 09:05, 21 March 2025 (UTC)
Bad news; I have a previous account with 77k edits and started doing this on that one too. The bulk of my edits involve AWB. I received AWB permissions on May 6, 2023. I probably started adding ILLs a few months after that. seefooddiet (talk) 23:52, 21 March 2025 (UTC)
Oh wait we can just search by regex for which pages are affected. busy atm but can write one up in a bit seefooddiet (talk) 23:56, 21 March 2025 (UTC)
insource:/(work|publisher|website|newspaper|encyclopedia|journal|agency)=\{\{(ill|interlanguage)/
This regex query yields 5,301 articles. A good chunk of these are not my doing, but they need fixing anyway, so I think we may as well do them. seefooddiet (talk) 00:44, 22 March 2025 (UTC)
I can have a look if no one else beats me to it; it may take me a week or so to get something together. I think this is going to be a case where parsing the template is better than a regex replace, especially as we have non-Latin characters involved. Mdann52 (talk) 15:16, 22 March 2025 (UTC)