To run a bot on the English Wikipedia, you must first get it approved. Follow the instructions below to add a request. If you are not familiar with programming, consider asking someone else to run a bot for you.
If your task could be controversial (e.g. most bots making non-maintenance edits to articles and most bots posting messages on user talk pages), seek consensus for the task. Common places to start include WP:Village pump (proposals) and the talk pages of the relevant policies, guidelines, templates, and/or WikiProjects. Link to this discussion in your request for approval.
You will need to create an account for your bot if you haven't already done so. Click here when logged in to create the account, linking it to yours. (If you do not create the bot account while logged in, it is likely to be blocked as a possible sockpuppet or unauthorised bot until you verify ownership)
Create a userpage for your bot, linking to your userpage (this is commonly done using the {{bot}} template) and describing its functions. You may also include an 'emergency shutoff button'.
II. Filing the application
easy-brfa.js can be used for quickly filing BRFAs. It checks for a bunch of filing mistakes automatically! It's recommended for experienced bot operators, but the script can be used by anyone.
Enter your bot's user name in the box below and click the button. If this is a request for an additional task, put a task number as well (e.g. BotName 2).
Complete the questions on the resulting page and save it.
Your request must now be added to the correct section of the main approvals page: Click here and add {{BRFA}} to the top of the list, directly below the comment line.
For an additional task request: use {{BRFA|bot name|task number|Open}}
III. During the approvals process
During the process, an approvals group member may approve a trial for your bot (typically after allowing time for community input), and AnomieBOT will move the request to this section.
Run the bot for the specified number of edits/time period, then add {{Bot trial complete}} to the request page. It helps if you also link to the bot's contributions, and comment on any errors that may have occurred.
AnomieBOT will move the request to the 'trial complete' section by moving the {{BRFA}} template that applies to your bot.
If you feel that your request is being overlooked (no BAG attention for ~1 week) you can add {{BAG assistance needed}} to the page. However, please do not use it after every comment!
At any time during the approvals process, you may withdraw your request by adding {{BotWithdrawn}} to your bot's approval page.
IV. After the approvals process
After the trial edits have been reviewed and enough time has passed for any more discussion, a BAG member will approve or deny the request appropriately.
For approved requests: The request will be listed here. If necessary, a bureaucrat will flag the bot within a couple of days and you can then run the task fully (it's best to wait for the flag, to avoid cluttering recent changes). If the bot already has a flag, or is to run without one, you may start the task when ready.
For denied/expired/withdrawn requests: The request will be listed at the bottom of the main BRFA page in the relevant section.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at Wikipedia:Bots/Noticeboard. The result of the discussion was Approved.
Once authorized, the bot will start syncing from the Git file to the on-wiki script.
Any user script author intending to use the bot must (1) insert the header both on-wiki and in the Git file themselves, serving as an authorization for the bot to operate, and (2) create an application/json webhook in their Git repository pointing to https://deadbeefbot-two.toolforge.org/webhook to notify the bot of new commits that have occurred on the file.
The bot will then make edits using the commit message and author information to update the user scripts.
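For illustration, a minimal sketch of this fetch-and-edit flow in Python (the bot itself is written in Rust; the function names and edit-summary format here are placeholders, not the bot's actual code):

    import requests

    CONTENTS_API = "https://api.github.com/repos/{repo}/contents/{path}?ref={ref}"

    def sync_script(repo, path, ref, commit_msg, commit_author, save_to_wiki):
        # Fetch the authoritative file content from the GitHub contents API,
        # using GitHub's raw media type; the webhook payload itself is never
        # treated as a source of truth.
        resp = requests.get(
            CONTENTS_API.format(repo=repo, path=path, ref=ref),
            headers={"Accept": "application/vnd.github.raw+json"},
        )
        resp.raise_for_status()
        # Use the commit message and author for on-wiki attribution.
        summary = f"Sync from {repo}: {commit_msg} (author: {commit_author})"
        save_to_wiki(new_text=resp.text, summary=summary)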
Currently, it only supports js files in the User namespace, but its scope could be trivially expanded to cover more formats (CSS/plain wikitext) depending on usage.
This is an improvement upon the previous DeltaQuadBot task: Auditability is achieved through linking on-wiki edits to GitHub/GitLab URLs that tell you who made what changes. Webhooks are used instead of a periodic sync. Authorization must be given on-wiki to allow syncs to happen.
The code is currently a working demo. I'm planning on expanding its functionality to allow Wikimedia GitLab's webhooks, and actually deploying it. I will also apply for Interface Administrator perms as this bot requires IA permissions. Will also request 2FA on the bot when I get to it.
Discussion
Just so we are aware of the alternatives here: bd808 suggested on Discord an alternative solution to this problem which does not involve an IntAdmin bot, where script developers can create OAuth tokens and submit those tokens to a Toolforge service, and the Toolforge service would use those OAuth tokens to make edits as the script author (0xDeadbeef/GeneralNotability/etc.) instead of having the edits come from a single bot account. There are different trade-offs. I think if we're okay with a bot having IA permissions, then this solution is more convenient to set up, as the OAuth one requires going through the extra steps of creating a token. This bot also makes those edits in a centralized place where people can inspect which scripts are maintained this way. beef [talk]02:34, 23 May 2025 (UTC)[reply]
I see a risk here in having a bot blindly copy from GitHub without any human verification. Interface editor rights are restricted for very good reason, as editing the site's js would be very valuable to a potential attacker. By introducing this bot, we now also have to be concerned about the security of the GitHub repos the bot is copying from, something which is external to Wikipedia. We have no control over who might be granted access to those repos, and what they might do.
In fact, it may actually hinder development of tools/scripts. Currently, as a maintainer, you can be fairly liberal in who you add to your GitHub repo, knowing that you can review any changes when you manually move them from GitHub to on-wiki. With this change, anyone you add to the repo realistically should be someone the community would trust with interface admin rights. --Chris09:49, 23 May 2025 (UTC)[reply]
I think the bot task is more aimed at user scripts than gadgets. You don't need to be an interface admin to edit your own scripts. Being an opt-in system, script maintainers who don't wish to take on the risk can choose not to use the system. As for security, it should be the responsibility of the script author to ensure that they, and others who have been added to the repo, have taken adequate measures (like enabling 2FA) to secure their github/gitlab accounts. – SD0001 (talk) 10:14, 23 May 2025 (UTC)[reply]
For what it's worth, there are already people doing this kind of thing with their own userscripts, such as User:GeneralNotability/spihelper-dev.js. However, it was never done with a bot, because the bot would need to be an interface admin. So they just store BotPasswords/OAuth tokens in GitHub and write a CI job that uses those to edit on-wiki.
As someone with a fair bit of experience in the open source process, I don't see why someone who wants to personally review all changes themselves would choose to add people liberally to the GitHub repo and then use this bot if it gets approved. They should either move the development/approval cycle onto GitHub, appropriately using pull requests and protected branches, or just keep doing what they are doing. beef [talk]10:22, 23 May 2025 (UTC)[reply]
Script maintainers might be happy to take the risk of automatically copying scripts from an external site to become active client-side scripts at Wikipedia, and they might be happy with the increased vulnerability surface area. The question here is whether the Wikipedia community thinks the risk–benefit ratio means the procedure should be adopted. Johnuniq (talk) 10:36, 23 May 2025 (UTC)[reply]
User scripts are already "install at your own risk", so feel free to avoid installing user scripts that do any automatic syncing. If the community doesn't like a bot that does this for whatever reason, I can also be fine with a "store OAuth tokens that give a toolforge service access to my account" approach, which requires no community approval and no bots to run, just slightly less convenient to set up.
All I am saying is that the increased vulnerability surface area remains to be proven. WP:ULTRAVIOLET and WP:REDWARN have been doing this for years. Whether approval for code occurs on-wiki or off-wiki shouldn't matter. beef [talk]11:00, 23 May 2025 (UTC)[reply]
The bot as proposed crosses a pretty major security boundary by taking arbitrary untrusted user input into something that can theoretically change common.js for all users on Wikipedia.
@Chess:theoretically change common.js for all users on Wikipedia - no, only common.js that link to the specified page/transclude the specified page would be in scope for the bot. dbeef [talk]01:47, 9 June 2025 (UTC)[reply]
@Dbeef: I understand what's in scope, but is the authorization token actually that granular? If there's a vulnerability in the bot, I could exploit that to edit anything. Chess (talk) (please mention me on reply)02:04, 9 June 2025 (UTC)[reply]
I had thought about the security implications long before this BRFA:
The only public facing API of the bot is a webhook endpoint. While anyone can send in data that looks plausible, the bot will only update based on source code returned from api.github.com. So malicious actors have to be able to modify the contents of api.github.com to attack that.
The credentials are stored on Toolforge, standard for a majority of Wikipedia bots. Root access is only given to highly trusted users and I don't think it will be abused to obtain the bot's OAuth credentials. If you think otherwise, I can move the bot deployment to my personal server provided by Oracle.
The public facing part uses Actix Web, a popular and well-tested web framework. Toolforge provides the reverse proxy. I don't think there's anything exploitable to get RCE.
The bot always checks the original page for the template with the configured parameters before editing. If the sync template is removed by the original owner or any interface administrator, the bot will not edit the page.
@Dbeef: To answer Chess about BotPasswords, there is just one checkbox for "Edit sitewide and user CSS/JS" that encompasses both. ~ Amory(u • t • c)01:06, 10 June 2025 (UTC)[reply]
While anyone can send in data that looks plausible, the bot will only update based on source code returned from api.github.com. So malicious actors have to be able to modify the contents of api.github.com to attack that. How does the bot verify that the contents_url field in a request made to the webhook is hosted on api.github.com, in the same repository as the .js file it is syncing to?
I'd be reassured by OAuth, mainly because it avoids taking untrusted user input into a bot with the permissions to edit MediaWiki:Common.js on one of the top ten most visited websites on Earth. Chess (talk) (please mention me on reply)01:58, 10 June 2025 (UTC)[reply]
How does the bot verify that the contents_url field in a request made to the webhook is hosted on api.github.com, in the same repository as the .js file it is syncing to? That's a really good point. I need to fix that. dbeef [talk]02:10, 10 June 2025 (UTC)[reply]
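A sketch of the kind of fix under discussion, in Python for brevity (the bot is Rust, and these names are illustrative): the webhook payload is treated purely as a wake-up call, and the fetch URL is rebuilt from the on-wiki authorization rather than taken from any payload field such as contents_url.

    def url_for(onwiki_header):
        # Build the fetch URL only from parameters in the on-wiki header,
        # so a forged contents_url in the payload is never dereferenced.
        return ("https://api.github.com/repos/{repo}/contents/{path}?ref={ref}"
                .format(repo=onwiki_header["repo"],
                        path=onwiki_header["path"],
                        ref=onwiki_header["branch"]))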
@Dbeef: I'm uncomfortable with interface admin being granted to a bot that hasn't had anyone else do a serious code review.
Not verifying contents_url would've allowed me to modify any of the scripts managed by dbeef onwiki, to give an example.
@Chess: That is a valid concern and an oversight. The contents_url field wasn't used originally, when I queried raw.githubusercontent.com, but I noticed that that endpoint updated slowly. I then decided to use api.github.com but hadn't realized contents_url was user input.
I won't be of much help reviewing my own code, but maybe other people can take a look as well? Maybe we can ping some Rust developers... dbeef [talk]15:17, 13 June 2025 (UTC)[reply]
Approved for trial (30 edits or 30 days, whichever happens first). Please provide a link to the relevant contributions and/or diffs when the trial is complete. I will be cross-posting this to both WP:AN and WP:BN for more eyes. Primefac (talk) 13:23, 8 June 2025 (UTC)[reply]
I will be deploying the bot in a few days and do some deliberate test edits to get this started. If any user script authors are willing to try this for trial please let me know :) dbeef [talk]13:37, 8 June 2025 (UTC)[reply]
The linked discussion seemed to settle pretty quickly on using OAuth rather than interface editor permissions. Is that still the plan? Anomie⚔03:07, 9 June 2025 (UTC)[reply]
That's not how I read it. It was explored as an alternative but to me it looks like more editors expressed support for the interface editor bot. dbeef [talk]03:37, 9 June 2025 (UTC)[reply]
On reviewing again, it looks like I misremembered and misread. The subdiscussion that concluded in OAuth was about the possible alternative to interface editor. OTOH I'm not seeing much support for the conclusion that interface editor was preferred over (normal) OAuth either; the few supporting statements may have been considering only interface editor versus password sharing. Anomie⚔11:31, 9 June 2025 (UTC)[reply]
It isn't necessarily an either/or thing. Both solutions can co-exist. If some people prefer the OAuth-based approach, they can of course implement that – it doesn't even need a BRFA. What's relevant is whether the discussion had a consensus against the interface editor approach – I don't think it does. – SD0001 (talk) 11:39, 9 June 2025 (UTC)[reply]
What's relevant is whether the discussion had a consensus against the interface editor approach – I don't think it does. As I said, I misremembered and misread. OTOH, dbeef claimed but to me it looks like more editors expressed support for the interface editor bot which I don't unambiguously see in the discussion either.
If some people prefer the OAuth-based approach, they can of course implement that – it doesn't even need a BRFA. I don't see any exception in WP:BOTPOL for fully automated bots using OAuth from the requirement for a BRFA. WP:BOTEXEMPT applies to the owner's userspace, not anyone who authorizes the bot via OAuth. WP:ASSISTED requires human interaction for each edit. WP:BOTMULTIOP does not contain any exemption from a BRFA. Anomie⚔12:00, 9 June 2025 (UTC)[reply]
That's a fair observation. I do see support for an interface admin bot, and I believe there are no substantial concerns that would be a blocker. I continue to think of an interface admin bot as the easier solution, but I am not opposed to figuring out the OAuth piece at a later time as well. It is just that I don't have truckloads of time to focus on stuff that seems, on its surface, a bit redundant. dbeef [talk]12:46, 9 June 2025 (UTC)[reply]
With OAuth, the edits would be from the users' own accounts. No bot account is involved as edits are WP:SEMIAUTOMATED with each push/merge to the external repo being the required human interaction. – SD0001 (talk) 13:43, 9 June 2025 (UTC)[reply]
I look at WP:SEMIAUTOMATED as having the user approve the actual edit, not just do something external to Wikipedia that results in an edit that they've not looked at. But this discussion is getting offtopic for this BRFA; if you think this is worth pursuing, WP:BON or WT:BOTPOL would probably be better places. Anomie⚔12:01, 10 June 2025 (UTC)[reply]
Instead of requiring it to be linked and followed by some text in an arbitrary sequence, I'd suggest to use a transclusion for clarity, like: {{Wikipedia:AutoScriptSync|repo=<>|branch=<>|path=<>}} (perhaps also better to put the page in project space). – SD0001 (talk) 15:52, 8 June 2025 (UTC)[reply]
That's a little harder to parse, but I suppose not too hard to implement if Parsoid can do it (hand-parsing is an option too). I'll take a look in the next few days. dbeef [talk]16:02, 8 June 2025 (UTC)[reply]
After reading comments here, I'm unsure. (1) Why do we need a bot for this? Is there a need to perform this task repeatedly over a significant period of time? (Probably this is answered in the VPT discussion linked above, but it's more technical than I can understand.) (2) Imagine that a normal bot copies content from github to a normal userspace page, and then a human moves it to the appropriate page, e.g. first the bot puts a script at User:Nyttend/pagefordumping, and then I move it to User:Nyttend/script. This should avoid the security issue, since there's no need for the bot to have any rights beyond autoconfirmed. Would this work, or is this bot's point to avoid the work involved in all those pagemoves? (3) On the other hand, before interface admin rights were created, and normal admins could handle this kind of thing, do we know of any adminbots that were working with scripts of any sort, and if so, how did the security process work out? Nyttend (talk) 10:17, 9 June 2025 (UTC)[reply]
(1) Yes, because platforms like GitHub give better experiences when developing user scripts, instead of having people copy from their local code editor and paste to Wikipedia each time. This includes CI and allowing transpiled languages such as TypeScript to work. (2) is this bot's point to avoid the work involved in all those pagemoves - Yeah. (3) I don't think there was any bot that did this. dbeef [talk]10:34, 9 June 2025 (UTC)[reply]
How are you handling licensing? When you, via your bot, publish a revision here you are doing so under CCBYSA4 and GFDL. What are you doing to ensure that the source content you are publishing is available under those licenses? — xaosfluxTalk10:07, 13 June 2025 (UTC)[reply]
I think I could put a section on WP:USync that says "by inserting the header you assert that any code you submit through the Git repository is licensed under CCBYSA4/GFDL or another compatible license", but that's the best I can do.
Would you want me to parse SPDX licenses or something? I think the responsibility is largely on the people who use the bot and not the bot itself when it comes to introducing potential copyvios. dbeef [talk]15:09, 13 June 2025 (UTC)[reply]
Is a compatible license even common on that upstream? You can't delegate authority; whoever publishes a revision is the one issuing the license on the derivative work. — xaosfluxTalk18:31, 13 June 2025 (UTC)[reply]
This appears like it may end up whitewashing licenses. Anyone who reads any page from our project should be able to confidently trust the CCBYSA license we present, including required elements such as the list of authors. — xaosfluxTalk00:56, 14 June 2025 (UTC)[reply]
SPDX is an industry standard and is meant for automatically verifying the licence of a source file. Would that be inappropriate here? Chess (talk) (please mention me on reply)04:36, 14 June 2025 (UTC)[reply]
I was just wondering how exactly we should be doing it.
Including it in a template in the userscript makes sense, since then the list of authors' preferred attribution can be maintained on the repo instead of onwiki, while still being replicated onwiki.
The "license" field should probably be SPDX if that makes it easier to parse.
Specifically, the "licence" field should contain CC-BY-SA-4.0 OR GFDL-1.3-or-later since that matches the requirements for contributing to Wikipedia, which is that all content must be available under both licences. I don't think allowing MIT-only (or other arbitrary permissive licences) makes sense right now under the assumption it's compatible with CC-BY-SA/GFDL. We might have to maintain the MIT licence text, and the only people using this bot would be those writing userscripts specifically for Wikipedia. Multiply that by the many variants of licences that exist.
I think it's a good idea to keep the amount of parsing in the bot as small as possible given its permissions and impact. Chess (talk) (please mention me on reply)02:30, 15 June 2025 (UTC)[reply]
We can't take content in incompatible licenses and copy them to Wikipedia. Any page published needs to be available to those reading it CCBYSA and GFDL. Additionally, if the remote site uses a -BY- license, we need to ensure that the remote authors continue to be properly attributed when republishing here. — xaosfluxTalk12:59, 17 June 2025 (UTC)[reply]
I don't see a problem. All edits are implicitly released under those licenses, whether done from the UI or through some script. All you have to do is declare in the bot documentation that "you agree to release all code you deploy to the wiki via the bot under CC-BY-SA and GFDL". – SD0001 (talk) 13:45, 17 June 2025 (UTC)[reply]
This scheme is what I had in mind as well. I am not entirely sure whether a mandatory license field is needed. The assertion that content is compatibly licensed should ideally come from the very edit that inserts the WP:USync header (and we should assume as much), and inserting a license field wouldn't really be any different. dbeef [talk]10:58, 18 June 2025 (UTC)[reply]
If it matters, I can vouch that CI/CD is a basic requirement now for much of software development, so I'm generally supportive of the intent of this proposal. It's better because it creates a single source of truth for what is currently deployed to the Wiki. Chess (talk) (please mention me on reply)16:35, 13 June 2025 (UTC)[reply]
I'm not sure if I want to do that. Using the name of the committer and linking to the original commits is helpful for attribution. It's bad that GitHub isn't in our Special:Interwiki map (and that edit summaries don't support external links), but once I add support for Wikimedia GitLab, a shorter link would be supported (example: (diff)) dbeef [talk]03:06, 29 June 2025 (UTC)[reply]
@BAG: and Novem Linguae (sorry for the mass-ping, but IA for a bot is kind of a "big deal"), do any of you see issue with this bot getting indefinite IA (i.e. "task approved")? I'm not seeing any issues but I'd like at least 1-2 other folk to sign off on this. Primefac (talk) 12:20, 12 July 2025 (UTC)[reply]
I have fairly major security concerns with this. What prevents me from going to GitHub and maliciously replacing the userscript with a "Find Image.img --> Replace with Very_Gross_Image.img" type of script instead? This bot would then sync Wikipedia and GitHub, uploading my malicious script to Wikipedia, all without anyone doing any review at any point.
I'm open to being convinced I'm not understanding the situation clearly, but an OAuth "upon request by the script maintainer" type of solution seems better to me. Headbomb {t · c · p · b}13:57, 12 July 2025 (UTC)[reply]
You'd need collaborator or member access to that GitHub repo. I imagine the user script creator would be aware and in control of who they add, and would only add people they trust. –Novem Linguae (talk) 14:00, 12 July 2025 (UTC)[reply]
Hi @Headbomb: the bot will only use content from the GitHub repo that you specify when inserting the header. Nothing will work unless you first insert the header containing the link to the GitHub repo on the Wikipedia script page.
If you mean that you yourself insert the malware into your own user script, but do it through the bot - I don't see how you'd evade scrutiny or force the bot to take responsibility in that situation. dbeef [talk]15:19, 12 July 2025 (UTC)[reply]
Testing - I was one of the testers. Looked good in testing. Was useful and not buggy. The bot responded very fast (like 5 seconds).
For security reasons, we should probably also scrutinize the security algorithm, and the security code. Dbeef, please correct me if I get anything wrong, or feel free to expand on this below.
Security algorithm - Detailed at Wikipedia:USync. The bot checks for 1) an authorization string on the onwiki .js page, and 2) an authorization string in the GitHub repo. The authorization string contains the exact GitHub repo, and must be present in both places. So you need access to edit 1 and 2 in order to set up and authorize the bot. Access to edit 1 is obtained by a) that .js file being in your userspace, or b) being an interface administrator. Access to edit 2 is obtained by being a collaborator or member on that GitHub repo. The user script user picks which GitHub repo. The assumption is that the user script owner will own the repo, and will only grant access to the repo to people that they trust. The repo is specifically spelled out in the onwiki edit (1).
Everything in this chain of security checks starts with the edit to the .js page, and the edit contains the specific GitHub repo to link. So anyone who can edit that .js page has control over all this. The folks that can edit a .js page are the user if it's one of their subpages, and interface administrators. Those users are all trusted, so this should be fine.
Security code - Here's one of the more important lines of code for security. It takes the string it found onwiki specifying which GitHub repo is to be linked, and compares it to the GitHub page to make sure they're identical. parse_js_header() also looks important. Other eyes encouraged to make sure I didn't miss anything. –Novem Linguae (talk) 14:17, 12 July 2025 (UTC)[reply]
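To restate that dual-authorization check in code form, here is a rough Python equivalent of what Novem describes (the real implementation is the linked Rust; the repo/branch/path parameter names follow the header scheme mentioned elsewhere in this discussion):

    def headers_match(onwiki_header, git_header):
        # Both the on-wiki .js page and the file in the Git repo must carry
        # the same authorization header; a missing header on either side, or
        # any mismatch in the pointed-to repo, aborts the sync.
        if onwiki_header is None or git_header is None:
            return False
        keys = ("repo", "branch", "path")
        return all(onwiki_header.get(k) == git_header.get(k) for k in keys)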
Attack vector 1 - Social engineering attacks. If a user script writer can be convinced to add a [[Wikipedia:USync]] header that points to a repo they don't own, that could be an issue. However, I don't see that as a deal breaker. I can think of a worse social engineering vector that involves user scripts, and we still allow that.
Attack vector 2 - Adding someone to your GitHub repo who later goes rogue or gets hacked. It's a risk, but in my opinion not worth blocking this over. Multiple people being able to collaborate on a repo that has continuous deployment set up, which is what this bot enables, is worth it, in my opinion. –Novem Linguae (talk) 14:24, 12 July 2025 (UTC)[reply]
Attack vector 3 - Can this bot be tricked into updating a non-js page? If so, someone could trick it into spamming mainspace or something. Dbeef, can you talk a bit more about the bot's page whitelist algorithm? Things to think about... Could someone get the bot to edit a page that doesn't end in .js? Could someone get the bot to edit a page that ends in .js but isn't content model = javascript? –Novem Linguae (talk) 14:28, 12 July 2025 (UTC)[reply]
Thanks for the great summary, @Novem Linguae. I plan to elaborate a bit more with a walkthrough of the code on this, but to answer your question first, the bot's page allowlisting happens in parser.rs, the search function in particular. Only pages that (1) transclude the WP:USync page and (2) have the javascript contentmodel will be stored in an in-memory map.
(note that CSS support may be added in the future, which will result in a check for the css content model when that has been implemented) dbeef [talk]14:48, 12 July 2025 (UTC)[reply]
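A sketch of that allowlisting in Python against the MediaWiki API (the real check lives in parser.rs; pagination and error handling are omitted here):

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def allowed_pages():
        # (1) pages that transclude the WP:USync header page...
        r = requests.get(API, params={
            "action": "query", "format": "json",
            "generator": "embeddedin", "geititle": "Wikipedia:USync",
            "prop": "info",
        }).json()
        pages = r.get("query", {}).get("pages", {}).values()
        # (2) ...that also have the javascript content model
        # (a css content model check may be added later).
        return {p["title"] for p in pages
                if p.get("contentmodel") == "javascript"}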
"Access to edit 2 is obtained by being a collaborator or member on that GitHub repo."
This is my fear/contention. On Wikipedia, we tightly control who can edit userscripts: the user themselves, or an IA. On GitHub, it's whoever the script maintainer decides. You might say "but that's the same as trusting the script coder on wiki", but it really is not. If, on Wikipedia, ScriptCoder31's account gets taken over, we block them. On GitHub... now we're depending on a third party deciding to get involved in a credentials fight. Or maybe I trust ScriptCoder31 to be a sane coder, but they have poor judgment on granting codebase access and grant their high school sibling access because they think it'll be a good learning experience, and said high school sibling decides that replacing all images on Wikipedia with VERY_GROSS_IMAGE.IMG would be very funny. Or they run into an issue, ask for help on Stack Exchange, and a rando asks for access because they want to optimize the code / can fix a problem / make up whatever excuse to gain code access.
@Headbomb: I have trouble understanding why it is all that different. The responsibility for using the tool always lies with the person using it. If ScriptCoder31's account gets taken over, we will block them. If ScriptCoder31 inserts an authorization into their own userscript and the Git repository then publishes malware, we block ScriptCoder31 for introducing malware.
Said careless ScriptCoder31 must face any consequences for the edits the bot makes on behalf of them. It isn't different from an OAuth scheme where the script owner provides authorization via OAuth, then the automated system will use the OAuth authorization to still sync Git (potentially malware) to Wikipedia.
Why do you think people who use the bot to make proxied edits will suddenly now get away with ignorance/bad decisions? dbeef [talk]12:57, 14 July 2025 (UTC)[reply]
The difference here is that if you, dbeef, have a script on Wikipedia, I can trust that only you, dbeef, can make changes to it. So if your coding friend suggests a gross image replacer on April 1st, you go "haha funny, but no, I'm not uploading that".
Whereas on github, while you may control who has access, I must trust that you and all those you grant access to are not nefarious actors.
Of course, nothing would stop dbeef from modifying the script in his userspace to load a script in someone else's userspace. Or letting someone else log in to his account, for that matter. I, for one, think Headbomb's concerns are overblown. All trust is by definition delegable in a system or set of systems that doesn't rely on some sort of external unforgable proof of identity. * Pppery *it has begun...02:00, 15 July 2025 (UTC)[reply]
Security Overview. To make sure that we trust this bot enough, here are three points to cover:
Make sure that the account itself is secure.
Make sure that the bot does not edit outside of the pages that we want it to edit.
Make sure that the bot does not insert bad content (only the content we want it to insert).
And here's my summary:
The bot is run on Toolforge. The bot's OAuth secret is stored in a file:
-rw------- 1 tools.deadbeefbot-two tools.deadbeefbot-two 1312 Jun 9 04:34 secrets.toml
This means only people with access to the deadbeefbot-two tool (only me) plus people with root access (WMCS team plus a few trusted volunteers) have access to the account. The account is additionally enrolled with 2FA and with a strong password (note that using the OAuth token does not require 2FA).
We use GitHub webhooks as a trigger but not as a source of truth. The webserver that accepts GitHub webhooks is open to the public, so all sorts of requests can come through. It is hard to validate whether the webhook content is actually coming from GitHub, but we don't need to: the link we use to fetch the content from GitHub is hardcoded to be in the format https://api.github.com/repos/{repo}/contents/{path} (this may change in the future to allow Wikimedia GitLab as an alternative option). So any request received by the webserver only acts as a notification for the bot to check the content from actual authoritative sources.

The webhook content affects nothing except the edit summary that the bot uses, which is pointless to attack: for scripts that are set up to use GitHub webhooks properly, an attacker would need to race with GitHub itself to get their malicious version to our server. We could also just stop using the webhook content entirely, but I thought it was better in the current scheme.

You also can't change the header from the Git side - the bot will error if the header (parameters containing repo, path, and refspec for Git) on the Git side has different content than the header on Wikipedia - so you can only change the header from Wikipedia.
Good callout on setting the Toolforge password file to 0600. That is easy to forget for someone used to non-Linux systems, or someone used to mainstream PaaS web hosting where your files are all kept private for you. –Novem Linguae (talk) 15:23, 12 July 2025 (UTC)[reply]
Okay, it turns out we can actually bulletproof this bad boy. GitHub's "ooOh let's validate your webhooks" suggestion is absolutely bonkers. If it were a simple secret parameter attached to every request, at least we could store the hash of that secret, which would let us make it configurable per person on-wiki without any middleperson/database/web interface shenanigans. But GitHub chose to hash the entire request for whatever reason - something that GitLab has not decided to do (FOSS projects keep winning).
But it turns out we can use a way simpler method - only allow GitHub's IPs. I'll debug this a bit to see if there are any interactions with Toolforge proxying requests to us and write up a CIDR check tomorrow. dbeef [talk]15:39, 12 July 2025 (UTC)[reply]
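For reference, GitHub publishes the CIDR ranges its webhook deliveries originate from at https://api.github.com/meta (under the "hooks" key), so the check could look something like this Python sketch (the bot is Rust, and Toolforge proxying may change which address is actually visible, per the follow-up below):

    import ipaddress
    import requests

    def github_hook_networks():
        # CIDR ranges GitHub says its webhook deliveries come from.
        meta = requests.get("https://api.github.com/meta").json()
        return [ipaddress.ip_network(cidr) for cidr in meta["hooks"]]

    def is_from_github(remote_ip, networks):
        addr = ipaddress.ip_address(remote_ip)
        return any(addr in net for net in networks)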
I was wondering whether there are any X-Forwarded-For, but this is a little discouraging. I don't see it as very important for the BRFA to pass though. So I'll try to investigate later. dbeef [talk]10:31, 14 July 2025 (UTC)[reply]
Anonymizing proxies by design don't pass XFF headers. I agree though that webhooks don't really need to be validated. (GitHub hashing the entire request seems more secure as it makes it harder for the secret key to get leaked in logs. And Gitlab is certainly no paragon of FOSS; its development is non-inclusive and code is intentionally structured to reduce customizability, to promote their paid services.) – SD0001 (talk) 13:20, 14 July 2025 (UTC)[reply]
GitHub hashing the entire request seems more secure as it makes it harder for the secret key to get leaked in logs - yeah, there are reasons for doing that, but it can't work for us, because then we'd need to store the secret somewhere instead of having everything rely on public information (just put a hash as a parameter to WP:USync and be done)
development is non-inclusive and code is intentionally structured to reduce customizability, to promote their paid services - should have figured. Forgejo appears to be better. dbeef [talk]13:27, 14 July 2025 (UTC)[reply]
A user has requested the attention of a member of the Bot Approvals Group. Once assistance has been rendered, please deactivate this tag by replacing it with {{t|BAG assistance needed}}. Looks like discussion has mostly concluded. Please move forward with whatever actions you deem necessary here. cc @Primefacdbeef [talk]18:04, 21 July 2025 (UTC)[reply]
Would just like to reiterate my support for this. I presented some security scenarios above as part of my due diligence / thorough review, but I do not see those as blockers. I think this bot would be a net positive. –Novem Linguae (talk) 03:10, 22 July 2025 (UTC)[reply]
I'm pretty happy with the general concept of this BRFA (updating userscripts via GitHub). But I'm not sure why the preferred approach isn't either OAuth (to make the edits on the user's account), or creating a standardised GitHub Action (like GeneralNotability's) to have users set this up easily themselves?
Both of those options prevent having an intadmin bot account. The latter option seems ideal as it's also decentralised and doesn't rely on a central service, but would be limited to GitHub and other platforms with action runners. My main concern here is future security risks. While I'm happy to assume there's no security issues currently, BAG don't really review bots after their approval and bot development typically doesn't have code review standards, so bugs could be introduced later.
This isn't an 'oppose' per se and I don't want to scupper the effort to get some solution to the script problem out (as I agree the developer experience is currently incoherent), but I figure I may as well raise the above questions now as it's unlikely we'll revisit them after an approval. ProcrastinatingReader (talk) 15:33, 22 July 2025 (UTC)[reply]
@ProcrastinatingReader: I'm not sure why you think this won't be revisited. Finding ways to sync Git with Wikipedia script pages is a recurring problem for developers on this website.
On one hand, I am sympathetic to your concern about future security issues, but I also have self-serving biases that tell me I don't think there will be future security issues. On the other hand, I'm trying hard to get us a solution that we can use for now.
The problem was that I had just enough energy to develop this solution. An OAuth solution would involve running a server with a front end, allowing users to manage their syncs (which would need a database), talking with Wikimedia OAuth, figuring out the stuff with refresh tokens, applying for the OAuth application, etc. A GitHub Action solution is harder for script authors to set up, and it also has some drawbacks that an IntAdmin bot doesn't have, which I mention in the VPT thread (auditability, ease of use, efficiency).
If we have consensus to support the IntAdmin bot solution, then it's a good solution to the problem. If anyone wants to develop a better solution, then we can all switch to that. I don't want us to make perfect be the enemy of good. dbeef [talk]14:19, 23 July 2025 (UTC)[reply]
The bot approval wouldn't necessarily mean this is the preferred approach. We just started using OAuth to deploy Twinkle and AfC-helper, with more gadgets along the way, and a GitHub Action already exists for user script deployments. But many folks don't want to learn to use Github actions, or entrust their login credentials with GitHub. (Also, note that GeneralNotability's GitHub action looks too simple – it only works because they are an admin. Others won't be able to use the same approach as Github IPs are hardblocked. Deploy-action works around this by proxying the edits via Toolforge.) – SD0001 (talk) 16:31, 23 July 2025 (UTC)[reply]
I wonder if the issue is that action isn't well advertised? A quick search shows no backlinks, but my wiki-search skills perhaps have gotten rusty. I'd definitely be in favour of advertising that action more widely, as IMO that's the idiomatic way to deploy scripts.
or entrust their login credentials with GitHub I presume isn't a common concern, GitHub Actions manages a lot more sensitive secrets than a GitHub BotPassword. Alternatively one could use Wikimedia Gitlab, presumably [1]. ProcrastinatingReader (talk) 09:53, 24 July 2025 (UTC)[reply]
One weakness of a GitHub secrets based approach (where you give GitHub your own account's credentials, let it log in, and edit as you) is that anyone else with member/collaborator access to your repo could 1) write a script to dump your secrets, or 2) change the GitHub Actions workflow to make edits using your account. I guess this could be mitigated by using a BotPassword that can only edit certain pages. But the edits would still be in your name.
An interface administrator bot solves these issues. The GitHub repo members/collaborators have no way to get the IA bot's credentials, and the IA bot makes it very clear who is making the edit in its edit summary. Even though an IA bot has more access, it does permissions checks to narrow this access, and it hides its secrets better. I find the IA bot approach more secure here.
P.S. To elaborate on the GeneralNotability example above, this edit says it was made by GeneralNotability, but was actually made by TheresNoTime. This edit made some folks at GN's talk page think he came out of retirement when GN did not. –Novem Linguae (talk) 10:19, 24 July 2025 (UTC)[reply]
You can use CODEOWNERS to prevent edits to certain files, so you can limit the creation and modification of Actions to the repository owner, hence also preventing secret leaks. These best practices could be encapsulated in a wiki-page on how to set up actions for your scripts. But the edits would still be in your name. Attribution-y issues exist with the IA approach as well IMO. On a repository without compulsory GPG commit signing, any user with push access can write commits and 'impersonate' another user as the author and committer, which then gets reflected in the edit summary.
To be clear, I don't view any of these issues as problematic enough to make either approach unsuitable for usage, as they're obviously surmountable with correct configuration of the GitHub repository. ProcrastinatingReader (talk) 10:35, 24 July 2025 (UTC)[reply]
Attribution-y issues exist with the IA approach as well IMO. On a repository without compulsory GPG commit signing, any user with push access can write commits and 'impersonate' another user as the author and committer, which then gets reflected in the edit summary. I think these are two different problems. GitHub actions makes impersonation the default and proper attribution the harder thing to do. The sync approach we have taken here makes proper attribution the default (as most people correctly configure their commit names), and impersonation the harder thing to do. dbeef [talk]15:13, 24 July 2025 (UTC)[reply]
Approved. We have a working product, and right now I think the conversation above, while good, is forcing perfect to be the enemy of done. If there are issues with the implementation or we find that the plan as it has been tested has issues, we can (and should) revisit the task and its specifics. Primefac (talk) 17:17, 27 July 2025 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at Wikipedia:Bots/Noticeboard.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at Wikipedia:Bots/Noticeboard. The result of the discussion was Approved.
Function details: It will go through Category:All content moved from mainspace to draftspace to check for any content categories on drafts, and if found, it will disable them. It will not touch any draft categories and will only disable content categories.
Discussion
If I see this right, all this does is change [[Category:Test]] to [[:Category:Test]] and leaves the ones inside {{Draft categories}} alone due to the andnotinside_template rule. I don't think this is anything controversial that needs more discussion (especially since that didn't really work, as Bearcat said). Nobody (talk) 13:25, 3 April 2025 (UTC)[reply]
@DreamRimmer: I don't see a problem with this either. Approved for trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. --TheSandDoctorTalk16:15, 24 May 2025 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at Wikipedia:Bots/Noticeboard.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at Wikipedia:Bots/Noticeboard. The result of the discussion was Approved.
Function details: This is the second in a series of tasks for this bot, which will run on the Toolforge and use a database report as the basis for its edits to correct errors in links on mainspace pages. Task 9 bypasses bad piped links to link directly to the title displayed to readers; this task will bypass mishyphenated links. I view edits to add or remove a horizontal line, or adjust the length of a horizontal line, as sufficiently cosmetic to be safely made in automated fashion by a bot.
I created Category:Redirects from incorrect hyphenation on 5 November 2023, to separate incorrect hyphenations from misspellings, as a lower priority for gnomes to fix than actual a–z misspellings. Misspellings need more scrutiny, as vandals can replace correctly-spelled words with a different, misspelled word. We need to avoid endorsing vandalism by correcting the spelling of the incorrect word rather than reverting back to the correct word. We continue to have an imbalance between "executive editors" declaring words to be misspelled or mishyphenated, and gnomes following their directives to correct these errors; my bot tasks are an effort to restore more balance between the executives and the gnomes.
I've built in a safeguard to ensure that this task's edits have community approval: the bot won't make edits when the redirect page triggering the edit has been edited within the past seven days. This will stop edit-warring over what the "correct" form of hyphenation should be from causing the bot to edit-war with itself over the short term. Editors may watchlist User:Bot1058/mishyphenation pending fixes if they want to monitor these pending edits before they're made.
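As an illustration of that seven-day guard (the bot itself appears to be PHP, given the $objwiki->edit reference below, so this Pywikibot sketch is only for clarity):

    from datetime import timedelta
    import pywikibot

    def recently_edited(page, days=7):
        # Skip any redirect edited within the past week, so disputed
        # retaggings get time to be reverted before the bot acts on them.
        age = page.site.server_time() - page.latest_revision.timestamp
        return age < timedelta(days=days)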
For consistency, the bot will make similar changes outside of wikilinks when it determines it's safe to do so (a rough regex sketch of these contexts appears below, after the rest of this description):
the term is in plain text
surrounded by spaces
leading space and ending period, comma, or semicolon
in (parentheses)
in "quotes"
leading space, followed by an "s" (plural form)
led by a pipe (|), assumed to be a table element
led by an equal sign (=), assumed to be a parameter
The bot will explicitly avoid changing filenames, to avoid breaking image links.
It will also avoid changing links when the link is part of a longer linked title. This will avoid the bot creating red links; these will be left for human review.
The bot will leave anything not explicitly determined to be "safe" for human review. The initial run of this task is expected to leave about 120 pages for human review.
The bot will not make changes when more than two characters in a link are changed, leaving these for human review as well. One of the changes will be to a hyphen, dash, or space. A second accepted change may be to uppercase a character or put a diacritic on a character.
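A rough Python regex sketch of the "safe context" rules above (illustrative only; the bot's actual matching may differ, and the filename and longer-linked-title exclusions would be handled separately):

    import re

    def safe_pattern(term):
        # Left context: a space, "(", a quote, a table pipe, or a parameter "=".
        # Right context: optional plural "s", then a space, ".", ",", ";", ")",
        # a quote, or end of text.
        t = re.escape(term)
        return re.compile(
            r'(?:(?<=\s)|(?<=\()|(?<=")|(?<=\|)|(?<==))'
            + t +
            r's?(?=[\s.,;)"]|$)'
        )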
Discussion
I'm concerned that the "changes outside of wikilinks" would get into WP:CONTEXTBOT territory. You seem to be explicitly stating that you're going to alter direct quotes, which we usually take pains not to modify, and you may not be able to correctly identify things like hyphenated compound modifiers. Anomie⚔13:48, 8 March 2025 (UTC)[reply]
Yes, regarding this:
Generally, a compound modifier is hyphenated if the hyphen helps the reader differentiate a compound modifier from two adjacent modifiers that modify the noun independently. Compare the following examples:
"small appliance industry": a small industry producing appliances
"small-appliance industry": an industry producing small appliances
Redirects should only be tagged as "incorrect" if they are incorrect in all contexts. If there are contexts where they are correct, then they should be tagged as valid alternatives. My bot only edits to correct things that are incorrect in all contexts. Some editors have been over-prescriptive, tagging things as incorrect when there are contexts where they are correct. When I find these mislabeled redirects, I correct them, e.g. age-of-consent laws, where I corrected TARDIS Builder. This is why I created User:Bot1058/mishyphenation pending fixes – to allow time for reversions of mislabeled redirects. – wbm1058 (talk) 16:40, 8 March 2025 (UTC)[reply]
You're relying on every editor to use your definition of "incorrect", or at least some human reacting to every other definition quickly. That does not seem like a very reliable assumption to me. Anomie⚔00:12, 9 March 2025 (UTC)[reply]
Speaking only about that example, I tagged it as incorrect because someone searching "age of consent" is almost certainly looking for it as a term (noun) rather than as an adjective. As well, the article it points to talks about the concept as a concept, not as a descriptor.
I disagree with your assertion that redirects need to be incorrect under all contexts to use that template; I could probably comb through my own history and find examples where that's not true and where you would still agree that the choice of template was appropriate.
I think nuance & judgment matter. Correct category depends on the redirect and the target article.
@TARDIS Builder: You were requiring the link to be piped [[age of consent|age-of-consent]] laws to avoid the "mishyphenation". This is bad, especially in the future, if someone were to write a separate article about the adjective, as a distinctly different concept from the noun. You can also tag age-of-consent with {{R from adjective}}, which I just did. – wbm1058 (talk) 12:30, 9 March 2025 (UTC)[reply]
MOS:SIC states that "insignificant spelling and typographic errors should simply be silently corrected." I'm assuming that shortening or lengthening a hyphen/dash would be an acceptable silent correction. – wbm1058 (talk) 17:02, 8 March 2025 (UTC)[reply]
I believe the context issues are limited to those that call for replacing a hyphen with a space – specifically where the redirect consists of one word (no embedded spaces) which is not a proper noun (the letter following the hyphen is lower case).
I've highlighted the four that meet these criteria – showing examples here where the hyphen is appropriate in context.
Not sure about African-Americans, which has over 500 links. African-American was tagged as an adjective, and then as alternative hyphenation, but it's hard to imagine the plural form being used as an adjective, so this is probably OK to go ahead with the bypass-corrections.
I can modify my algorithm to skip the pages that meet these criteria and report them on the console as likely-valid alternatives in some contexts.
I'm OK with running this on an as-needed or on-demand basis rather than daily, in a hybrid between automatic and supervised modes, where I make a dry run with the $objwiki->edit commented out, and review User:Wbm1058/Reports/Linked mishyphenations and the console report for any issues needing to be addressed before running it in automated mode. – wbm1058 (talk) 17:56, 12 March 2025 (UTC)[reply]
@Hyphenation Expert: any help you might contribute with determining the algorithm for deciding whether a hyphenated form is correct in some contexts (and thus automated correction should be avoided) versus always incorrect (and thus automated correction is safe) would be appreciated. – wbm1058 (talk) 13:58, 14 March 2025 (UTC)[reply]
For compound modifiers, determining "hyphenation correctness" is very much a case-by-case matter; per Hyphenated compound modifier: Major style guides advise consulting a dictionary to determine whether a compound modifier should be hyphenated ... Not normally hyphenated: Compound modifiers that are not hyphenated in the relevant dictionary or that are unambiguous without a hyphen. So for example, Fatty-acid could be appropriately {{R from incorrect hyphenation}}: Fatty acid synthesis, Fatty acid metabolism are unambiguous.
Right, "case-by-case" is precisely what {{R from alternative hyphenation}} is for, and I do not intend for my bot to make any case-by-case determinations.
In the rare event that this bot might get the context wrong, the solution is easy. Change the offending redirect's Rcat from {{R from incorrect hyphenation}} to {{R from alternative hyphenation}}, and revert the bot. We don't expect perfection from human editors, who all make occasional mistakes, nor should we expect 100% perfection from bots, as long as their mistakes are not too frequent or cause intolerable harm. – wbm1058 (talk) 15:40, 27 March 2025 (UTC)[reply]
The above discussion seems to have stalled, but I'm not sure wbm has necessarily made a strong enough argument to bring this to a trial. That being said, all discussions I'm seeing on this matter seem to stall out, so the only way to prod people into getting us to a consensus may be to have a trial. That's my thinking on the matter, will circle back around in a few days and see if anyone has any significant issue with this line of thinking. Primefac (talk) 23:54, 25 May 2025 (UTC)[reply]
Approved for trial (100 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. I am still concerned about context issues and edit-warring over bot edits (it is one thing to just say "revert the bot" but if a change to a redirect template results in hundreds of reverts, people will obviously not be happy), and the VPR discussion linked by Anomie strengthens that concern, but I think we're still at the point where we won't get enough people to care about the matter until edits start being made. Primefac (talk) 13:12, 8 June 2025 (UTC)[reply]
Thank you for the belated approval for a trial. Part of the reason for this stalling is long waits for approval, which I'm used to, as that's been rather typical of my bot requests. Another part of the reason is that my time continues to be oversubscribed and I need to find larger blocks of time to work on more complex tasks like this, while still supporting more routine tasks which should be below my pay grade, but I'm obligated to do them anyway because I haven't been able to recruit gnome trainees to do them for me. Trial complete. (This was actually a 120-edit trial run on 28 March 2025.) An editor brought up an issue with that run on my talk page. I obviously need to patch my code to fix that problem, so I'll need another trial run after I've done that. I've reviewed some, but not all, of the 120 edits made by that 28 March trial. Oh, and regarding a particular editor's idea of "miscapitalization", see User talk:wbm1058#Re Wikipedia:Database reports/Linked miscapitalizations. I've abandoned my work on Wikipedia:Database reports/Linked miscapitalizations, as that report's become a massive pile of low-priority or no-priority work due to that editor's activities. The point of this bot request is that I need to do something to continue working my way out from under the piles of work that "executive editors without tools" and "hyphenation experts" are piling on me. This was my idea of a relatively "safe" way to grow from task 9. – wbm1058 (talk) 16:43, 27 June 2025 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at Wikipedia:Bots/Noticeboard.
Bots in a trial period
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at Wikipedia:Bots/Noticeboard. The result of the discussion was Approved.
Function overview: Notifies AfC reviewers of drafts that are still marked as being reviewed after 48 hours, and returns the draft to the AfC queue after 72 hours if no action is taken.
Function details: Bot goes through Category:Pending AfC submissions being reviewed now and checks if 48 hours have passed, and notifies the reviewer if so. Where the reviewer has been notified and 72 hours have passed, the bot will mark the draft as pending again.
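A sketch of the timing logic (a hypothetical helper; the timestamps would come from wherever the bot records when a draft entered the "being reviewed" state, e.g. the toolsdb tables mentioned below):

    from datetime import datetime, timedelta, timezone

    def next_action(review_started_at, reviewer_notified):
        age = datetime.now(timezone.utc) - review_started_at
        if reviewer_notified and age >= timedelta(hours=72):
            return "requeue"   # mark the draft as pending again
        if not reviewer_notified and age >= timedelta(hours=48):
            return "notify"    # leave a note for the reviewer
        return "wait"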
Discussion
Approved for trial (30 days). Please provide a link to the relevant contributions and/or diffs when the trial is complete. – DreamRimmer■16:27, 20 May 2025 (UTC)[reply]
Trial complete. Relevant contribs. There's an issue with inserting data into the toolsdb it's using, which I tried to fix during the trial (unsuccessfully), meaning I had to input data manually. Not exactly sure what is causing the issue, however. Tenshi! (Talk page) 21:52, 19 June 2025 (UTC)[reply]
@Tenshi Hinanawi: You should call connection.commit() after INSERT or UPDATE statements to make sure the changes are saved, as Connector/Python does not autocommit by default. Calling cursor.fetchone() after these operations is unnecessary, since they do not return any rows, so those calls should be removed. Currently, a new database connection is opened for each check or notification, but the best approach would be to reuse a connection whenever possible. – DreamRimmer■11:29, 20 June 2025 (UTC)[reply]
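A minimal sketch of that advice with Connector/Python (the database, table, and column names here are made up for illustration):

    import mysql.connector

    # Reuse one long-lived connection rather than opening a new one per check.
    conn = mysql.connector.connect(option_files="~/replica.my.cnf",   # illustrative
                                   database="s12345__afc_reviews")    # illustrative

    def record_notification(draft_title, reviewer):
        cur = conn.cursor()
        cur.execute(
            "INSERT INTO notified (draft, reviewer, notified_at) "
            "VALUES (%s, %s, NOW())",
            (draft_title, reviewer),
        )
        conn.commit()  # required: Connector/Python does not autocommit by default
        cur.close()    # no fetchone() here; INSERT returns no rows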
I see the code has been modified significantly, so I am giving this another trial to make sure everything works as intended. – DreamRimmer■13:37, 22 June 2025 (UTC)[reply]
Approved for extended trial (10 days). Please provide a link to the relevant contributions and/or diffs when the trial is complete. – DreamRimmer■13:40, 22 June 2025 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at Wikipedia:Bots/Noticeboard.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at Wikipedia:Bots/Noticeboard. The result of the discussion was Approved.
Function details: The bot will look for missing language tracking categories generated by templates such as {{lang}} and {{in lang}}, and will create them if they are not empty.
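A rough sketch of that check with Pywikibot, assuming the "Articles containing X-language text" naming pattern used by {{lang}}; the page boilerplate below is a placeholder, not the bot's actual text:
<syntaxhighlight lang="python">
import pywikibot

site = pywikibot.Site("en", "wikipedia")

def create_tracking_category(language_name):
    cat = pywikibot.Category(
        site, f"Category:Articles containing {language_name}-language text"
    )
    # Create the category only if it is missing but actually has members.
    if not cat.exists() and not cat.isEmptyCategory():
        cat.text = ("{{Category explanation|articles containing "
                    + language_name + "-language text}}")  # placeholder
        cat.save(summary="Creating non-empty language tracking category")
</syntaxhighlight>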
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at Wikipedia:Bots/Noticeboard.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at Wikipedia:Bots/Noticeboard. The result of the discussion was Approved.
Function details: This bot modifies the results sections of Indian Lok Sabha/assembly constituencies. It takes the 'Results' section and, for the most recent two elections with published data, adds all candidates with vote percentages above 0.9% and removes candidates with vote percentages under 0.9%. It does not edit candidate data (i.e. hyperlinks are preserved) except to correctly capitalise candidate names in all upper case. The 'change' parameter is only filled if no election took place between the two elections with data.
Candidates are sorted by vote totals and the subsections are sorted by election years in descending order (most recent election comes first). If a 'Results' section does not exist, it is placed in front of the 'References' section and the results from the two most recent elections are placed there.
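A simplified sketch of those row rules (the field names are assumed):
<syntaxhighlight lang="python">
THRESHOLD = 0.9  # per cent of the vote

def select_rows(candidates):
    # Keep candidates above the threshold, sorted by vote totals.
    kept = [c for c in candidates if c["percent"] > THRESHOLD]
    return sorted(kept, key=lambda c: c["votes"], reverse=True)

def order_sections(elections):
    # Most recent election first.
    return sorted(elections, key=lambda e: e["year"], reverse=True)
</syntaxhighlight>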
The ECI website: eci.gov.in (it is geoblocked for users outside India). It has reports for every Parliamentary and Assembly election in India since Independence; the ones after 2015 are in PDF form, and those after 2019 also have CSV files. C1MM (talk) 01:19, 14 December 2024 (UTC)[reply]
Thanks for the response. I have used data from eci.gov.in for my bot task, and it is a good source. I tried searching for results data for recent elections, but I only found PDFs and XLSX files; I did not find any CSV files containing the full candidate results data. Perhaps I missed some steps. I will try to provide some feedback after reviewing the edits if this goes for a trial. – DreamRimmer (talk) 09:56, 14 December 2024 (UTC)[reply]
There might be good reasons to keep a candidate's data even if they get less than 0.9% of the vote. I'd say that if the candidate's name is wikilinked (not a red link), then the bot should not remove that row.
Good point. I forgot to mention that I treat 'None of the above' as a special case: I don't cut it, and in fact add it in where it is not in the template. I also add 'majority' and 'turnout', and when there is no election in between the two most recent elections for which I have data, I also add a 'gain' or 'hold' template.
How do you check if a page exists and is not a disambiguation? I ask because a lot of politicians in India share names with other people (example: Anirudh Singh), so I would rather only keep people below 0.9% of the vote if they are linked to an article which is actually about them. C1MM (talk) 13:47, 14 December 2024 (UTC)[reply]
If you are using Pywikibot, you can use the page.BasePage class methods, such as exists(), to check whether a wikilinked page exists on the wiki; it returns True if the page exists. To check whether the page is a disambiguation page, you can use the isDisambig() method, which returns True if the page is a disambiguation page, and False otherwise. – DreamRimmer (talk) 17:07, 16 December 2024 (UTC)[reply]
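For example, a minimal check along those lines:
<syntaxhighlight lang="python">
import pywikibot

site = pywikibot.Site("en", "wikipedia")

def is_keepable_link(title):
    """True if the wikilink target exists and is not a disambiguation page."""
    page = pywikibot.Page(site, title)
    return page.exists() and not page.isDisambig()
</syntaxhighlight>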
I've made the suggested changes and the pages produced look good (I haven't saved, obviously). I unfortunately don't know how to run Pywikibot source code on Wikimedia in a way that accesses files on my local machine; is this possible? C1MM (talk) 05:56, 23 December 2024 (UTC)[reply]
Are you saying that you have stored CSV files on your local machine and want to extract the result data from them? Let me know if you need any help with the source code. – DreamRimmer (talk) 11:04, 23 December 2024 (UTC)[reply]
Approved for trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Please do not mark these edits as minor. Primefac (talk) 13:34, 1 January 2025 (UTC)[reply]
[3] Here are the contributions asked for. I think there are a couple of issues: I haven't technically added a source for these contributions, and for a certain party (Peace Party) I added disambiguation links by mistake. I also accidentally made the replacement headings 3rd level instead of 2nd level, which I have now fixed. C1MM (talk) 03:47, 2 January 2025 (UTC)[reply]
Please also go back and manually fix these 50 edits for the problems that you've noticed. Additionally, if you could use the {{formatnum}} template for all the vote figures, it would be great. The other parts of the edits look good. -MPGuy2824 (talk) 05:05, 2 January 2025 (UTC)[reply]
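For illustration, a one-line helper (hypothetical) that wraps raw vote counts in that template:
<syntaxhighlight lang="python">
def format_votes(votes: int) -> str:
    # {{formatnum:}} makes MediaWiki render locale-appropriate digit
    # grouping, e.g. {{formatnum:123456}} displays as 123,456.
    return f"{{{{formatnum:{votes}}}}}"
</syntaxhighlight>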
Approved for extended trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Please fix the code to avoid repeating the same issues. Also, please link this BRFA in the edit summary and do not mark these edits as minor. – DreamRimmer (talk) 12:23, 29 January 2025 (UTC)[reply]
Thanks, unfortunately I am busy in real life and don't have the time to work on this much at the moment. I am trying to add functionality to automatically add the correct references for templates I add which currently don't have references. C1MM (talk) 04:45, 3 March 2025 (UTC)[reply]
Trial complete. Special:Contributions/C1MM-bot last 50 edits. I need to fix references: sometimes the reference name is duplicated, and sometimes it has an accidental "ref name = ref name =", so that needs to be fixed. Unfortunately, open-sourcing is a little difficult, although definitely not impossible. I have several preliminary files I need to run to get the election data into a wikitemplate format: one to save candidate information from the PDF in a CSV, another to process the CSV data, another to combine the PDF with the CSV data so that vote-percentage changes are correctly added, and finally the one which edits the current Wikipedia page by filling in missing information and reformatting. I would appreciate any help with getting this open-sourced, although I am still busy in real life. C1MM (talk) 05:17, 24 March 2025 (UTC)[reply]
@C1MM I unfortunately have a wrist injury and won't be able to do too much, but if you batch upload the files to a cloud service, and email me the link, I can try and link it all together cohesively. What environment do you typically run in? JarJarInks٩(◕_◕)۶Tones essay12:00, 24 March 2025 (UTC)[reply]
I wanted to check if you had fixed the code for the issues from the first trial. However, you made extended trial edits to the same pages as the initial trial. Since those pages had already been manually fixed, there are no changes to review. I apologize if my extended trial approval was unclear; I intended for you to edit different pages so that we could verify whether the code has been fixed for the original issues. Also, we requested that these edits not be marked as minor, yet the code was not updated to prevent this. You also mentioned encountering problems, such as occasional duplication of reference names and instances of accidental 'ref name ='. Have you addressed these problems? I am giving you another extended trial. For this trial, please edit pages different from those edited during the first two trials. Ensure these edits are not marked as minor. Also, please resolve the reference name duplication and any other code issues before proceeding with these edits. Approved for extended trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. – DreamRimmer (talk) 17:18, 21 April 2025 (UTC)[reply]
The --> was present before I edited the page; it was part of an incomplete comment.
I only add the formatnum template for templates I edit substantially, I don't want to change too much of templates which I don't have a source for.
As for the last point, I noticed that too, and I think the script only gets it right when the PDF numbers agree with the correct numbers. I have changed the script to calculate the percentages manually when processing the results PDFs. C1MM (talk) 03:13, 12 May 2025 (UTC)[reply]
Despite being told multiple times, you have still marked all edits as minor. This gives the impression that you are not reading BAG comments before making changes. – DreamRimmer◆03:34, 12 May 2025 (UTC)[reply]
Ah, I realize the issue now. Very sorry about that: I didn't realize they had changed how minor edits were displayed recently. I've fixed the issue in my script so that the edits are no longer marked as minor. C1MM (talk) 13:00, 15 May 2025 (UTC)[reply]
Approved. Thanks for your patience. I hope you have fixed the code for the issues mentioned above. The edits look good. Please keep it under some supervision rather than making it fully automatic, as the trials suggest it may occasionally make mistakes where template or section formatting is slightly different. It would be good to keep an eye on the edits. – DreamRimmer■06:33, 8 July 2025 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at Wikipedia:Bots/Noticeboard.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at Wikipedia:Bots/Noticeboard. The result of the discussion was Request Expired.
A user has requested the attention of the operator. Once the operator has seen this message and replied, please deactivate this tag. (user notified) Status of trial? – DreamRimmer (talk) 17:06, 14 February 2025 (UTC)[reply]
It would be nice to have a generalized bot that can do this for all projects (just a comment, not against this specific bot). --Gonnym (talk) 21:15, 26 March 2025 (UTC)[reply]
Sorry, I forgot about this. The code has already been written. I'll see if I have time to deploy it this weekend. CFA02:24, 27 March 2025 (UTC)[reply]
A user has requested the attention of the operator. Once the operator has seen this message and replied, please deactivate this tag. (user notified) Anything happening here? * Pppery *it has begun...16:03, 20 May 2025 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at Wikipedia:Bots/Noticeboard.
Function details: Near-identical functionality to the previous bot, just rewritten in a different (and better) language. All tasks modify templates on file description pages, so I'm merging this into one task.
Remove {{Now Commons}} from file description pages which also transclude {{Keep local}} (a simplified sketch of this task follows)
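A simplified sketch of this particular task; the regexes are assumptions, not the operator's actual code:
<syntaxhighlight lang="python">
import re

NOW_COMMONS = re.compile(r"\{\{\s*[Nn]ow [Cc]ommons\s*(\|[^}]*)?\}\}\n?")
KEEP_LOCAL = re.compile(r"\{\{\s*[Kk]eep [Ll]ocal\b")

def strip_now_commons(wikitext):
    # Act only when both templates appear on the same file description page.
    if KEEP_LOCAL.search(wikitext) and NOW_COMMONS.search(wikitext):
        return NOW_COMMONS.sub("", wikitext, count=1)
    return wikitext
</syntaxhighlight>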
Discussion
Thanks for stepping up to help! For easier review and tracking, could you please list all these tasks and their descriptions in the "Function details" section? You can use a wikitable for this. – DreamRimmer (talk) 13:51, 17 December 2024 (UTC)[reply]
Approved for trial (120 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Please perform 20 edits for each task. Primefac (talk) 12:35, 23 December 2024 (UTC)[reply]
Sorry about that - I've been inactive for the past three months or so. (probably should've got this thing done before disappearing, my bad) Will get those trial edits done soon! '''[[User:CanonNi]]''' (talk • contribs)10:14, 22 May 2025 (UTC)[reply]
@Primefac very sorry for the late reply - I uh, accidentally removed this page from my watchlist a while back. (Aaron Liu thanks for the reminder) I've performed several trial edits for all tasks, some of which were to pages that are now deleted, which is why they're not showing up on Special:Contribs. The tasks that required the most edits were 1, 7, 8, and 9. [[User:CanonNi]] (💬 • ✍️) 09:51, 23 June 2025 (UTC)[reply]
Please do at least a few edits for each of the other tasks. Feel free to post individual diff links, I'll keep an eye on this page (and of course, I can see deleted revisions). Primefac (talk) 23:24, 23 June 2025 (UTC)[reply]
A user has requested the attention of the operator. Once the operator has seen this message and replied, please deactivate this tag. (user notified) Primefac (talk) 00:16, 21 June 2025 (UTC) Re-enabling, asked a question and need a reply. Primefac (talk) 12:23, 12 July 2025 (UTC)[reply]
Already has a bot flag (Yes/No): No on enwiki; yes on other wikis for other tasks
Function details:
Use the eventstream API to listen for new AfDs
Extract page name by parsing the AfD wikitext
Identify previous reviewers of page at AFD
Notify said reviewers on their talk pages with a customised version of the existing AfD notification message (a rough sketch of this pipeline is shown below)
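A rough sketch of that pipeline against the public EventStreams page-create feed; the exact field handling, including whether page_title uses the underscore form, is an assumption to verify against the stream schema:
<syntaxhighlight lang="python">
import json
from sseclient import SSEClient as EventSource  # pip install sseclient

STREAM = "https://stream.wikimedia.org/v2/stream/page-create"

for event in EventSource(STREAM):
    if event.event != "message" or not event.data:
        continue
    change = json.loads(event.data)
    if change.get("database") != "enwiki":
        continue
    title = change.get("page_title", "")
    # Assumed underscore form of AfD subpage titles.
    if title.startswith("Wikipedia:Articles_for_deletion/"):
        pass  # parse the AfD wikitext, find prior reviewers, queue notices
</syntaxhighlight>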
Discussion
I like this concept in general. I tried to make a user script that does this (User:Novem Linguae/Scripts/WatchlistAFD.js#L-89--L-105), but it doesn't work (I probably need to rewrite it to use MutationObserver). Would this bot be automatic for everyone, or opt in? Opt in may be better and easier to move forward in a BRFA. If not opt in, may want to start a poll somewhere to make sure there's some support for "on by default". –Novem Linguae (talk) 07:58, 17 July 2024 (UTC)[reply]
Support - seems like a good idea. I've reviewed several articles that I've tagged for notability or other concerns, only to just happen to notice them by chance a few days later get AfD'ed by someone else. A bot seems like a good idea, and I can't see a downside. BastunĖġáḍβáś₮ŭŃ!16:31, 17 July 2024 (UTC)[reply]
This is the sort of thing that would be really good for some people (e.g., new/infrequent reviewers) and really frustrating for others (e.g., people who have reviewed tens of thousands of articles). If it does end up being opt-out, each message needs to have very clear instructions on how to opt out. It would also be worth thinking about a time limit: most people aren't going to get any value out of hearing about an article they reviewed a decade ago. Maybe a year or two would be a good threshold. Extraordinary Writ (talk) 18:48, 17 July 2024 (UTC)[reply]
The PREVIOUS_NOTIF regex should also account for notifications left via page curation tool ("Deletion discussion about xxx"). The notification also needs to be skipped if the previous reviewer themself is nominating. In addition, I would suggest adding a delay of at least several minutes instead of acting immediately on AfD creation – as it can lead to race conditions where Twinkle/PageTriage and this bot simultaneously deliver notifications to the same user. – SD0001 (talk) 13:41, 19 July 2024 (UTC)[reply]
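For instance, a pattern along these lines (hypothetical; the heading texts follow the Twinkle and PageTriage defaults) could cover both notification styles:
<syntaxhighlight lang="python">
import re

# Matches both Twinkle-style and PageTriage-style AfD notice headings
# already present on a user talk page, so duplicates can be skipped.
PREVIOUS_NOTIF = re.compile(
    r"==\s*(?:Nomination of .+? for deletion"
    r"|Deletion discussion about .+?)\s*=="
)
</syntaxhighlight>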
{{Operator assistance needed}} Thoughts on the above comments/suggestions? Also, do you have the notice ready to go or is that still in the works? If it's ready, please link to it (or copy it here if it's hard-coded elsewhere). Primefac (talk) 12:48, 21 July 2024 (UTC)[reply]
@Primefac I've implemented a few of the suggestions: I've reworked the code to exclude pages containing {{User:SodiumBot/NoNPPDelivery}}, which should serve as an opt-out mechanism :) I've also reworked the code to include SD0001's suggestion of adding a significant delay, making the bot wait at least an hour, and modified the regex to account for the messages sent by PageTriage.
Approved for trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Please make sure this BRFA is linked in the edit summary. Primefac (talk) 23:50, 4 August 2024 (UTC)[reply]
I had left the bot running, but it hasn't picked up a single article, by the looks of the logs. I'm gonna try to do some debugging on what the issue is/was. Sohom (talk) 14:22, 26 December 2024 (UTC)[reply]
A user has requested the attention of the operator. Once the operator has seen this message and replied, please deactivate this tag. (user notified) What is the status of this? * Pppery *it has begun...16:05, 20 May 2025 (UTC)[reply]
Noting that I'm aware -- I'll try some stuff over the weekend and report back -- If it doesn't work out, I'll close this as declined. Sohom (talk) 15:09, 2 July 2025 (UTC)[reply]
Came across this after receiving some (unwanted) messages at User_talk:Joe_Roe#Nomination_of_Vũ_Duy_Hoàng_for_deletion. The NppNotifier task should be opt-in. It would make sense to run it opt-out if notifying NPP reviewers of AfDs manually was previously a common practice, but that isn't the case. I'm pretty much average in terms of reviewing activity and I've reviewed 2100 pages in the last three years – that's potentially a lot of unwanted messages to send to someone without asking! Also, regarding "Links to relevant discussions (where appropriate): Initial discussions on NPP Discord" – this is really not good enough. It should have been discussed on-wiki, in advance with the people who are actually going to receive the automated messages (i.e. NPP reviewers). – Joe (talk) 08:52, 28 July 2025 (UTC)[reply]
@Joe Roe, "It should have been discussed on-wiki, in advance with the people who are actually going to receive the automated messages (i.e. NPP reviewers)" – I will start by noting that the NPP Discord is made up primarily of NPP reviewers. Also, this BRFA was advertised onwiki for the large portion of a month last year at WT:NPR, during which multiple people commented above that having this be opt-out was a good idea (see above). I only implemented this to be opt-out after the onwiki consensus had been formed. Sohom (talk) 09:23, 28 July 2025 (UTC)[reply]
A small minority of NPP reviewers. Consensus is reached through on-wiki discussion or by editing. Discussions elsewhere are not taken into account. This is a policy. And I'm aware of the notification. That's what I meant by discussed in advance (of the BRFA). Doesn't it make sense to find out whether people actually want to receive these messages, before you request permission to run a bot to send them? – Joe (talk) 09:28, 28 July 2025 (UTC)[reply]
I'm confused about what you are implying here. The initial idea of a bot to notify NPP folks of AfD discussions was born out of a discussion on Discord; when the BRFA was filed onwiki, it was advertised in places where you would expect NPP editors to show up, folks edited and discussed it onwiki (here, or remained silent), and they said that opt-out was fine, so that was what was implemented. I don't understand what policy was violated here. The BRFA is explicitly meant to be a venue to discuss the working of the bot; it's not insular to only folks who have technical knowledge, and it is not required to work out every single thing about the bot before filing a BRFA.
Regarding the meat of the complaint, as @Novem Linguae mentions above, there are cases where we do have opt-out notifications about AfDs from bots (another example of opt-out notifications would be the modus operandi of bots that WP:G13 drafts). There is an implicit consensus that these kinds of bots are fine and do not violate policy. Also, "and I've reviewed 2100 pages in the last three years – that's potentially a lot of unwanted messages to send to someone without asking" assumes that a large portion of these articles would get AfDed, when in actuality that number is probably much, much lower. Sohom (talk) 09:45, 28 July 2025 (UTC)[reply]
You are implying the existence of a consensus to run this bot opt-out that does not exist. I can only assume that this is because either you think it was formed on Discord, in which case again see WP:CON; or you have misread the discussion above, where the very first commenter suggested opt-in, the second preferred opt-out, and no subsequent participant expressed a preference. I'm not saying this is a bad idea for a task, but you do not currently have consensus to run it this way, and I'm suggesting that you perhaps could have done a better job in seeking that before writing and running a bot (even as a trial). The other AfD notification bots automate notifications that are or were previously commonly performed manually; as I mentioned above, this is not the case here. Thus I do not think you should send people automated messages that they did not ask for and would not currently expect to receive.
"assumes that a large portion of these articles would get AFDed" – I have no idea what the proportion would be. But 1% is still 21 messages, and any value is greater than the number I would like to receive if asked, which is zero. – Joe (talk) 10:02, 28 July 2025 (UTC)[reply]
@Joe Roe, I have disabled the bot for the time being. I still stand by the fact that I have, to the best of my ability, tried to have folks be involved, advertised discussions, and acted based on what I understood the on-wiki consensus on the matter to be. I also want to be extremely clear: I resent and reject your implication that I somehow willingly violated WP:BOTREQUIRE by running a bot "without consensus". That is a serious accusation. The burden of proof (imo) is still on your end that I have somehow ignored some form of established consensus and policy. If you have problems with the opt-out nature of the bot and want to restart the discussion, feel free to do so on WT:NPR and ping me to it. Sohom (talk) 12:09, 28 July 2025 (UTC)[reply]
I see plenty of support for this bot in this BRFA and at WT:NPPR. Joe Roe's is the only objection so far.
As evidenced by the WT:NPPR diff provided by Sohom above, and the existence of this BRFA that has had 10 unique editors post in it, this task was properly socialized onwiki. Focusing on the Discord part is distracting and a red herring.
Sometimes the best or even only way to get more comments on a software proposal is to deploy it more widely.
I think it'd be reasonable to resume and finish the trial. If there are other folks that feel like Joe Roe who want to comment against the bot task, I think the best way to find this out is for the bot to do its trial.
I am not unsympathetic to Joe's concerns. My initial instinct in the very first comment in this BRFA was to make the bot opt in. But that is not the way the onwiki BRFA discussion went.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at Wikipedia:Bots/Noticeboard. The result of the discussion was Approved.
Function details: Go through Category:Unassessed articles (only deals with articles already tagged as belonging to a project). If an unassessed article is rated as a stub by ORES Liftwing, tag the article as a stub. Example
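A rough sketch of such a query against the Lift Wing articlequality endpoint; the revision ID is an example, and the response path follows the public documentation (worth double-checking):
<syntaxhighlight lang="python">
import requests

LIFTWING = (
    "https://api.wikimedia.org/service/lw/inference/v1/models/"
    "enwiki-articlequality:predict"
)

rev_id = 123456789  # example revision ID
resp = requests.post(LIFTWING, json={"rev_id": rev_id})
resp.raise_for_status()
score = resp.json()["enwiki"]["scores"][str(rev_id)]["articlequality"]["score"]
if score["prediction"] == "Stub":
    pass  # tag the article's WikiProject banners as stub-class
</syntaxhighlight>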
Discussion
Note: This bot appears to have edited since this BRFA was filed. Bots may not edit outside their own or their operator's userspace unless approved or approved for trial. AnomieBOT⚡00:10, 28 March 2023 (UTC)[reply]
The Bot run only affects unassessed articles rated as stubs by mw:ORES. The ORES ratings for stubs are very reliable (some false negatives – which wouldn't be touched under this proposal – but no false positives). Hawkeye7(discuss)00:03, 31 March 2023 (UTC)[reply]
Approved for trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Sounds reasonable as ORES is usually good for assessing stub articles as such. – SD0001 (talk) 11:41, 1 April 2023 (UTC)[reply]
Comment: Some behavior I found interesting is that the bot is reverting start-class classifications already assigned by a human editor, and overriding those with stub-class. [8] and [9]EggRoll97(talk) 03:28, 18 May 2023 (UTC)[reply]
The question is: what should be happening? The articles were flagged because some of the projects were not assessed. Should the Bot (1) assess the unassessed ones as stubs and ignore the assessed ones, or (2) align the unassessed ones with the ones that are assessed? Hawkeye7(discuss)04:21, 18 May 2023 (UTC)[reply]
Per recent consensus, assessments should be for an entire article, not per WikiProject. The bot should amend the template to use the article-wide code. If several projects have different assessments for an article, it should leave it alone. Frostly (talk) 05:03, 18 May 2023 (UTC)[reply]
@Hawkeye7: Courtesy ping, I've manually fixed up the edits where the bot replaced an assessment by a human editor. 6 edits total to be fixed out of 52 total edits. EggRoll97(talk) 07:16, 18 May 2023 (UTC)[reply]
{{BAG assistance needed}} This has been waiting for over 2 months since the end of the trial, and over 4 months since the creation of the request. Given the concerns expressed that the bot operator has since fixed, an extended trial may be a good idea here. EggRoll97(talk) 05:19, 8 August 2023 (UTC)[reply]
Approved for extended trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. – SD0001 (talk) 19:10, 15 October 2023 (UTC)[reply]
Yes. I wrote the bot using my C# API, and due to a necessary upgrade here, my dotnet environment got ahead of the one on the grid. I could neither build locally and run on the grid, nor build on the grid. (I could have run the trial locally but would not have been able to deploy to production.) There is currently a push to move bots onto Kubernetes containers, but there was no dotnet buildpack available. The heroes on Toolforge have now provided one for dotnet, and I will be testing it when I return from vacation next week. If all goes well I will finally be able to deploy the bot and run the trial at last. See phab:T311466 for details. Hawkeye7(discuss)22:54, 31 December 2023 (UTC)[reply]
The trial run was successful. The problems with the new buildpack environment were resolved. I can run some more trials but would prefer permission to put the job into production. Hawkeye7(discuss)20:12, 23 December 2024 (UTC)[reply]
The edits look good. Please update the function details to include LiftWing. Before approving, I would like to ping @SD0001 to get their opinion on the reliability of LiftWing, as it is a new service being used here following the deprecation announcement of the ORES infrastructure. I am sure LiftWing is more reliable than ORES in most ways, but it is good to double-check. – DreamRimmer■17:34, 3 July 2025 (UTC)[reply]
I haven't migrated to LiftWing for my tasks since it's much slower with no caching at their side and doesn't allow batched requests. That being said, I think the models should be just as accurate. – SD0001 (talk) 03:13, 4 July 2025 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at Wikipedia:Bots/Noticeboard.
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at Wikipedia:Bots/Noticeboard. The result of the discussion was Request Expired.
Links to relevant discussions (where appropriate): I do not believe that discussions are required for this action, as this is the entire point of wikilinks
Edit period(s): Continuous
Estimated number of pages affected: Large burst at approval/during trial, then 25/day at the highest afterwards.
Many articles contain external Wikipedia links to templates, policy pages, and discussions, usually added as comments. On average, about 20 of these kinds of links are added per day, with 95% of them as commented-out text. Replacing these links would only lead to cosmetic changes, which should be avoided per WP:COSMETICBOT, as commented-out text is not visible to readers. For the remaining 5%, using a bot isn't a good idea, as these minor edits can be easily handled by a human editor. Currently, over 62,000 pages have these types of commented-out links, and none need replacement based on your criteria. This suggests that these types of external links are fixed regularly. – DreamRimmer (talk) 14:32, 14 November 2024 (UTC)[reply]
I do not want to pile on, but for "en.wikipedia" this task won't be very useful, as DreamRimmer explained above. However, in case the link is to some other Wikipedia, e.g. "de.wikipedia" (German) or "es.wikipedia" (Spanish), this task would be useful, but again, the occurrences are extremely low, and they are generally handled/repaired by editors as soon as they are inserted. Also, the bot operator is new (not extended confirmed), so this might get denied under WP:BOTNOTNOW. But this is actually a sound request; my first BRFA was outright silly. —usernamekiran (talk)15:45, 14 November 2024 (UTC)[reply]
DreamRimmer, I think CheckWiki #90 would probably be more useful for finding the number of pages affected by this; at the moment it's sitting at ~4500 pages so this probably does require some sort of intervention. Primefac (talk) 20:19, 17 November 2024 (UTC)[reply]
@Ow0cast: Given there are around 4500 pages, this is indeed a useful task. Would you be able to program it to handle the subdomains? Similar to the example I provided above? —usernamekiran (talk)20:25, 1 December 2024 (UTC)[reply]
To be completely honest, I stopped working on it after getting stumped on people using Wikipedia as a ref, then completely forgot about it and Wikipedia (a lot of family-related things happened, plus exams and holidays) until I saw the notification. I probably won't be working on this again, but if I do, I'll just have it modify files on my PC and copy them to Wikipedia manually, unless you (or someone else) want my terrible code to implement into another bot. /etc/owuh $ (💬 | she/her)03:35, 26 January 2025 (UTC)[reply]
{{BotWithdrawn}} I am marking this as withdrawn for the time being. If you plan to continue, you can either file a new request or reopen this one, whichever works best for you. Just make sure to request a new trial if you decide to reopen it. – DreamRimmer (talk) 06:01, 28 January 2025 (UTC) Reopened at the user's request. – DreamRimmer (talk) 15:42, 19 February 2025 (UTC)[reply]
A user has requested the attention of the operator. Once the operator has seen this message and replied, please deactivate this tag. (user notified) What is the status of this? * Pppery *it has begun...16:06, 20 May 2025 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at Wikipedia:Bots/Noticeboard.
Bots that have completed the trial period
The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at Wikipedia:Bots/Noticeboard. The result of the discussion was Approved.
The bot uses multiple regular expressions to try to place the template in the correct location per MOS:SECTIONORDER (as a maintenance tag), falling back to prepending it at the beginning of the article. If there is a {{multiple issues}} template present, the bot will group the template into the multiple issues at the beginning; it does not place a {{multiple issues}} on the page if one is not already present.
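A simplified sketch of that fallback logic; the bot's real MOS:SECTIONORDER regexes are more involved:
<syntaxhighlight lang="python">
import re

# Matches the opening of an existing {{multiple issues}} wrapper.
MULTIPLE_ISSUES = re.compile(r"\{\{\s*[Mm]ultiple issues\s*\|")

def place_tag(wikitext, tag):
    match = MULTIPLE_ISSUES.search(wikitext)
    if match:
        # Group the new banner just inside the existing {{multiple issues}}.
        pos = match.end()
        return wikitext[:pos] + "\n" + tag + wikitext[pos:]
    # Fallback: prepend at the top of the article (the real bot first tries
    # to slot the banner after hatnotes etc. per MOS:SECTIONORDER).
    return tag + "\n" + wikitext
</syntaxhighlight>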
Discussion
Thanks Rusty Cat! Tagging the pages on this list created by User:Alex 21 will help speed the conversion of the tables, which then benefits the pages they're on with better MoS conformance and maintenance. Pinging also User:Favre1fan93. --Gonnym (talk) 22:57, 25 March 2025 (UTC)[reply]
Not sure the bot needs to do that. It's not set up as a monitor to check on progress, it's just an on request bot that adds the banner to lists generated from the report. Gonnym (talk) 12:26, 3 April 2025 (UTC)[reply]
Approved for extended trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Let's make sure those rewrites worked. Primefac (talk) 23:45, 25 May 2025 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at Wikipedia:Bots/Noticeboard.
Approved requests
Bots that have been approved for operations after a successful BRFA will be listed here for informational purposes. No other approval action is required for these bots. Recently approved requests can be found here, while old requests can be found in the archives.
Bots that have been denied for operations will be listed here for informational purposes for at least 7 days before being archived. No other action is required for these bots. Older requests can be found in the Archive.
These requests have either expired, as information required by the operator was not provided, or been withdrawn. These tasks are not authorized to run, but such lack of authorization does not necessarily follow from a finding as to merit. A bot that, having been approved for testing, was not tested by an editor, or one for which the results of testing were not posted, for example, would appear here. Bot requests should not be placed here if there is an active discussion ongoing above. Operators whose requests have expired may reactivate their requests at any time. The following list shows recent requests (if any) that have expired, listed here for informational purposes for at least 7 days before being archived. Older requests can be found in the respective archives: Expired, Withdrawn.