If you want to report a JavaScript error, please follow this guideline. Questions about MediaWiki in general should be posted at the MediaWiki support desk. Discussions are automatically archived after remaining inactive for 5 days.
This tends to solve most issues, including improper display of images, user-preferences not loading, and old versions of pages being shown.
No, we will not use JavaScript to set focus on the search box.
This would interfere with usability, accessibility, keyboard navigation and standard forms. See task 3864. There is an accesskey property on it (default to accesskey="f" in English). Logged-in users can enable the "Focus the cursor in the search bar on loading the Main Page" gadget in their preferences.
No, we will not add a spell-checker, or spell-checking bot.
You can use a web browser such as Firefox, which has a spell checker.
If you changed to another skin and cannot change back, use this link.
Alternatively, you can press Tab until the "Save" button is highlighted, and press Enter. Using Mozilla Firefox also seems to solve the problem.
If an image thumbnail is not showing, try purging its image description page.
If the image is from Wikimedia Commons, you might have to purge there too. If it doesn't work, try again before doing anything else. Some ad blockers, proxies, or firewalls block URLs containing /ad/ or ending in common executable suffixes. This can cause some images or articles to not appear.
Did the CSS just change in desktop view as seen on mobile? Page body text on my phone is a lot denser today. Chrome on a Galaxy phone. Largoplazo (talk) 14:59, 15 May 2025 (UTC)[reply]
Not on mobile, but on the usual Vector2022 view on a desktop I'm seeing text that appears a lot denser today as well. It's hard to tell because I don't have before and after screenshots, but my suspicion is that the leading (space between consecutive lines of text) has been reduced. I went into my custom css and increased the line-height (mine has a line ".vector-body {font-size: 115%; line-height: 150%;}" but you may not want such extreme values) and it looked a lot better again. —David Eppstein (talk) 18:28, 15 May 2025 (UTC)[reply]
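For anyone wanting to try the same workaround, this is the kind of rule involved — a minimal sketch for a personal stylesheet such as Special:MyPage/vector-2022.css. The percentage values are the ones David Eppstein quotes above and may be more extreme than you want:

```css
/* Widen the leading in the Vector 2022 content area.
   Goes in your personal CSS (e.g. Special:MyPage/vector-2022.css).
   Values are illustrative; tune them down to taste. */
.vector-body {
    font-size: 115%;   /* enlarge body text */
    line-height: 150%; /* restore more generous space between lines */
}
```

Because personal CSS loads after the skin's own styles, rules like this override the site defaults without affecting any other reader.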
Not sure if bigger... MonoBook here, and while I don't see that, I do see that the pink box atop the editing window - the one that, for instance, carries the this-user-has-been-blocked-by-whom-and-why notice - looks to have larger text today. Or maybe it's always been this way and I'm losing my mind; that can't be ruled out? - The Bushranger One ping only 19:27, 15 May 2025 (UTC)[reply]
Aha. That would explain the larger text in the block notices. I also noticed the text in the box here looks odd (@Parsecboy: since I'm linking your page for an example!). And I'm going to guess that this got rolled out on Wikimedia Commons on Wednesday, which might explain the Metadata text looking bigger that I noticed there yesterday... - The Bushranger One ping only 22:48, 15 May 2025 (UTC)[reply]
And since it seems to be general for pink boxes, it also affects the pink box shown when you visit the redlink of a deleted page. --Redrose64 🌹 (talk) 23:11, 16 May 2025 (UTC)[reply]
Are my eyes playing tricks on me, or ... is the font size in deletion summaries, such as the one posted at Yoshi Falls, bigger than it previously was, and/or is there less space between lines? And if so, was this intentional? (In case it is relevant, I'm using Vector legacy 2010.) Steel1943 (talk) 22:51, 16 May 2025 (UTC)[reply]
Neither of those 'fixes' should be pursued. The 'fix' is to fix Module:Footnotes so that it recognizes {{#invoke:Cite|xxxx|....}} and can then extract the necessary info from the invoke.
Editor Hike395 rewrote most of Module:Footnotes/anchor_id_list to the point where I no longer recognize the code so I am not the best person to say if what you have added was a good addition. I do notice that the 'template' names created from #invokes are not listed in template_list. Edit this version of my sandbox (don't change anything) and click Show preview. Then, in the Parser profiling data dropdown, click show under Lua logs. You should see something like this:
template_list=table#1{["Cite SSRN/new"]=1,["Template:Harvard citation no brackets/sandbox"]=1,}
In my sandbox page there are two {{#invoke:Cite|news|...}}. Change one of them to {{Cite news|...}}. Show preview; Show Lua logs. You should see something like this:
template_list=table#1{["Cite SSRN/new"]=1,["Cite news"]=1,["Template:Harvard citation no brackets/sandbox"]=1,}
The answer to Trappist's question is because the template_list variable is populated by template_list_add() here, and that function accepts the raw template string Module:Footnotes/anchor_id_list/sandbox#L-762 at line 762, not the hacked version produced by template_get_name Module:Footnotes/anchor_id_list/sandbox#L-762 at line 761. Is that right? It seems to me that template_list_add() at line 762 can be replaced by list_add(template_name, template_list). But I'm not 100% sure.
By the way, 95% of the code in Module:Footnotes/anchor_id_list is still Trappist's code. You can see my diff here. I changed the way the global variables were updated (for caching and to handle errors correctly), rewrote a little code for efficiency, handled "fascicles", and added citeref_patterns_make(), but that was it.
To answer Ahect's question -- I'm a bit nervous about turning all invokes into fake-o templates, because I don't know if that will generate any false positives or negatives. You may want to only turn {{#invoke:cite|XYZ}} into {{cite XYZ}}, because that seems safer and more predictable. — hike395 (talk) 02:54, 16 May 2025 (UTC)[reply]
@Hike395 My impression from the code is that template_params_get() only looked for named parameters, so it should automatically ignore Lua function names, and similarly date_get() only looks for parameters starting with date= (or an alias). sfnreg_get(), anchor_id_make_harvc(), and anchor_id_make_anchor() don't look at citation templates, so it doesn't matter there. The only other function that might potentially be impacted is template_list_add(), which despite a comment saying that it handles case differently than template_params_get() is functionally identical as far as I can tell, so I just refactored it to use template_params_get() here. @Trappist the monk can correct me if my assumptions are wrong. --Ahecht (TALK PAGE) 20:34, 18 May 2025 (UTC)[reply]
This version (permalink) of Gaza genocide directly precedes the edit that converted Module:Cite web and others to Module:Cite. Note that the §References section in that older version also has lots of harv errors. From that, I conclude that the change to Module:Cite did not cause any new problems. It appears that these errors began appearing 2025-05-03 at this edit (permalink). Since that time, no one has noticed, or if they noticed, did not say anything. Of course, that harv error message is hidden, which might explain the silence. No doubt there are other articles where these sorts of error messages have not been noticed.
For those interested, Module:Cite web, Module:Cite news, and a few others are soon to be deleted so look now before their deletion prevents you from confirming what I have written.
The edit on the 3rd of May introduced the use of #invoke, so it seems to be #invoke that is causing the problem. As for not raising it earlier, I am sorry but I have a life. DuncanHill (talk) 21:45, 15 May 2025 (UTC)[reply]
This does appear impossible on certain articles; apparently they need to contain every detail in one central location and to hell with readability and usability. -- LCU ActivelyDisinterested «@» °∆t° 10:35, 23 May 2025 (UTC)[reply]
Ever since yesterday, when I use the desktop version on my mobile phone, the first item in my watchlist is a long column of text with 1-2 characters per line, partially obscured by the "List of abbreviations" box. If you don't know what I mean, it's similar to how it looks on mobile with desktop view when discussions get too indented. It only happens when reading the phone in portrait mode, not in landscape mode; I assume because there is more width available in landscape. The rest of the entries look normal, except maybe (not sure) the margin is wider than it used to be?
Is this something I just need to get used to, or is it something about to be fixed? Or am I the only person on the planet it is happening to? Is this somehow related to the "Font change in desktop view on mobile" thread above? Have I explained it so poorly that no one knows what I mean?
This is one of those things that's only a minor annoyance, except I see it every time I look at the watchlist, so the annoyance kind of accumulates after a while.... Floquenbeam (talk) 19:34, 16 May 2025 (UTC)[reply]
Seems unrelated to the other font changes. T331086 changed how line breaking of text works on the watchlist, and this seems to be an unintended consequence of that. I proposed a patch to improve it: [1] Matma Rex talk 22:50, 16 May 2025 (UTC)[reply]
... and I guess the corollary is that maybe it would make sense to allow special pages to tell the skin (or vice versa, I don't know which way) whether they support mobile friendliness, as we work toward e.g. Vector being available as a 'mobile' skin, rather than something where phones have to guess at what's important and what's not. Izno (talk) 05:24, 18 May 2025 (UTC)[reply]
Before this disappears into the archive: is there any way I can do some kind of css/js thing to make it stop? I have to scroll 3 screens each time I refresh my watchlist to get to the actual watchlist. Floquenbeam (talk) 16:22, 22 May 2025 (UTC)[reply]
Hi, can anyone tell me the link/discussion thread for patrolling new user pages? As you can see below, there are two editor user pages with the "mark this page as patrolled" tag. [2] and [3]. Thank you. Cassiopeia talk 23:46, 16 May 2025 (UTC)[reply]
The second page was deleted, so I can't comment on that. The first page was patrolled by you.
@Andrybak Thank you, and I know. The second page I nominated for a user name violation, and that is the reason it was deleted. Stay safe and thank you. Cassiopeia talk
Pages in any namespace can be patrolled, and the "Mark this page as patrolled" option is not new. You don't see it on mainspace pages because the Page curation toolbar hides it. Patrolling a page with this option is the same as marking it reviewed through the Page curation toolbar. The toolbar is easier to use and offers more features, which is why NPRs prefer that. You can mark userspace pages as patrolled, but we usually review only mainspace pages because patrolling other namespaces isn't necessary and is a waste of time. – DreamRimmer■16:28, 17 May 2025 (UTC)[reply]
@DreamRimmer I have been patrolling for NPR for many years and this is the first time I have seen this on a new user page. To me it doesn't make sense, as we can mark a page as patrolled if all content added by the new user adheres to the Wikipedia user page guidelines, but we can't guarantee the new user won't add something outside the guidelines the next day or in the future. Thank you and stay safe. Cassiopeia talk 23:20, 17 May 2025 (UTC)[reply]
Patrol markings are about new pages, not about every revision; it is the same as with new articles - if they get vandalized later they don't become unpatrolled. — xaosflux Talk 10:53, 20 May 2025 (UTC)[reply]
Now the section opens with the words "Show / Hide". This approach was justified at the dawn of the Internet 30 years ago, when HTML was at version 1.0.
Now there are the <details> and <summary> tags.
There are probably more elegant solutions with an arrow (left, down).
The wiki has enough arrows as it is; even this topic, which I opened, has an arrow (on mobile). But at the top of the page, the main sections are still opened using «Show/Hide». Seregadu (talk) 19:55, 17 May 2025 (UTC)[reply]
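For illustration, the native markup under discussion looks like this — a minimal sketch of what browsers support without any JavaScript:

```html
<!-- Native collapsible section: the browser renders the
     disclosure arrow and handles toggling itself. -->
<details>
  <summary>Show / Hide</summary>
  <p>Content that stays hidden until the summary is clicked.</p>
</details>

<!-- The "open" attribute makes a section expanded by default. -->
<details open>
  <summary>Expanded on load</summary>
  <p>Visible immediately; clicking the summary collapses it.</p>
</details>
```

The disclosure arrow is styleable in CSS, so the left/down arrow behaviour mentioned above comes for free.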
It would be a massive amount of work to transition literally every use of mw-collapsible, and that's even ignoring that I have had to yank teeth and still haven't 'won' the yanking to get these whitelisted. See phab:T25932 for the general discussion (and teeth yanking) and phab:T31118 for the specific. Izno (talk) 05:18, 18 May 2025 (UTC)[reply]
But then there is also no effective benefit to it. Additionally, mw-collapsible does more than what details/summary can do, so no matter what, you have to do both, in which case the original mw-collapsible is more maintainable. —TheDJ (talk • contribs) 19:23, 18 May 2025 (UTC)[reply]
I can't analyze the page in DevTools in terms of external scripts (it's easier to look at clean code). But I will assume that the old Show/Hide implementation from the 2000s (still used now), with addEventListener("click", (event) => { }), is much more complex than the modern <details> and <summary>.
I was shocked when I discovered that while there are collapsible tables on Wikipedia, they don't work without JavaScript and rely on some third-party library. Therefore I propose to allow the <details> and <summary> tags to be used on Wikipedia - this would allow it to be less JS-dependent. This is not a new feature (available since at least Firefox 49, released in September 2016), so there won't be any massive compatibility issues. MinervaNeue (talk) 15:40, 22 May 2025 (UTC)[reply]
I think you should publicise this on WP:VPT because I doubt people interested in VPPR care much about the code implementation of things as long as those things are working. —CX Zoom [he/him] (let's talk • {C•X}) 15:44, 22 May 2025 (UTC)[reply]
This is something that would need to be changed in the software. A feature request is open on this topic; you are welcome to contribute to the discussion or submit patches. See phab:T31118. — xaosflux Talk 18:16, 22 May 2025 (UTC)[reply]
It is itself concerning that this pretty obscure issue has come up twice in the space of a week. Where did you learn about this problem? Izno (talk) 00:26, 23 May 2025 (UTC)[reply]
We at Wiki Project Med have been working on decreasing bandwidth usage since this was raised as a significant concern during the last discussion. We have succeeded in dropping usage from 36 MB down to 498 KB. See MDWiki:WikiProjectMed:OWID
The scrolling is honestly somehow more confusing than before. Scrolling on the image (where my cursor is most likely positioned after clicking the images) moves, not the scrollbar, but a slider off screen, making me wonder what is going on. The initial concerns about unselectable text have also not been addressed. This still feels a bit half-baked and probably needs more iteration before being considered as a default-on template gadget. Sohom (talk) 04:41, 21 May 2025 (UTC)[reply]
The latest run of Special:WantedCategories once again features two redlinks being autogenerated by modules I can't edit, which I can't figure out what to do with:
Category:Sweden by county category navigation with 6–15 links, autogenerated by the use of {{Sweden by county category navigation}} on various Swedish "by county" categories. While I can find evidence that this template generates numerous other categories tracking grey links, including the already-existing Category:Sweden by county category navigation with 6–15 grey links, I can't find any evidence of any other categories existing for any other number of just plain "links" without the "grey" modifier — so I can't figure out why this exists for the 6-15 range, but not for any other number, and thus can't create it if it isn't expected and doesn't have any other siblings. So could somebody with module-editing privileges figure out how to make it go away?
Category:Pages using old style mw-ui-constructive, autogenerated by the use of {{Clickable button/sandbox}} on various userspace pages. I could probably just wrap the template invocations in {{suppress categories}}, but I note that these invocations aren't new ones — they've all been on the pages for a long time without ever generating this category until now, meaning the category results from a new module edit within the past couple of days, and thus possibly could recur in the future if it isn't addressed some other way. So if this is a tracking category we would want, then could somebody who knows what they're doing create it — and if it isn't desired, then again, I need somebody with module-editing privileges to make it go away.
Never mind on the Sweden one, it turns out I was able to clear that out just by null-editing it, because it was just one of those "category not actually on the pages despite nominally appearing to have pages in it, because it had already been corrected but failed to purge" things. Bearcat (talk) 14:36, 19 May 2025 (UTC)[reply]
I remember seeing a template that let me combine two images into one placement. Anyone remember what that template is named?
Also, I have a bunch of templates on my user page that put images down the left edge of the page. If I want to put them horizontally is there an easy way to do it?
Latest tech news from the Wikimedia technical community. Please tell other users about these changes. Not all changes will affect you. Translations are available.
Weekly highlight
The Editing Team and the Machine Learning Team are working on a new check for newcomers: Peacock check. Using a prediction model, this check will encourage editors to improve the tone of their edits, using artificial intelligence. We invite volunteers to review the first version of the Peacock language model for the following languages: Arabic, Spanish, Portuguese, English, and Japanese. Users from these wikis interested in reviewing this model are invited to sign up at MediaWiki.org. The deadline to sign up is on May 23, which will be the start date of the test.
Updates for editors
From May 20, 2025, oversighters and checkusers will need to have their accounts secured with two-factor authentication (2FA) to be able to use their advanced rights. All users who belong to these two groups and do not have 2FA enabled have been informed. In the future, this requirement may be extended to other users with advanced rights. Learn more.
Multiblocks will begin mass deployment by the end of the month: all non-Wikipedia projects plus Catalan Wikipedia will adopt Multiblocks in the week of May 26, while all other Wikipedias will adopt it in the week of June 2. Please contact the team if you have concerns. Administrators can test the new user interface now on your own wiki by browsing to Special:Block?usecodex=1, and can test the full multiblocks functionality on testwiki. Multiblocks is the feature that makes it possible for administrators to impose different types of blocks on the same user at the same time. See the help page for more information. [4]
Later this week, the Special:SpecialPages listing of almost all special pages will be updated with a new design. This page has been redesigned to improve the user experience in a few ways, including: The ability to search for names and aliases of the special pages, sorting, more visible marking of restricted special pages, and a more mobile-friendly look. The new version can be previewed at Beta Cluster now, and feedback shared in the task. [5]
The Chart extension is being enabled on more wikis. For a detailed list of when the extension will be enabled on your wiki, please read the deployment timeline.
Wikifunctions will be deployed on May 27 on five Wiktionaries: Hausa, Igbo, Bengali, Malayalam, and Dhivehi/Maldivian. This is the second batch of deployment planned for the project. After deployment, the projects will be able to call functions from Wikifunctions and integrate them in their pages. A function is something that takes one or more inputs and transforms them into a desired output, such as adding up two numbers, converting miles into metres, calculating how much time has passed since an event, or declining a word into a case. Wikifunctions will allow users to do that through a simple call of a stable and global function, rather than via a local template.
Later this week, the Wikimedia Foundation will publish a hub for experiments. This is to showcase and get user feedback on product experiments. The experiments help the Wikimedia movement understand new users, how they interact with the internet and how it could affect the Wikimedia movement. Some examples are generated video, the Wikipedia Roblox speedrun game and the Discord bot.
View all 29 community-submitted tasks that were resolved last week. For example, there was a bug with creating an account using the API, which has now been fixed. [6]
Updates for technical contributors
Gadgets and user scripts that interact with Special:Block may need to be updated to work with the new manage blocks interface. Please review the developer guide for more information. If you need help or are unable to adapt your script to the new interface, please let the team know on the talk page. [7]
The mw.title object allows you to get information about a specific wiki page in the Lua programming language. Starting this week, a new property will be added to the object, named isDisambiguationPage. This property allows you to check if a page is a disambiguation page, without the need to write a custom function. [8]
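A sketch of how a module might use the new property, assuming it behaves as announced (the function name is illustrative):

```lua
-- Check whether a named page is a disambiguation page, using the
-- new mw.title property instead of a hand-rolled check for
-- disambiguation templates or category membership.
local function isDab(pageName)
    local title = mw.title.new(pageName)
    if not title then
        return false -- invalid or malformed title
    end
    return title.isDisambiguationPage
end
```

Note that, like other mw.title lookups, constructing a title for another page counts toward the expensive-function limit in the usual way.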
User script developers can use a new reverse proxy tool to load javascript and css from gitlab.wikimedia.org with mw.loader.load. The tool's author hopes this will enable collaborative development workflows for user scripts including linting, unit tests, code generation, and code review on gitlab.wikimedia.org without a separate copy-and-paste step to publish scripts to a Wikimedia wiki for integration and acceptance testing. See Tool:Gitlab-content on Wikitech for more information.
The 12th edition of Wiki Workshop 2025, a forum that brings together researchers that explore all aspects of Wikimedia projects, will be held virtually on 21-22 May. Researchers can register now.
Hey, can you please ask an admin to fix the internal error with the RTS games list page?
Whenever I try to make a new small edit, this small error message always comes up after clicking through to the preview window in my mobile Wikipedia app on an Android phone.
It says:
"{"status":500,"type":"Internal error"}".
What is it? Is it a database upload problem with the page?
I don't use a VPN on phone.
Later on, someone suggested to me to edit and preview the page in a desktop PC browser. I did that and it worked. I haven't tested it in a mobile browser on the phone (but they should work the same way as the desktop PC browsers).
Recently, I'm noticing this different error message on the same page:
"{"status":413,"type":"Internal error"}".
I read about that online and it points to the payload data being too big for the server to handle. Can you fix these problems?
I'm also posting this topic here in a mobile browser (not the Wikipedia app) on my phone because the app didn't recognise me as logged in on this page. ObiKKa (talk) 23:31, 19 May 2025 (UTC)[reply]
This site shows geolocation data as well as proxy/VPN detection, whois, ISP/ASN, and abuse contacts all in one. It has no usage restrictions either. disclaimer: I'm the developer of the site in question Tally-IPLocate (talk) 09:57, 20 May 2025 (UTC)[reply]
Following the creation of a module grabbing specific data from Wikidata, I am in the process of ensuring its use "as is" across several wikis (to make it easier to maintain), and this requires adapting it to other wikis' local implementations of the {{Wikidata}} template.
For now, I have managed to do this for Portuguese (where the Wikidata module is more or less the same as here) and French (where it is noticeably different) -- see the current translation roadmap. But this is a lot of work and, in the case of the French version, it still results in some functionalities not working, which is disappointing.
@Trappist the monk, who has kindly provided very useful support for this project for some time, opined that it would be more efficient to develop an own implementation of calls to Wikidata, to make this language/wiki-independent. I still tried the other way because it seemed easier, but, now that the limitations appear more clearly, I think this is probably a better solution.
@Julius Schwarz I'm confused as to why your module should ever need to rely on the {{Wikidata}} template. Can't the module just directly call the mw.wikibase functions?
If that's too complicated, you can fork the English Wikipedia Module:Wd as something like Module:European_and_national_party_data/Wd and then replace calls such as return frame:expandTemplate({title='wikidata', args={data_requested, local_reference, local_preferred, local_raw, local_linked, qid, property_id, [qualifier_id]=value_of_qualifier}}) (which is a bit of a hack anyway) with something like return require('Module:European_and_national_party_data/Wd')['_'..data_requested]({local_reference, local_preferred, local_raw, local_linked, qid, property_id, [qualifier_id]=value_of_qualifier}). --Ahecht (TALK PAGE) 14:54, 22 May 2025 (UTC)[reply]
Hi @Ahecht, you seem like exactly the person I was waiting for :) More seriously, I am not sure what is best, but Trappist had indeed mentioned mw.wikibase, and I would really like to see this implemented. I simply cannot do it myself -- I am happy to learn and contribute, but I don't even know where to start and need help for this. Even forking the Wd module is a bit above my skill set. The mw.wikibase does seem like a promising option. Wanna help build this? Julius Schwarz (talk) 15:22, 22 May 2025 (UTC)[reply]
Much appreciated, @Ahecht! As for mw.wikibase, a quick crash-course on how to integrate this in the module with a couple of examples might be sufficient to put me on the right track. Julius Schwarz (talk) 15:35, 22 May 2025 (UTC)[reply]
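As a starting point for such a crash course, here is a minimal sketch of calling mw.wikibase directly. These functions are part of the standard Scribunto Wikibase client, so they behave the same on every wiki; the helper name and the example IDs in the comment are illustrative, not taken from the actual module:

```lua
-- Fetch one statement value for a property straight from Wikidata,
-- bypassing the local {{Wikidata}} template entirely.
local function getPropertyValue(qid, propertyId)
    -- getBestStatements returns preferred-rank statements,
    -- falling back to normal-rank ones if none are preferred.
    local statements = mw.wikibase.getBestStatements(qid, propertyId)
    local first = statements[1]
    if not first or first.mainsnak.snaktype ~= 'value' then
        return nil -- property absent, or set to novalue/somevalue
    end
    local value = first.mainsnak.datavalue.value
    if type(value) == 'table' and value.id then
        -- item-type value: return its label in the wiki's language
        return mw.wikibase.getLabel(value.id)
    end
    return value -- plain string-type value
end

-- e.g. getPropertyValue('Q245065', 'P1813') for a short name
-- (both IDs illustrative)
```

Because nothing here touches a local template, the same code runs unchanged on every wiki that has the Wikibase client enabled, which is exactly the maintainability goal described above.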
Hi everyone! I'm writing on behalf of the Web team. Over the past year, the team has been exploring ways to improve browsing for readers. We want to increase reader retention and create pathways for deepening reader connections with the wikis. We would like readers to use the wikis more frequently and potentially set towards the path of editing.
One of our experiments was to provide suggestions in the empty state of the search bar for logged-out users. The goal was to show suggestions to those who show interest in spending time on Wikipedia (by opening the search bar). We performed two experiments - showing the feature in a browser extension on desktop, and showing some readers the feature via an A/B test on mobile. It turned out that engagement with this feature is high when compared to other suggestion features, and readers who use the feature tend to read more articles overall. For more details, please check out the project page.
The next step is to make this feature available across wikis. We will begin rolling out the feature over the next month and a half. Catalan, Hebrew, and Italian Wikipedias, as well as a number of sister projects, will see the change on desktop between May 21 and June 4, and on mobile between June 4 and June 15. All other Wikipedias will see the change on desktop between June 4 and June 15, and on mobile between June 15 and June 30. EBlackorby-WMF (talk) 18:55, 21 May 2025 (UTC)[reply]
Problem with Full party name with color: it outputs the same party name for two different parties.
I had raised this exact same issue on Module talk:Political party. I was informed that this exact issue had been brought up at Template talk:Party name with color, but no one has fixed it yet. User:CX Zoom suggested that I should try to bring this up here so that someone might be able to fix this. I have described the exact situation below:
style="width: 2px; color:inherit; background-color: #f50222;" data-sort-value="Communist Party of India" | | scope="row" style="text-align: left;" | Communist Party of India
style="width: 2px; color:inherit; background-color: #cc0d0d;" data-sort-value="Communist Party of India (Marxist)" | | scope="row" style="text-align: left;" | Communist Party of India
This question has presumably come up before, but I couldn't find an answer. The closest was Wikipedia:Village_pump_(technical)/Archive_80#No_registration_date. The question is why is the user.user_registration column null for some accounts in the enwiki database? The Basie and Pnslotero accounts are 2 examples. There are many more. The registration dates for the accounts (20150316205729 and 20130416213919) are present in the globaluser.gu_registration column in the centralauth database. Sean.hoyland (talk) 16:57, 22 May 2025 (UTC)[reply]
How much more of an answer do you want? The only way I can think of to make TheDJ's comment there in 2010 more precise is to give the exact dates when user_registration began being recorded at the time of registration (in MediaWiki 1.6) and when the last time the backfill script was run (certainly well before August 2006, probably just once when 1.6 was deployed). Global accounts weren't created until the better part of a decade later. —Cryptic17:37, 22 May 2025 (UTC)[reply]
Possibly no more of an answer, if I assume TheDJ's statement "if you only registered and never made an edit until after 2005 (or whenever they last ran the script to fill in registration dates based on first edit), then you will not have a registration date" is correct and applicable in all of these cases. The "Global accounts weren't created until the better part of a decade later" was a missing piece of the puzzle for me. It's accounts like Pnslotero that got my attention, because I need to assume that they registered before 2005 and didn't make an edit until 2013-04-03, which seems surprising. And they attached to the global account just a few days later, on 2013-04-16. Sean.hoyland (talk) 18:20, 22 May 2025 (UTC)[reply]
If global accounts are the same as WP:SUL, that occurred in late May 2008, almost a year before I registered. I certainly didn't need to do anything special to become registered on 240+ other wikis, apart from (for some reason) Urdu Wikipedia, where I needed steward assistance. --Redrose64 🌹 (talk) 20:26, 22 May 2025 (UTC)[reply]
Yes, those two terms refer to the same thing. Global accounts became indeed available around 2008, but it wasn't until 2015 when the meta:SUL finalisation process converted the final remaining accounts that predated the system to be global accounts. (And that 2015 date is when the final accounts were migrated, the migration was an opt-in process for some time before that.) taavi (talk!) 21:05, 22 May 2025 (UTC)[reply]
Indeed. Another measure of an account's registration date is their user ID number, which corresponds to when their account was registered in the current database (not necessarily the recorded date of their first edit, due to database imports and glitches). You can find this in the page information link attached to their user page (which is present whether or not their user page is a red link). Another way to get there is by going to Special:PageInfo and typing "User:<username>" in the box. Either way, the user ID for Basie is 119330, which indicates that they registered in 2004, or at least before my registration (with user ID 194203) in February 2005; Pnslotero has user ID 303782, indicating that they registered later in 2005 than I did. I don't know of a way to convert user IDs to rough registration dates besides experience, but the very long page Wikipedia:Arbitration Committee Elections December 2024/Coordination/Voter Role Preview just happens to be ordered by user ID number, so that can be used as a measuring stick. Graham87 (talk) 04:35, 23 May 2025 (UTC)[reply]
Look at the backfilled user_registration timestamps of adjacent user_id's, with a query like this one. Enough accounts from around then that have any edits at all made their first one shortly after registration to give you a pretty good idea. —Cryptic13:23, 23 May 2025 (UTC)[reply]
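For anyone wanting to try this themselves on the enwiki database replica (e.g. via Quarry), a sketch of such a query; the ID range is illustrative, chosen around Pnslotero's user ID mentioned above:

```sql
-- Backfilled registration timestamps for accounts near a given ID.
-- Rows with NULL user_registration predate the recording of
-- registration dates; their neighbours' timestamps bracket the
-- likely registration date.
SELECT user_id,
       user_name,
       user_registration
FROM user
WHERE user_id BETWEEN 303700 AND 303900
ORDER BY user_id;
```

Since user IDs are assigned sequentially, the non-NULL timestamps in the surrounding rows give an upper and lower bound for when the account in question was created.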
What to do about possibly excessive talk page header notices/warnings...
Am wondering when other editors think the number of warnings/notices on a talk page is excessive. What notices can or should be removed? Case in point, Talk:LGBTQ rights in the United States has a total of 6 notices/warnings - controversial/not a forum/calm/round in circles etc - and some of them seem somewhat redundant. To me the sheer number & page volume visually overwhelms posts & threads. Also am wondering if there is any way to combine contentious topics notices...yeah, I know, probably not. Anyway, looking for other editors' thoughts on how many talk page notices might be "just too much". - Shearonink (talk) 18:25, 22 May 2025 (UTC)[reply]
This might be an unpopular opinion here (lots of editors seem to love talk page banners) but I think that honestly all of them could probably go.
"this article is controversial" yeah, obviously, what a waste of a warning. The information about citations, NPOV, edit warring etc should be in an edit notice where people editing the article will see them, not on the talk page.
"This is not a forum" - this is already in the talk page header - could that template not have a parameter to make that message bold/bigger/have a stop sign if needed?
"please stay calm and civil" has adding a "calm down" template to a talk page ever led to an argument being defused? My guess would be no.
"this talk page has arguments that are often repeated". This page has had 360 non-minor edits since 2007. It has 2 archives. The only discussion from this year is the one about the number of warnings. I don't see how this is justified.
The contentious topics template should be modified to support multiple topics IMO. I don't see the point of having two separate notices for two CT regimes with identical sanctions.
But yes, having 50% of the talk page be warnings is ridiculous, especially considering that the only disruption in the last year seems to be some obvious IP trolling that could have been deleted on sight. 86.23.109.101 (talk) 20:05, 22 May 2025 (UTC)[reply]
Does anyone know if it is possible to combine contentious topic template/notices? Why couldn't a single combined template say something like
The contentious topics procedure applies to this page. This page is related to 2 contentious topics:
gender-related disputes or controversies or people associated with them.
post-1992 politics of the United States and closely related people.
Editors who repeatedly or seriously fail to adhere to the purpose of Wikipedia, any expected standards of behaviour, or any normal editorial process may be blocked or restricted by an administrator. Editors are advised to familiarise themselves with the contentious topics procedures before editing this page.
@Shearonink I think the best way to merge these templates would be to list the sanctions regimes that apply, what those regimes mean, then have the standard boilerplate language. Something like:
The following contentious topics procedures apply to this page: foo and bar. This means that:
Standard discretionary sanctions apply to edits made in these topics
Editors may not make more than 1 revert in a 24 hour period
All editors must be logged in to an account, have at least 30 days' tenure, and have made at least 500 edits
Editors who repeatedly or seriously fail to adhere to the purpose of Wikipedia, any expected standards of behaviour, or any normal editorial process may be blocked or restricted by an administrator. Editors are advised to familiarise themselves with the contentious topics procedures before editing this page.
I'm on Pop OS. If I install Konsole natively or as a snap or flatpak, it keeps saying "Missing executable 'shellcheck', please install" when I click on a Quick Command, even though shellcheck is installed and should be available. I even installed shellcheck natively and as a snap and a flatpak so I am sure it is in the PATH, but it seems like Konsole refuses to acknowledge its existence. Is there a magic trick I am unaware of? Polygnotus (talk) 11:24, 23 May 2025 (UTC)[reply]
There are some common problems when developing user scripts:
While local development usually occurs through a version control system (usually Git, with additional continuous integration provided by sites like GitHub or Wikimedia GitLab), publishing new versions of user scripts still requires on-wiki edits to the user script page, which need to be done manually and can be tedious.
Updates to user scripts are restricted to their owners. This creates a large bottleneck for projects maintained by multiple people. This can be especially problematic when a script owner leaves Wikipedia or goes on an extended wikibreak.
Store a BotPassword/OAuth token of the owner account somewhere, and use it to make an edit whenever new code needs to be deployed (per CI results/manual approval/etc)
However, option 1 feels unwieldy to me and requires a lot of engineering effort to link everything together, while option 2 can have caching issues according to its maintainer, and is not as good as hosting the script on-wiki.
My proposal for how to resolve the problems above involves hosting an interface admin bot, and allowing user script authors to opt in to syncing their user script from a Git repository to Wikipedia using webhooks.
Any script wishing to be synced by the bot needs to be edited on-wiki (to serve as an authorization) to have the following header at the top of their file:
// [[User:0xDeadbeef/usync]]: LINK_TO_REPO REF FILE_PATH
// so, for example:
// [[User:0xDeadbeef/usync]]: https://github.com/fee1-dead/usync refs/heads/main test.js
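A minimal sketch of how such a header might be matched (my guess at the mechanics; the actual bot's parsing may differ, and the regex is illustrative):

```javascript
// Sketch: extract the repo URL, git ref, and file path from the first
// line of an opted-in script, following the header format above.
const USYNC_HEADER = /^\/\/ \[\[User:0xDeadbeef\/usync\]\]: (\S+) (\S+) (\S+)/;

function parseUsyncHeader(source) {
  const m = source.match(USYNC_HEADER);
  if (m === null) return null; // no header: the page is not opted in
  const [, repo, ref, path] = m;
  return { repo, ref, path };
}

// A webhook push event would then be checked against the parsed header,
// e.g. only syncing when the pushed ref matches `ref`.
```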
Here are some questions you may have:
Why is this being posted here?
Running this bot requires community discussion and approval. I'd like to see whether the community is willing to adopt this.
What are some benefits of this proposal?
Auditability. If this scheme were adopted, there would be an easy way to know whether a script is being automatically synced, and an easy way to get the list of all scripts that are being synced. All edit summaries link to the Git commit that created the edit.
Ease of use. It is very easy to set up a sync for a user script (just insert a header into the file and configure webhooks), and flexible, as the format above allows the branch and file name to be configured. It removes the need for all script developers to create BotPasswords or OAuth tokens.
Efficiency. Only webhooks trigger syncs. There is no unnecessary periodic sync being scheduled, and no CI job needs to run each time the script is deployed.
What are some drawbacks of this proposal?
Security. Even though there are already ways to allow someone else or an automated process to edit your user script as described above, allowing this bot makes it slightly easier, which could be seen as a security issue. My personal opinion is that this shouldn't matter much as long as you trust all the developers whose scripts you use. This bot is aimed primarily at user scripts.
Centralization of trust. The bot having interface administrator rights requires the bot to be trusted to not go rogue. I have created a new bot account (User:DeadbeefBot II) to have separate credentials, and it will have 2FA enrolled, and the code will be open source and hosted on Toolforge.
What are some alternatives?
We can do nothing. This remains a pain point for user script developers, as syncing is hard to set up: it requires either careful CI configuration or a less reliable reverse proxy.
We can create a centralized external service (suggested by BryanDavis on Discord) that stores OAuth tokens and which project files are synced with which titles. There would be a web interface allowing developers to enter their information to start automating syncs. However, this may not be as auditable, as edits would go through the script owners' accounts and not a bot account. It is also less easy to use, as an owner-only OAuth token would need to be generated for each sync task.
Feel free to leave a comment on how you think about this proposal. I'd also be happy to answer any questions or respond to potential concerns. beef [talk]12:03, 23 May 2025 (UTC)[reply]
Am I reading this correctly that one of the methods you are proposing is to ask other users to give you their (bot)passwords? That is a horrible idea. — xaosfluxTalk12:25, 23 May 2025 (UTC)[reply]
Yep. It will probably be stored on Toolforge's tooldb though. Preferably it would be an OAuth token that is only limited to editing the specific user script.
We explicitly tell our users to never share their authentication secrets with others, I can't possibly support processes that go against that. — xaosfluxTalk14:52, 23 May 2025 (UTC)[reply]
If the bot receives community approval, then we won't need one that collects OAuth tokens. But according to WP:BOTMULTIOP it might be preferred to use OAuth instead of having a bot?
A different question would be whether we should require all commits to be associated with a Wikipedia username. I personally don't see a need, but WP:BOTMULTIOP and the community might think otherwise. beef [talk]15:01, 23 May 2025 (UTC)[reply]
I don't have a preference to either approach, but let's not confuse things here. No one's asking for passwords to be shared. OAuth tokens are not the same as passwords. Every time you make an edit through an OAuth tool (like Refill), you are sharing your OAuth tokens. This is very normal, and safe because OAuth-based edits are tagged and can be traced back to the application that did it. (Worth noting that owner-only OAuth credentials don't have such protections and indeed should not be shared.) – SD0001 (talk) 15:38, 23 May 2025 (UTC)[reply]
This. I'm concerned that having people upload a BotPassword or owner-only OAuth token was even considered, when a "normal" OAuth token is so much more obviously the way to go for that option. Anomie⚔13:03, 24 May 2025 (UTC)[reply]
I might just be a Luddite here, but I don't think using GitHub for on-wiki scripts is a good idea to begin with. First, I feel that the git model works when there is a "canonical" version of the source code (the main branch, say), that people can branch off of, re-merge into, etc. But the problem here is that a git repo for a MW user script can *never* be the canonical source code; the canonical source code is inherently what's on-wiki, since that's what affects users. There is an inherent disconnect between what's on-wiki and what's elsewhere, and the more we try to pretend that GitHub is the source of truth for a script, the bigger the problems with that disconnect will be. Personally, I've seen many problems caused by the confusion generated just when projects use git branches other than "main" for their canonical code; here, the canon isn't even on git at all. How would this bot handle changes made on-wiki that aren't in git (if it would handle those at all)?
Second, this doesn't solve the problem of "inactive maintainer makes it difficult to push changes to production", since a repo maintainer can disappear just as easily as a mediawiki user; it just adds an ability to diffuse it a little bit by adding multiple maintainers, at the cost of this inherent disconnect.
source of truth is a vague and subjective term. I would personally call the latest version the source of truth, which of course lives on GitHub. Wikipedia hosts the published version, which may not be from the default branch on GitHub (dev branch for development, as the latest source of truth, main branch for the published version).
But that's of course a personal preference. There are many, many people out there that use Git for version control and for development of user scripts. You may be fine with using MediaWiki as version control and primarily updating code on-wiki, but some of us have different workflows. It might be helpful to write unit tests and force them to pass before getting deployed. It might be helpful to use a more preferred language that transpiles to javascript instead of using javascript directly. Having this benefits these use cases.
It does solve the problem by allowing additional maintainers to be added. There's no native MediaWiki support for adding collaborators to a user script, so this can help with that, in addition to the benefits of a Git workflow.
Attribution is given by using the commit author's name in the edit summary. I'm sure user script developers can include a license header and all that to deal with the licensing part.
I think this thing should happen, and I think it will happen even if there is no community support for the bot to run; it will just involve the proposed Toolforge service that collects OAuth credentials. I sure hope that the bot proposal passes, but I'm fine with writing the extra code for the alternative too. I also want to think about whether I have enough energy to keep justifying why I think this would be a good bot task, when all the negative feedback I get is from people who won't use it. Automatic syncing already happens in one form or another. And personally, I want to be able to use TypeScript to write my next sophisticated user script project, and I want to add collaborators. beef [talk]14:42, 23 May 2025 (UTC)[reply]
I would want to get approval for only userspace edits first. Extending it to gadgets is an even bigger stretch and less likely to get approved. beef [talk]14:53, 23 May 2025 (UTC)[reply]
I also want to think about whether I have enough energy to keep justifying for why I think this would be a good bot task, when all the negative feedback I get are from people who won't use it: None of this happens in a vacuum. I commented on this because I've *already* had people complaining that I didn't submit a pull request on some GitHub repo when I responded to an intadmin edit request and implemented the change on-wiki--despite the fact that the GitHub repo was already several onwiki edits out of date before I made the change. We already have a process for multiple maintainers and code change requests; it's the intadmin edit request template. It's sub-optimal, for sure, but the solution to a sub-optimal process is not to create an entirely separate process to run in parallel. If development happens on GitHub, it doesn't affect anything unless it gets replicated onwiki. If development happens onwiki, it affects everyone regardless of what GitHub says. That's why I call the onwiki version the canonical source of truth--because that's the one that matters. I could see the benefit here if the bot also worked in reverse--if it were set up to automatically keep the main branch of the git repo in sync with the onwiki script. But as it is, I feel this will add more headache than it's worth. Sorry if that's tiring for you. Writ Keeper⚇♔15:03, 23 May 2025 (UTC)[reply]
If there is a critical fix, you can remove the header and the bot will stop syncing. That is by design. And then you can ping the maintainers to incorporate the fix. I personally wouldn't mind giving committer access of my user scripts to every interface admin on this site.
A two-way sync involves storing authentication to the Git repo, and yeah, harder to implement. Everyone that uses this sync scheme will have all development activity on GitHub, with potentially occasional bug reporting happening at the talk page, so I don't see that much point in programming the sync the other way. beef [talk]15:16, 23 May 2025 (UTC)[reply]
Everyone that uses this sync scheme will have all development activity on GitHub[citation needed] My whole point is that hasn't been my experience so far. Maybe I just caught an unusual case. Writ Keeper⚇♔15:25, 23 May 2025 (UTC)[reply]
If someone does choose to sync from Git to Wikipedia, then they must use the Git repo as their primary place for development. I cannot think of any case where people would have an onwiki version that is more up-to-date than the Git version, given that the idea of having it sync is based on the assumption that Git is used as the most up-to-date place. beef [talk]03:29, 24 May 2025 (UTC)[reply]
We already have a process for multiple maintainers and code change requests; it's the intadmin edit request template. This seems like wishful thinking. It's just not true. I'm reminded of a time when a heavily used script broke and multiple interface admins refused to apply an unambiguous 1-line bug fix. At best, edit requests get accepted for bug fixes, not for anything else. – SD0001 (talk) 16:26, 23 May 2025 (UTC)[reply]
That's true of almost all kinds of software on GitHub. By your logic, the canonical version of, say, MediaWiki itself, is what actually runs on the production machines, not what's on GitHub. Similarly, for a library the canon would be what's released to npm/pypi, etc. How would this bot handle changes made on-wiki that aren't in git (if it would handle those at all)? That's like asking: if a Wikimedia sysadmin shells into a production host and edits the code there, how is it reflected back to gerrit? It isn't. That might sound non-ideal, but it isn't unprecedented. Already, most big gadgets including Twinkle, afc-helper, and xfdcloser are developed externally and deployed to wikis via automated scripts. Manual edits on-wiki aren't allowed as they'll end up overwritten. Second, ... It does solve that problem – a git repo can have multiple maintainers to avoid bus factor, unlike a user script which can only be edited by one single userspace owner (technically interface admins can edit as well, but on this project, we appear to have adopted a mentality that doing so is unethical or immoral). Having said that, I personally don't use GitHub or Gitlab for any of my user scripts. But I respect the wishes of those who choose to do so. – SD0001 (talk) 15:05, 23 May 2025 (UTC)[reply]
I would argue there is a substantial difference between someone SSHing into a production host to make manual changes and the process of talk-page-int-admin-edit request, and the difference is that the latter *is* a process. But also, yes, to an extent I *would* argue that, from a holistic perspective, the code that is active in production and that users are seeing, interacting with, and using *is* the canonical version, and that what is in a code repo, main, develop, or otherwise, is only important to the extent that it reflects what's on the production machine. The reader or normal editor using a website feature doesn't care what's in the repo, they care what they're using, and they're going to be frustrated if that feature suddenly disappears, regardless of whether that's the fault of some bot overwriting code or some dev not committing their changes to the off-site repo or what have you. Writ Keeper⚇♔15:32, 23 May 2025 (UTC)[reply]
If I have to choose between two processes that can't co-exist, I'll choose the one that offers more benefits. A git-based workflow enables unit testing, transpilation, linting and better collaboration. It offers a change review interface that allows for placing comments on specific lines. As for talk page requests, refer to my comment above about how useful they are. – SD0001 (talk) 12:41, 24 May 2025 (UTC)[reply]
There are pros and cons. I talk about it in my essay User:Novem Linguae/Essays/Pros and cons of moving a gadget to a repo. Popular, complex gadgets are often the use case that benefits the most from a github repo. A github repo enables automated tests (CI), a ticket system, and a PR system, among other things. These benefits are well worth the slight downside of having to keep things in sync (deploying). And in fact this proposed bot is trying to fix this pain point of deploying/syncing. –Novem Linguae (talk) 15:16, 23 May 2025 (UTC)[reply]
I have talked to BDavis on Discord and he said he thinks having it synced to an on-wiki page is better than a reverse proxy. It's in the thread under the #technical channel on Discord. I originally thought that gitlab-content was going to be the ultimate solution but apparently not. And I had already written some code for this thing to happen, so I figured why not propose it. beef [talk]15:09, 23 May 2025 (UTC)[reply]
An alternative that doesn't require any advanced permissions or impersonation issues is for the bot to just sync to itself. It could sync from anywhere upstream to User:Botname/sync/xxxx/scriptyyy.js. Then, any interested user could just import that script. — xaosfluxTalk15:16, 23 May 2025 (UTC)[reply]
For gadgets, we already have a manual process - a bot that opens an edit request when an upstream repo wants to be loaded to the on-wiki one. That does allow us to ensure that changes are only made when we want them, and allows for local code review. For userscripts, users that want to do what this thread is about are already going to have to just trust the bot directly regardless. — xaosfluxTalk15:22, 23 May 2025 (UTC)[reply]
That might be fine, but to me less preferable than the main proposal, because then it would be harder to know who is maintaining what script. (I guess that wouldn't be the case if the xxxx refers to the user who asked for the script.) I'm also reluctant to add a new proxy-script-creation system on top of this.
A slight concern would be that the name could shift the responsibility of trust and maintaining the script to the bot instead of the actual maintainer. beef [talk]15:24, 23 May 2025 (UTC)[reply]
This would absolutely require that anyone's space that you were publishing to trusted the bot. By publishing a revision you would be responsible for the revision you publish. — xaosfluxTalk15:53, 23 May 2025 (UTC)[reply]
The problem with this alternative approach is that it is just hard to manage.
If I make a user script, it should be my own. Under a bot's userspace, you'd need a separate process for requesting creation and deletion.
Also this makes it harder for pre-existing scripts to be synced. People already using and developing a script at an existing location cannot choose to adopt a Git sync. And it makes it much harder for the person to disable syncing (compared to editing in your own userspace to remove the header). beef [talk]03:32, 24 May 2025 (UTC)[reply]
I know this is not going to happen, but I consider it unfortunate that we have to do all these hacks. A more reasonable approach would be if there was a spot on gerrit where script authors could put their gadget scripts (with CR expectations being similar to on-wiki instead of normal gerrit) and have them deployed with normal mediawiki deployments. I guess there are all sorts of political issues preventing that, but it seems like it would be the best approach for everyone. Gadgets deserve to be first-class citizens in the Wikimedia code ecosystem. Bawolff (talk) 18:03, 23 May 2025 (UTC)[reply]
We're a top-10 website in the world; I wouldn't call it "political" that we could be hesitant about loading executable code from an external commercial platform into our system without our review. — xaosfluxTalk23:47, 23 May 2025 (UTC)[reply]
If the community wants to restrict the sync to only Wikimedia GitLab, there wouldn't be any objections on my part, though I don't see why we can't do GitHub as well. beef [talk]03:37, 24 May 2025 (UTC)[reply]
I'm on the website on my phone. I'm using Brave browser, but I checked with Chrome and it's the same. Anyway, maybe this is intended; if so, I apologize, but I figured there's no harm in checking. I am not an extended-confirmed user, so when I attempt to make an edit on the talk page of a contentious topic, there should be a banner that shows up and warns me not to. On my computer it does show up. But on my phone I originally couldn't find it at all. I've looked more carefully: there is a symbol ⓘ where it says "Learn more about this page". If I tap on that then I see the banner in question. That seems odd, though, when new users rarely would tap it. But maybe it's how it's supposed to work? It's also odd that when I switch to desktop mode on my phone, the banner briefly appears and then disappears, and can then no longer be seen without tapping on the aforementioned ⓘ. So that seems odd and is what pushed me to decide to write this; I apologize again if I overstepped tho. I am a pretty new user, as shown by me not being extended confirmed and having to deal with this lol Ezra Fox🦊 • (talk)22:32, 23 May 2025 (UTC)[reply]
I'm not sure what you mean. But it's the banner that says "Stop: You may only use this page to create an edit request". Since I'm not extended confirmed. But for me that's not automatically there, I have to press the link at the top that says "ⓘLearn more about this page". Which I usually wouldn't bother to do. So like when I first edited a talk page about the Arab–Israeli conflict I wasn't supposed to, but I didn't know that because I hadn't seen the banner. I'm just wondering if it's intended to not have that banner show up automatically Ezra Fox🦊 • (talk)07:18, 24 May 2025 (UTC)[reply]
I don't see that banner unless I click on ⓘLearn more about this page, which I don't usually think to click on. And imo most new users probably wouldn't. And so the banner would be pretty useless. And my question is if that's intended. I feel like I've been pretty clear and I'm getting frustrated repeating myself, so my apologies if I've come off as terse here. Ezra Fox🦊 • (talk)11:02, 24 May 2025 (UTC)[reply]
Database error
To avoid creating high replication lag, this transaction was aborted because the write duration (4.789069890976) exceeded the 3 second limit. If you are changing many items at once, try doing multiple smaller operations instead.
[ad93a55c-9cb6-4d73-8210-931e35864660] 2025-05-24 19:05:40: Fatal exception of type "Wikimedia\Rdbms\DBTransactionSizeError"
The only fix to the problem described that anyone on wiki has any power to enact is to reduce the size of the page. 420 kB is obscene.
And one would probably be told, were one to report it to Phabricator, to reduce the size of the page, for the same reason. Izno (talk) 21:40, 24 May 2025 (UTC)[reply]
I run queries on Quarry and right now I'm getting the message:
Error
This web service cannot be reached. Please contact a maintainer of this project.
Maintainers can find troubleshooting instructions from our documentation on Wikitech.
proxy-5.project-proxy.eqiad1.wikimedia.cloud
It says this for all of the queries I run and just trying to view the directory page of recent queries. Any idea what the problem is and when it might be fixed? LizRead!Talk!04:38, 25 May 2025 (UTC)[reply]
"Page Size" and "Who Wrote That?" tools not working on James Cook article
I frequently use the WP:Prosesize gadget (aka Page Size tool) and the WP:Who_Wrote_That? tool from my tool menu (the page size tool required enabling a gadget in my Preferences; the latter tool required installing a JS script). I've used both tools on many articles.
Neither tool works on the James Cook article. I don't think the issue is my computer, because I've tried the tools on several computers, and they don't work on any computer. The problem is only with that one article.
Strangely, the tools seem to work for the James Cook article when I am not logged in. They only fail on that article when I am logged in. Again, the tools work on every article except the James Cook article.
Thanks for the reply. I don't fully understand what that linked script is. Is it an alternative version of the prose size tool? Or is that the original prose size tool, with a bug fix?