
Wikipedia:WikiProject User warnings/Testing

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Jayen466 (talk | contribs) at 02:05, 30 November 2011 (Participants: +1). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

This page is the tracking page for efforts on the English Wikipedia to improve the quality of user talk warnings. Links to this project on other Wikipedias can be found on the cross-wiki hub on Meta.

Scope

The purpose of this page is to measure the efficacy of different user talk warning templates through randomized testing. This effort does not entail creating new categories of user warnings or organizing them more efficiently, but rather improving the quality of our current communication methods.

We're aiming to fine-tune the template messages we send to editors, in order to encourage more good-faith contributors and discourage outright vandals, spammers, and other bad-faith contributors.

Participants

Sign up here if you'd like to be notified about testing updates.

  1. User:Steven (WMF)
  2. User:Maryana (WMF)
  3. User:Staeiou
  4. User:EpochFail
  5. User:Vitor Mazuco
  6. User:Philippe (WMF)
  7. User:Kudpung
  8. User:Kubigula
  9. User:DGG
  10. User:Fluffernutter
  11. User:Wikipelli
  12. Ebe123
  13. Σ
  14. User:The Blade of the Northern Lights
  15. User:Writ Keeper
  16. -- Eraserhead1 <talk> 21:53, 25 October 2011 (UTC)
  17. User:Jtmorgan
  18. User:Chzz
  19. Fred Gandt (fg)
  20. Hurricanefan25
  21. Ljhenshall
  22. XLinkBot (forced by its operators)
  23. Beetstra (as one of the operators of XLinkBot)
  24. Andy Mabbett (Pigsonthewing)
  25. User:Racconish
  26. User:Recon Etc
  27. User:Jayen466

Tasks

Any effort to make templates simpler, friendlier, and more accessible is welcome. Specifically, you can:

  • Draft new templates, or assess the templates that are currently in the draft phase and suggest improvements to their content or structure. We aim to lessen the bitey-ness of warnings, but templates should still comply with the design guidelines and the usage and layout best practices.
  • Suggest which templates to test next and what changes should be made.
  • Help us analyze our data. We're using mixed methods, with both quantitative statistics and qualitative coding.
  • Recruit more participants.

Testing schedule

Templates have been tested in the following places:

Those pages each have lists and dates of all the templates tested. You can find details about the results of our tests on our documentation page.

Testing method

The following are the requirements for conducting comparative A/B testing of any user talk template. Randomized experiments give us hard data about which kinds of content are most successful in helping us achieve these goals. You'll need:

  1. A "randomizer" that delivers all the templates in your test. This is the template that should be included in the configuration of whichever bot or tool you are testing with, and it randomly delivers one of the templates via a parser function.
  2. A control, usually the existing default template. Note that you should replicate the default in a new template rather than use the current template page, in order to avoid including old instances of the default in your experiment.
  3. A new version or versions of the template you want to test. Try to use a canonical name that matches the type, purpose, and level of the warning you're interested in.
  4. A Z number tracking template for each template being tested. If you do not include a separate Z number in each template, you will lose track of your test cases once they are substituted.

In some cases, such as for bots where all contribs are by one account, this method can be greatly simplified.
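As a rough sketch of how steps 1–4 fit together, a randomizer could use ParserFunctions to deliver one of the competing templates. The template names below are invented for illustration, and the timestamp-based branching is just one possible source of randomness, not necessarily the one used in these tests:

```wikitext
<!-- Hypothetical randomizer: picks one arm based on the current Unix timestamp. -->
{{#switch: {{#expr: {{#time: U }} mod 2 }}
 | 0 = {{subst:Uw-example1-control|{{{1|}}}}} <!-- replicated copy of the default (step 2) -->
 | 1 = {{subst:Uw-example1-test|{{{1|}}}}}    <!-- new version under test (step 3) -->
}}
```

Each arm would then end with its own tracking template (step 4) — e.g. `{{Z157}}` in one and `{{Z158}}` in the other (again, invented numbers) — so that substituted instances of each arm remain countable after the test.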

Analysis

We have so far used a mixed method of both quantitative measurement and qualitative assessment. If you'd like to help sort and analyze tests, please sign up above. There are more details about how the analysis works on the associated tool pages above.