Wikipedia talk:Moderator Tools/Automoderator

ClueBot

Do you think ClueBot NG has a substantial impact on the volume of edits you need to review?

  • Not really, no. A few years ago my answer would have been different, but there has been a noticeable decrease in how often I encounter ClueBot's edits while I'm patrolling. I think it's primarily due to increasing use of filters and more aggressive proxy blocking; it's getting harder and harder to even get as far as hitting "publish changes" on a piece of vandalism. On top of that, the pool of counter-vandals is continually increasing and tools are being developed to help them, which makes the whole human side more efficient. Nowadays it feels like ClueBot mainly catches things nobody bothered to revert, or occasionally something I'm hesitating to revert for one reason or another. Ironically, we're taking work away from the automaton. Giraffer (talk·contribs) 18:03, 25 August 2023 (UTC)[reply]

Do you review ClueBot NG's edits or false positive reports? If so, how do you find this process?

Are there any feature requests you have for ClueBot NG?

Open questions about use on the English Wikipedia

How would English Wikipedia evaluate Automoderator to decide whether to use it?

  • To me at least, the really relevant portion is "what edits could Automoderator catch that ClueNG isn't already". So any evaluation would be made easier by seeing examples/scale of what it's catching. Attached to that is being confident that Automoderator isn't making more false positives than ClueNG (we could already configure ClueNG to catch way more problematic edits if we accepted twice the false positive rate, so 0.1% or less would be needed). Nosebagbear (talk) 11:16, 25 August 2023 (UTC)[reply]
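
A minimal sketch of the comparison described above, assuming a human-labelled sample of model-scored edits: for each candidate revert-score threshold it reports what share of good edits would be wrongly reverted (the false positive rate referred to above) and what share of bad edits would be caught. The sample data, threshold values, and function names are hypothetical and only illustrate the shape of such an evaluation.

```python
# Hypothetical evaluation sketch: compare candidate thresholds against a
# human-labelled sample of scored edits. Not Automoderator's actual logic.

scored_sample = [
    # (model score for the edit, human judgement: True if the edit was vandalism)
    (0.97, True),
    (0.92, True),
    (0.88, False),
    (0.41, False),
    (0.12, False),
]

def evaluate(threshold):
    """Return (false positive rate, catch rate) at the given score threshold."""
    reverted = [(score, bad) for score, bad in scored_sample if score >= threshold]
    good_total = sum(1 for _, bad in scored_sample if not bad)
    bad_total = sum(1 for _, bad in scored_sample if bad)
    false_positives = sum(1 for _, bad in reverted if not bad)
    caught = sum(1 for _, bad in reverted if bad)
    fpr = false_positives / good_total if good_total else 0.0
    catch_rate = caught / bad_total if bad_total else 0.0
    return fpr, catch_rate

for threshold in (0.80, 0.90, 0.95):
    fpr, catch_rate = evaluate(threshold)
    print(f"threshold {threshold:.2f}: "
          f"false positive rate {fpr:.1%}, catch rate {catch_rate:.1%}")
```

At a 0.1% false positive target, a real sample would need thousands of labelled good edits for the estimate to be meaningful; the tiny sample above only shows the shape of the calculation.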

Would the community rather test this software early (e.g. in log-only modes), or wait until further in development after other communities have trialled it?

What configuration options would you want to see in the tool?

Open questions about Automoderator in general

What is the support model for this feature?

  • Who specifically will be responsible for bug fixes and feature requests for this utility? If this includes paid WMF staff, are any SLOs going to be committed to? — xaosflux Talk 13:10, 25 August 2023 (UTC)[reply]
    @Xaosflux The Moderator Tools team will be responsible for development and ongoing maintenance of the Automoderator configuration interface, and the Machine Learning and Research teams are responsible for the model and its hosting via LiftWing. SLOs aren't something we've talked about for this project in particular, but I think it's an interesting idea, so let me chat about it and I'll get back to you. Samwalton9 (WMF) (talk) 13:48, 25 August 2023 (UTC)[reply]
    Thank you. One issue we have run into before with anything that runs at massive scale is that bugs can quickly become serious problems; conversely, if a process gets built that the community becomes dependent upon (to the point where volunteers may stop running other bots, etc.), having the process break down also becomes disruptive. — xaosflux Talk 13:57, 25 August 2023 (UTC)[reply]
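
Since the reply above notes that the model will be hosted via LiftWing, below is a minimal sketch of what scoring a single revision against that service might look like. The endpoint path and the model name (revertrisk-language-agnostic) are assumptions based on the public Wikimedia API gateway layout, not a confirmed part of Automoderator; the Machine Learning team's LiftWing documentation is the authoritative reference.

```python
# Hypothetical sketch: fetch a revert-risk score for one revision from
# LiftWing. The endpoint and model name are assumptions; see the LiftWing docs.
import requests

LIFTWING_URL = (
    "https://api.wikimedia.org/service/lw/inference/v1/models/"
    "revertrisk-language-agnostic:predict"
)

def revert_risk(rev_id, lang="en"):
    """Return the raw model response for the given revision ID."""
    response = requests.post(
        LIFTWING_URL,
        json={"rev_id": rev_id, "lang": lang},
        headers={"User-Agent": "Automoderator-talk-example/0.1"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Example call with a made-up revision ID:
# print(revert_risk(1171234567))
```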

Where does Automoderator sit in the workflow?

  • The description says "prevent or revert"; will this tool function at multiple layers? There are many steps in the publish workflow today; where in the workflow would this tool function? Specifically, before/after which step, keeping in mind certain existing/planned steps such as editcheck, captcha, sbl, abusefilter, ores, publish, copyvioscore, pagetriage. — xaosflux Talk 13:18, 25 August 2023 (UTC)[reply]
    @Xaosflux This is something we haven't decided on yet, so I'm curious where you think it would be best positioned. My initial thinking was that this would be a revert, much like ClueBot NG, both for simplicity and to maximise community oversight. Rather than try to overcomplicate things, allowing an edit to go through and then reverting it means we have all the usual processes and tools available to us - diffs, reverting, a clear history of actions the tool is taking, etc. That said, we're open to exploring other options, which is why I left it vague on the project page. There are benefits to preventing a bad edit rather than allowing and reverting (as AbuseFilter demonstrates), or we might imagine more creative solutions like a Flagged Revisions-style 'hold' on the edit until a patroller has reviewed it. I think that for simplicity we'll at least start with straightforward reversion, but I'd love to hear if any other options seem like good ideas to you. Samwalton9 (WMF) (talk) 13:57, 25 August 2023 (UTC)[reply]
    @Samwalton9 (WMF) I don't think that starting with something that would increase editor workload (e.g. requiring edit approval) would be good here for "edits" - but maybe as part of the new page creation process. (Please advertise for input to Wikipedia talk:New pages patrol/Reviewers and Wikipedia talk:Recent changes patrol.) — xaosflux Talk 14:06, 25 August 2023 (UTC)[reply]
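
To make the "allow the edit, then act on it" placement discussed in this thread concrete, below is a minimal log-only sketch: it polls recent changes through the MediaWiki Action API, scores each new revision, and only records which edits the tool would have reverted, matching the log-only trial mode mentioned earlier on this page. The threshold, the placeholder scorer, and the exact recentchanges parameters are assumptions for illustration, not Automoderator's design.

```python
# Hypothetical log-only sketch of the revert-after-publish placement:
# read recent changes, score them, and log would-be reverts without editing.
import requests

API_URL = "https://en.wikipedia.org/w/api.php"
REVERT_THRESHOLD = 0.95  # hypothetical cut-off


def score_edit(rev_id):
    """Placeholder scorer; a real run would query the model (e.g. via LiftWing)."""
    # Returning 0.0 keeps the sketch runnable without any model access.
    return 0.0


def recent_revisions(limit=25):
    """Fetch the newest mainspace edits from the MediaWiki Action API."""
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcnamespace": 0,
        "rctype": "edit",
        "rcprop": "ids|title",
        "rclimit": limit,
        "format": "json",
    }
    response = requests.get(API_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["query"]["recentchanges"]


def log_only_pass():
    """One log-only pass: score recent edits and report the would-be reverts."""
    for change in recent_revisions():
        score = score_edit(change["revid"])
        if score >= REVERT_THRESHOLD:
            print(f"Would revert revision {change['revid']} "
                  f"on '{change['title']}' (score {score:.2f})")


if __name__ == "__main__":
    log_only_pass()
```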