Wikipedia:Wikipedia Signpost/2011-08-01/Research interview
The Huggle Experiment: an interview with the team
How did the idea to experiment with the standardised warning system originate?
How do lofty strategic goals like "Support the recruitment and acculturation of newer contributors" get translated into practical initiatives such as this?
An experiment of this kind seeks to understand social phenomena using technical methodologies. Does this involve coordination between, for instance, the Community Department and the Technology Department, or is the experiment conducted by researchers proficient in social statistics or the digital humanities? Can you talk a little about the backgrounds of those involved?
How were the parameters of the experiment – number of warnings delivered, proportion of changed warnings – decided upon?
The Huggle experiment is not the first attempt to investigate the interactions of patrollers and new page creators. A notable community-led effort was the Newbie treatment at Criteria for speedy deletion experiment in 2009, in which experienced editors (this interviewer included) posed as inexperienced article creators in order to gain insight into how new contributors were treated in the patrolling process. The experiment attracted significant controversy because of ethical concerns surrounding the informed consent of its subjects. To what extent did the research team consider or engage with the relevant groups in the editing community (i.e. Huggle patrollers, new contributors) prior to this experiment?
What do the researchers hope to learn from the experiment and what are the preliminary expectations or hypotheses to be tested?
What sorts of approaches might we expect from the Foundation in testing and improving usability, reader engagement and editor retention in the months to come?
Discuss this story
"Fewer than half of the newbies investigated received a response from a real person during their first 30 days". I think we really dropped the ball here. Interaction is a major way to recruit newbies and hopefully turn them into "regulars". OhanaUnitedTalk page 05:18, 2 August 2011 (UTC)[reply]
Perhaps two critical concerns will govern the efficiency with which the problem can be addressed: (i) how long into a newbie's edit history the patterns become clear, and (ii) the extent to which they can be identified by a bot (including whether a bot could do the initial "easy" filtering and pass a minority on to human eyes for higher-level sorting to identify the promising newbie-pluses for human interaction – a three-tiered filtering, as it were). Of particular interest might be the grey area of newbies – not those who will clearly stay and those who clearly won't (or whom we clearly do or don't want to stay), but those for whom the final stage, human interaction, has a reasonable likelihood of making the difference, of bringing them over the line. Finding the best bot/human mechanism for rationing the supply of "newbie mentors" to this prioritised editorial demographic, IMO, is the challenge. After that, a future project could work on developing guidelines for the best ways in which to interact with newbie-pluses. Tony (talk) 02:41, 3 August 2011 (UTC)