Wikipedia:Wikipedia Signpost/2011-08-01/Research interview

This is an old revision of this page, as edited by Skomorokh (talk | contribs) at 20:52, 20 July 2011 (+shots). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.
Research interview

The Huggle Experiment: an interview with the research team

An example of the Huggle interface

As part of the 2011 Summer of Research, the Wikimedia Foundation's Community Department has announced an experiment to investigate potential improvements to first contact between new editors and patrollers using the Huggle anti-vandalism tool. The experiment aims to test "warning templates that are explicitly more personalized and set out to teach new editors more directly, rather than simply pointing them to policy and asking them not to do something", according to research fellow Steven Walling. To gain an insight into how such initiatives come about and what goes into planning them, the Signpost interviewed Walling.

Can you tell us a little about the Summer of Research project, how it came about, how you got involved and what sorts of questions you hope to investigate this year?

Steven:




How did the idea to experiment with Huggle's standardised warning system originate?




How do lofty strategic goals like "Support the recruitment and acculturation of newer contributors" get translated into practical initiatives such as this?




An experiment of this kind seeks to understand social phenomena using technical methodologies. Does this involve coordination between, for instance, the Community Department and the Huggle developers, or is the experiment conducted by researchers proficient in social statistics or the digital humanities? Can you talk a little about the backgrounds of those involved?




How were the parameters of the experiment – number of warnings delivered, proportion of changed warnings – decided upon?




The Huggle experiment is not the first attempt to investigate the interactions of patrollers and new page creators. A notable community-led effort was the Newbie treatment at Criteria for speedy deletion experiment in 2009, in which experienced editors (this interviewer included) posed as inexperienced article creators in order to gain an insight into how new contributors were treated in the patrolling process. That experiment attracted significant controversy due to ethical concerns surrounding the informed consent of its subjects. To what extent did the research team consider or engage with the relevant subjects from the editing community (i.e. Huggle patrollers, new contributors) prior to this experiment?




What do the researchers hope to learn from the experiment and what are the preliminary expectations or hypotheses to be tested?




What sorts of approaches might we expect from the Foundation in testing and improving usability, reader engagement and editor retention in the months to come?