License: Creative Commons Attribution 3.0 Unported license (CC BY 3.0)
When quoting this document, please refer to the following
DOI: 10.4230/DagRep.5.11.103
URN: urn:nbn:de:0030-drops-57676
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2016/5767/


Archambault, Daniel ; Hoßfeld, Tobias ; Purchase, Helen C.
Further contributors (eds. etc.): Daniel Archambault, Tobias Hoßfeld, and Helen C. Purchase

Crowdsourcing and Human-Centred Experiments (Dagstuhl Seminar 15481)

PDF format:
dagrep_v005_i011_p103_s15481.pdf (1 MB)


Abstract

This report documents the program and the outcomes of Dagstuhl Seminar 15481 "Evaluation in the Crowd: Crowdsourcing and Human-Centred Experiments". Human-centred empirical evaluations play important roles in the fields of human-computer interaction, visualization, graphics, multimedia, and psychology. The advent of crowdsourcing platforms, such as Amazon Mechanical Turk or Microworkers, has provided a revolutionary methodology for conducting human-centred experiments. Through such platforms, experiments can now collect data from hundreds, even thousands, of participants drawn from a diverse user community in a matter of weeks, greatly increasing the ease with which we can collect data as well as the power and generalizability of experimental results. However, such experimental platforms do not come without problems: ensuring participant investment in the task, defining experimental controls, and understanding the ethics of deploying such experiments en masse.
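As a concrete illustration of the methodology described above, the sketch below shows how a single experimental task might be posted to Amazon Mechanical Turk using the boto3 Python SDK. The experiment URL, reward, time limits, and participant count are hypothetical placeholders chosen for illustration, not values taken from the seminar; this is a minimal sketch of one possible workflow, not a prescribed one.

import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Requester sandbox endpoint: HITs posted here cost nothing while testing.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion embeds the experimenter's own web page inside the HIT;
# the experiment URL below is a hypothetical placeholder.
question_xml = """<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/experiment</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

hit = mturk.create_hit(
    Title="Perception study (about 10 minutes)",
    Description="Compare two images and judge which is easier to read.",
    Keywords="experiment, perception, visualization",
    Reward="1.00",                     # USD paid per completed assignment
    MaxAssignments=100,                # number of distinct participants sought
    LifetimeInSeconds=7 * 24 * 3600,   # HIT stays visible for one week
    AssignmentDurationInSeconds=1800,  # 30-minute time limit per participant
    Question=question_xml,
)
print("Posted HIT:", hit["HIT"]["HITId"])

Collected responses would then be retrieved with list_assignments_for_hit and approved or rejected per assignment, which is where the experimental controls discussed in the report come into play.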

The major interests of the seminar participants were organised into six working groups: (W1) Crowdsourcing Technology, (W2) Crowdsourcing Community, (W3) Crowdsourcing vs. Lab, (W4) Crowdsourcing & Visualization, (W5) Crowdsourcing & Psychology, and (W6) Crowdsourcing & QoE Assessment.

BibTeX - Entry

@Article{archambault_et_al:DR:2016:5767,
  author =	{Daniel Archambault and Tobias Ho{\ss}feld and Helen C. Purchase},
  title =	{{Crowdsourcing and Human-Centred Experiments (Dagstuhl Seminar 15481)}},
  pages =	{103--126},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2016},
  volume =	{5},
  number =	{11},
  editor =	{Daniel Archambault and Tobias Ho{\ss}feld and Helen C. Purchase},
  publisher =	{Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{http://drops.dagstuhl.de/opus/volltexte/2016/5767},
  URN =		{urn:nbn:de:0030-drops-57676},
  doi =		{10.4230/DagRep.5.11.103},
  annote =	{Keywords: Crowdsourcing; Human Computation; Crowdsourcing Design, Mechanisms, Engineering; Practical Experience; Computer Graphics; Applied Perception; HCI}
}

Keywords: Crowdsourcing; Human Computation; Crowdsourcing Design, Mechanisms, Engineering; Practical Experience; Computer Graphics; Applied Perception; HCI
Additional keywords (English): Visualization
Collection: Dagstuhl Reports, Volume 5, Issue 11
Issue Date: 2016
Date of publication: 31.03.2016

