The 2nd Workshop on Social Computing and User Generated Content

June 7, 2012, Valencia, Spain

in conjunction with

ACM Conference on Electronic Commerce (ACM-EC 2012)

Home Call for Papers Program Committee Schedule & Papers 2011 Workshop 2013 Workshop

We solicit research contributions (both new and recently published) and participants for the Workshop on Social Computing and User Generated Content, to be held in conjunction with the 13th ACM Conference on Electronic Commerce (ACM-EC 2012). The workshop will bring together researchers and practitioners from a variety of relevant fields, including economics, computer science, and social psychology, in both academia and industry, to discuss the state of the art today, and the challenges and prospects for tomorrow in the field of social computing and user generated content.

Timeline

April 12, 2012: Submissions due by midnight HST (Hawaii time)

April 30, 2012: Notification of accepted research contributions

June 7, 2012: Workshop date

Organizing Committee

Yiling Chen, Harvard University

Arpita Ghosh, Cornell University

Contact and Further Information

Email the organizing committee: sc13.organizers@gmail.com

Webmaster: Ann Marie King, webmaster-sc2012@seas.harvard.edu



Call for Papers

Social Computing and User Generated Content

Social computing systems are now ubiquitous on the web. Wikipedia is perhaps the best-known peer production system, and there are many platforms for crowdsourcing tasks to online users, including Games with a Purpose, Amazon’s Mechanical Turk, the TopCoder competitions for software development, and many online Q&A forums such as Yahoo! Answers. Meanwhile, user-created product reviews on Amazon generate value for other users looking to buy or choose among products, while Yelp’s value comes from user reviews of listed services; and a significant fraction of the content consumed online consists of user-generated, publicly viewable social media such as blogs and YouTube videos, as well as the comments and discussion threads they attract.

Workshop Topics

The workshop aims to bring together participants with diverse perspectives to address the important research questions surrounding social computing and user generated content: Why do users participate, and what factors affect participation levels and the quality of participants’ contributions? How can participation be improved, both in the number of participants and in the quality of their contributions? What design levers can be used to build better social computing systems? Finally, what are novel ways in which social computing can be used to generate value? The answers to these questions will inform the future of social computing, both by improving the design of existing sites and by guiding the design of new social computing applications. Papers from a rich set of experimental, empirical, and theoretical perspectives are invited. The topics of interest for the workshop include, but are not limited to:

  • Incentives in peer production systems
  • Experimental studies on social computing systems
  • Empirical studies on social computing systems
  • Models for user behavior
  • Crowdsourcing and Wisdom of the Crowds
  • Games with a purpose
  • Online question-and-answer systems
  • Game-theoretic approaches to social computing
  • Quality and spam control in user generated content
  • Rating and ranking user generated content
  • Manipulation resistant ranking schemes
  • User behavior and incentives on social media
  • Trust and privacy in social computing systems
  • Social-psychological approaches to incentives for contribution
  • Usability and user experience

Submission Instructions

We solicit both new work and work recently published or soon to be published in another venue. For submissions of the latter kind, authors must clearly state the venue of publication. The workshop will not have archival proceedings. Reports on work in progress, position papers, and panel discussion proposals are also welcome. Research contributions will be selected based on relevance, technical merit, and likelihood to catalyze discussion.

Submissions can be in any format and can be up to 18 pages long (excluding appendices). We recommend the ACM single-column format (LaTeX or Word). Please submit all papers through the link below by midnight HST (Hawaii time) on April 12, 2012.

At least one author of each accepted research contribution will be expected to attend and present their work at the workshop.

Important Dates

April 12, 2012: Submissions due by midnight HST (Hawaii time)

April 30, 2012: Notification of accepted research contributions

June 7, 2012: Workshop date

Organizing Committee

Yiling Chen, Harvard University

Arpita Ghosh, Cornell University

More Information

For more information or questions, email the organizing committee: sc13.organizers@gmail.com



Program Committee


Judd Antin, Yahoo! Research

Shuchi Chawla, University of Wisconsin, Madison

Yan Chen, University of Michigan

Sanmay Das, Rensselaer Polytechnic Institute

Edith Elkind, Nanyang Technological University

Lian Jian, University of Southern California

Radu Jurca, Google, Zurich

Ece Kamar, Microsoft Research, Redmond

Ian Kash, Microsoft Research, Cambridge

Winter Mason, Stevens Institute of Technology

Alex “Sandy” Pentland, Massachusetts Institute of Technology

Rahul Sami, University of Michigan

Sven Seuken, University of Zurich

Jennifer Wortman Vaughan, University of California, Los Angeles



Schedule & Papers

9:00am-10:00am: Invited Talk

10:00am-10:30am: Coffee break

10:30am-11:30am

11:30am-12:30pm: Invited Talk

12:30pm-2:30pm: Lunch break

2:30pm-3:30pm

3:30-4:30: Invited Talk

Invited Talks

Panos Ipeirotis

Title: Crowdsourcing: Quality Management and Scalability
Abstract

I will discuss the use of crowdsourcing for building machine learning models quickly and under budget constraints, with a focus on the case where humans are noisy and the “labels” they provide for data items are imperfect. I will present strategies for managing quality in a crowdsourcing environment, showing in parallel how to integrate data acquisition with the process of training machine learning models. I illustrate the results using real-life applications drawn from the field of online advertising. Time permitting, I will also discuss our latest results showing that mice and Mechanical Turk workers are not that different after all.
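The core quality-management problem the abstract describes, recovering reliable labels from noisy workers, can be illustrated with a minimal sketch (a generic illustration, not the speaker’s actual method). It starts from a per-item majority vote, estimates each worker’s accuracy against that consensus, and then re-votes with accuracy-weighted workers, one simplified refinement step in the spirit of EM-style aggregation:

```python
from collections import defaultdict

def aggregate_labels(votes):
    """Aggregate noisy binary labels from crowd workers.

    votes: list of (worker_id, item_id, label) with label in {0, 1}.
    Returns a dict mapping item_id to a consensus label.
    """
    by_item = defaultdict(list)
    for worker, item, label in votes:
        by_item[item].append((worker, label))

    # Step 1: plain per-item majority vote.
    consensus = {item: round(sum(l for _, l in ws) / len(ws))
                 for item, ws in by_item.items()}

    # Step 2: estimate each worker's accuracy against the majority consensus.
    correct, total = defaultdict(int), defaultdict(int)
    for worker, item, label in votes:
        total[worker] += 1
        correct[worker] += int(label == consensus[item])
    accuracy = {w: correct[w] / total[w] for w in total}

    # Step 3: accuracy-weighted re-vote; workers below 50% accuracy
    # get negative weight, so consistently wrong workers are inverted.
    for item, ws in by_item.items():
        score = sum((accuracy[w] - 0.5) * (2 * l - 1) for w, l in ws)
        consensus[item] = int(score > 0)
    return consensus
```

For example, with two reliable workers and one adversarial worker, the weighted re-vote discounts the adversary’s labels even though all three contributed equally to the raw vote.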

Bio

Panos Ipeirotis is an Associate Professor at the Department of Information, Operations, and Management Sciences at the Stern School of Business of New York University. His recent research interests focus on crowdsourcing and on mining user-generated content on the Internet. He received his Ph.D. in Computer Science from Columbia University in 2004. He has received three “Best Paper” awards (IEEE ICDE 2005, ACM SIGMOD 2006, WWW 2011), two “Best Paper Runner Up” awards (JCDL 2002, ACM KDD 2008), and is also a recipient of a CAREER award from the National Science Foundation. He also maintains the blog “A Computer Scientist in a Business School” where he blogs about crowdsourcing, user-generated content, and other random facts, and his blogging activity seems to generate more interest and recognition than any of the other activities mentioned in this bio.

Radu Jurca

Title: Peer-driven Incentive Mechanisms
Abstract

Crowdsourcing is an important component of the internet, with profound implications for the future of producing and consuming information. The commercial potential of harnessing the wisdom of the crowds is self-evident; unfortunately, effective mechanisms for quality control (e.g., filtering spam, encouraging effort and truthfulness) are less well understood. Economic theory offers a framework for designing explicit incentives (monetary or in-kind) that can encourage honest participation in some types of crowdsourcing applications. In this talk, I will survey a family of such incentive mechanisms that are “peer-driven”, in the sense that truthfulness is measured against the collective information provided by a user’s peers.
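The peer-driven idea, scoring a worker against peers’ reports rather than ground truth, can be sketched with a simple output-agreement variant (an illustration of the principle only; the mechanisms surveyed in the talk use proper scoring rules with stronger truthfulness guarantees):

```python
from collections import defaultdict

def peer_agreement_rewards(reports, reward=1.0):
    """Output-agreement payments: pay each worker in proportion to the
    fraction of peers whose report on the same item matches theirs.

    reports: list of (worker_id, item_id, answer).
    Returns a dict mapping worker_id to total reward.
    """
    by_item = defaultdict(list)
    for worker, item, answer in reports:
        by_item[item].append((worker, answer))

    payout = defaultdict(float)
    for item, entries in by_item.items():
        for worker, answer in entries:
            peers = [(w, a) for w, a in entries if w != worker]
            if not peers:
                continue  # no peer report to compare against
            agree = sum(a == answer for _, a in peers)
            payout[worker] += reward * agree / len(peers)
    return dict(payout)
```

No trusted third party ever verifies the answers: a worker’s payment depends only on agreement with other workers, which is what makes the mechanism peer-driven.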

Bio

Radu Jurca obtained his Ph.D. in Computer Science from Ecole Polytechnique Federale de Lausanne (EPFL) in 2007. His thesis investigated mechanisms for rewarding truthful feedback in online systems, and was awarded the IFAAMAS Victor Lesser Distinguished Dissertation Award (2007) and EPFL’s Best PhD Thesis Award (2008). Radu’s research interests focus on the design of feedback and reputation mechanisms in social networks, crowdsourcing applications, and other online systems where the information shared by individual participants cannot be verified by a trusted third party. Radu is currently working for Google in Zurich.

Craig Boutilier

Title: Social Choice and Preference Models: New Approaches and Challenges for Group Decision Making
Abstract

Social choice has been the subject of intense investigation within computer science, AI, and operations research, in part because of the ease with which preference data from user populations can now be elicited, assessed, or estimated in online settings. In many domains, the preferences of a group of individuals must be aggregated to form a single consensus recommendation, placing us squarely in the realm of social choice.

The application of social choice and voting schemes to domains like web search, product recommendation, and social networks places new emphasis on issues such as: articulating suitable decision criteria; approximation; incremental preference elicitation; learning methods for population preferences; and more nuanced analysis of manipulation.

In this talk, I’ll provide an overview of some of these challenges and outline some of our recent work tackling them, including: learning probabilistic models of population preferences from choice data; robust optimization (winner determination) with incomplete user preferences; incremental preference elicitation for group decision making; and new analyses of manipulation. I’ll also outline challenges and opportunities for exploiting social networks in assessing user preferences.

Bio

Craig Boutilier is a Professor of Computer Science at the University of Toronto. He received his Ph.D. from Toronto in 1992, and joined the faculty of the University of British Columbia in 1991 (where he remains an Adjunct Professor). He returned to Toronto in 1999, and served as Chair of the Department of Computer Science from 2004 to 2010. Boutilier has held visiting positions at Stanford, Brown, Carnegie Mellon, and Paris-Dauphine, and served on the Technical Advisory Board of CombineNet for nine years.

Boutilier has published over 180 refereed articles covering topics ranging from knowledge representation, belief revision, default reasoning, and philosophical logic, to probabilistic reasoning, decision making under uncertainty, multiagent systems, and machine learning. His current research efforts focus on various aspects of decision making under uncertainty: preference elicitation, mechanism design, game theory and multiagent decision processes, economic models, social choice, computational advertising, Markov decision processes and reinforcement learning. Boutilier served as Program Chair for both UAI-2000 and IJCAI-09, and is currently Associate Editor-in-Chief of the Journal of Artificial Intelligence Research (JAIR). He is also a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI).