Apologies for cross-posting!
=======================
NLP & DBpedia Workshop 2013
=======================
Free, open, interoperable and multilingual NLP for DBpedia and DBpedia
for NLP:
http://nlp-dbpedia2013.blogs.aksw.org/
Co-located with the International Semantic Web Conference 2013 (ISWC 2013)
21-22 October 2013, in Sydney, Australia (*Submission deadline July 8th*)
**********************************
Recently, the DBpedia community has experienced an immense increase in
activity, and we believe the time has come to explore the connection
between DBpedia and Natural Language Processing (NLP) at an
unprecedented depth. The goal of this workshop can be summarized by
this (pseudo-)formula:
NLP & DBpedia == DBpedia4NLP && NLP4DBpedia
http://db0.aksw.org/downloads/CodeCogsEqn_bold2.gif
DBpedia has a long-standing tradition of providing useful data as well
as a commitment to reliable Semantic Web technologies and living best
practices. With the rise of Wikidata, DBpedia is gradually being
relieved of the tedious extraction of data from Wikipedia's infoboxes
and can shift its focus to new challenges, such as extracting
information from the unstructured article text and becoming a testing
ground for multilingual NLP methods.
Contribution
=========
Within the timeframe of this workshop, we hope to mobilize a community
of stakeholders from the Semantic Web area. We envision that the
workshop will produce the following items:
* an open call to the DBpedia data consumer community will generate a
wish list of data to be generated from Wikipedia by NLP methods. This
wish list will be broken down into tasks and benchmarks, and a gold
standard will be created.
* the benchmarks and test data created will be collected and published
under an open license for future evaluation (inspired by OAEI and
UCI-ML). An overview of the benchmarks can be found here:
http://nlp-dbpedia2013.blogs.aksw.org/benchmarks
Please sign up to our mailing list if you are interested in discussing
guidelines and NLP benchmarking:
http://lists.informatik.uni-leipzig.de/mailman/listinfo/nlp-dbpedia-public
Important dates
===========
8 July 2013, Paper Submission Deadline
9 August 2013, Notification of accepted papers sent to authors
Motivation
=======
The central role of Wikipedia (and therefore DBpedia) in the creation
of a Translingual Web has recently been recognized by the Strategic
Research Agenda (cf. section 3.4, page 23), and most of the
contributions to the recently held Dagstuhl seminar on the Multilingual
Semantic Web also stress the role of Wikipedia for multilingualism. As
more and more language-specific chapters of DBpedia appear (currently
14 language editions), DBpedia is becoming a driving factor for a
Linguistic Linked Open Data cloud as well as localized LOD clouds with
specialized domains (e.g. the Dutch windmill domain ontology created
from http://nl.dbpedia.org ).
The data contained in Wikipedia and DBpedia have ideal properties for
making them a controlled testbed for NLP. Wikipedia and DBpedia are
multilingual and multi-domain, the communities maintaining these
resources are very open, and it is easy to join and contribute. The
open license allows data consumers to benefit from the content, and
many parts are collaboratively editable. In particular, the data in
DBpedia is widely used and disseminated throughout the Semantic Web.
NLP4DBpedia
==========
DBpedia has been around for quite a while, infusing the Web of Data
with multi-domain data of decent quality. These triples are, however,
mostly extracted from Wikipedia infoboxes. To unlock the full potential
of Wikipedia articles for DBpedia, the information contained in the
remaining parts of the articles needs to be analysed and triplified.
This is where NLP techniques can help.
DBpedia4NLP
==========
On the other hand, NLP, and information extraction techniques in
particular, involve various resources when processing texts from
various domains. These resources may be used as an element of a
solution (e.g. a gazetteer serving as an important part of a rule
created by an expert, or a disambiguation resource) or in delivering a
solution (e.g. within machine learning approaches). DBpedia easily fits
both of these roles.
We invite papers from both of these areas, including:
1. Knowledge extraction from text and HTML documents (especially
unstructured and semi-structured documents) on the Web, using
information in the Linked Open Data (LOD) cloud, and especially in DBpedia.
2. Representation of NLP tool output and NLP resources as RDF/OWL, and
linking the extracted output to the LOD cloud.
3. Novel applications using the extracted knowledge, the Web of Data,
or DBpedia-based NLP methods.
The specific topics are listed below.
Topics
=====
- Improving DBpedia with NLP methods
- Finding errors in DBpedia with NLP methods
- Annotation methods for Wikipedia articles
- Cross-lingual data and text mining on Wikipedia
- Pattern and semantic analysis of natural language, reading the Web,
learning by reading
- Large-scale information extraction
- Entity resolution and automatic discovery of Named Entities
- Multilingual entity recognition of real-world entities
- Frequent pattern analysis of entities
- Relationship extraction, slot filling
- Entity linking, Named Entity disambiguation, cross-document
co-reference resolution
- Disambiguation through knowledge bases
- Ontology representation of natural language text
- Analysis of ontology models for natural language text
- Learning and refinement of ontologies
- Natural language taxonomies modeled as Semantic Web ontologies
- Use cases for potential data extracted from Wikipedia articles
- Use cases of entity recognition for Linked Data applications
- Impact of entity linking on information retrieval, semantic search
Furthermore, an informal list of NLP tasks can be found on this
Wikipedia page:
http://en.wikipedia.org/wiki/Natural_language_processing#Major_tasks_in_NLP
These are relevant for the workshop as long as they fit into the
DBpedia4NLP and NLP4DBpedia frame (i.e. the data used revolves around
Wikipedia and DBpedia).
Submission formats
==============
Paper submission
-----------------------
All papers must represent original and unpublished work that is not
currently under review. Papers will be evaluated according to their
significance, originality, technical content, style, clarity, and
relevance to the workshop. At least one author of each accepted paper is
expected to attend the workshop.
* Full research papers (up to 12 pages)
* Position papers (up to 6 pages)
* Use case descriptions (up to 6 pages)
* Data/benchmark papers (2-6 pages, depending on size and complexity)
Note: data and benchmark papers are meant to provide a citable
reference for your data and benchmarks. We kindly require that you
upload any data you use to our benchmark repository in parallel to the
submission. We recommend using an open license (e.g. CC-BY), but the
minimum requirement is free use. Please write to the mailing list if
you have any problems.
Full instructions are available at:
http://nlp-dbpedia2013.blogs.aksw.org/submission/
Submission of data and use cases
--------------------------------------------
This workshop also targets non-academic users and developers. If you
have any (open) data (e.g. texts or annotations) that can be used for
benchmarking NLP tools, but do not want or need to write an academic
paper about it, please feel free to just add it to this table:
http://tinyurl.com/nlp-benchmarks or upload it to our repository:
http://github.com/dbpedia/nlp-dbpedia
Full instructions are available at:
http://nlp-dbpedia2013.blogs.aksw.org/benchmarks/
Also, if you have any ideas, use cases, or data requests, please feel
free to post them on our mailing list: nlp-dbpedia-public [at]
lists.informatik.uni-leipzig.de or send them directly to the chairs:
nlp-dbpedia2013 [at] easychair.org
Program committee
==============
* Guadalupe Aguado, Universidad Politécnica de Madrid, Spain
* Chris Bizer, Universität Mannheim, Germany
* Volha Bryl, Universität Mannheim, Germany
* Paul Buitelaar, DERI, National University of Ireland, Galway
* Charalampos Bratsas, OKFN Greece, Αριστοτέλειο Πανεπιστήμιο
Θεσσαλονίκης (Aristotle University of Thessaloniki), Greece
* Philipp Cimiano, CITEC, Universität Bielefeld, Germany
* Samhaa R. El-Beltagy, جامعة النيل (Nile University), Egypt
* Daniel Gerber, AKSW, Universität Leipzig, Germany
* Jorge Gracia, Universidad Politécnica de Madrid, Spain
* Max Jakob, Neofonie GmbH, Germany
* Anja Jentzsch, Hasso-Plattner-Institut, Potsdam, Germany
* Ali Khalili, AKSW, Universität Leipzig, Germany
* Daniel Kinzler, Wikidata, Germany
* David Lewis, Trinity College Dublin, Ireland
* John McCrae, Universität Bielefeld, Germany
* Uroš Milošević, Institut Mihajlo Pupin, Serbia
* Roberto Navigli, Sapienza, Università di Roma, Italy
* Axel Ngonga, AKSW, Universität Leipzig, Germany
* Asunción Gómez Pérez, Universidad Politécnica de Madrid, Spain
* Lydia Pintscher, Wikidata, Germany
* Elena Montiel Ponsoda, Universidad Politécnica de Madrid, Spain
* Giuseppe Rizzo, Eurecom, France
* Harald Sack, Hasso-Plattner-Institut, Potsdam, Germany
* Felix Sasaki, Deutsches Forschungszentrum für künstliche Intelligenz,
Germany
* Mladen Stanojević, Institut Mihajlo Pupin, Serbia
* Hans Uszkoreit, Deutsches Forschungszentrum für künstliche
Intelligenz, Germany
* Rupert Westenthaler, Salzburg Research, Austria
* Feiyu Xu, Deutsches Forschungszentrum für künstliche Intelligenz, Germany
Contact
=====
We would of course prefer that you post any questions and comments
regarding NLP and DBpedia to our public mailing list at:
nlp-dbpedia-public [at] lists.informatik.uni-leipzig.de
If you want to contact the chairs of the workshop directly, please write
to:
nlp-dbpedia2013 [at] easychair.org
Kind regards,
Sebastian Hellmann, Agata Filipowska, Caroline Barrière,
Pablo N. Mendes, Dimitris Kontokostas
--
Dipl. Inf. Sebastian Hellmann
Department of Computer Science, University of Leipzig
Events: NLP & DBpedia 2013 (http://nlp-dbpedia2013.blogs.aksw.org,
Deadline: *July 8th*)
Come to Germany for your PhD: http://bis.informatik.uni-leipzig.de/csf
Projects: http://nlp2rdf.org , http://linguistics.okfn.org ,
http://dbpedia.org/Wiktionary , http://dbpedia.org
Homepage: http://bis.informatik.uni-leipzig.de/SebastianHellmann
Research Group: http://aksw.org
Does anyone know of research on images on Wikipedia/Wikimedia Commons?
Thanks in advance!
Best,
heather.
Heather Ford
Oxford Internet Institute Doctoral Programme
www.ethnographymatters.net
@hfordsa on Twitter
http://hblog.org
BOOK ANNOUNCEMENT
The People’s Web Meets NLP: Collaboratively Constructed Language Resources
Iryna Gurevych and Jungi Kim (Eds.)
In Series: Theory and Applications of Natural Language Processing, E. Hovy, M. Johnson and G. Hirst (eds.)
2013, XXIV, 378 p. 86 illus., 16 illus. in color.
http://www.springer.com/978-3-642-35084-9
Collaboratively Constructed Language Resources (CCLRs) such as Wikipedia, Wiktionary, Linked Open Data, and various resources developed using crowdsourcing techniques such as Games with a Purpose and Mechanical Turk have contributed substantially to research in natural language processing (NLP). Various NLP tasks utilize such resources to substitute for or supplement conventional lexical semantic resources and linguistically annotated corpora. These resources also provide an extensive body of texts from which valuable knowledge is mined. There is an increasing number of community efforts to link and maintain multiple linguistic resources.
This book offers comprehensive coverage of CCLR-related topics, including their construction, utilization in NLP tasks, and interlinkage and management. Bachelor's, Master's, and Ph.D. programs in natural language processing, computational linguistics, and knowledge discovery can use this book either as a main textbook or as supplementary reading. The book also provides a valuable reference guide for researchers and professionals on the above topics.
Keywords: Collaboratively Constructed Resource; Collective Intelligence / Human Computation; Computational Linguistics; Language Resource; Natural Language Processing
Calling all Social Media and Online Communities Researchers!
Please consider submitting your paper to one or more of the CFPs below.
Deadlines are fast approaching. Please contact Anatoliy Gruzd
<gruzd(a)dal.ca> if you have any questions about these calls.
*******************************************
(1) American Behavioral Scientist – Special Issue on Measuring Influence
in Social Media
Editors: Anatoliy Gruzd (Dalhousie University), Barry Wellman
(University of Toronto)
Papers Due: April 30, 2013
More info: http://socialmedialab.ca/?page_id=7645
(2) International Conference on Social Media and Society (#SMSociety13)
Location: Halifax, NS, Canada
When: September 14-15, 2013
Paper Abstracts/Panel Proposals Due: May 1, 2013
Poster Abstracts Due: May 30, 2013
More info: http://SocialMediaAndSociety.com/
(3) Hawaii International Conference on System Sciences (HICSS)
Minitracks: Social Networking & Community | Social Media & Learning
Location: Big Island, Hawaii, USA
When: January 6-9, 2014
Papers Due: June 15, 2013
More info: http://haythorn.wordpress.com/hicss-minitracks-cfp/
*******************************************
This is currently published in my user space on Wikinews at
http://en.wikinews.org/wiki/User:LauraHale/Wikinews_Review_Analysis and
was done in order to determine some of the issues that students are
facing with the review process at Wikinews, in light of student
complaints to their instructor. I am hoping to do a follow-up at some
point in the future to provide a more detailed analysis of review
processes on the project, but thought in the interim it might be of
interest to both researchers and people in education.
Wikinews Review Analysis
One of the greatest challenges in attracting and retaining new
contributors to English Wikinews is the project's review process. This
process requires the journalist to have a symbiotic relationship with
the reviewer, as the pair work together towards getting an article
published. The review process involves the submitted news article being
checked for copyright, newsworthiness, verifiability, neutral point of
view and compliance with the style guide. Ideally, this process is
compressed into a window of opportunity no longer than 24 to 48 hours.
It can be extremely challenging for those not used to the style
requirements, neutrality requirements, verification requirements and,
above all, doing this sort of writing quickly.
For those from Wikipedia, the review process on English Wikinews is
close in scope to English Wikipedia's Good Article criteria
<http://en.wikipedia.org/wiki/WP:GAN>, only on a more compressed
timetable. For those who have been to secondary school or university,
it mirrors a teacher giving you an assignment, with your grade based on
meeting the criteria stated in a rubric and failure requiring
re-submission. If you do not work to the rubric, you simply do not
pass, and on English Wikinews that means not getting published.
When you review regularly, you begin to notice certain patterns that
frequently occur in the review process. Most often, these appear to be
a failure to understand what is news, or a failure to read and try to
write to the style guide specifications. Observational analysis only
gets you so far, though, when trying to pin down why articles fail
review or why people are discouraged from submitting, and how to
develop solutions.
Between January 1, 2013 and April 12, 2013, 202 failed reviews were
examined to determine which criteria were the biggest stumbling blocks.
The articles reviewed included published articles, non-published
articles moved to user space, and deleted articles that were not
published. For each review, the primary author was classified as an
accredited reporter, a regular contributor with 10 or more published
articles, a new reporter with 9 or fewer articles, or a University of
Wollongong student. Of the 202 reviews examined, 104 were for articles
by new contributors, 47 by University of Wollongong students, 40 by
regular contributors and 11 by accredited reporters. Each review was
also checked for whether the article finally reached a published state.
There were 110 different articles reviewed, of which 33 were published
and 77 were not.
What problems caused articles to fail review? Bearing in mind that
articles can be marked not published for multiple reasons, 23 articles
were not passed for copyright reasons, 103 for newsworthiness issues,
70 for verifiability issues, 43 for neutrality issues and 96 for style
issues.
[Image: English Wikinews Review Issues.png
<http://en.wikinews.org/wiki/File:English_Wikinews_Review_Issues.png>]
The different cohorts appear to have different sets of issues.
Accredited reporters did not have any problems with copyright or
plagiarism. 57% of University of Wollongong reviews and 51% of new
reporter reviews failed because of newsworthiness concerns. Accredited
reporters had problems with verifiability in 45% of reviews, versus 26%
for University of Wollongong reporters. 25% of new contributors and 29%
of University of Wollongong reporters had problems with neutrality. 60%
of University of Wollongong reviews showed problems complying with the
style guide, compared to 33% for regular reporters.
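For those who want to sanity-check these percentages, here is a minimal
Python sketch. It takes the per-cohort review totals reported above and
the rounded percentages, and recovers the approximate raw counts (the
cohort keys are just shorthand labels, not anything from the original
data set, and the recovered counts are approximate because only rounded
percentages are reported):

    # Review totals per cohort, as reported above.
    totals = {"new": 104, "uow": 47, "regular": 40, "accredited": 11}

    # Reported (rounded) failure percentages for selected criteria.
    reported = {
        ("uow", "newsworthiness"): 57,
        ("new", "newsworthiness"): 51,
        ("accredited", "verifiability"): 45,
        ("uow", "verifiability"): 26,
        ("new", "neutrality"): 25,
        ("uow", "neutrality"): 29,
        ("uow", "style"): 60,
        ("regular", "style"): 33,
    }

    for (cohort, criterion), pct in reported.items():
        approx = round(totals[cohort] * pct / 100)
        print(f"{cohort:10} {criterion:15} ~{approx} of {totals[cohort]} reviews")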
Some of these patterns are explainable. There are times when it is
difficult to get reviewers, and an article may languish for 24 to 48
hours. By that time, the article is no longer news or requires more and
newer information in order to stay fresh. This does not explain all of
it, though. Observational analysis suggests that at least two thirds of
these articles fail newsworthiness because of a lack of clear focus on
the news topic, writing an article when the topic itself is about to go
stale, taking too long to address problems from previous reviews, or
going through the review process repeatedly to the point where, by the
time everything else gets fixed, the story is no longer news.
In many cases, reviewers look at some things before others.
Newsworthiness and style guide compliance are two of the most easily
visible problems. They do not require checking external links or doing
an intensive examination of the text to look for more systemic,
underlying problems. The article does not state when an event happened,
or makes clear that it happened four days ago? The article is written
using lots of non-relative dating? There are lots of external links
inside the article? The article is clearly not written in inverted
pyramid style? The title lacks a verb? The first paragraph does not
answer who, what, when, how or why this is news? There is no reason to
look beyond newsworthiness and style, as these obvious problems need to
be fixed before going forward. Copyright, NPOV and verification can
come later.
Many accredited reporters are accredited because they do original
reporting. This often requires sending material to reviewers via e-mail
or posting extensive notes, pictures, audio and video on the talk page.
Verification is also often one of the last steps in the review process.
Thus, it makes sense that reporters with a track record of success are
likely to get caught up here.
Multiple problems identified in a review decrease the likelihood of
publication. Only 22 multiple-problem reviews were on articles that
were subsequently published, compared with 74 multiple-problem reviews
on articles that were never published. The only multiple-problem review
for an accredited reporter was on an article that was subsequently
published. Regular contributors had 11 reviews with multiple problems,
of which 7 were on articles that were subsequently published. 14
multiple-problem reviews for articles by new contributors were on
articles that went on to be published, against 45 on articles that were
not. All 24 of the University of Wollongong student articles with
reviews indicating multiple problems failed to reach a published state.
The percentages form an almost predictable slope based on experience:
the chances of an article with a multiple-issue review becoming
published are 100% for accredited reporters, 63% for regulars, 23% for
new contributors, and 0% for University of Wollongong students.
On the other hand, there is no significant difference in publication
rates between articles whose reviews identified only one issue and
those with multiple issues. 81 reviews identifying only one problem
were on articles that were not subsequently published, versus 24 on
articles that were. This puts the publishing rate at 22.8% for
one-problem reviews versus 22.9% for reviews with two or more problems.
There are differences in cohort performance when only one problem is
identified: 70% of accredited reporter reviews are on articles
subsequently published, 36% for regulars, 11% for new contributors and
4% for University of Wollongong reporters. With the exception of the
University of Wollongong cohort, reviews that identify only one problem
are less likely to lead to an article eventually reaching a published
state.
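The near-identical overall rates come straight from the counts already
quoted; as a quick check, a small Python sketch using only the numbers
reported above:

    # Reviews identifying multiple problems: 22 on articles later
    # published, 74 on articles never published.
    multi_pub, multi_unpub = 22, 74
    # Reviews identifying a single problem: 24 published, 81 not.
    single_pub, single_unpub = 24, 81

    print(f"2+ problems: {multi_pub / (multi_pub + multi_unpub):.1%}")
    # -> 22.9%  (22/96)
    print(f"1 problem:   {single_pub / (single_pub + single_unpub):.1%}")
    # -> 22.9%  (24/105 = 22.857..., quoted above as 22.8%)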
With newsworthiness a major reason for failed reviews across all
cohorts, there are distinct differences in the likelihood of this
problem being overcome, based on cohort. Overall, 15% of all articles
with newsworthiness cited as a reason for not being published were
subsequently published. 40% of accredited reporter reviews indicating
this problem were on articles that eventually became published. This
rate is comparable to regular reporter reviews, with 38% of that cohort
becoming published after a failed review citing newsworthiness. New
reporters have an 11% rate of later publication, and only 3% of
University of Wollongong reviews indicating newsworthiness problems
were on articles that reached a published state.
54% of the time when newsworthiness is a problem, a reviewer indicates
some other problem with the article as well. For articles with multiple
problems including newsworthiness, 93% of the time there is also a
style problem, 46% of the time a verifiability problem, 44% of the time
a point of view problem and 4% of the time a copyright problem.
[Image: English Wikinews Average Submissions.png
<http://en.wikinews.org/wiki/File:English_Wikinews_Average_Submissions.png>]
For articles that are not passed on their first attempt, each cohort
responds differently. Accredited reporters have one failed review
before either submitting successfully on their second attempt or
abandoning their work. This suggests that accredited reporters are able
to respond successfully to feedback, or to recognize when an article
has systemic problems that mean it will never be published. New and
University of Wollongong reporters are much more likely than their
regular reporter counterparts to resubmit multiple times, both
successfully and unsuccessfully. 4 new reporters out of 46 submitted
their work 4 or more times unsuccessfully, compared to 3 out of 10 for
regular reporters and 4 out of 18 for University of Wollongong
students: 8% versus 30% versus 22%.
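Those ratios work out as follows (again a minimal Python sketch from
the counts above; note that 4/46 is 8.7%, which the text above rounds
down to 8%):

    # Reporters who unsuccessfully submitted the same work 4+ times,
    # per cohort: (count, cohort size with first-attempt failures).
    four_plus = {"new": (4, 46), "regular": (3, 10), "uow": (4, 18)}

    for cohort, (count, total) in four_plus.items():
        print(f"{cohort:8}: {count}/{total} = {count / total:.1%}")
    # -> new 8.7%, regular 30.0%, uow 22.2%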
A high number of resubmissions for re-review is unlikely to lead to
publication of the article. Of the articles finally published, only one
had failed review more than three times: United States deportation
policies challenged in Santa Clara County, California
<http://en.wikinews.org/wiki/United_States_deportation_policies_challenged_i…>,
which had 6 failed reviews before being published. In this particular
case, the article was failed 3 times for copyright reasons, once for
newsworthiness, 3 times for verifiability, 4 times for neutrality and 2
times for style.
Accredited reporters were the most successful as a percentage of
articles with failed initial reviews that were subsequently published,
with 8 articles published after only 1 failed review. Regular reporters
had 15 published articles out of 27, a 55% success rate; new reporters
had 9 published out of 57, a 15% success rate; and University of
Wollongong students had 1 out of 21, a success rate of 4%. As reporters
become more acclimatized, they are more likely to translate failed
reviews into successfully published articles.
This confirms the observational impression that English Wikinews has a
high barrier to entry in terms of adapting to the local review process.
It also confirms that the feedback system of the review process works
for established contributors who have figured out the basics of
preparing an article for a published state. New reporters and
University of Wollongong students have similar problems, but new
reporters are either more willing to work through failure to accomplish
a goal, or more likely to find community members willing to assist them
in getting an article over the line. The percentage of new contributors
and University of Wollongong students receiving failed reviews for
style suggests they are unfamiliar with the style guide. How this can
be addressed is difficult to say, because it appears they are simply
not reading the style guide. One thought for increasing the likelihood
of getting an article published is to provide a form of motivation that
will encourage a reporter to keep at it until their work (though not
necessarily a specific article) is published. This may need to be
coupled with an improved feedback system. However, given a cohort of
editors who are unmotivated to read the existing materials designed to
increase their chances of getting published, or to interact with
contributors to seek advice, it is questionable whether an improved
feedback system alone can be the primary method of increasing those
chances.
--
twitter: purplepopple
blog: ozziesport.com
Hello,
As stated in the latest issue of the Wikimedia Research Newsletter, I have recently initiated a new research newsletter on the French Wikipedia, les Nouvelles du Wikilab : http://fr.wikipedia.org/wiki/Wikipédia:Nouvelles_du_Wikilab
If you are interested in a bit of publicity in the French-speaking wiki communities, do not hesitate to send me a copy of your work. So far, I seldom have access to anything but open-access publications.
Pierre-Carl Langlais
Dear Colleagues,
This might be of interest to you.
bests,
.Taha
CALL FOR ABSTRACTS
Satellite Meeting of ECCS '13
Computational Social Science: From Social Contagion to Collective Behaviour
to be held in Barcelona, 19th September 2013
Website: http://microsites.oii.ox.ac.uk/collectivecontagion/
Important Dates:
Abstract submission deadline: June 30, 2013
Conference date: September 19, 2013
Event Overview
Intense scientific debate surrounds the definition of the foundational
concepts and the appropriate methodological approaches for
understanding social dynamics. These efforts aim to understand human
behaviour in all its complexity, driven by intentional (and not
necessarily rational) decisions and influenced by a multitude of
factors. The functioning of communication-based mechanisms requires
individuals to interact in order to acquire information to cope with
uncertainty, and thus depends deeply on the accuracy and completeness
of the information (if any). In fact, people's perceptions, knowledge,
beliefs and opinions about the world and its evolution are (in)formed
and modulated through the information they can access. Moreover, their
response is not linear, as individuals can react by accepting,
rejecting, or elaborating on (and changing) the received information.
Technology-mediated social collectives are taking an important role in
the design of social structures. Yet our understanding of the complex
mechanisms governing networks and collective behaviour is still quite
shallow. Fundamental concepts like authority, leader-follower dynamics,
conflict and collaboration in online networks are still not well
defined or investigated, but they are crucial for illuminating the
advantages and pitfalls of this form of collective decision-making
(which can cancel out individual mistakes, but can also make them
spiral out of control).
The aim of this satellite is to address ICT-mediated social phenomena
emerging at multiple scales, ranging from the interactions of
individuals to the emergence of self-organized global movements. We
would like to gather researchers from different disciplines into a
forum to discuss ideas, research questions, recent results, and future
challenges in this emerging area of research and public interest.
TOPICS OF INTEREST
Interdependent social contagion processes
Peer production and mass collaboration
Temporally evolving networks and stream analytics
Cognitive aspects of belief formation and revision
Online communication and information diffusion
Viral propagation in online social networks
Crowd-sourcing: herding behaviour vs. wisdom of crowds
E-democracy and online government-citizen interaction
Online socio-political mobilizations
Public attention and popularity
Questions about the conference scope should be directed to the program
co-chairs at eccs2013collectivecontagion(a)bifi.es
Submission Instructions
Abstracts should be submitted as a one-page A4 PDF via EasyChair
<https://www.easychair.org/conferences/?conf=eccs2013collectiveco>.
The deadline for abstract submission is 30 June 2013.
Contributions will be evaluated by the programme committee through a
peer review process that will account for scientific quality as well as
the relevance of the contribution to the aims of the satellite.
The authors of accepted abstracts will be notified via e-mail by 15 July
2013.
Organising Committee
Javier Borge-Holthoefer (BIFI, University of Zaragoza, Spain)
Guido Caldarelli (IMT Lucca, Italy)
Rosaria Conte (ISTC CNR, Italy)
Sandra González-Bailón (Oxford Internet Institute, University of Oxford, UK)
Márton Karsai (Northeastern University, USA; Aalto University, Finland)
Helen Margetts (Oxford Internet Institute, University of Oxford, UK)
Walter Quattrociocchi (Northeastern University, USA)
Luca Rossi (Northeastern University, USA)
Alessandro Vespignani (Northeastern University, USA; ISI Foundation, Italy)
Taha Yasseri (Oxford Internet Institute, University of Oxford, UK)
Programme Committee
Javier Borge-Holthoefer (BIFI, University of Zaragoza, Spain)
Sara Brunetti (University of Siena, Italy)
Guido Caldarelli (IMT Lucca, Italy)
Rosaria Conte (ISTC, CNR, Italy)
Gennaro Cordasco (University of Naples, Italy)
Santo Fortunato (Aalto University, Finland)
Bruno Gonçalves (Aix-Marseille Université, France)
Sandra González-Bailón (Oxford Internet Institute, University of Oxford, UK)
Márton Karsai (Northeastern University, USA; Aalto University, Finland)
Helen Margetts (Oxford Internet Institute, University of Oxford, UK)
Mario Paolucci (ISTC CNR, Italy)
Walter Quattrociocchi (Northeastern University, USA)
Luca Rossi (Northeastern University, USA)
Antonio Scala (ISC CNR, Italy)
Flaminio Squazzoni (University of Brescia, Italy)
Alessandro Vespignani (Northeastern University, USA)
Taha Yasseri (Oxford Internet Institute, University of Oxford, UK)
--
.t