***APOLOGIES FOR MULTIPLE POSTINGS***
CALL FOR ABSTRACTS
International Conference on Computational Social Science
Finlandia Hall, Helsinki, Finland, 8-11 June 2015
WEBSITE
http://www.iccss2015.eu/
IMPORTANT DATES
Deadline for abstract submission: 15 November 2014
Opening of registration: 15 January 2015
Conference dates: 8-11 June 2015
EVENT OVERVIEW
The conference will bring together scientists from different areas to meet
and discuss problems concerning social systems and their dynamics, as well
as research questions motivated by large datasets, whether extracted from
real applications (e.g. social media, communication systems) or created via
controlled experiments.
PROGRAM CHAIRS
Karen Cook (Stanford)
Santo Fortunato (Aalto University)
Michael Macy (Cornell)
KEYNOTE SPEAKERS
Opening talk by Michael Macy (Cornell)
Lada Adamic (Facebook)
Sinan Aral (MIT)
Albert-Laszlo Barabasi (Northeastern University and CEU)
Nicholas Christakis (Yale)
Robin Dunbar (Oxford)
Andreas Flache (University of Groningen)
Dirk Helbing (ETH Zurich)
Matthew Jackson (Stanford)
Jure Leskovec (Stanford)
Alex Pentland (MIT)
Alessandro Vespignani (Northeastern University)
Duncan Watts (Microsoft)
ORGANIZING COMMITTEE
Santo Fortunato (Aalto University),
Aristides Gionis (Aalto),
Heikki Hämmäinen (Aalto),
Kimmo Kaski (Aalto),
Walter Quattrociocchi (IMT Lucca),
Jari Saramäki (Aalto),
Juuso Välimäki (Aalto)
TOPICS OF INTEREST INCLUDE (but are not limited to)
Social networks
Social contagion
Communication dynamics
Information diffusion and other spreading phenomena
Social influence
Crowd-sourcing
Popularity dynamics
Smart cities
Attention economics
Social design and user behavior
Group formation, evolution and group behavior analysis
Human mobility
Mobility and context-awareness
Economics of trust
SUBMISSION INSTRUCTIONS
Contributions to the conference have to be submitted via EasyChair
(www.easychair.org); the name of the event there is IC2S2.
Each submission consists of an extended abstract of at most 2 pages (A4).
Please describe your work in sufficient detail and include at least one
figure; otherwise it will be difficult for the PC to assess its relevance.
Short, paper-like abstracts will not be considered. Abstracts need not
refer to unpublished work: it is fine if the work is published or under
submission elsewhere. We want to give everyone the opportunity to present
the work most relevant to the topics of the conference. There will be no
proceedings, but we are exploring the possibility of a special journal
issue in which selected contributions would be published; authors of those
contributions would be invited to submit full papers after the conference.
Each extended abstract will be reviewed by two PC members. Abstracts can
be submitted from September 15 until November 15, 2014. We will do our
best to schedule mostly oral presentations of the selected contributions,
both plenary and in parallel sessions; however, there will be a poster
session as well. During the submission process, you will be asked to
specify whether your contribution is intended for a) plenary session
presentation, b) parallel session presentation, or c) poster session
presentation. The final allocation of each contribution will be decided by
the Program Committee.
CONTACT
For any questions, please contact Prof. Santo Fortunato
(santo.fortunato(a)aalto.fi)
I am currently on vacation and will not be able to answer your mail before
November 10. But I will get back then as soon as possible.
Best regards, Aileen Oeberst
Forwarding an article from Wikimedia-l about readership and authorship of
English Wikipedia's information about Ebola, and the quality and authorship
of English Wikipedia's medical content in general.
Pine
*This is an Encyclopedia* <https://www.wikipedia.org/>
*One gateway to the wide garden of knowledge, where lies The deep rock of
our past, in which we must delve The well of our future,The clear water we
must leave untainted for those who come after us,The fertile earth, in
which truth may grow in bright places, tended by many hands,And the broad
fall of sunshine, warming our first steps toward knowing how much we do not
know.*
*—Catherine Munro*
---------- Forwarded message ----------
From: MZMcBride <z(a)mzmcbride.com>
Date: Sun, Oct 26, 2014 at 10:27 PM
Subject: [Wikimedia-l] "Wikipedia Is Emerging as Trusted Internet Source
for Information on Ebola"
To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>
http://nyti.ms/1rHy4fK
Wikipedia Is Emerging as Trusted Internet Source for Information on Ebola
Noam Cohen
October 26, 2014
The New York Times
Neat! (And a bit terrifying.)
MZMcBride
_______________________________________________
Wikimedia-l mailing list, guidelines at:
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l(a)lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
<mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
Forwarding comments from Wikimedia-l that may be of interest to a number of
subscribers on other lists.
Pine
---------- Forwarded message ----------
From: "Erik Moeller" <erik(a)wikimedia.org>
Date: Oct 25, 2014 5:59 PM
Subject: Re: [Wikimedia-l] Chapters and GLAM tooling
To: "Wikimedia Mailing List" <wikimedia-l(a)lists.wikimedia.org>
Cc:
On Sat, Oct 25, 2014 at 7:16 AM, MZMcBride <z(a)mzmcbride.com> wrote:
> Labs is a playground and Galleries, Libraries, Archives, and Museums are
> serious enough to warrant a proper investment of resources, in my view.
> Magnus and many others develop magnificent tools, but my sense is that
> they're largely proofs of concept, not final implementations.
Far from being treated as mere proofs of concept, Magnus' GLAM tools
[1] have been used to measure and report success in the context of
project grant and annual plan proposals and reports, ongoing project
performance measurements, blog posts and press releases, etc. Daniel
Mietchen has, to my knowledge, been the main person doing any
systematic auditing or verification of the reports generated by these
tools, and results can be found in his tool testing reports, the last
one of which is unfortunately more than a year old. [2]
Integration with MediaWiki should IMO not be viewed as a runway that
all useful developments must be pushed towards. Rather, we should seek
to establish clearer criteria by which to decide that functionality
benefits from this level of integration, to such an extent that it
justifies the cost. Functionality that is not integrated in this
manner should, then, not be dismissed as "proofs of concept" but
rather judged on its own merits.
GWToolset [3] is a good example. It was built as a MediaWiki extension
to manage GLAM batch uploads, but we should not regard this decision
as sacrosanct, or the only correct way to develop this kind of
functionality. The functionality it provides is of highly specialized
interest, and indeed, the number of potential users to date is 47
according to [4], most of whom have not performed significant uploads
yet. Its user interface is highly specialized and special permissions
+ detailed instructions are required to use it. At the same time, it
has been used to upload 322,911 files overall, an amazing number even
without going into the quality and value of the individual
collections.
So, why does it need to be a MediaWiki extension at all? When
development began in 2012, OAuth support in MediaWiki did not exist,
so it was impossible for an external tool (then running on toolserver)
to manage an upload on the user's behalf without asking for the user's
password, which would have been in violation of policy. But today, we
have other options. It's possible that storage requirements or other
specific desired integration points would make it impossible to create
this as a Tool Labs tool -- but if we created the same tool today, we
should carefully consider that.
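The OAuth point above is the crux: with OAuth, an external tool can sign API
requests on the user's behalf without ever seeing the user's password. As a
stdlib-only sketch of the OAuth 1.0a HMAC-SHA1 signing step such a tool
relies on (a real tool would use a maintained OAuth library; the URL,
parameters, and secrets below are invented for illustration):

```python
# Sketch of OAuth 1.0a request signing (HMAC-SHA1), the mechanism that lets
# an external tool act for a user without holding the user's password.
import base64
import hashlib
import hmac
import urllib.parse


def percent_encode(s):
    """RFC 3986 percent-encoding as OAuth 1.0a requires (only unreserved
    characters A-Z a-z 0-9 - . _ ~ are left unencoded)."""
    return urllib.parse.quote(str(s), safe="")


def sign_request(method, url, params, consumer_secret, token_secret):
    """Build the OAuth signature base string and return its HMAC-SHA1
    signature, base64-encoded."""
    # Parameters are percent-encoded, sorted, and joined into one string.
    encoded = sorted((percent_encode(k), percent_encode(v))
                     for k, v in params.items())
    param_str = "&".join(f"{k}={v}" for k, v in encoded)
    # Base string: METHOD & encoded-URL & encoded-parameter-string.
    base = "&".join([method.upper(), percent_encode(url),
                     percent_encode(param_str)])
    # Signing key: consumer secret and token secret, joined by "&".
    key = f"{percent_encode(consumer_secret)}&{percent_encode(token_secret)}"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

The same input always yields the same signature, which the server verifies
with its copy of the secrets; the user's password never enters the exchange.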
Indeed, highly specialized tools for the cultural and education sector
_are_ being developed and hosted inside Tool Labs or externally.
Looking at the current OAuth consumer requests [5], there are
submissions for a metadata editor developed by librarians at the
University of Miami Libraries in Coral Gables, Florida, and an
assignment creation wizard developed by the Wiki Education Foundation.
There's nothing "improper" about that, as Marc-André pointed out.
As noted before, for tools like the ones used for GLAM reporting to
get better, WMF has its role to play in providing more datasets and
improved infrastructure. But there's nothing inherent in the
development of those tools that forces them to live in production
land, or that requires large development teams to move them forward.
Auditing of numbers, improved scheduling/queuing of database requests,
optimization of API calls and DB queries; all of this can be done by
individual contributors, making this suitable work for even chapters
with limited experience managing technical projects to take on.
On the analytics side, we're well aware that many users have asked for
better access to the pageview data, either through MariaDB, or through
a dedicated API. We have now said for some time that our focus is on
modernizing the infrastructure for log analysis and collection,
because the numbers collected by the old webstatscollector code were
incomplete, and the infrastructure subject to frequent packet loss
issues. In addition, our ability to meet additional requirements on
the basis of simple pageview aggregation code was inherently
constrained.
To this end, we have put into production use infrastructure to collect
and analyze site traffic using Kafka/Hadoop/Hive. At our scale, this
has been a tremendously complex infrastructure project which has
included custom development such as varnishkafka [6]. While it's taken
longer than we've wanted, this new infrastructure is being used to
generate a public page count dataset as of this month, including
article-level mobile traffic for the first time [7]. Using
Hadoop/Hive, we'll be able to compile many more specialized reports,
and this is only just beginning.
Giving community developers better access to this data needs to be
prioritized relative to other ongoing analytics work, including but
not limited to:
- Continued development and maintenance of the above infrastructure
foundations;
- Development of "Vital Signs": public reports on editor activity,
content contribution, sign-ups and other metrics. This tool gives us
more timely access to key measures than Wikistats [9] (or the
reportcard [10], which to date still consumes Wikistats data). Rather
than having to wait 4-6 weeks to know what's happening with regard to
editor numbers, we can see continuous updates on a day-to-day basis.
- Development of Wikimetrics, which analyzes the editing activity of a
group of editors, and which is essential for measuring all movement
work that targets increased activity by a targeted group (e.g.
editathon), and is a key tool used for grants evaluation (was a funded
program worth the $$?). A lot of thought has gone into the development
of standardized global metrics [12] for program work, much of it
using this technology and dependent on its continued development.
- Measurement (instrumentation) of site actions and
development/maintenance of associated infrastructure. As an example,
in-depth data collection for features like Media Viewer (see
dashboards at [13] ) is only possible because of the EventLogging
extension developed by Ori Livneh, and the increasing use of this
technology by WMF developers. EventLogging requires significant
management, maintenance and teaching effort from the analytics team.
Lila is requesting visibility into all primary funnels on Wikimedia
sites (e.g. sign-ups, edits/saves through wikitext, edits/saves
through VisualEditor, etc.), and this will require lots of sustained
effort from lots of people to get done. What it will give us is a
better sense of where people succeed and fail to complete an action --
by way of example, see the initial UploadWizard funnel analysis here:
https://www.mediawiki.org/wiki/UploadWizard/Funnel_analysis
- Improved software and infrastructure support for A/B testing,
possibly including adoption of existing open source tooling such as
Facebook's PlanOut library/interpreter [14].
- Improved readership metrics, possibly including a privacy-sensitive
approach to estimating Unique Visitors, and better geographic
breakdowns for readers/editors.
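On the A/B testing point above, the core idea behind PlanOut-style
assignment is deterministic hashing: hash an experiment salt together with
a unit id so the same user always lands in the same bucket, with no
server-side state. A minimal sketch of that idea (this is not PlanOut's
actual API; the salt and ids are made up):

```python
# PlanOut-style deterministic A/B assignment: hash (salt, unit id) and take
# the result modulo the number of variants. Stateless and reproducible.
import hashlib


def assign(experiment_salt, user_id, variants):
    """Deterministically map a user to one of the given variants."""
    digest = hashlib.sha1(f"{experiment_salt}.{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Because assignment is a pure function of the inputs, any analysis job can
recompute who was in which bucket without storing per-user records.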
These are all complex problems, most of which are dependent on the
small analytics team, and feedback on projects and priorities is very
much welcome on the analytics mailing list:
https://lists.wikimedia.org/mailman/listinfo/analytics
With regard to better embedding of graphs in wikis specifically, Yuri
Astrakhan has led the development of a new extension, inspired by work
by Dan Andreescu, to visualize data directly in wikis. This extension
has been deployed already to Meta and MediaWiki.org and can be used
for dynamic graphs where it's appropriate to not have a fallback to a
static image, for example in grant reports. See:
https://www.mediawiki.org/wiki/Extension:Graph
https://www.mediawiki.org/wiki/Extension:Graph/Demo
https://meta.wikimedia.org/wiki/Graph:User:Yurik_(WMF)/Obama
I agree this is the kind of functionality that should make its way
into Wikipedia. Again, we need to judge throwing a full team behind
that against the relative priority of other work. In the meantime,
Yuri and others will continue to push it along and may even be able to
get it all the way there in due time. The main blockers, from what I
can tell, are generation of static fallback images for users without
JavaScript, and a better way to manage the data sources.
In general, the point of my original message was this: All
organizations that seek to improve Wikipedia and the other Wikimedia
projects ultimately depend on technology to do so; to view WMF as the
sole "tech provider" does not scale. Larger, well-funded chapters can
take on big, hairy challenges like Wikidata; smaller, less-funded orgs
are better positioned to work on specialized technical support for
programmatic work.
I would caution against requesting WMF to work on highly specialized
solutions for highly specialized problems. If such solutions are
needed, I would caution against building them into MediaWiki unless
they can be generalized to benefit a larger number of users, at which
point it's appropriate to seek partnership with WMF, or to ask WMF for
the relative priority of such work. But often, it's perfectly fine
(and much faster) to build such tools and reports independently, and
to ask WMF for help in providing APIs/services/data/infrastructure to
get it done.
Cheers,
Erik
[1] http://tools.wmflabs.org/glamtools/
[2]
https://outreach.wikimedia.org/wiki/Category:This_Month_in_GLAM_Tool_testin…
[3] https://www.mediawiki.org/wiki/Extension:GWToolset
[4]
https://commons.wikimedia.org/w/index.php?title=Special%3AListUsers&usernam…
[5]
https://www.mediawiki.org/wiki/Special:OAuthListConsumers?name=&publisher=&…
[6] https://github.com/wikimedia/varnishkafka
[7] https://wikitech.wikimedia.org/wiki/Analytics/Pagecounts-all-sites
[8] https://metrics.wmflabs.org/static/public/dash/
[9] http://stats.wikimedia.org/
[10] http://reportcard.wmflabs.org/
[11] https://metrics.wmflabs.org/
[12]
https://meta.wikimedia.org/wiki/Grants:Learning_%26_Evaluation/Global_metri…
[13] http://multimedia-metrics.wmflabs.org/dashboards/mmv
[14] https://github.com/facebook/planout
--
Erik Möller
VP of Product & Strategy, Wikimedia Foundation
Hi Ditty, there might be some other relevant literature in this list:
https://wikimedia.org.uk/wiki/Talk:Technology_Committee/Project_requests/Wi… (it's an area Wikimedia UK are interested in exploring)
Best
Simon
-----Original Message-----
From: wiki-research-l-bounces(a)lists.wikimedia.org [mailto:wiki-research-l-bounces@lists.wikimedia.org] On Behalf Of wiki-research-l-request(a)lists.wikimedia.org
Sent: 24 October 2014 19:14
To: wiki-research-l(a)lists.wikimedia.org
Subject: Wiki-research-l Digest, Vol 110, Issue 15
Send Wiki-research-l mailing list submissions to
wiki-research-l(a)lists.wikimedia.org
To subscribe or unsubscribe via the World Wide Web, visit
https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
or, via email, send a message with subject or body 'help' to
wiki-research-l-request(a)lists.wikimedia.org
You can reach the person managing the list at
wiki-research-l-owner(a)lists.wikimedia.org
When replying, please edit your Subject line so it is more specific than "Re: Contents of Wiki-research-l digest..."
Today's Topics:
1. Tool to find poorly written articles (Ditty Mathew)
2. Re: Tool to find poorly written articles (Aileen Oeberst)
3. Re: Tool to find poorly written articles (Aaron Halfaker)
4. Re: Tool to find poorly written articles (Ziko van Dijk)
5. Re: Tool to find poorly written articles (Ditty Mathew)
6. Re: Tool to find poorly written articles (Ditty Mathew)
----------------------------------------------------------------------
Message: 1
Date: Fri, 24 Oct 2014 11:30:19 -0400
From: Ditty Mathew <dittyvkm(a)gmail.com>
To: wiki-research-l(a)lists.wikimedia.org
Subject: [Wiki-research-l] Tool to find poorly written articles
Message-ID:
<CACQ6-UtD1dOHfUfD6sU7uEJ+jYmVHW4XOspaA+C_1MNz4TZ8wA(a)mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Hi,
I am planning to develop a tool to find poorly written articles and rank
them accordingly. This would give statistics about which articles we need
to modify to make them well written. Also, finding a good article in one
language could help recommend it for other languages where the same
article is poorly written.
Does any tool that does this already exist? If not, would such a tool be
helpful? Can you give me some suggestions?
with regards
Ditty
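No existing tool is named at this point in the thread. As a minimal sketch
of the kind of surface-level heuristic such a ranker might start from
(average sentence length plus long-word density as a crude "hard to read"
proxy; the function names and weighting are invented for illustration, not
a real article-quality model):

```python
# Crude writing-difficulty heuristic: long sentences and a high share of
# long words push the score up. Higher score = likely harder to read.
import re


def clunkiness(text):
    """Score text by average sentence length and long-word density."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    avg_sentence_len = len(words) / len(sentences)
    long_word_share = sum(1 for w in words if len(w) >= 7) / len(words)
    return avg_sentence_len + 100 * long_word_share


def rank_articles(articles):
    """Rank a {title: text} dict from clunkiest to cleanest."""
    return sorted(articles, key=lambda t: clunkiness(articles[t]),
                  reverse=True)
```

A real tool would need much more than this (grammar, structure, citations,
cross-language comparison), but a cheap score like this can at least
produce a first triage list.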
Hi folks,
Relaying a question from a Stanford medical researcher:
"Do you know if it is possible to extract PubMed ID (PMID) or PMCIDs from
Wiki references? Furthermore, could you dump those IDs out into a list for
analysis?"
Best,
Jake Orlowitz (Ocaasi)
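As a rough sketch of one way to answer this: PMIDs typically appear in
wikitext either as the |pmid= parameter of citation templates or as bare
"PMID 12345" identifiers, so a simple regex pass over a page's raw
wikitext can pull them out (this is not a full wikitext parser, and the
regex is an illustration, not a vetted extraction pipeline):

```python
# Minimal sketch: extract PubMed IDs (PMIDs) from raw wikitext. Matches the
# |pmid= parameter used in citation templates and bare "PMID 12345" links.
import re

PMID_RE = re.compile(r"(?:\|\s*pmid\s*=\s*|PMID[:\s]+)(\d{1,8})",
                     re.IGNORECASE)


def extract_pmids(wikitext):
    """Return the unique PMIDs found, in order of first appearance."""
    seen = []
    for match in PMID_RE.finditer(wikitext):
        pmid = match.group(1)
        if pmid not in seen:
            seen.append(pmid)
    return seen
```

Dumping the IDs to a list for analysis is then just
`print("\n".join(extract_pmids(page_text)))` over the pages of interest.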