netCommons is an interdisciplinary Horizon 2020 EU project studying the
social, legal and technological reality and future of the Internet.
The University of Westminster’s project team is conducting a survey among
experienced Internet users that asks:
What Internet do we want? What are the Internet’s main problems? What
could alternatives look like?
We would appreciate it if you could take approx. 20 minutes to participate
in the survey, which will help us to better understand the Internet’s
problems and possible alternatives.
Dimitris Boucas, Maria Michalis, Christian Fuchs
(Univ of Westminster netCommons research team)
Looks to be an interesting venue for people doing Wikipedia research with a
focus on cultural heritage (esp. GLAM/WikiLoves research?). Manuscripts are
due November 30, and it looks like they accept several types of submissions.
---------- Forwarded message ----------
From: Roberto Scopigno <pubs(a)acm.org>
Date: Fri, Sep 8, 2017 at 7:00 AM
Subject: ACM JOCCH Special Issue Call for Papers on Evaluation of Digital
Cultural Resources
ACM Journal on Computing and Cultural Heritage (JOCCH)
*Special Issue on Evaluation of Digital Cultural Resources*
*Maria Economou*, University of Glasgow, UK
*Ian Ruthven*, University of Strathclyde, Glasgow, UK
*Areti Galani*, University of Newcastle, UK
*Milena Dobreva*, UCL Qatar
*Marco de Niet*, University of Leiden Library, The Netherlands
*Scope and Context*
Digital technologies are affecting all aspects of our lives, reshaping the
way we communicate, learn, and approach the world around us. In the case of
cultural institutions, digital applications are used in all key areas of
operation, from documenting, interpreting and exhibiting the collections to
communicating with diverse audience groups. The communication of
collections information in digital form, whether an online catalogue,
mobile application, museum interactive or social media exchange,
increasingly affects our cultural encounters and shapes our perception of
cultural organizations. Although cultural and higher education institutions
around the world are heavily investing in digitization and working to make
their collections available online, we still know very little about who
uses digital collections, how they interact with the associated data, and
what the impacts of these digital resources are.
The issue seeks to address this gap by bringing together interested parties
from a range of disciplines (e.g. digital heritage, museology, information
studies, digital humanities), practices and sectors to discuss the latest
developments on evaluating the use of cultural digital resources.
*Topics and Themes*
The issue will appeal to academics and practitioners working in a range of
disciplines: cultural heritage workers, arts professionals and scholars
interested in issues relating to digital resources and their impact upon
curation, education, engagement and outreach. We invite submissions
presenting innovative research on both theoretical and practical
approaches, efforts and trends in this emergent field. Topics and issues to
be addressed
include but are not limited to:
- Who uses digital cultural resources, where, and how these resources
have changed established working practices
- Addressing diverse users' needs and expectations (e.g. from
schoolchildren and families to students and researchers)
- Assessing impact, use and value of digital cultural resources
(methodologies, approaches and issues)
- Ways of recording and assessing impact and value
- Models of access to digital collections
- Evaluating participatory models of work in digital cultural heritage
(crowdsourcing, citizen science, co-creation, co-curation)
- Moving from impact to value when assessing digital resources
- Use of evaluation data in the curation of digital collections
- Integrating evaluation when working with communities in digital heritage
- Adapting old and testing new innovative methods when evaluating
quality, use and effectiveness of digital cultural resources
- User studies
- Metrics, webmetrics, infometrics and usage statistics
- Evaluating emotional impact in digital heritage
- Research on the impact of social media on the usage of digital cultural
resources
The idea for this special issue arose from the activities of the Scottish
Network on Digital Cultural Resources Evaluation (ScotDigiCH),
funded by The Royal Society of Edinburgh in 2015-2016, and particularly
from the discussions and papers presented at the International Symposium on
Evaluating Digital Cultural Resources (EDCR2016), which took place in
Glasgow in December 2016 (scotdigich.wordpress.com/events/symposium/).
ScotDigiCH is coordinated by Information Studies at the University of
Glasgow in collaboration with The Hunterian at the University of Glasgow,
Glasgow Life Museums, the Moving Image Archive of the National Library of
Scotland and the Department of Computer and Information Science at the
University of Strathclyde.
This focused issue arises from the work of ScotDigiCH but invites
submissions from all researchers and cultural heritage practitioners
working in this area.
Papers submitted to this special issue for possible publication must be
original and must not be under consideration for publication in any other
journal or conference. Previously published or accepted conference papers
must contain at least 30% new material to be considered for the special
issue.
Accepted papers will be published in the *ACM Journal on Computing and
Cultural Heritage*. Papers will be reviewed following the journal's
standard review process. Please follow the format instructions for the
journal: all manuscripts must be prepared according to the journal
publication guidelines, which can be found on the journal website.
All papers are to be submitted at mc.manuscriptcentral.com/jocch
Upon submission, under "Article Type", please select "*Evaluation of
Digital Cultural Resources*" or your manuscript will not be reviewed
correctly for the special issue.
Please address inquiries to Maria.Economou(a)glasgow.ac.uk.
- Paper submission deadline: *November 30, 2017*
- First Author Notification: January 30, 2018
- Revised papers expected: March 30, 2018
- Final acceptance notification: May 2018
- Publication: Issue 4, 2018
Association for Computing Machinery
Two Penn Plaza, Suite 701, New York, NY 10121-0701, USA
Copyright 2017, ACM, Inc.
Jonathan T. Morgan
Senior Design Researcher
User:Jmorgan (WMF) <https://meta.wikimedia.org/wiki/User:Jmorgan_(WMF)>
I was exploring the dataset shared in the Wikipedia Detox
project. I was trying to use the same diff logic to obtain the changes
from a page using *revid*, but realized that the Wikipedia API provides only
the diff of a revision against its earlier version. I am able to fetch the
diffs for a set of *revids* using the Wikipedia API, but I am unable to
extract only the changed sentences in each revision. I found this
script in the project source files that contains bits of what might have
been used in the actual data collection process to obtain the changes from
the Talk pages, but I am unable to figure out the high-level information
such as input/output formats.
Can anyone provide a solution to this or any suggestions on how to proceed?
Also, it would be really beneficial if I could use the same diff logic as
the original authors, to ensure consistency.
Meanwhile, I have asked a similar question on StackOverflow and have
emailed the original author of the paper.
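For what it's worth, a minimal sketch of one way to approach this: use the
public MediaWiki compare API (action=compare with torelative=prev) to get a
revision's diff against its parent, then pull only the changed fragments
out of the returned HTML diff table. The helper names and the regex-based
extraction below are my own illustration, not the Detox project's original
diff logic:

```python
# Sketch: fetch a revision's diff against its parent via the MediaWiki
# "compare" API, then extract only the inserted/deleted text fragments.
# Inserted/changed text in MediaWiki diffs is wrapped in <ins>/<del> tags.
import json
import re
import urllib.request

API = "https://en.wikipedia.org/w/api.php"


def fetch_diff_html(revid):
    """Return the HTML diff of `revid` against its immediate parent."""
    params = f"action=compare&fromrev={revid}&torelative=prev&format=json"
    with urllib.request.urlopen(f"{API}?{params}") as resp:
        data = json.load(resp)
    return data["compare"]["*"]


def extract_changes(diff_html):
    """Return (added, removed) text fragments from a MediaWiki HTML diff."""
    added = re.findall(r"<ins[^>]*>(.*?)</ins>", diff_html, re.S)
    removed = re.findall(r"<del[^>]*>(.*?)</del>", diff_html, re.S)
    strip_tags = lambda s: re.sub(r"<[^>]+>", "", s)
    return [strip_tags(a) for a in added], [strip_tags(r) for r in removed]


# Demonstration on a hand-made diff snippet (no network needed):
sample = ('<td class="diff-addedline"><ins class="diffchange">new text</ins></td>'
          '<td class="diff-deletedline"><del class="diffchange">old text</del></td>')
added, removed = extract_changes(sample)
```

This only recovers fragment-level changes, not full sentences; whether it
matches the original authors' segmentation is exactly the consistency
question raised above.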
tl;dr: Stop using stat1002 and stat1003 by September 1st.
We’re finally replacing stat1002 and stat1003. These boxes are out of
warranty, and are running Ubuntu Trusty, while most of the production fleet
is already on Debian Jessie or even Debian Stretch.
stat1005 is the new stat1002 replacement. If you have access to stat1002,
you also have access to stat1005. I’ve copied over home directories from
stat1002.
stat1006 is the new stat1003 replacement. If you have access to stat1003,
you also have access to stat1006. I’ve copied over home directories from
stat1003.
I have not migrated any personal cron jobs running on stat1002 or
stat1003. I need your help for this!
Both of these boxes are running Debian Stretch. As such, packages that
your work depends on may have upgraded. Please log into the new boxes and
try stuff out! If you find anything that doesn’t work, please let me know
by commenting on https://phabricator.wikimedia.org/T152712.
Please be fully migrated to the new nodes by September 1st. This will give
us enough time to fully decommission stat1002 and stat1003.
I’ve only done a single rsync of home directories. If there is new data on
stat1002 or stat1003 that you want rsynced over, let me know on the ticket.
A few notes:
- stat1002 used to have /a. This has been removed in favor of /srv; /a no
longer exists.
- Home directories are now much larger. You no longer need to create
personal directories in /srv.
- /tmp is still small, so please be careful. If you are running long jobs
that generate temporary data, please have those jobs write into your home
directory, rather than /tmp.
- We might implement user home directory quotas in the future.
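Since personal cron jobs have to be moved by hand, a rough sketch of the
kind of migration involved (the job line and file names here are invented
for illustration): dump the crontab on the old host with `crontab -l`,
rewrite any retired /a paths to /srv per the note above, and install the
result on the new host with `crontab`. The rewrite step might look like:

```shell
# Invented sample crontab entry standing in for `crontab -l` output:
printf '0 3 * * * /a/myjob.sh >> /a/myjob.log\n' > old-crontab.txt
# Rewrite the retired /a paths to their /srv equivalents:
sed 's|/a/|/srv/|g' old-crontab.txt > new-crontab.txt
# On the new host, review the entries and then install with:
#   crontab new-crontab.txt
cat new-crontab.txt
```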
Thanks all! I’ll send another email in about a month’s time to remind you
of the impending deadline of Sept 1.
Hi, I’m currently looking for research/tools that give a complete (and fairly recent) overview of how much “sentiment” vocabulary is contained in *articles* of the English or any other Wikipedia. It could be just a simple matching of current sentiment dictionaries against article text. Just to get an idea of how much (or rather: how little) vocabulary with a specific polarity Wikipedia contains recently and/or over time. Informal research welcome as well :)
I found some older papers, but mostly just for a smaller subset of articles. Mostly, sentiment analysis is (understandably) just a means to an end in papers, so they don’t provide a comprehensive overview of the whole article set.
Any pointers are appreciated.
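The simple dictionary-matching approach mentioned above could be sketched
roughly as follows; the four-entry lexicon is a placeholder of my own, where
a real study would load an established sentiment wordlist:

```python
# Toy sketch: count how many tokens in an article text carry positive or
# negative polarity according to a sentiment lexicon.
import re
from collections import Counter

# Placeholder lexicon; a real analysis would load a published wordlist.
LEXICON = {"excellent": "positive", "beautiful": "positive",
           "terrible": "negative", "controversial": "negative"}


def polarity_counts(text):
    """Return counts of positive/negative tokens plus the total token count."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(LEXICON[t] for t in tokens if t in LEXICON)
    counts["total_tokens"] = len(tokens)
    return counts


c = polarity_counts("The reception was excellent, though the ending was terrible.")
```

Run over a full dump (or per-year snapshots), the positive/negative counts
relative to total tokens would give exactly the "how much polar vocabulary,
over time" picture asked about.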
Possibly of interest for those working on ML in the context of Wikipedia:
*Applied Machine Learning Days* <https://www.appliedmldays.org> is the
largest event focusing on the application of machine learning and
artificial intelligence to various domains.
Following the successful inaugural event in January 2017, we are excited to
announce the second edition of AMLD, to be held again at the Swiss Tech
Convention Center at EPFL in Lausanne, Switzerland. The goal of the event
is to get machine learning techniques out to the larger community of both
scientific researchers and industry practitioners, and to new application
areas.
In addition, the event will offer plenty of opportunities to make
interesting connections with many industry professionals, scientists,
students and anyone interested.
This year, in addition to the main conference, we will use the preceding
weekend to focus on “hands on” experiences using machine learning. We are
now opening a call for workshops (or other similar events). If you or your
organization would like to do a half day, full day or two day workshop at
AMLD, this is your chance to make it happen!
Three themes will be particularly featured during the weekend:
- open data & open source;
- women in technology;
- and "machine learning in business and in society".
We also accept workshop proposals related to other areas of applied
machine learning.
The workshops will take place on January 27 & 28, 2018 (Saturday and
Sunday) on the ground floor of the Swiss Tech Convention Center. Food and
drinks will be provided throughout the day.
If in doubt, reach out! If you have an idea for a workshop or an event, but
are unsure, please talk to us! Reach out to one of the co-organizers: Marcel
Salathé <marcel.salathe(a)epfl.ch>, Martin Jaggi <martin.jaggi(a)epfl.ch> or
Robert West.
*Submission deadline:* October 1, 2017
*Notification:* October 15, 2017
*Submit proposals here:* https://www.appliedmldays.org/call_for_workshops