I'm starting a lit review for an upcoming research project, and I'm looking
for work related to four mechanisms:
1. What factors contribute to the general population's awareness of Wikipedia?
2. What factors contribute to growth in Wikipedia's readership?
3. What factors contribute to editor population growth?
4. What factors contribute to increased volume and quality of content?
Any pointers towards relevant papers (especially existing lit reviews, since
these are four large areas of research!) would be appreciated.
[apologies for double posting]
QUARE 2022: The 1st workshop on Measuring the Quality of Explanations in Recommender Systems, co-located with SIGIR 2022 (https://sigir.org/sigir2022/), July 11-15, 2022, in Madrid, Spain and Online
Workshop website: https://sites.google.com/view/quare-2022/home
Location: Hybrid - Madrid, Spain and Online
IMPORTANT DATES:
Paper submission: 3 May 2022
Author notification: 15 May 2022
Final version deadline: 15 June 2022
Workshop date: 15 July 2022
ORGANISERS:
- Alessandro Piscopo (BBC, UK) <alessandro.piscopo(a)bbc.co.uk>
- Oana Inel (University of Zurich, CH) <inel(a)ifi.uzh.ch>
- Sanne Vrijenhoek (University of Amsterdam, NL) <s.vrijenhoek(a)uva.nl>
- Martijn Millecamp (AE NV, BE) <martijn.millecamp(a)hotmail.com>
- Krisztian Balog (Google Research) <krisztianb(a)google.com>
CALL FOR PAPERS:
Recommendations are ubiquitous in many contexts and domains due to the continuously growing adoption of decision-support systems. Explanations may be provided along with recommendations to convey the reasoning behind suggesting a particular item. However, explanations may also significantly affect a user's decision-making process by serving a number of different goals, such as transparency, persuasiveness, and scrutability. While there is a growing body of research studying the effect of explanations, the relationship between their quality and their effect has not yet been investigated in depth.
For instance, at an institutional level, organisational values may require a different combination of explanation goals; likewise, within the same organisation, some combinations of goals may be more appropriate for some use cases and less so for others. Conversely, end-users of a recommender system may hold different values, and explanations can affect them differently. Therefore, understanding whether explanations are fit for their intended goals is key to subsequently implementing them in production.
Furthermore, the lack of established, actionable methodologies to evaluate explanations for recommendations, as well as evaluation datasets, hinders cross-comparison between different explainable recommendations approaches, and is one of the issues hampering widespread adoption of explanations in industry settings.
This workshop aims to extend existing work in the field by bringing together and facilitating the exchange of perspectives and solutions from industry and academia, bridging the gap between academic design guidelines and industry best practices for implementing and evaluating explanations in recommender systems, with respect to their goals, impact, potential biases, and informativeness. With this workshop, we provide a platform for discussion among scholars, practitioners, and other interested parties.
TOPICS AND THEMES:
The motivation of the workshop is to promote discussion of future research and practice directions for evaluating explainable recommendations, by bringing together academic and industry researchers and practitioners in the area. We focus in particular on real-world use cases, diverse organisational values and purposes, and different target users. We encourage submissions that study different explanation goals, and combinations thereof, and how they fit various organisational values and use cases. Furthermore, we welcome submissions that propose high-quality datasets and benchmarks and make them available to the community.
Topics include, but are not limited to:
- Relevance of explanation goals for different use cases;
- Soliciting user feedback on explanations;
- Implicit vs. explicit evaluation of explanations and goals;
- Reproducible and replicable evaluation methodologies;
- Online vs. offline evaluations;
- User modelling for explanation generation;
- Evaluation approaches for personalised explanations (e.g., content, style);
- Evaluation approaches for context-aware explanations (e.g., place, time, alone/group setting, exploratory/transaction mode);
- Evaluation of different explanation modalities (e.g., text, graphics, audio, hybrid);
- Evaluation of interactive explanations;
- Generation of datasets for evaluation of explanations;
- Evaluation of explanations in relation to organisational values;
- Evaluation of explanations in relation to personal values.
We welcome three types of submissions:
- position or perspective papers (up to 4 pages in length, plus unlimited pages for references): original ideas, perspectives, research vision, and open challenges in the area of evaluation approaches for explainable recommender systems;
- featured papers (title and abstract of the paper, plus the original paper): already published papers, or papers summarising existing publications in leading conferences and high-impact journals, that are relevant to the topic of the workshop;
- demonstration papers (up to 2 pages in length, plus unlimited pages for references): original or already published prototypes and operational evaluation approaches in the area of explainable recommender systems.
Page limits include diagrams and appendices. Submissions should be single-blind, written in English, and formatted according to the current ACM two-column conference format. Suitable LaTeX, Word, and Overleaf templates are available from the ACM Website (use “sigconf” proceedings template for LaTeX and the Interim Template for Word).
Submit papers electronically via EasyChair: https://easychair.org/my/conference?conf=quare22.
All submissions will be peer-reviewed by the program committee and accepted papers will be published on the website of our workshop: https://sites.google.com/view/quare-2022/home.
At least one author of each accepted paper is required to register for the workshop and present the work.
Thank you for your contributions to Scientometrics. We are launching a new topic collection in Frontiers in Research Metrics and Analytics, entitled "Linked Open Bibliographic Data for Real-time Research Assessment". It will address how we can use open knowledge graphs involving bibliographic metadata, such as Wikidata, to provide a real-time research evaluation service. We would be honoured if you could contribute to the collection. Further details about this topic collection can be found at https://www.frontiersin.org/research-topics/34449/linked-open-bibliographic…. Accepted article types for our collection are listed at https://www.frontiersin.org/journals/research-metrics-and-analytics#article…. The deadline for submitting an abstract is 6 June 2022, and the deadline for the full text of the research work is 5 August 2022. The articles will be open access, which requires fees as explained at https://www.frontiersin.org/about/publishing-fees. Please feel free to contact us if you need further information about the collection.
Do you know someone excited to support the Wikimedia movement who has
experience setting up systems for secure remote data access and analysis? CAT
Lab <http://citizensandtech.org/> is looking for a contractor who can help
us plan, budget, and potentially set up a first iteration of data
infrastructure.
As CAT Lab starts to provide infrastructure to support third-party
researchers to collaborate with Wikipedia, reddit, and other communities on
research, we need to establish infrastructure for secure,
privacy-protecting data access and analysis across multiple projects.
That's where the data systems engineer will come in.
We have set aside $25k for this remote-available contract, and pending
approval from Cornell, can set up international contracts. Please send on
the opportunity and contact Elizabeth Eagen <ee263(a)cornell.edu> if you
have any questions.
CAT Lab is committed to asking questions and facilitating a team
environment that transforms the diversity of who asks questions in open
knowledge and the social sciences. We would especially welcome referrals
to people with a track record in that regard.
*About CAT Lab*
The Citizens and Technology Lab <http://citizensandtech.org/> (CAT Lab) at
Cornell University envisions a world where digital power is guided by
evidence and accountable to the public. We work with communities to study
the effects of technology on society and test ideas for changing digital
spaces in the public interest.
To achieve this mission, CAT Lab does industry-independent citizen science.
Using open science software we develop in-house, we collaborate with
communities and movements to discover practical, replicable knowledge that
contributes to science and is guided by the people affected by digital
power.
All the best,
J. Nathan Matias <http://natematias.com/> : Cornell University : Citizens
and Technology Lab <https://citizensandtech.org> : @natematias
<http://twitter.com/natematias> : blog
<https://natematias.com/external-posts/> : daylight time photos