Pursuant to prior discussions about the need for a research
policy on Wikipedia, WikiProject Research is drafting a
policy regarding the recruitment of Wikipedia users to
participate in studies.
At this time, we have a proposed policy, and an accompanying
group that would facilitate recruitment of subjects in much
the same way that the Bot Approvals Group approves bots.
The policy proposal can be found at:
The Subject Recruitment Approvals Group mentioned in the proposal
is being described at:
Before we move forward with seeking approval from the Wikipedia
community, we would like additional input about the proposal,
and would welcome additional help improving it.
Also, please consider participating in WikiProject Research at:
University of Minnesota
I am doing a PhD project on online civic participation
(e-participation). Within my research, I have carried out a user
survey in which I asked how many people had ever edited or created a
page on a wiki. Now I would like to compare the results with the
overall rate of wiki editing/creation at the country level.
I've found some country-level statistics on Wikipedia Statistics (e.g.
3,000 editors of Wikipedia articles in Italy), but data for the UK and
France are not available, since Wikipedia provides statistics by
language, not by country. I'm thus looking for statistics on the UK
and France (but am also interested in alternative ways of measuring
wiki editing/creation in Sweden and Italy).
I would be grateful for any tips!
Sunny regards, Alina
European University Institute
I'm starting a new project: a wiki search engine. It uses MediaWiki,
Semantic MediaWiki and other minor extensions, plus some tricky
templates.
I remember Wikia Search and how it failed. It had the mini-article thingy
for the introduction, and then a lot of links compiled by a crawler. Also
something similar to a social network.
My project idea (which still needs a cool name) is different. Although
it uses an introduction and images copied from Wikipedia, and some
links from the "External links" sections, that is only a starting
point. The idea is that the community adds, removes and orders the
results for each term, and creates redirects for similar terms to
avoid duplicates.
Why this? I think that Google PageRank isn't enough. It is frequently
abused by link farms, SEO spammers and other people trying to push
their websites to the top. Search for "Shakira" in Google, for
example: you see 1) the official site, 2) Wikipedia, 3) Twitter,
4) Facebook, then some videos, news, images, and Myspace. That wastes
three or more results on obvious sites (WP, TW, FB).
The wiki search engine puts these sites at the top, together with an
introduction and related terms, leaving all the space below for less
obvious but interesting websites. Also, if you search Google for
"semantic queries" like "right-wing newspapers", you won't find actual
newspapers but people and sites discussing right-wing newspapers. Or
latex and LaTeX get shown in the same results pages. These issues can
be resolved with disambiguation result pages.
How do we choose which results go above or below? The rules are not
fully designed yet, but we can put official sites in first place, then
.gov or .edu domains, which are important ones, and later unofficial
websites and blogs, giving priority to the local language, etc., and
reach consensus along the way.
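The kind of rule-based ordering sketched above could look roughly like
this (a hypothetical illustration only; the field names and the
`rank_key` helper are my own, not part of the actual project, and
community edits on the wiki would override any such defaults):

```python
# Hypothetical sketch of rule-based result ordering: official sites
# first, then .gov/.edu domains, then everything else, with a small
# boost for results in the local language.

def rank_key(result, local_lang="es"):
    """Return a sort key for a result dict; lower tuples sort first."""
    if result.get("official"):
        tier = 0                      # official site always on top
    elif result["url"].endswith((".gov", ".edu")):
        tier = 1                      # institutional domains next
    else:
        tier = 2                      # unofficial sites, blogs, ...
    lang_penalty = 0 if result.get("lang") == local_lang else 1
    return (tier, lang_penalty)

results = [
    {"url": "http://blog.example.com", "lang": "en"},
    {"url": "http://www.nasa.gov", "lang": "en"},
    {"url": "http://shakira.com", "lang": "es", "official": True},
]
ordered = sorted(results, key=rank_key)
# ordered[0] is the official site, then the .gov domain, then the blog
```

In practice the tiers and the language preference would be whatever the
community agrees on per page; the point is only that such rules are
easy to encode and audit.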
We can control aggressive spam with spam blacklists, by
semi-protecting or protecting highly visible pages, and by using bots
or tools to check changes.
It obviously has a CC BY-SA license, and results can be exported. I
think that this approach is the opposite of Google's today.
For queries like "Albert Einstein birthplace" we can redirect to the
most obvious results page (in this case Albert Einstein) using a
hand-made redirect or in software (a small change to MediaWiki).
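The software side of such a redirect could work, for instance, by
normalizing the query and falling back to the longest known page title
it contains (a hypothetical sketch; `KNOWN_PAGES` and `resolve` are
illustrative names, not actual project code):

```python
# Hypothetical sketch: map a free-form query to an existing results
# page by looking for the longest known page title inside the query.
KNOWN_PAGES = {"albert einstein", "shakira"}

def resolve(query):
    """Return the results-page title for a query, or None."""
    q = query.lower().strip()
    if q in KNOWN_PAGES:
        return q                       # exact match, no redirect needed
    # fall back to the longest known title appearing in the query
    matches = [t for t in KNOWN_PAGES if t in q]
    return max(matches, key=len) if matches else None

print(resolve("Albert Einstein birthplace"))  # albert einstein
```

Hand-made wiki redirects would still take precedence; the fallback only
covers queries nobody has curated yet.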
You can check a pretty alpha version here: http://www.todogratix.es
(only in Spanish for now, sorry), which I'm feeding with some bots.
I think that it is an interesting experiment. I'm open to your
questions.
Emilio J. Rodríguez-Posada. E-mail: emijrp AT gmail DOT com
Pre-doctoral student at the University of Cádiz (Spain)
Projects: AVBOT <http://code.google.com/p/avbot/> |
| WikiEvidens <http://code.google.com/p/wikievidens/> |
| WikiTeam <http://code.google.com/p/wikiteam/>
Personal website: https://sites.google.com/site/emijrp/
I'm sending this to Wikimedia-l, Wikitech-l, and Research-l in case other people in the Wikimedia movement or staff are interested in "big data" as it relates to Wikimedia. I hope that those who are interested in discussions about WMF editor engagement efforts, WMF fundraising, or WMF HR practices will also find that this email interests them. Feel free to skip straight to the links in the latter portion of this email if you're already familiar with "big data" and its analysis and if you just want to see what other people are writing about the subject.
* Introductory comments / my personal opinion
"Big data" refers to quantities of information so large that they are difficult to analyze and may not be related internally in an obvious way. See https://en.wikipedia.org/wiki/Big_data
I think that most of us would agree that moving much of an organization's information into "the Cloud", and/or directing people to analyze massive quantities of information, will not automatically result in better, or even good, decisions based on that information. Also, I think that most of us would agree that bigger and/or more accessible quantities of data do not necessarily imply that the data are more accurate or more relevant for a particular purpose. Another concern is the possibility of unwelcome intrusions into sensitive information, including the possibility of data breaches; imagine the possible consequences if a hacker broke into supposedly secure databases held by Facebook or the Securities and Exchange Commission.
We have an enormous quantity of data on Wikimedia projects, and many ways that we can examine those data. As this Dilbert strip points out, context is important, and looking at statistics devoid of their larger contexts can be problematic. http://dilbert.com/strips/comic/1993-02-07/
Since data analysis is also something that Wikimedia does in the areas I mentioned previously, I'm passing along a few links for those who may be interested in the benefits and limitations of big data.
From the Harvard Business Review
From the New York Times
From the Wall Street Journal. This may be especially interesting to those who are participating in the discussions on Wikimedia-l regarding how Wikimedia selects, pays, and manages its staff.
And from English Wikipedia (:
Yikes. I've dropped the ball on some keyword searching where I promised to help.
Worse yet, I've lost the email correspondence with the people I was helping.
I am working on repackaging the PEG Exploratory Parsing tool I built a couple of years ago. This is my experiment in building a wiki "laboratory" for posing and answering a certain class of questions about what people write in Wikipedia.
I've been distracted from this work and now find that my email has glitched so I no longer have the test case I'd hoped to pursue. If I promised to help you, please forgive my tardiness and renew the correspondence.
Thanks and best regards. -- Ward
Dear semantic and non-semantic wiki communities,
please find below the full CfP for CICM (Intelligent Computer
Mathematics), 8-12 July in Bath, UK (submission deadline 8 March).
Wikis are widely used to author and publish mathematical knowledge and
thus particularly relevant to the
* DML (Digital Mathematical Libraries) and
* MKM (Mathematical Knowledge Management)
conference tracks, and I'm sure there are a lot of ongoing activities
that could be presented in the
* Systems & Projects
track as well.
Just some examples of where wiki technology (beyond mere LaTeX-to-PNG
rendering) has previously been used in connection with mathematics:
* 2011 workshop on mathematical wikis (http://www.cs.ru.nl/mwitp/)
* The management of the Mizar Mathematical Library and other collections
of formal mathematical knowledge is being facilitated by wikis
* Besides Wikipedia there are further, math-specific community wikis,
e.g. http://www.proofwiki.org, http://www.planetmath.org,
--- %< --- %< --- %< --- %< --- %< --- %< --- %< --- %< --- %< --- %< ---
CICM 2013 - Conferences on Intelligent Computer Mathematics
July 8-12, 2013 at University of Bath, Bath, UK
Call for Papers
As computers and communications technology advance, greater
opportunities arise for intelligent mathematical computation. While
computer algebra, automated deduction, mathematical publishing and
novel user interfaces individually have long and successful histories,
we are now seeing increasing opportunities for synergy among these
areas. The Conferences on Intelligent Computer Mathematics offer a
venue for discussing these areas and their synergy.
The conference will take place at the University of Bath (www.bath.ac.uk),
with James Davenport as the local organiser. It consists of four tracks:
Calculemus
Chair: Wolfgang Windsteiger
Digital Mathematical Libraries (DML)
Chair: Petr Sojka
Mathematical Knowledge Management (MKM)
Chair: David Aspinall
Systems and Projects
Chair: Christoph Lange
As in previous years, there are plans to organise a workshop for
presentations by Doctoral students.
The overall programme will be organised by the General Program Chair
Abstract submission: 1 March 2013
Submission deadline: 8 March 2013
Reviews sent to authors: 5 April 2013
Rebuttals due: 8 April 2013
Notification of acceptance: 14 April 2013
Camera ready copies due: 26 April 2013
Conference: 8-12 July 2013
Calculemus 2013 invites the submission of original research contributions
to be considered for publication and presentation at the conference.
Calculemus is a series of conferences dedicated to the integration of
computer algebra systems (CAS) and systems for mechanised reasoning like
interactive proof assistants (PA) or automated theorem provers (ATP).
Currently, symbolic computation is divided into several (more or less)
independent branches: traditional ones (e.g., computer algebra and
mechanised reasoning) as well as newly emerging ones (on user interfaces,
knowledge management, theory exploration, etc.) The main concern of the
Calculemus community is to bring these developments together in order to
facilitate the theory, design, and implementation of integrated
mathematical assistant systems that will be used routinely by
mathematicians, computer scientists and all others who need
computer-supported mathematics in their everyday business.
All topics in the intersection of computer algebra systems and automated
reasoning systems are of interest for Calculemus. These include, but
are not limited to:
* Automated theorem proving in computer algebra systems.
* Computer algebra in theorem proving systems.
* Adding reasoning capabilities to computer algebra systems.
* Adding computational capabilities to theorem proving systems.
* Theory, design and implementation of interdisciplinary systems for
  computer mathematics.
* Case studies and applications that involve a mix of computation and
  reasoning.
* Case studies in formalization of mathematical theories.
* Representation of mathematics in computer algebra systems.
* Theory exploration techniques.
* Combining methods of symbolic computation and formal deduction.
* Input languages, programming languages, types and constraint languages,
and modeling languages for mathematical assistant systems.
* Homotopy type theory.
* Infrastructure for mathematical services.
Mathematicians dream of a digital archive containing all peer-reviewed
mathematical literature ever published, properly linked, validated and
verified. It is estimated that the entire corpus of mathematical
knowledge published over the centuries does not exceed 100,000,000
pages, an amount easily manageable by current information technologies.
The track objective is to provide a forum for the development of
math-aware technologies, standards, algorithms and formats towards the
fulfillment of the dream of a global digital mathematical library
(DML). Computer scientists (D) and librarians of the digital age (L)
are especially welcome to join mathematicians (M) and discuss the many
aspects of DML building.
Track topics are all topics of mathematical knowledge management
and digital libraries applicable in the context of DML building --
the processing of mathematical knowledge expressed in scientific
papers in natural languages, namely:
* Math-aware text mining (math mining) and MSC classification
* Math-aware representations of mathematical knowledge
* Math-aware computational linguistics and corpora
* Math-aware tools for [meta]data and fulltext processing
* Math-aware OCR and document analysis
* Math-aware information retrieval
* Math-aware indexing and search
* Authoring languages and tools
* MathML, OpenMath, TeX and other mathematical content standards
* Web interfaces for DML content
* Mathematics on the web, math crawling and indexing
* Math-aware document processing workflows
* Archives of written mathematics
* DML management, business models
* DML rights handling, funding, sustainability
* DML content acquisition, validation and curation
Mathematical Knowledge Management is an interdisciplinary field of
research in the intersection of mathematics, computer science, library
science, and scientific publishing. The objective of MKM is to develop
new and better ways of managing sophisticated mathematical knowledge,
based on innovative technology of computer science, the Internet, and
intelligent knowledge processing. MKM is expected to serve
mathematicians, scientists, and engineers who produce and use
mathematical knowledge; educators and students who teach and learn
mathematics; publishers who offer mathematical textbooks and
disseminate new mathematical results; and librarians and
mathematicians who catalog and organize mathematical knowledge.
The conference is concerned with all aspects of mathematical knowledge
management. A non-exclusive list of important topics includes:
* Representations of mathematical knowledge
* Authoring languages and tools
* Repositories of formalized mathematics
* Deduction systems
* Mathematical digital libraries
* Diagrammatic representations
* Mathematical OCR
* Mathematical search and retrieval
* Math assistants, tutoring and assessment systems
* MathML, OpenMath, and other mathematical content standards
* Web presentation of mathematics
* Data mining, discovery, theory exploration
* Computer algebra systems
* Collaboration tools for mathematics
* Challenges and solutions for mathematical workflows
Systems and Projects
The Systems and Projects track of the Conferences on Intelligent Computer
Mathematics is a forum for presenting available systems and new and
ongoing projects in all areas and topics related to the CICM conferences:
* Deduction and Computer Algebra (Calculemus)
* Digital Mathematical Libraries (DML)
* Mathematical Knowledge Management (MKM)
* Artificial Intelligence and Symbolic Computation (AISC)
The track aims to provide an overview of the latest developments and
trends within the CICM community as well as to exchange ideas between
developers and introduce systems to an audience of potential users.
Submissions to the research tracks must not exceed 15 pages and will be
reviewed and evaluated with respect to relevance, clarity, quality,
originality, and impact. Shorter papers, e.g. for system
descriptions, are welcome. Authors will have an opportunity to respond
to their papers' reviews before the programme committee makes a
decision.
System descriptions and project descriptions should be 2-4 pages and
describe:
* newly developed systems,
* systems that have not previously been presented to the CICM community,
* significant updates to existing systems.
Systems must be available for download.
Project presentations should describe
* projects that are new or about to start,
* ongoing projects that have not yet been presented to the CICM community,
* significant new developments in ongoing, previously presented projects.
Presentations of new projects should mention relevant previous work and
include a roadmap that outlines concrete steps. All submissions should
contain links to demos, downloadable systems, or project websites.
Accepted conference submissions from all tracks are intended to be published
as a volume in the series Lecture Notes in Artificial Intelligence (LNAI)
by Springer. In addition to these formal proceedings, authors are permitted
and encouraged to publish the final versions of their papers on arXiv.org.
Work-in-progress submissions are intended to provide a forum for the
presentation of original work that is not (yet) in a suitable form for
submission as a full or system description paper. This includes work
in progress and emerging trends. Their size is not limited, but we
recommend 5-10 pages.
The programme committee may offer authors of rejected formal
submissions the opportunity to publish their contributions as
work-in-progress papers
instead. Depending on the number of work-in-progress papers accepted,
they will be presented at the conference either as short talks or as
posters. The work-in-progress proceedings will be published as a
technical report, as well as online with CEUR-WS.org.
All papers should be prepared in LaTeX and formatted according to the
requirements of Springer's LNCS series (the corresponding style files
can be downloaded from
http://www.springer.de/comp/lncs/authors.html). By submitting a paper
the authors agree that if it is accepted at least one of the authors
will attend the conference to present it.
Electronic submission is done through EasyChair
Jacques Carette, McMaster University, Canada
Wolfgang Windsteiger, RISC Institute, JKU Linz, Austria
Petr Sojka, Masaryk University, Faculty of Informatics, Czech Republic
David Aspinall, University of Edinburgh, UK
Christoph Lange, University of Birmingham, UK
Till Mossakowski, DFKI Bremen, Germany
Jónathan Heras, University of Dundee, UK
Josef Urban, Radboud University, Netherlands
Deyan Ginev, Jacobs University Bremen, Germany
Rob Arthan, Queen Mary University of London, UK
Makarius Wenzel, Université Paris-Sud 11, France
Hendrik Tews, TU Dresden, Germany
Simon Colton, Department of Computing, Imperial College, London, UK
Paul Libbrecht, Martin Luther University Halle-Wittenberg, Germany
Cezary Kaliszyk, University of Innsbruck, Austria
Andrea Kohlhase, Jacobs University Bremen, Germany
Yannis Haralambous, Télécom Bretagne, France
Florian Rabe, Jacobs University Bremen, Germany
Akiko Aizawa, NII, The University of Tokyo, Japan
Carsten Schuermann, IT University of Copenhagen, Denmark
Magnus O. Myreen, University of Cambridge, UK
Janka Chlebíková, School of Computing, University of Portsmouth, UK
Richard Zanibbi, Rochester Institute of Technology, US
Michael Kohlhase, Jacobs University Bremen, Germany
Adam Kilgarriff, Lexical Computing Ltd, UK
Leo Freitas, Newcastle University, UK
Frank Tompa, University of Waterloo, Canada
Gudmund Grov, Heriot-Watt University, Edinburgh, UK
Jeremy Avigad, Carnegie Mellon University, US
Stephen Watt, University of Western Ontario, Canada
Temur Kutsia, RISC Institute, JKU Linz, Austria
Manfred Kerber, University of Birmingham, UK
Hoon Hong, North Carolina State University, US
Christoph Lüth, DFKI Bremen, Germany
Thierry Bouche, Université Joseph Fourier (Grenoble), France
Andrea Asperti, University of Bologna, Italy
Jesse Alama, CENTRIA, FCT, Universidade Nova de Lisboa, Portugal
Jiří Rákosník, Institute of Mathematics, Academy of Sciences, Czech Republic
Thomas Hales, University of Pittsburgh, US
Predrag Janičić, Department for Computer Science, University of
(more names will be added as confirmations arrive)
Christoph Lange, School of Computer Science, University of Birmingham
http://cs.bham.ac.uk/~langec/, Skype duke4701
→ Enabling Domain Experts to use Formalised Reasoning @ AISB 2013
2–5 April 2013, Exeter, UK. Deadline 14 Jan
→ Intelligent Computer Mathematics, 7–12 Jul 2013, Bath, UK; Deadline 8 Mar