Hi,
I have a couple of questions regarding the Wiki page ID. Does it always
remain unique to the page, where the page itself is just a placeholder for
any kind of information that might change over time?
Consider the following cases:
1. The first time someone creates page "Moon" it is assigned ID=1. If at
some point the page is renamed to "The_Moon", the ID=1 remains intact. Is
this correct?
2. Suppose we have page "Moon" with ID=1, and someone creates a second page
"The_Moon" with ID=2. Is it possible for page "Moon" to be transformed into
a redirect, so that "Moon" redirects to page "The_Moon"?
3. Is it possible for page "Moon" to become a category "Category:Moon" with
the same ID=1?
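One way to investigate cases like these against a live wiki is the MediaWiki API's `prop=info` query, which returns each page's `pageid`. A minimal sketch (the helper name and the choice of the English Wikipedia endpoint are my own; any MediaWiki API endpoint would do):

```python
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"  # any MediaWiki API endpoint works

def pageinfo_url(titles):
    """Build a prop=info query URL; the JSON response lists each page's pageid.

    With redirects=1, a title that has become a redirect is resolved, so
    comparing pageids before and after a rename shows whether the ID
    follows the page.
    """
    params = {
        "action": "query",
        "prop": "info",
        "titles": "|".join(titles),
        "redirects": 1,
        "format": "json",
    }
    return API + "?" + urlencode(params)

# Fetching this URL (e.g. with urllib.request) and comparing the pageid
# fields before and after a rename would answer question 1 directly.
print(pageinfo_url(["Moon", "The_Moon"]))
```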
Thanks,
Gintas
Hi there,
As of yesterday, lib.reviews supports searching Wikidata, so you can
quickly write a review (think "1-5 stars") of anything with a Wikidata
entry. See more here:
https://lib.reviews/team/developers/post/b2245981-0e59-427f-a7fe-8e685b8276…
lib.reviews is an open source, free content, multilingual, nonprofit
platform for reviewing anything (except living people). Registration
still needs invite codes (until we have better anti-spam and
moderation tools) -- feel free to email me offlist if you want one.
== Implementation ==
Autocompletion is done via the wbsearchentities API, and individual
entries are looked up via wbgetentities. We perform client-side
filtering to remove results that include phrases such as "Wikimedia
disambiguation page", which are not relevant in this context. (This is
done via a regex blacklist that can be different for each language.)
The client performs a "shallow" lookup for only the information the
user cares about, while upon save, the server performs a "deep"
lookup, e.g., it gets descriptions in languages the user doesn't care
about.
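The client-side filtering step can be sketched roughly like this (the function name and the pattern list are illustrative, not lib.reviews' actual code; the real per-language blacklists may differ):

```python
import re

# Illustrative per-language regex blacklist; lib.reviews' real lists may differ.
BLACKLISTS = {
    "en": [re.compile(r"Wikimedia disambiguation page", re.I)],
}

def filter_results(results, lang="en"):
    """Drop wbsearchentities results whose description matches a blacklisted phrase.

    `results` mimics the API's shape: a list of dicts with an optional
    "description" key.
    """
    patterns = BLACKLISTS.get(lang, [])
    return [
        r for r in results
        if not any(p.search(r.get("description", "")) for p in patterns)
    ]

hits = [
    {"id": "Q405", "description": "only natural satellite of Earth"},
    {"id": "Q1234567", "description": "Wikimedia disambiguation page"},
]
print([r["id"] for r in filter_results(hits)])  # → ['Q405']
```

The sample IDs are made up for the demo; only the filtering logic matters here.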
Each review is associated with a "thing" (similar to an item in
Wikidata), and when you review via Wikidata, labels and descriptions
in all supported lib.reviews languages are loaded automatically. We
don't "sync" this data regularly yet -- that's an upcoming feature.
You can either use the search box, or put a Wikidata URL directly into
the URL input field -- the information is looked up either way. And if
you don't have JavaScript enabled, we still fetch all the item
metadata server-side for any Wikidata URL.
== What's Next ==
In addition to automatic synchronization, we'll provide increasing
support for metadata that's relevant to reviews. These features are
intricately linked: as much as possible, we want to avoid duplicating
work, so pulling from free/open databases and keeping that information
up to date is our preferred way of handling review subject metadata.
The system we use for plugging into Wikidata is meant to evolve into a
generic "adapter" architecture. The next adapter will likely be
OpenStreetMap, so it's possible to select an object on a map (e.g., a
restaurant) and use that as a starting point for a review. OSM has a
lot of data that Wikidata doesn't, e.g., opening times for shops and
restaurants.
If more people start using it, it may also make sense to write a bot
to back-link from Wikidata to lib.reviews (there's already a property
for this: https://www.wikidata.org/wiki/Property:P3308 ).
== Questions and Notes ==
- Is there a better way to exclude disambiguation pages and similar
meta-content from autocompletion results than what I'm currently doing
(client-side filtering)? I didn't find a way to restrict
wbsearchentities results by claim, for example.
- Progress on https://phabricator.wikimedia.org/T97566 would help a
lot to clean up descriptions on some items. For applications like this
one, items with descriptions like "for the best-known species, see
Q10757112; for the genus, see Q8666090" are really problematic.
== Getting Involved ==
The most important thing for the success of the project right now is
slowly growing the community of reviewers. So, if you want to review
books, movies, etc., please ask me for a code and get involved! If
you've written reviews on sites like IMDB or Goodreads, I wrote a
browser extension called https://freeyourstuff.cc/ a while ago that
lets you export these -- there's no automatic import feature for
lib.reviews yet, but it should still save some time when copying
things over. Besides, FYS provides online backup to a
community-mirrored database, so it's a good way to liberate your
content either way.
As for development, the project is fully open and there are lots of
areas that any motivated person can work on. For example, improving
the rich-text editor component, improving file uploads, adding image
thumbnail generation, etc.
Find us here:
- Main site: https://lib.reviews/
- GitHub: https://github.com/eloquence/lib.reviews
- IRC: #lib.reviews on irc.freenode.net (also a fine place to ask for
an invite code)
This is all still in early development & bug reports are always welcome.
Cheers :)
Erik
Just a heads up that this week's Wikimedia Research Showcase will focus on
structured data in OpenStreetMap. Details below.
Cheers,
Jonathan
---------- Forwarded message ----------
From: Sarah R <srodlund(a)wikimedia.org>
Date: Tue, Jul 25, 2017 at 11:38 AM
Subject: [Analytics] Research Showcase Wednesday, July 26, 2017 at 11:30 AM
(PST) 18:30 UTC
To: wikimedia-l(a)lists.wikimedia.org, analytics(a)lists.wikimedia.org,
wiki-research-l(a)lists.wikimedia.org
Hi Everyone,
The next Research Showcase will be live-streamed this Wednesday, July 26,
2017 at 11:30 AM (PST) 18:30 UTC.
YouTube stream: https://www.youtube.com/watch?v=yC1jgK8C8aQ
As usual, you can join the conversation on IRC at #wikimedia-research. And,
you can watch our past research showcases here
<https://www.mediawiki.org/wiki/Wikimedia_Research/Showcase#July_2017>.
This month's presentation:
Freedom versus Standardization: Structured Data Generation in a Peer
Production Community
By *Andrew Hall*
In addition to encyclopedia articles
and software, peer production communities produce *structured data*, e.g.,
Wikidata and OpenStreetMap’s metadata. Structured data from peer production
communities has become increasingly important due to its use by
computational applications, such as CartoCSS, MapBox, and Wikipedia
infoboxes. However, this structured data is usable by applications only if
it follows *standards.* We did an interview study focused on
OpenStreetMap’s knowledge production processes to investigate how – and how
successfully – this community creates and applies its data standards. Our
study revealed a fundamental tension between the need to produce structured
data in a standardized way and OpenStreetMap’s tradition of contributor
freedom. We extracted six themes that manifested this tension and three
overarching concepts, *correctness, community,* and *code,* which help make
sense of and synthesize the themes. We also offer suggestions for improving
OpenStreetMap’s knowledge production processes, including new data models,
sociotechnical tools, and community practices.
Kindly,
Sarah R. Rodlund
Senior Project Coordinator-Product & Technology, Wikimedia Foundation
srodlund(a)wikimedia.org
_______________________________________________
Analytics mailing list
Analytics(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/analytics
--
Jonathan T. Morgan
Senior Design Researcher
Wikimedia Foundation
User:Jmorgan (WMF) <https://meta.wikimedia.org/wiki/User:Jmorgan_(WMF)>
Hi!
In the course of work on improving search in Wikidata, we now have an
implementation of entity prefix search (the wbsearchentities API) using
ElasticSearch via the CirrusSearch extension.
This search mode can be enabled with the useCirrus=1 parameter on the
wbsearchentities API (without the parameter, the API behaves as before).
Example:
https://www.wikidata.org/w/api.php?action=wbsearchentities&search=Green&for…
The results format is the same as before, but the search is driven not
by SQL but by ElasticSearch.
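Toggling the parameter makes it easy to compare the two backends side by side. A small sketch (the helper name, search term, and language are arbitrary choices of mine):

```python
from urllib.parse import urlencode

API = "https://www.wikidata.org/w/api.php"

def search_url(term, lang="en", use_cirrus=False):
    """Build a wbsearchentities URL; useCirrus=1 routes the search through
    ElasticSearch instead of the SQL-backed prefix search."""
    params = {
        "action": "wbsearchentities",
        "search": term,
        "language": lang,
        "format": "json",
    }
    if use_cirrus:
        params["useCirrus"] = 1
    return API + "?" + urlencode(params)

# Fetch both URLs and diff the result ordering to spot tuning issues.
print(search_url("Green"))                   # SQL-backed prefix search
print(search_url("Green", use_cirrus=True))  # ElasticSearch-backed search
```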
We've created a test page for trying out the newly developed search of
Wikidata entities via wbsearchentities:
http://elastic-wikidata.wmflabs.org/wb.html
Please use this page to test the search, and if something is broken,
please tell us.
Please note the following:
- ElasticSearch parameters are not fully tuned yet, so there might be
weird results, wrong ordering, etc. The purpose of this test phase is to
discover and weed out such occurrences and figure out the best tuning
parameters. As such, discussions of ideas and things that are not
working right are welcome. If you plan to make a lot of tests - which is
great! - please summarize. You can post feedback on the list, or on the
talk page here:
https://www.wikidata.org/wiki/User_talk:Smalyshev_(WMF)/Wikidata_search
or anywhere on the wiki; in that case, please send me a link :)
- The test page UI is kind of rough: it relies on the language and type
parameters being accurate and has no error handling. It's just
a test :) Suggestions on improving it are welcome though.
More details available in: https://phabricator.wikimedia.org/T125500
Thanks,
--
Stas Malyshev
smalyshev(a)wikimedia.org
Hello all,
Since the WikidataCon is an event designed for and by the Wikidata
community, the program of the conference
<https://www.wikidata.org/wiki/Wikidata:WikidataCon_2017/Program> is open
and all the attendees can suggest ideas, talks and workshops to be included.
The deadline for proposing projects is *July 31st*, next Monday. After
that, the program committee will review, select and organize the projects
to fit in the schedule, and announce the final program around September 1st.
If you want to participate in the program and make sure that the
WikidataCon covers the topics you want, it's time to submit one or
several proposals! Many different formats are possible: talk, workshop,
demo, lightning talk, round table, discussion, request for comment,
meetup, hackathon, sprint...
You can find a list of ideas here
<https://www.wikidata.org/wiki/Wikidata:WikidataCon_2017/Program/Ideas#List_…>.
You're welcome to choose one of these topics and make it yours. We have for
example:
- a demo of all the useful tools for Wikidata,
- an overview of how Wikidata is used in Wikipedia,
- some thoughts about the community (how do you deal with vandalism?),
- discussions about ontologies and the data model,
- a showcase of a successful project,
- meetups for thematic projects... and many more.
Thanks in advance for making the program of the WikidataCon amazing :)
--
Léa Lacroix
Project Manager Community Communication for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 Nz. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/029/42207.
Hello everyone,
There has been higher dispatch lag[1] since the end of last month; see https://grafana.wikimedia.org/dashboard/db/wikidata-dispatch. The English Wikipedia is now three days behind Wikidata, according to https://www.wikidata.org/wiki/Special:DispatchStats. I would like to ask everyone to keep mass edits to items with sitelinks to a minimum until the dispatch lag is back at a reasonable level.
Greetings,
Sjoerd de Bruin
sjoerddebruin(a)me.com
*Apologies for cross posting*
Dear Colleagues,
With only three weeks to go until Repository Fringe 2017 takes place at the John McIntyre Centre at the University of Edinburgh on 3 & 4 August, registrations are filling up fast and we have a packed programme which will hopefully pique your interest!
Registration closes on 28 July so make sure you register now at: http://www.epay.ed.ac.uk/conferences-events/information-services/informatio…
We have a really exciting programme lined up this year with a range of topics including:
· Kathleen Shearer, COAR - Raising our game - repositioning repositories as the foundation for sustainable scholarly communication
· Paul Ayris, UCL - ‘The Empires of the Future are the Empires of the Mind’ [Winston Churchill]: Defining the Role of Libraries in the Open Science Landscape
· Chris Banks, Imperial College - Focusing upstream: supporting scholarly communication by academics
· Ewan McAndrew, University of Edinburgh & Navino Evans, Histropedia - Wikidata I/O Showcase
Programme Hot Pick - Wikidata Showcase
This Wikidata event at Repo Fringe is scheduled for 1pm to 3.15pm on Friday 4th August. It will be split into two main segments.
Part 1: Adding data to Wikidata: the Wikidata hackathon
The first session, from 1pm to 2.30pm, will focus on how to add data to Wikidata. It will include a short intro to what Wikidata is before looking at how individual items of data can be added and, importantly, backed up with references, as part of a data hackathon.
Part 2: Querying and visualising Wikidata
The second session, 2.30pm to 3.15pm, will look at how the data in Wikidata can be consumed, queried and visualised; whether it’s Voltaire’s works, the collections of the National Library of Wales, an analysis of MPs’ occupations or the 3 million linked citations visualised using the new Scholia tool.
In addition to these sessions we are also running a SPARQL workshop from 9.30am-10.30am on Friday morning.
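To give a taste of the querying segment, here is a minimal SPARQL example against the Wikidata Query Service listing Voltaire's works (Voltaire is Q9068, "author" is P50); the Python wrapper around it is just a sketch of mine:

```python
from urllib.parse import urlencode

WDQS = "https://query.wikidata.org/sparql"

# Works whose author (P50) is Voltaire (Q9068), with English labels.
QUERY = """
SELECT ?work ?workLabel WHERE {
  ?work wdt:P50 wd:Q9068 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

def wdqs_url(query):
    """Build a GET URL for the query service; format=json requests
    machine-readable results instead of the HTML page."""
    return WDQS + "?" + urlencode({"query": query, "format": "json"})

print(wdqs_url(QUERY))
```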
Find out more about our programme at: http://rfringe17.blogs.edina.ac.uk/programme/
2017 marks the 10th Repo Fringe, where we will celebrate the progress made over the last 10 years in sharing content beyond borders and debate future trends and challenges.
We look forward to seeing you in August!
Kind regards,
Ewan McAndrew
Wikimedian in Residence
Tel: 07719 330076
Email: ewan.mcandrew(a)ed.ac.uk
Subscribe to the mailing list: wikimedia(a)mlist.is.ed.ac.uk
My working hours are 10.30am to 6.30pm Monday to Friday.
Wikipedia Project Page for the residency: https://en.wikipedia.org/wiki/Wikipedia:University_of_Edinburgh
The University of Edinburgh, Floor H (West), Argyle House, 3 Lady Lawson Street, Edinburgh, EH3 9DR.
www.ed.ac.uk
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.