Hi,
The Wikimedia Foundation is hiring a product manager for multimedia
support/features and, most immediately, the Structured Data on Commons
<https://commons.wikimedia.org/wiki/Commons:Structured_data> project. We
think it would be tremendously helpful if that person was already a member
of the Wikimedia movement. Given that the structured data project is
closely tied to Wikidata, I wanted to make sure folks on this list were
aware.
If you have product management experience and are interested in learning
more, please see the job description and apply here
<https://boards.greenhouse.io/wikimedia/jobs/609403?t=46icqe1#.WMBMwhLyvfY>.
Please copy this notice anywhere else you think interested community
members might see it.
Thanks in advance,
Jon
user:Jkatz (WMF)
My trusty WiDaR OAuth-based tool has been throwing errors for the past few days:
"The authorization headers in your request are not valid: Invalid signature"
I did see the breaking change at
https://lists.wikimedia.org/pipermail/mediawiki-api-announce/2017-February/…
but I am not using lgpassword or lgtoken here.
Did something else change? I notice I have
'oauth_signature_method' => 'HMAC-SHA1'
Did someone perhaps change that after the SHA1 foobar?
Anything else it could be? I know I haven't fiddled with my code...
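For context, an "Invalid signature" means the client and server computed different HMAC-SHA1 signatures over the request. Below is a minimal, self-contained sketch of OAuth 1.0a (RFC 5849) request signing; the URL and parameter values are made up for illustration, and a mismatch anywhere in the base string or secrets produces exactly this kind of error:

```python
# Minimal sketch of OAuth 1.0a HMAC-SHA1 request signing (RFC 5849),
# the scheme named by 'oauth_signature_method' => 'HMAC-SHA1'.
# URL, keys, and parameter values below are illustrative placeholders.
import base64
import hashlib
import hmac
from urllib.parse import quote

def pct(s):
    # Strict RFC 3986 percent-encoding, as OAuth 1.0a requires
    # (only A-Z a-z 0-9 - . _ ~ are left unescaped).
    return quote(str(s), safe='')

def sign_request(method, url, params, consumer_secret, token_secret):
    # 1. Normalize parameters: percent-encode, sort by name then value.
    normalized = '&'.join(
        f'{k}={v}' for k, v in sorted((pct(k), pct(v)) for k, v in params.items())
    )
    # 2. Signature base string: METHOD & encoded-URL & encoded-params.
    base_string = '&'.join([method.upper(), pct(url), pct(normalized)])
    # 3. Signing key: consumer secret and token secret joined by '&'.
    key = f'{pct(consumer_secret)}&{pct(token_secret)}'
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

sig = sign_request(
    'POST',
    'https://www.wikidata.org/w/index.php',
    {'oauth_consumer_key': 'example-key', 'oauth_nonce': 'abc123',
     'oauth_timestamp': '1489000000', 'oauth_signature_method': 'HMAC-SHA1'},
    'consumer-secret', 'token-secret')
print(sig)
```

When this error appears, the mismatch usually hides in the base string (a double-encoded parameter, a changed URL), in clock skew affecting oauth_timestamp, or in a rotated consumer secret.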
Hey everyone,
apologies for the cross-posting, we're just too excited:
we're looking for a new member for our team [0], who'll dive right into
the promising Structured Data project. [1]
Is our future colleague hiding among the tech ambassadors, translators,
GLAM people, and community members we usually work with? We look forward to
finding out soon.
So please, check the full job description [2], apply, or recommend it to
anyone you think may be a good fit. For any questions, please contact
me personally (not here).
Thanks!
Elitre (WMF)
Senior Community Liaison, Technical Collaboration
[0] https://meta.wikimedia.org/wiki/Community_Liaisons
[1] https://commons.wikimedia.org/wiki/Commons:Structured_data
[2]
https://boards.greenhouse.io/wikimedia/jobs/610643?gh_src=o3gjf21#.WMGV0Rih…
Note: Apologies for cross-posting and sending in English. This message is available for translation on Meta-Wiki.[1]
As we mentioned last month, the Wikimedia movement is beginning a movement-wide strategy discussion, a process which will run throughout 2017. This movement strategy discussion will focus on the future of our movement: where we want to go together, and what we want to achieve.
Regular updates are being sent to the Wikimedia-l mailing list,[2] and posted on Meta-Wiki.[3] Each month, we are sending overviews of these updates to this list as well.
Here is an overview of the updates that have been sent since our message last month:
Update 7 on Wikimedia movement strategy process (16 February 2017)
- https://meta.wikimedia.org/?curid=10195092
- Development of documentation for Tracks A & B
Update 8 on Wikimedia movement strategy process (24 February 2017)
- https://meta.wikimedia.org/?curid=10201503
- Introduction of Track Leads for all four audience tracks
Update 9 on Wikimedia movement strategy process (2 March 2017)
- https://meta.wikimedia.org/?curid=10207604
- Seeking feedback on documents being used to help facilitate upcoming community discussions
Sign up to receive future announcements and monthly highlights of strategy updates on your user talk page: https://meta.wikimedia.org/?curid=10153505
More information about the movement strategy is available on the Meta-Wiki 2017 Wikimedia movement strategy portal.[3]
A version of this message is available for translation on Meta-Wiki.[1]
[1] https://meta.wikimedia.org/wiki/Strategy/Wikimedia_movement/2017/Updates/Ov…
[2] https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
[3] https://meta.wikimedia.org/wiki/Strategy/Wikimedia_movement/2017
List moderators may request that their mailing list not receive future updates by contacting Gregory Varnum (gvarnum(a)wikimedia.org).
Hi all,
We finally have something like an official first release of
WikidataIntegrator (WDI), our Python library for Wikidata, which we use
extensively for our bots.
You can either
'pip install wikidataintegrator'
or visit our GitHub repo at https://github.com/sulab/wikidataintegrator
What's unique about it is the tight integration of the Wikidata SPARQL
endpoint and the Wikidata API. WDI also substantially simplifies bot
writing, and comes with pre-write data consistency checks. For more
details, please consult our documentation on GitHub. WDI is in active
development but has proven very stable so far.
We welcome your questions and feedback!
Best,
Sebastian
--
Sebastian Burgstaller-Muehlbacher, PhD
Research Associate
Andrew Su Lab
MEM-216, Department of Molecular and Experimental Medicine
The Scripps Research Institute
10550 North Torrey Pines Road
La Jolla, CA 92037
Hey :)
forwarding a nice overview
Cheers
Lydia
---------- Forwarded message ----------
From: Florence Devouard <fdevouard(a)gmail.com>
Date: Wed, Mar 8, 2017 at 2:01 PM
Subject: [Wikimedia-l] Occupation of Women on WikiData
To: wikimedia-l(a)lists.wikimedia.org
This is a tool built by Envel Le Hir using Wikidata and published today.
I actually gave him the idea during a conference, while talking about
my desire to get general data about women's professional occupations. My
main argument was that I felt many of the added biographies about women
were about actors, singers, or football players, and much fewer about
politicians and businesspeople. But it was a "guess" and I wanted more
hard data.
And apparently... he got busy
http://tools.dicare.org/gaps/gender.php
Ok.
Hard data (1950-2005 birth dates):
* 80% of the biographies of porn actors are about women.
* 98.3% of the biographies of beauty pageant contestants are about women.
* 24% of the biographies of politicians are about women.
* 8.4% of the biographies of computer scientists are about women.
Or... in Algeria, the most popular occupation for women by far is...
volleyball!
In France... actors.
And in Guinea... well... hard to say... only 17 biographies about
Guinean women anyway.
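For anyone who wants to reproduce numbers like these, here is a hedged sketch of the kind of Wikidata SPARQL query involved (P21 = sex or gender, P106 = occupation, Q5 = human, Q6581072 = female; the query string is illustrative and not executed here), together with the percentage arithmetic:

```python
# Illustrative SPARQL for per-occupation gender counts on Wikidata.
# The query is a sketch only and is not sent to the endpoint
# (https://query.wikidata.org/sparql) in this snippet.
QUERY = """
SELECT ?occupationLabel (COUNT(?person) AS ?total)
       (SUM(IF(?gender = wd:Q6581072, 1, 0)) AS ?women)
WHERE {
  ?person wdt:P31 wd:Q5 ;        # instance of: human
          wdt:P106 ?occupation ; # occupation
          wdt:P21 ?gender .      # sex or gender
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
GROUP BY ?occupationLabel
"""

def share_of_women(women, total):
    """Percentage of biographies about women, rounded to one decimal."""
    return round(100.0 * women / total, 1)

# e.g. 24 women among 100 politicians
print(share_of_women(24, 100))
```

The gender-gap figures quoted above are exactly this ratio, computed per occupation over people with both an occupation and a gender statement.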
Florence
_______________________________________________
Wikimedia-l mailing list, guidelines at:
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and
https://meta.wikimedia.org/wiki/Wikimedia-l
New messages to: Wikimedia-l(a)lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
<mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
--
Lydia Pintscher - http://about.me/lydia.pintscher
Product Manager for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Society for the Promotion of Free Knowledge,
registered association. Entered in the register of associations of the
district court of Berlin-Charlottenburg under number 23855 Nz. Recognized
as charitable by the Tax Office for Corporations I Berlin, tax number
27/029/42207.
Hey all,
I'm happy to announce the immediate availability of Replicator 0.2.
Replicator is a CLI application for replicating a Wikibase
<http://wikiba.se/> entity base such as Wikidata <https://www.wikidata.org>.
It can import entities from the Wikidata API and from Wikibase dumps in
various formats. It features abort/resume, graceful error handling,
progress reporting, dynamic fetching of dependencies, API batching and
standalone installation (no local MediaWiki or Wikibase required).
Furthermore, it uses the same deserialization code as Wikibase itself, so it
is always 100% compatible.
This version brings Vagrant support and comes with the latest versions of
the Wikibase deserialization code, needed for recent data.
https://github.com/JeroenDeDauw/Replicator
Right now it's not that easy to add new "replication targets", though if
there is interest in this, I can probably spend some time catering to that
demand.
(I've written this tool, and this email, as a volunteer, and not as a
Wikimedia Deutschland employee.)
Cheers
--
Jeroen De Dauw | https://entropywins.wtf | https://keybase.io/jeroendedauw
Software craftsmanship advocate
~=[,,_,,]:3
Background
DBpedia and Wikidata currently focus primarily on representing factual
knowledge as contained in Wikipedia infoboxes. A vast amount of
information, however, is contained in the unstructured Wikipedia article
texts. With the DBpedia Open Text Extraction Challenge, we aim to spur
knowledge extraction from Wikipedia article texts in order to
dramatically broaden and deepen the amount of structured
DBpedia/Wikipedia data and provide a platform for benchmarking various
extraction tools.
Mission
Wikipedia has become the ubiquitous source of knowledge for the world,
enabling people to look up definitions, quickly become familiar with new
topics, read background information on news events, and more - even
settling coffee house arguments via quick mobile research. The mission
of DBpedia in general is to harvest Wikipedia’s knowledge, refine and
structure it and then disseminate it on the web - in a free and open
manner - for IT users and businesses.
http://wiki.dbpedia.org/textext
Hi all,
I'm co-organizing an Open Data Day event this weekend, as part of
which a group of people who did not know Wikidata tried to find their
way into it on their own.
Their notes are at
https://github.com/sparcopen/open-research-doathon/blob/master/wikidata_for…
.
I'll be seeing some of them again tomorrow, so there is an opportunity
for you to give them feedback, e.g. by way of pull requests or
through comments on the accompanying issue at
https://github.com/sparcopen/open-research-doathon/issues/35 .
Where would be a good place to store such feedback on-wiki?
Thanks and cheers,
Daniel