Hello all,
A short update about the WikidataCon program. The call for projects
<https://www.wikidata.org/wiki/Wikidata:WikidataCon_2017/Program> is
running until July 31st, and we already have a bunch of nice submissions
<https://www.wikidata.org/wiki/Wikidata:WikidataCon_2017/Program/Submissions>.
But we need more!
What do you think should be in the program? What topics and issues should
absolutely be discussed during the conference? What tools should be demoed?
What meetups should happen?
Feel free to add your ideas on the list
<https://www.wikidata.org/wiki/Wikidata:WikidataCon_2017/Program/Ideas>.
You can also add more details on the talk page; there you can also ping
people who would be a great fit for one of the topics.
And of course, if you plan to attend and you feel like taking care of one
of the topics, please submit it
<https://www.wikidata.org/wiki/Wikidata:WikidataCon_2017/Program/Submit> :)
If you have any questions, feel free to contact the program committee
<https://www.wikidata.org/wiki/Wikidata_talk:WikidataCon_2017/Volunteer/Prog…>.
Thanks,
--
Léa Lacroix
Project Manager Community Communication for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under number 23855 Nz. Recognized as charitable by
the Finanzamt für Körperschaften I Berlin, tax number 27/029/42207.
Hi all,
Here's an interesting discovery I recently made while working with the
upcoming update to Apple's mobile devices. Siri, the speech
recognition/personal assistant in the operating system, often responds to
questions about many things with content from Wikimedia projects.
I have two devices in my household and compared the differences in
responses to the question, "Who is Grover Cleveland?"
In the current OS, iOS 10: http://imgur.com/3sFUCZY
In the current beta for the next OS, iOS 11: http://imgur.com/Usz8Ryx
The description of the subject is a Wikidata description, and there are
additional fields about the subject: four are visible in iOS 10, while
scrolling through the iOS 11 results reveals eleven.
Yours,
Chris Koerner
clkoerner.com
Hi,
I was working on the term 'identity' with respect to internet-related
concepts, and thereafter started looking for an RDF source for an English
thesaurus or dictionary, but couldn't find one. I found
https://en.wiktionary.org/wiki/Wiktionary:Main_Page but it didn't seem to
have well-formed RDF output that could act as an ontological source
(rather than simply using RDF for SEO).
I thereafter started writing; this is where I got up to:
Project Purpose
To generate an RDF-compliant dictionary and thesaurus for the purpose of
ontological reuse on the web.
PROBLEM
We use language to develop web pages whose meaning is inferred by human
readers. Yet the definitions of these terms are not necessarily
machine-readable.
For Example: "identity".
When working on 'digital identity', the term is often taken to mean how
people log in to their personal accounts, or the means by which they
interact with their personal data or that of others. HOWEVER, identity can
also mean 'sameness', which can be useful for organisations such as
website operators to say 'these people each hold one of my website
identities', that is to say, they're all consumers.
http://www.dictionary.com/browse/identity
This can be further clarified by looking at the different meanings provided
to the same word via a thesaurus: http://www.thesaurus.com/browse/identity
I thereafter looked for a way in which a statement of exactness could be
made via RDF, but couldn't find an appropriate RDF dictionary resource.
SOLUTION
Build an online dictionary and thesaurus that is machine-readable. It
makes sense that this may best be done with wiki technology.
FEATURES
- The project would firstly focus on the lexicography of the English
language and related dialects. This is expected to include work on adding
Latin predicates.
- The project would produce a comprehensive thesaurus, including unique
identifiers for different uses of the same term (supporting a comprehension
of the differentiation in the use of that term).
- The project would produce a platform that provides RDF output in a number
of serialisations (see the sketch after this list).
- The project would provide the means for people to add/edit content on the
site.
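To illustrate the kind of output I have in mind, here is a rough sketch
using Python and rdflib; the namespace, class, and property names are
entirely made up, not an existing vocabulary:

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Hypothetical namespace for the proposed dictionary/thesaurus.
LEX = Namespace("http://example.org/lexicon/")

g = Graph()
g.bind("lex", LEX)

# One URI for the word, and one URI per distinct sense, so a statement
# can point at an exact meaning rather than the ambiguous word itself.
word = LEX["identity"]
senses = [
    (LEX["identity-sense-1"],
     "the means by which a person logs in to and manages their accounts"),
    (LEX["identity-sense-2"],
     "the state of being the same; sameness"),
]
for sense_uri, gloss in senses:
    g.add((sense_uri, RDF.type, LEX.WordSense))
    g.add((sense_uri, LEX.senseOf, word))
    g.add((sense_uri, RDFS.comment, Literal(gloss, lang="en")))

# Serialise in one of several supported formats (turtle, xml, n3, nt...).
out = g.serialize(format="turtle")
print(out.decode("utf-8") if isinstance(out, bytes) else out)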
PRODUCTION METHOD
It is hoped the site can be rapidly populated using scripts that ingest
existing information from freely available sources, populating the system
with information in an RDF-compliant format that can then be altered,
edited, and updated in a wiki-like fashion.
USES
For the communication of specific concepts in a manner that can be further
clarified by both human and machine observers, ensuring that parties
communicate and/or develop works upon a common understanding of the
meaning given to the language used.
I had concerns that the Wikidata site seemed to be oriented more towards
the concept of schema.org/Thing than towards a 'language' or other form of
predicate. Please let me know your thoughts. Perhaps I've missed something
entirely and this already exists? Perhaps people have been thinking about
it elsewhere? Perhaps there are barriers I'm not aware of...
Timothy Holborn.
Hi all,
We're trying to extract the full type hierarchy of Wikidata, starting from
all occurrences of P31 and P279. While we have some custom code for this,
we're thinking there may be a smarter/more efficient way of doing it using
SPARQL or a tool that we are probably unaware of. Any hint would be
appreciated. :)
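For concreteness, this is roughly the SPARQL route we are considering; a
minimal sketch against the public query service (the full P279 result set
may hit the service's timeout, in which case the RDF dumps are probably
the safer route):

import requests

WDQS = "https://query.wikidata.org/sparql"
# All direct subclass-of pairs; the same shape works for P31 (instance of).
QUERY = "SELECT ?child ?parent WHERE { ?child wdt:P279 ?parent . }"

def subclass_pairs():
    resp = requests.get(
        WDQS,
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "type-hierarchy-sketch/0.1"},
    )
    resp.raise_for_status()
    for row in resp.json()["results"]["bindings"]:
        # Strip the entity URI prefix, keeping just the Q-ids.
        yield (row["child"]["value"].rsplit("/", 1)[-1],
               row["parent"]["value"].rsplit("/", 1)[-1])

hierarchy = {}
for child, parent in subclass_pairs():
    hierarchy.setdefault(child, set()).add(parent)
print(len(hierarchy), "classes with at least one superclass")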
Thanks,
Leila
In case you're wondering why we ended up with this question and who "we"
is ;): the research is being documented at
https://meta.wikimedia.org/wiki/Research_talk:Expanding_Wikipedia_stubs_acr…
. (The documentation is not fully up to date, but it will give you the
gist of what we are doing.)
We are interested in building systems that can help editors and editathon
organizers identify the most common structures for different article types
given the already existing articles in each type/category in Wikipedia (in
a fixed language or across languages) and the information available in
those articles.
The challenge we have run into, and we're not the first to run into it, is
that the categories in Wikipedia don't (as a whole) form an is-a
hierarchy. This is a big problem for information extraction based on the
category system, and we're trying to find a way to clean it up before
starting to use it for this research. (We've looked at the body of
research that attempts to clean up the Wikipedia category system for
knowledge extraction, and none of what we've found addresses the problem
we have. More on that once we complete the documentation.)
Hello all,
As you may know, WMF, WMDE, and volunteers are working together on the
Structured Data for Commons
<https://commons.wikimedia.org/wiki/Commons:Structured_data> project.
We're currently working on a lot of technical groundwork for this project.
One big part of that is allowing the use of Wikidata's items and
properties to describe media files on Commons. We call this feature
federation. We have now developed the necessary code for it, and you can
try it out on a test system and give feedback.
We have one test wiki that represents Commons (
http://structured-commons.wmflabs.org) and another one simulating Wikidata (
http://federated-wikidata.wmflabs.org). You can see an example
<http://structured-commons.wmflabs.org/wiki/MediaInfo:M13> where the
statements use items and properties from the faked Wikidata. Feel free to
try it by adding statements to some of the files on the test system.
(You might need to create some items on
http://federated-wikidata.wmflabs.org if they don’t exist yet. We have
created a few for testing.)
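If you prefer to inspect the data directly, the test wiki should expose
the standard Wikibase API as well; a quick sketch (assuming the usual
api.php endpoint and that wbgetentities accepts MediaInfo M-ids there):

import requests

resp = requests.get(
    "http://structured-commons.wmflabs.org/w/api.php",
    params={"action": "wbgetentities", "ids": "M13", "format": "json"},
)
resp.raise_for_status()
# Dump the raw entity JSON, including any federated statements.
print(resp.json()["entities"]["M13"])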
If you have any questions or concerns, please let us know.
Thanks,
--
Léa Lacroix
Project Manager Community Communication for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under number 23855 Nz. Recognized as charitable by
the Finanzamt für Körperschaften I Berlin, tax number 27/029/42207.
Hi!
I'm a Polish Wikipedian currently working for WMF. My task is to ensure
that various online communities are aware of the movement-wide strategy
discussion[1], and to facilitate and summarize your discussions. Now, I'd
like to invite you to Cycle 3 of the discussion.
Between March and May, members of many communities shared their opinions on
what they want the Wikimedia movement to build or achieve. (The report
written after Cycle 1 is available[2], and a similar report after Cycle 2
will be available soon.)
At the same time, designated people did research outside of our movement.
They:
1. talked with more than 150 experts and partners from the technology,
knowledge, education, media, entrepreneurship, and other sectors,
2. researched potential readers and experts in places where Wikimedia
projects are not well known or used,
3. researched by age group in places where Wikimedia projects are well
known and used.
Now the research conclusions are published, and Cycle 3 is beginning[3].
Our task is to discuss the identified challenges and think about how we
want to change or align with the changes happening around us. Each week, a
new challenge will be posted. The discussions will take place until the
end of July.
All of you are invited! If you want to ask a question, write here. You
might also take a look at our FAQ[4].
Thanks!
[1]
https://meta.wikimedia.org/wiki/Special:MyLanguage/Strategy/Wikimedia_movem…
[2]
https://meta.wikimedia.org/wiki/Special:MyLanguage/Strategy/Wikimedia_movem…
[3] https://www.wikidata.org/wiki/Wikidata:Strategy_2017
[4]
https://meta.wikimedia.org/wiki/Special:MyLanguage/Strategy/Wikimedia_movem…
*Szymon Grabarczuk*
Free Knowledge Advocacy Group EU
Head of R&D Group, Wikimedia Polska
pl.wikimedia.org/wiki/User:Tar_Lócesilion
Hi everyone,
The StrepHit team [1] has submitted an official uplift proposal for the
primary sources tool [2].
This is part of a Wikimedia project grant [3], which has two big goals:
1. to improve the reference coverage of Wikidata statements;
2. to standardize the data release workflow for third-party providers.
We have worked hard to integrate past discussions, extensively
investigate the MediaWiki and Wikidata code bases, and interact with
specific people from the community.
Now we are pretty much satisfied with our proposal, and we hope you are
too. Feel free to react on the project page!
Special thanks go to the StrepHit folks Tommaso (User:Kiailandi) and
Francesco (User:Afnecors), and to everyone who supported us in this
delicate phase.
Best,
Marco
[1]
https://meta.wikimedia.org/wiki/Grants:IEG/StrepHit:_Wikidata_Statements_Va…
[2] https://www.wikidata.org/wiki/Wikidata:Primary_sources_tool
[3]
https://meta.wikimedia.org/wiki/Grants:IEG/StrepHit:_Wikidata_Statements_Va…
Hey,
I'm currently working on a bot for a data import about German judges. The initial purpose was to compare our data against Wikidata, find the QIDs of all existing items that match our records, and record them. We noticed that almost half of the data is missing on Wikidata, so we decided to create a bot that imports all the missing entries.
I already read the Wikidata:Bots page (https://www.wikidata.org/wiki/Wikidata:Bots) but I still have some questions about the bot requirements:
- The first section applies to all bots, and thus also to ours. One of these requirements is to be able to set a limit on the maximum number of edits per minute. But what exactly does "edit" mean in that case? Are creating an item and adding a label each a separate edit?
- The second section lists requirements for "Langlink import bots". In which cases do these requirements apply to our bot? In addition, this section links to a full list of requirements for "import bots". Which of these entries are requirements, and which are merely recommendations?
- The third section, "Statement adding bots", includes the requirement to "Monitor constraint violation reports for possible errors generated or propagated by your bot". Should that be implemented in the bot as well, or is it rather a task for the bot operator?
It would be very helpful to have some example code (preferably in Python with Pywikibot) of a bot that currently has a bot flag and does imports. Does anyone know where to find some?
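In the meantime, here is a rough, untested sketch of what I imagine our
import bot could look like with Pywikibot; the labels, descriptions, and
the P31 target are placeholders, not our actual data:

import pywikibot
from pywikibot import config

# Pywikibot's built-in throttle: minimum seconds between write
# operations, i.e. at most ~6 writes per minute with this value.
config.put_throttle = 10

def create_judge_item(repo, label_de, description_de):
    # A fresh ItemPage without an ID; editEntity creates the item.
    item = pywikibot.ItemPage(repo)
    item.editEntity(
        {"labels": {"de": label_de},
         "descriptions": {"de": description_de}},
        summary="Importing German judges (bot task)",
    )
    # instance of (P31) -> human (Q5)
    claim = pywikibot.Claim(repo, "P31")
    claim.setTarget(pywikibot.ItemPage(repo, "Q5"))
    item.addClaim(claim, summary="Importing German judges (bot task)")
    return item

site = pywikibot.Site("wikidata", "wikidata")
repo = site.data_repository()
create_judge_item(repo, "Beispiel Richterin", "deutsche Richterin")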
Thanks in advance!
Best,
Marisa Nest
………………………………………………………………………
Student assistant at Human-Centered Computing (HCC) Lab
Freie Universität Berlin | Institute of Computer Science
https://www.mi.fu-berlin.de/inf/groups/hcc/
Congratulations and all the best!
Samuel,
_____________
Excuse the brevity or typos, sent from my HTC phone.
On Jul 3, 2017 12:19 PM, "Erica Litrenta" <elitrenta(a)wikimedia.org> wrote:
> Hey all,
> as an update on this:
> The Technical Collaboration team
> <https://meta.wikimedia.org/wiki/Technical_Collaboration> is very happy
> to welcome Sandra Fauconnier
> <https://meta.wikimedia.org/wiki/User:SFauconnier_(WMF)>, our new community
> liaison <https://meta.wikimedia.org/wiki/Community_Liaisons> focusing on
> the Structured Data <https://commons.wikimedia.org/wiki/Structured_data> program.
> Sandra will support the collaboration between the communities (Commons,
> Wikidata, GLAM <https://outreach.wikimedia.org/wiki/GLAM>…) and the
> product development teams involved at the Wikimedia Foundation and
> Wikimedia Germany. Sandra will also drive the engagement with new
> individual content contributors, existing and new GLAM organizations, and
> developers interested in exploring the possibilities of the new platform.
>
> Learn more about and congratulate her at
> https://meta.wikimedia.org/wiki/Talk:Technical_Collaboration#Welcoming_Sandra_Fauconnier.2C_our_new_Structured_Data_community_liaison .
>
> Best,
> Elitre (WMF)
>
>
>
> On Fri, Mar 10, 2017 at 1:20 PM, Erica Litrenta <elitrenta(a)wikimedia.org>
> wrote:
>
>> Hey everyone,
>>
>> apologies for the cross-posting, we're just too excited:
>> we're looking for a new member for our team [0], who'll dive right into
>> the promising Structured Data project. [1]
>>
>> Is our future colleague hiding among the tech ambassadors, translators,
>> GLAM people, community members we usually work with? We look forward to
>> finding out soon.
>>
>> So please, check the full job description [2], apply, or tell/recommend
>> anyone who you think may be a good fit. For any questions, please contact
>> me personally (not here).
>>
>> Thanks!
>> Elitre (WMF)
>> Senior Community Liaison, Technical Collaboration
>>
>>
>> [0] https://meta.wikimedia.org/wiki/Community_Liaisons
>>
>> [1] https://commons.wikimedia.org/wiki/Commons:Structured_data
>>
>> [2] https://boards.greenhouse.io/wikimedia/jobs/610643?gh_src=o3gjf21#.WMGV0Rih3GI
>>
>
>
> _______________________________________________
> GLAM mailing list
> GLAM(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/glam
>
>
Great news!
On Mon, Jul 3, 2017 at 2:19 PM, Erica Litrenta <elitrenta(a)wikimedia.org>
wrote:
> Hey all,
> as an update on this:
> [snip: full announcement quoted above]