Hi,
I'm looking into ways to use tabular data like
https://commons.wikimedia.org/wiki/Data:Zika-institutions-test.tab
in SPARQL queries but could not find anything on that.
My motivation here comes in part from the timeout limits: the basic
idea would be to split queries that typically time out into sets of
queries that do not time out, such that aggregating their results
yields what the original query would have returned had it not timed
out.
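To make that concrete, here is a rough sketch of what I mean, in
Python against the public WDQS endpoint. The partition used (counting
humans by year of birth, P569) is just an illustrative assumption;
any set of mutually exclusive, exhaustive slices would do:

import requests

ENDPOINT = "https://query.wikidata.org/sparql"

# One slice of the original query per year; each slice on its own
# should finish well within the timeout.
QUERY_TEMPLATE = """
SELECT (COUNT(?item) AS ?count) WHERE {{
  ?item wdt:P31 wd:Q5 ;
        wdt:P569 ?dob .
  FILTER(YEAR(?dob) = {year})
}}
"""

total = 0
for year in range(1900, 1910):
    response = requests.get(
        ENDPOINT,
        params={"query": QUERY_TEMPLATE.format(year=year),
                "format": "json"},
        headers={"User-Agent": "query-splitting-sketch/0.1 (example)"},
    )
    response.raise_for_status()
    bindings = response.json()["results"]["bindings"]
    total += int(bindings[0]["count"]["value"])

# Aggregating the per-slice counts reproduces the result the
# unsliced (possibly timing-out) query would have given.
print("aggregated count:", total)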
The second line of motivation is keeping track of how things develop
over time, which would be interesting both for content and
maintenance queries and for tracking the usage of things like
classes, references, lexemes or properties.
I would appreciate any pointers or thoughts on the matter.
Thanks,
Daniel
Hi all!
I really try not to spam the chat too much with pointers to my work on the
Abstract Wikipedia, but this one is probably also interesting for Wikidata
contributors. It is the draft of a chapter submitted to Koerner and
Reagle's Wikipedia@20 book, and it talks about knowledge diversity in the
light of centralisation through projects such as Wikidata.
The public commenting phase is open until July 19, and comments are very
welcome:
"Collaborating on the sum of all knowledge across languages"
About the book: https://meta.wikimedia.org/wiki/Wikipedia@20
Link to chapter: https://wikipedia20.pubpub.org/pub/vyf7ksah
Cheers,
Denny
Hi!
As part of our Wikidata Query Service setup, we maintain the namespace
serving DCAT-AP (DCAT Application Profile) data[1]. (If you don't know
what I'm talking about you can safely ignore the rest of the message).
A recent check showed that this namespace is virtually unused - over the
last two months, only 3 queries per month were served from that namespace,
and all of them came from WMF servers (not sure whether it's a tool or
somebody querying manually; I did not dig further).
So I wonder whether it makes sense to continue maintaining this
namespace. While it does not require very significant effort - it's
mostly automated - it does need occasional attention when maintenance is
performed, and some scripts and configurations become slightly more
complex because of it. That's no big deal if somebody is using it - that's
what the service is for - but if it is completely unused, there is no
point in spending even minimal effort on it, at least on the main
production servers (of course, it would be possible to set up a simple
SPARQL server in Labs with the same data).
In any case, the RDF dcatap data will remain available at
https://dumps.wikimedia.org/wikidatawiki/entities/dcatap.rdf - no change
is planned there - but if the namespace is phased out, the data would no
longer be queryable through WDQS. One could still download it and, since
it's a very small dataset, use any tool that can read RDF to parse it
and work with it.
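For example, here is a minimal sketch of working with the dump
locally, assuming Python with rdflib installed (pip install rdflib):

from rdflib import Graph

# The dump is RDF/XML and small, so it can be loaded straight from
# the URL given above.
g = Graph()
g.parse("https://dumps.wikimedia.org/wikidatawiki/entities/dcatap.rdf",
        format="xml")

print(len(g), "triples")

# rdflib ships its own SPARQL engine, so queries could still be run
# locally against the parsed graph even without the WDQS namespace.
for subj, pred, obj in g.query(
        "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 5"):
    print(subj, pred, obj)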
I'd like to hear from anybody interested in this whether they are using
this namespace or plan to use it, and what for. Please answer either here
or, even better, in the task[2] on Phabricator.
[1]
https://www.mediawiki.org/wiki/Wikidata_Query_Service/User_Manual#DCAT-AP
[2] https://phabricator.wikimedia.org/T228297
--
Stas Malyshev
smalyshev(a)wikimedia.org
Greetings,
The Structured Data team is testing adding, editing, and removing
statements other than depicts on file pages and in the UploadWizard. If
all goes well, the team will turn this on for Commons later next week. You
can read more about the testing on the SDC talk page [0], or head over to
https://test-commons.wikimedia.org and pick a random file to get started.
Feedback is welcome at the same SDC talk page linked below.
0.
https://commons.wikimedia.org/wiki/Commons_talk:Structured_data#Testing_sup…
--
Keegan Peterzell
Community Relations Specialist
Wikimedia Foundation
Hello,
We have requested a 30-minute read-only window for s8 (wikidata) on the
30th of July from 05:00-05:30 UTC to switch over our primary wikidata
database master (T227063).
db1071 is an old host, out of warranty, that will be decommissioned
(T217396).
This will also unblock some migration stages of the new redesign of the
wb_terms table (T221764).
We are going to do this on Tuesday, 30th July, from 05:00 to 05:30 UTC (we
do not expect to need the full 30-minute window if everything goes as
expected).
Impact: writes will be blocked; reads will remain unaffected.
Communication will happen in #wikimedia-operations.
If you are around at that time and want to help with the monitoring, please
join us!
Thanks
Hello everyone,
Sending a reminder that only a few days are left to recommend tools for the
Coolest Tool Award 2019:
https://meta.wikimedia.org/wiki/Coolest_Tool_Award. Nominate
your favorite tools by July 29 :-)
Cheers,
Srishti
*Srishti Sethi*
Developer Advocate
Wikimedia Foundation <https://wikimediafoundation.org/>
On Tue, Jul 16, 2019 at 3:43 PM Bryan Davis <bd808(a)wikimedia.org> wrote:
> On Tue, Jul 16, 2019 at 4:32 PM Brian Wolff <bawolff(a)gmail.com> wrote:
> >
> > To clarify, does this mean tool in the sense of "tool"server? (Aka
> > coolest thing hosted using cloud services) or is it more general
> > including gadgets, standalone apps or any other piece of technology
> > in the wikimedia ecosystem that's "cool"?
>
> Any Wikimedia related software is eligible. Gadgets, Lua modules, user
> scripts, bots no matter where they normally run, web tools no matter
> where they are hosted, desktop tools, mobile apps, ...
>
> We have a huge ecosystem of things that people build and use to make
> working on the Wikimedia projects easier and the Coolest Tool Academy
> wants to hear about all of them. You could honestly even nominate your
> favorite GNU Linux distribution if you can explain how it makes life
> as a Wikimedian better.
>
> Bryan
> --
> Bryan Davis Wikimedia Foundation <bd808(a)wikimedia.org>
> [[m:User:BDavis_(WMF)]] Manager, Technical Engagement Boise, ID USA
> irc: bd808 v:415.839.6885 x6855
>
Hey,
is anyone working on, or has anyone worked on, generating EntitySchemas
from the Wikidata Lexeme Forms data that Lucas is collecting?
It seems that most of the data necessary for these should already be there.
E.g. generating
https://www.wikidata.org/wiki/EntitySchema:E34
from
https://www.wikidata.org/wiki/Wikidata:Wikidata_Lexeme_Forms/German
(As if Danish and German were the same language, which they obviously are
not; this is just to exemplify the idea.)
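To sketch the idea in Python (note: the template structure below is
my guess at roughly what the Lexeme Forms tool stores, and the
emitted ShEx is simplified; both would need checking against the
real data and against E34):

# Hypothetical, simplified structure of one Lexeme Forms template.
template = {
    "language_item_id": "Q188",           # German
    "lexical_category_item_id": "Q1084",  # noun
    "forms": [
        # nominative singular
        {"grammatical_features_item_ids": ["Q131105", "Q110786"]},
        # nominative plural
        {"grammatical_features_item_ids": ["Q131105", "Q146786"]},
    ],
}

def to_entity_schema(t):
    """Emit a simplified ShEx schema: one form shape per template form."""
    constraints = [
        f"wikibase:lexicalCategory [ wd:{t['lexical_category_item_id']} ]"
    ]
    constraints += [
        f"ontolex:lexicalForm @<form{i}>" for i in range(len(t["forms"]))
    ]
    shapes = ["<lexeme> {\n  " + " ;\n  ".join(constraints) + "\n}"]
    for i, form in enumerate(t["forms"]):
        features = " ".join(
            "wd:" + q for q in form["grammatical_features_item_ids"])
        shapes.append(
            f"<form{i}> {{\n  wikibase:grammaticalFeature [ {features} ]\n}}"
        )
    prefixes = (
        "PREFIX wd: <http://www.wikidata.org/entity/>\n"
        "PREFIX wikibase: <http://wikiba.se/ontology#>\n"
        "PREFIX ontolex: <http://www.w3.org/ns/lemon/ontolex#>\n\n"
        "start = @<lexeme>\n\n"
    )
    return prefixes + "\n\n".join(shapes)

print(to_entity_schema(template))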
If not, does anyone want to work / cooperate on that?
Cheers,
Denny