Hey,
A week ago I started working on a project which tries to make it simpler
to query Wikidata.
It's still at a very early stage of development, but I would like to hear
your feedback, especially if you haven't used SPARQL because you found it
too complicated to get started with.
qwery.me
My approach was to simplify SPARQL as much as possible without losing too
much of its power. Basically, you can just write statements and the IDs
are autocompleted. Currently a lot of features are still missing, such as
data literals, filters and GROUP BY, which I want to implement eventually.
So please tell me: is something like this useful to you? Is it simple
enough?
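To give a feel for the boilerplate being hidden, here is a minimal sketch (in Python, and not qwery.me's actual code) of the kind of translation involved: a few "property = value" statements are expanded into the full SPARQL that the Wikidata query service expects. The IDs used (P31 "instance of", Q5 "human", P106 "occupation", Q11631 "astronaut") are real Wikidata identifiers.

```python
# Hypothetical sketch of turning simple statements into SPARQL, so the user
# never writes the boilerplate by hand. Not qwery.me's actual implementation.
def build_sparql(statements):
    """statements: list of (property_id, item_id) pairs, e.g. [("P31", "Q5")]."""
    triples = "\n  ".join(f"?item wdt:{p} wd:{q} ." for p, q in statements)
    return (
        "SELECT ?item ?itemLabel WHERE {\n"
        f"  {triples}\n"
        '  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }\n'
        "}"
    )

# "instance of (P31) = human (Q5)" and "occupation (P106) = astronaut (Q11631)"
print(build_sparql([("P31", "Q5"), ("P106", "Q11631")]))
```

The generated query can be pasted into query.wikidata.org as-is.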
Cheers,
Paul
Hoi,
At Wikidata we often find issues with data imported from a Wikipedia. Lists
of these issues have been produced on the Wikipedia involved, and arguably
they do represent quality issues for Wikipedia, or for Wikidata for that
matter. So far hardly anything has resulted from such outreach.
When Wikipedia is a black box that does not communicate with the outside
world, at some stage the situation becomes toxic. At this moment there are
already those at Wikidata who argue that we should not bother with
Wikipedia quality because, in their view, Wikipedians do not care about
their own quality. Arguably, known quality issues are the easiest to solve.
There are many ways to approach this subject. It is indeed a quality issue
for both Wikidata and Wikipedia. It can also be seen as a research issue:
how to deal with quality, and how do such mechanisms function, if at all?
I blogged about it.
Thanks,
GerardM
http://ultimategerardm.blogspot.nl/2015/11/what-kind-of-box-is-wikipedia.ht…
Hoi,
Yes.
Data in Wikidata can be dated, and the latest data can be marked as
current. As I understand it, Lua can be used to retrieve the latest data
from Wikidata. So yes, you can upload the census data to Wikidata and use
templates on any Wikipedia to show the latest data for any and all
Australian settlements.
I am not the right person to ask for the Lua code, which is why I included
Wikidata-l.
Thanks,
GerardM
On 20 November 2015 at 22:44, Kerry Raymond <kerry.raymond(a)gmail.com> wrote:
> Gerard,
>
> Can you provide some URLs for these lists and blog postings please?
>
> I think part of the problem may be that the information never reaches
> “ordinary editors”. Communication channels on our projects are very poor. I
> read article talk pages and the Australian Wikipedians Noticeboard, but not
> a lot of other places.
>
> However, I have a problem and I wonder if Wikidata can help with it. We
> have a census in Australia every 5 years and the population data from the
> most recent census (2011) is a standard item in every lede and infobox for
> any Australian place (town/suburb/locality) article on en.WP at least.
> However, maintaining that information is a massive tedious manual task. As
> a consequence, we still have lots of articles with 2006 census data while
> the 2016 census is coming at us like a freight train. The 2016 census will
> be the first one done primarily online (normally we fill out a long paper
> form and so there are months of data entry which delays the release of the
> data) and the data will be released around mid-2017. Now all this
> population data is available as spreadsheets under CC-BY license.
>
> My question is this: can we upload these spreadsheets into Wikidata and
> then create some kind of template on en.WP which can extract that data
> from Wikidata? I am thinking of something like:
>
> {{CensusAUlatest|QLD|Childers}}
>
> Which we could embed in, say, the lede and which would produce something
> like
>
> In the 2016 Australian census, Childers reported a population of 12,345.
> <ref>….</ref>
>
> Where the 12,345 (and probably some components of the citation) would be
> extracted from the 2016 spreadsheet entry for Childers. I’ve asked a few
> people if this is possible to automate in this way and I get the standard
> response “it might be but I don’t know enough about Wikidata”.
>
> We have a similar problem with climate data, where again we can probably
> obtain spreadsheets with the data under a suitable license, if we had a
> way to automatically incorporate it into articles without the current
> massive manual effort.
>
> Do you have any advice for us? I am sure we are not the only nation with
> this census problem, although I realise that in some countries the data may
> not be released in suitable formats or with suitable licenses.
>
> Kerry
>
> *From:* Wiki-research-l [mailto:wiki-research-l-bounces(a)lists.wikimedia.org] *On Behalf Of* Gerard Meijssen
> *Sent:* Friday, 20 November 2015 5:18 PM
> *To:* Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>; Research into Wikimedia content and communities <wiki-research-l(a)lists.wikimedia.org>; WikiData-l <wikidata-l(a)lists.wikimedia.org>
> *Subject:* [Wiki-research-l] Quality issues
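The first step in the workflow Kerry asks about — turning each row of a CC-BY census spreadsheet into a dated population record that a bot or bulk-import tool could write to Wikidata — can be sketched as follows. The column names and sample figures below are invented for illustration; the real ABS spreadsheets will differ.

```python
import csv
import io

# Hypothetical sketch: parse a census spreadsheet (CSV) into per-place
# population records. On Wikidata each record would become a P1082
# (population) statement with a P585 ("point in time") qualifier.
# Columns and sample data are invented, not the real ABS layout.
SAMPLE = """place,population
Childers,12345
Gympie,21599
"""

def parse_census(text, census_date="2016-08-09"):
    rows = csv.DictReader(io.StringIO(text))
    return [
        {"place": r["place"],
         "population": int(r["population"]),
         "point_in_time": census_date}
        for r in rows
    ]

records = parse_census(SAMPLE)
print(records[0])
```

The proposed {{CensusAUlatest|QLD|Childers}} template would then be a Lua module that reads these statements back and picks the one with the latest point in time.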
Hi Wikidatans, Dimitris, Markus K, Magnus M, Denny V, Kingsley I, Daniel K,
James Heald, Lydia and other friends here,
(Thanks, Dimitris, re: Great to meet you recently at Stanford (DBpedia) -
and [Wikidata] Wrong template usage in Wikipedia (feedback)).
I'm emailing to inquire whether you, as a core developer group (and perhaps
even developing a face-to-face monthly business meeting process amongst
yourselves to reach consensus re WUaS's), could possibly take the (initial)
lead and help me develop this World University and School (WUaS) plan (and
templates) -
https://docs.google.com/document/d/1n02XHzbTE8rY14p-ei6WSNtCQZ8AYscgwI6ncMX…
(which Lydia suggested I write) - in Wikidata (which WUaS donated under CC
to Wikidata for its third birthday) - and inter-lingually? Thank you.
Best regards,
Scott
--
- Scott MacLeod - Founder & President
- World University and School
- http://worlduniversityandschool.org
- World University and School - like Wikipedia with best STEM-centric
OpenCourseWare - is incorporated as a nonprofit university and school in
California, and is a U.S. 501(c)(3) tax-exempt educational organization.
Hi,
probably not the right mailing list for this mail, but I'm not sure where
else to ask :)
Based on DBpedia dumps, I created a script that identifies usage of
undefined templates in Wikipedia, most of the time due to spelling mistakes.
https://docs.google.com/spreadsheets/d/1_9szZwij4fJujiFUFcsndiDkT_XpTKlgKRH…
I was wondering if it makes sense to extend this script to:
- provide suggestions based on string similarity metrics
- extend this to infobox properties and report properties that are not
defined in the template definitions (and also provide suggestions from
existing properties)
I did create a one-time dump of all of the above for the Greek Wikipedia
4-5 years ago, but I'm not sure if Wikipedia maintains this automatically
now. Note that this is based on the October dump and might be a little out
of date.
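The suggestion step can be done with off-the-shelf string similarity; a minimal sketch in Python using the standard library's difflib (the template names below are invented examples, not from the actual report):

```python
import difflib

# Minimal sketch: given a template name that doesn't exist and the list of
# defined templates, suggest the closest defined names. difflib's ratio is
# one simple similarity metric; a real run might prefer edit distance with
# a cutoff tuned per wiki. Template names are invented examples.
def suggest(undefined_name, defined_templates, cutoff=0.6):
    return difflib.get_close_matches(
        undefined_name, defined_templates, n=3, cutoff=cutoff
    )

defined = ["Infobox settlement", "Infobox person", "Citation needed"]
print(suggest("Infobox setlement", defined))  # best match: "Infobox settlement"
```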
Cheers,
Dimitris
--
Kontokostas Dimitris
I would like to display lists of untranslated painting names by Finnish
painters, or of missing labels in strange languages. Using the magnificent
WiDaR hover feature in Reasonator and a language fallback, the task would
be enjoyable. Is there a way?
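One possible starting point for such a list is a SPARQL query on query.wikidata.org, here wrapped as a Python string. The IDs are real Wikidata identifiers (Q3305213 painting, P170 creator, P27 country of citizenship, Q33 Finland), but the exact query shape is a sketch, not a tested recipe:

```python
# Sketch of a query for paintings by Finnish painters that lack a Finnish
# label. "Finnish painter" is approximated here as creator with Finnish
# citizenship; other modellings (e.g. occupation + citizenship) are possible.
QUERY = """
SELECT ?painting ?paintingLabel WHERE {
  ?painting wdt:P31 wd:Q3305213 ;
            wdt:P170 ?creator .
  ?creator wdt:P27 wd:Q33 .
  FILTER NOT EXISTS {
    ?painting rdfs:label ?label .
    FILTER(LANG(?label) = "fi")
  }
}
LIMIT 100
"""
print(QUERY)
```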
Susanna