I still can't find a way to query only some item properties (e.g. claims) from the Wikibase API using pywikibot. I've tried all of these:

    .get('claims')
    .get(True, 'claims')
    .get(force=True, 'claims')
    .get('claims', force=True)
    .get('claims', True)

but without success. Bots often don't need all of an item's data, so such a feature should be implemented (or, if it already exists, documented better!). PS: is it worth implementing the 'wbgetclaims' action in our framework?
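As a stopgap, the 'wbgetclaims' module can at least be called by hand through the generic api.Request; a rough sketch, not an existing ItemPage method (depending on the pywikibot version, the parameters may need to be passed as plain keyword arguments instead of a 'parameters' dict):

    import pywikibot
    from pywikibot.data import api

    repo = pywikibot.Site('wikidata', 'wikidata')

    # Ask the Wikibase API for the claims of a single entity only,
    # instead of loading the whole item through ItemPage.get().
    request = api.Request(site=repo, parameters={'action': 'wbgetclaims',
                                                 'entity': 'Q60'})
    data = request.submit()

    claims = data['claims']  # property id -> list of claim serializations
    print(sorted(claims))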
On Fri, Jun 13, 2014 at 9:34 AM, Ricordisamoa ricordisamoa@openmailbox.org wrote:
I still can't find a way to query only some item properties (e.g. claims) from the Wikibase API using pywikibot. I've tried all of these:

    .get('claims')
    .get(True, 'claims')
    .get(force=True, 'claims')
    .get('claims', force=True)
    .get('claims', True)

but without success. Bots often don't need all of an item's data, so such a feature should be implemented (or, if it already exists, documented better!)
Are the claims a large part of the network traffic for items you are processing? Some client time might be saved by lazy loading the claim objects from _content. The claims data is even smaller when using raw revisions instead of the API JSON.
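The raw-revision route looks roughly like this (a sketch using the plain revisions query rather than an existing pywikibot helper; formatversion is pinned so the response layout below holds):

    import json

    import pywikibot
    from pywikibot.data import api

    repo = pywikibot.Site('wikidata', 'wikidata')

    # Fetch the latest revision text of the item page; the content is the
    # entity's JSON serialization, which is more compact than the
    # wbgetentities output.
    request = api.Request(site=repo, parameters={
        'action': 'query',
        'prop': 'revisions',
        'titles': 'Q60',
        'rvprop': 'content',
        'formatversion': '1',  # keep the classic pages/revisions layout
    })
    data = request.submit()

    page = next(iter(data['query']['pages'].values()))
    entity = json.loads(page['revisions'][0]['*'])
    claims = entity.get('claims', {})  # build Claim objects from this lazily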
Often a large and unnecessary part of the item download is the labels and sitelinks, which are full of duplicated information.
Most of the time the bot doesn't need every label and sitelink. What it does need is _one_ printable label to use in the user interface, and it wants 'the label closest to the UI language'.
For one of my tasks, I wrote a function to extract the 'most latin label', but that depends on having all of the labels and sitelinks. It would be great if the API could provide something like this, so we could request that and not fetch every label and sitelink.
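(As a rough illustration of the idea only, not the actual function; the language list and the helper name are made up here:)

    # Pick one printable label: prefer the UI language, then a hard-coded
    # list of Latin-script languages, then anything at all.
    LATIN_LANGS = ('en', 'de', 'fr', 'es', 'it', 'pt', 'nl', 'sv', 'pl')

    def printable_label(labels, ui_lang='en'):
        """Return a single label for display, or None if there are none."""
        if ui_lang in labels:
            return labels[ui_lang]
        for lang in LATIN_LANGS:
            if lang in labels:
                return labels[lang]
        return next(iter(labels.values()), None)

    # e.g. printable_label(item.get()['labels'], ui_lang='it')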
The PreloadingItemGenerator doesn't have any argument for selectively querying only some properties. If I don't need labels, sitelinks, aliases, or descriptions, why should I fetch them, wasting server- and client-side resources?
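The underlying wbgetentities module already accepts a 'props' parameter for exactly this kind of filtering; what's missing is a way to pass it through the generator. Hand-rolled it would look something like this (a sketch via the generic api.Request, not an existing PreloadingItemGenerator option):

    import pywikibot
    from pywikibot.data import api

    repo = pywikibot.Site('wikidata', 'wikidata')

    # 'props' restricts which parts of the entity the server sends back;
    # here only the claims are requested, skipping labels, descriptions,
    # aliases and sitelinks entirely.
    request = api.Request(site=repo, parameters={
        'action': 'wbgetentities',
        'ids': 'Q60',
        'props': 'claims',
    })
    data = request.submit()
    claims = data['entities']['Q60'].get('claims', {})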
On 13/06/2014 05:55, John Mark Vandenberg wrote:
Are the claims a large part of the network traffic for items you are processing? Some client time might be saved by lazy loading the claim objects from _content. The claims data is even smaller when using raw revisions instead of the API JSON.
Often a large and unnecessary part of the item download is the labels and sitelinks, which are full of duplicated information.
Most of the time the bot doesn't need every label and sitelink. What it does need is _one_ printable label to use in the user interface, and it wants 'the label closest to the UI language'.
For one of my tasks, I wrote a function to extract the 'most latin label', but that depends on having all of the labels and sitelinks. It would be great if the API could provide something like this, so we could request that and not fetch every label and sitelink.
Hey,
Are the claims a large part of the network traffic for items you are processing? Some client time might be saved by lazy loading the claim objects from _content. The claims data is even smaller when using raw revisions instead of the API JSON.
Is the size of the serialization something that is causing problems?
Cheers
--
Jeroen De Dauw - http://www.bn2vs.com
Software craftsmanship advocate
Evil software architect at Wikimedia Germany
~=[,,_,,]:3
On Fri, Jun 13, 2014 at 2:51 PM, Jeroen De Dauw jeroendedauw@gmail.com wrote:
Hey,
Are the claims a large part of the network traffic for items you are processing? Some client time might be saved by lazy loading the claim objects from _content. The claims data is even smaller when using raw revisions instead of the API JSON.
Is the size of the serialization something that is causing problems?
Not serious problems IMO. E.g. Q60 is 54K via the API, but that is <10K gzipped:
https://www.wikidata.org/w/api.php?action=wbgetentities&ids=Q60&lang...
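(Those figures can be reproduced with something along these lines; the language parameters from the truncated URL above are left out, the User-Agent is just a placeholder, and the exact numbers will drift as Q60 grows.)

    import gzip
    import urllib.request

    url = ('https://www.wikidata.org/w/api.php'
           '?action=wbgetentities&ids=Q60&format=json')

    # urllib does not request compression by default, so ask for gzip
    # explicitly and compare the raw (compressed) size with the
    # decompressed size.
    req = urllib.request.Request(url, headers={
        'Accept-Encoding': 'gzip',
        'User-Agent': 'size-check-example/0.1',  # placeholder UA
    })
    raw = urllib.request.urlopen(req).read()

    print('gzipped: %d bytes' % len(raw))
    print('plain:   %d bytes' % len(gzip.decompress(raw)))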
Fetching only the claims 'almost' halves the network traffic, but it makes the pywikibot API cache less efficient if several labels or sitelinks are also fetched:
https://www.wikidata.org/w/api.php?action=wbgetentities&ids=Q60&lang...
If a bot is only working with Wikidata and a single language wiki, this is the 'optimal' query, at 5.7 KB gzipped:
https://www.wikidata.org/w/api.php?action=wbgetentities&ids=Q60&lang...
Prefetching many items also reduces network activity, as it lets gzip work harder.
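For reference, the batching just means joining up to 50 ids with '|' in one wbgetentities call; combined with 'props', 'languages' and 'sitefilter' it might look roughly like this (a sketch with arbitrary ids, not what PreloadingItemGenerator currently sends):

    import pywikibot
    from pywikibot.data import api

    repo = pywikibot.Site('wikidata', 'wikidata')

    ids = ['Q60', 'Q64', 'Q90', 'Q84']  # up to 50 ids per request

    # One request for the whole batch, restricted to the parts the bot
    # actually needs; the larger, repetitive response also compresses
    # better.
    request = api.Request(site=repo, parameters={
        'action': 'wbgetentities',
        'ids': '|'.join(ids),
        'props': 'info|labels|claims|sitelinks',
        'languages': 'en',
        'sitefilter': 'enwiki',
    })
    data = request.submit()

    for qid, entity in data['entities'].items():
        print(qid, len(entity.get('claims', {})))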