Once we have this, we would like to analyse it for content (which properties and classes are used, etc.) but also for query features (how many OPTIONALs, GROUP BYs, etc. are used). Ideas on what else to analyse are welcome. Of course, SPARQL can only give a partial idea of "usage", since Wikidata content can be used in ways that don't involve SPARQL. Moreover, counting raw numbers of queries can be misleading: we have had cases where a single query result was discussed by hundreds of people (e.g. the Panama Papers query that made it to Le Monde online), yet in the logs it still shows up as just one query among millions.
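To make this concrete, here is a minimal sketch of what such a feature count could look like, assuming the queries have already been extracted from the logs into a file with one decoded query per line (the file name `queries.txt` and the particular feature list are placeholders, not an existing tool). Regex matching is crude, since it will also match keywords inside string literals; a serious analysis would want a real SPARQL parser.

```python
import re
from collections import Counter

# Hypothetical input: one URL-decoded SPARQL query string per line.
FEATURES = {
    "OPTIONAL": re.compile(r"\bOPTIONAL\b", re.I),
    "GROUP BY": re.compile(r"\bGROUP\s+BY\b", re.I),
    "ORDER BY": re.compile(r"\bORDER\s+BY\b", re.I),
    "FILTER":   re.compile(r"\bFILTER\b", re.I),
    "UNION":    re.compile(r"\bUNION\b", re.I),
    "SERVICE":  re.compile(r"\bSERVICE\b", re.I),
}
# Wikidata property IDs behind the common prefixes (wdt:, p:, ps:, pq:).
PROPERTY = re.compile(r"\b(?:wdt|p|ps|pq):(P\d+)")

def analyse(path):
    feature_counts = Counter()
    property_counts = Counter()
    with open(path) as log:
        for query in log:
            # Count each feature once per query, not once per occurrence.
            for name, pattern in FEATURES.items():
                if pattern.search(query):
                    feature_counts[name] += 1
            property_counts.update(PROPERTY.findall(query))
    return feature_counts, property_counts

if __name__ == "__main__":
    features, properties = analyse("queries.txt")
    print(features.most_common())
    print(properties.most_common(20))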
Yes, I agree, and we certainly need to look into different metrics for how Wikidata is used. I am happy to join the discussion, but even a partial view of usage is already a big step forward. A lot of the data fed into Wikidata through the various bots resulted from funded initiatives. Currently, we have no way of demonstrating to funders how distributing their results through Wikidata benefits the community at large. Simply counting the shared use of different properties would already give a crude metric of how scientific knowledge disseminates across domains.
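As a rough illustration of that last idea, shared use could be approximated by counting how often pairs of properties co-occur within the same query, over the same hypothetical one-query-per-line file as above (again, `queries.txt` and the prefix list are assumptions, not an agreed format):

```python
import re
from collections import Counter
from itertools import combinations

PROPERTY = re.compile(r"\b(?:wdt|p|ps|pq):(P\d+)")

def property_cooccurrence(path):
    """Count how often pairs of distinct properties appear in the same query."""
    pair_counts = Counter()
    with open(path) as log:
        for query in log:
            props = sorted(set(PROPERTY.findall(query)))
            pair_counts.update(combinations(props, 2))
    return pair_counts

if __name__ == "__main__":
    for (a, b), n in property_cooccurrence("queries.txt").most_common(10):
        print(f"{a} + {b}: {n}")
```

Mapping the resulting property pairs onto domains (e.g. life sciences vs. bibliographic properties) would be a separate, and harder, step.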