I'm interested in using Wikibase as a database for elected and appointed officials at a very local level - down to a level where they don't even meet Wikidata's relaxed notability guidelines (Wikidata isn't interested in who the members of a small town's zoning commission were 10 years ago). I don't have anything online yet; I'm just playing with the Docker container versions on my laptop to make sure I know what I'm doing. Many thanks to the folks who put those containers together - being able to type two commands and get a working Wikibase instance is amazing.
Beyond Wikibase being solid knowledge-base management software, I am especially interested in connecting to Wikidata for two reasons:
1. Wikidata has a pretty well-established ontology for my domain - the properties and constraints that describe elected offices, lengths of terms, details about politicians, membership of people in those offices, etc. All of that likely applies directly to my small-scale database as well, and if there's something else I need, it's likely Wikidata will need it too, and I can get a community of linked-data modelers to figure it out with me.
2. There is overlap between my dataset and what Wikidata is tracking - the members of the town's zoning commission might not be interesting to Wikidata, but Wikidata almost certainly has an item for the town itself. And there's some overlap between the people: long before she was a United States Senator, Tammy Baldwin was a member of the local county board in the late 1980s, so I'd like to be able to link into that. Finally, the qualifiers and the value domains that property values are allowed to come from are an important dataset, and are useful in my database as well.
For me, the most important thing to have is rock-solid backup and restore, with detailed, no-question-too-dumb documentation. I'm terrified of putting together a database, having it blow up, and having to reconstruct it. What especially makes me nervous is that Q and P ids are set by Wikibase, but they're used externally as well - so if I screw up so badly that I have to completely re-import all of my data, and I'm not careful, the Q id for an officeholder might change when I reload it, and anyone else who has a query using that Q id will be out of luck.
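One way I imagine sanity-checking a restore is to diff snapshots of id-to-label mappings taken before and after. A minimal Python sketch, with entirely made-up ids and labels (this is an illustration of the check I'd want, not anything Wikibase provides):

```python
def check_id_stability(before, after):
    """Compare two {entity_id: label} snapshots taken before and after a
    restore; return the ids whose labels changed or disappeared."""
    problems = {}
    for entity_id, label in before.items():
        if after.get(entity_id) != label:
            problems[entity_id] = (label, after.get(entity_id))
    return problems

# Hypothetical snapshots: Q7 ended up pointing at a different thing
# after a careless re-import.
before = {"Q5": "zoning commission", "Q7": "Tammy Baldwin", "P39": "position held"}
after = {"Q5": "zoning commission", "Q7": "county board", "P39": "position held"}

print(check_id_stability(before, after))  # {'Q7': ('Tammy Baldwin', 'county board')}
```

If a restore passes this kind of check, external queries that hard-code the Q ids should keep working.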
It'd be especially nice to have an example backup of a very small site posted on the web somewhere: a set of example "fixtures" with a handful of items and properties that could optionally be used with the Docker containers to verify that you've got everything up and running end-to-end, complete with sample queries and expected output. Given how easy Docker makes it to blow everything away and start over, it'd be very nice to be able to bring up a site, modify the data to experiment, and, if I feel like I've gotten myself into trouble, delete it all and start over.
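To make the fixture idea concrete, it could be as small as a few entities plus one query with a known answer. A hypothetical sketch in Python - the ids, labels, and data shape here are invented for illustration, not Wikibase's actual export format:

```python
# A tiny hypothetical fixture: two people, one position, and one property
# ("position held") linking them.
FIXTURE = {
    "P1": {"label": "position held", "type": "property"},
    "Q1": {"label": "Erika Example", "type": "item", "claims": {"P1": ["Q3"]}},
    "Q2": {"label": "Sam Sample", "type": "item", "claims": {"P1": ["Q3"]}},
    "Q3": {"label": "zoning commission member", "type": "item"},
}

def holders_of(fixture, position_id):
    """Return the labels of every item claiming 'position held' = position_id."""
    return sorted(
        entity["label"]
        for entity in fixture.values()
        if position_id in entity.get("claims", {}).get("P1", [])
    )

print(holders_of(FIXTURE, "Q3"))  # ['Erika Example', 'Sam Sample']
```

Loading something like this into a fresh instance and getting the expected answer back from the query service would be a nice end-to-end smoke test.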
I would echo Laura's interest in optimizing server resources. For funding, I'm just going to eat the costs with a couple of VMs in the cloud (I'm counting on being able to do it for about $100/month, but I don't know if that's realistic), so the smaller the footprint the better - while still maintaining some HA/disaster-recovery capability, or at least the ability to restore quickly if a VM crashes hard. I think I'm OK if my site goes down for a while until something reboots, but I don't want to lose data.
The other thing I'm interested in is more federation support and examples, so I can more easily reuse properties and items from Wikidata. For performance reasons, I'd want to import most of them into my instance directly rather than use literal federation, where queries on my site make network calls back to wikidata.org. Instead, I'd like the Wikidata data imported into my instance and into its own namespace to keep it separate, with a way to keep it up to date. I'd also like to import only a subset of Wikidata: I want all the properties and constraints around P39 (position held), and I'm going to use them frequently, but I'd rather not import 20 gigs of data about genomes or fungus taxa. I'm not quite sure how to do that - I don't think Wikidata neatly separates into a "core Wikidata" and "everything else" - so I'd guess I just keep recursively walking the graph and pulling in the things I need.
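The recursive walk I have in mind is basically a breadth-first closure over entity references. A sketch in Python over an in-memory stand-in - in a real importer the link table would come from parsing each entity's statements and constraints, and the ids below (other than P39) are chosen arbitrarily for the example:

```python
from collections import deque

# Stand-in for "which other entities does this one reference?" In practice
# this would be derived from each entity's statements and constraints.
LINKS = {
    "P39": ["Q4164871", "P2302"],  # example: references a position item and a constraint property
    "P2302": ["Q21503250"],
    "Q4164871": [],
    "Q21503250": [],
    "Q99999": ["Q88888"],          # an unrelated branch the walk never reaches
    "Q88888": [],
}

def closure(seeds, links):
    """Breadth-first walk from the seed ids, collecting every reachable id."""
    seen = set(seeds)
    queue = deque(seeds)
    while queue:
        for neighbor in links.get(queue.popleft(), []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

print(sorted(closure(["P39"], LINKS)))  # ['P2302', 'P39', 'Q21503250', 'Q4164871']
```

Seeded with P39, this pulls in just the connected neighborhood and leaves the genomes and fungi behind - though I suspect the real closure on Wikidata is bigger than you'd hope, which is why some tooling or guidance here would help.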
-Erik