What bugs me about it is that Wikidata has gone down the same road as Freebase and Neo4J, in the sense of developing an ad hoc data model that is not well understood.
I understand the motivations that led there, because there are requirements to meet that the standards don't necessarily satisfy, and Wikidata really is doing something ambitious in capturing provenance information.
Perhaps it has come a little too late to help with Wikidata, but it seems to me that RDF* and SPARQL* have a lot to offer "data wikis": you can view the data as plain ordinary RDF and query it with SPARQL, but you can also attach provenance and other metadata in a sane way, with sweet syntax for writing it in Turtle and for querying it.
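To make that concrete, here is a minimal sketch in Turtle with RDF* annotation syntax, using a made-up example.org vocabulary and the PROV ontology rather than Wikidata's actual terms:

    @prefix :     <http://example.org/> .
    @prefix prov: <http://www.w3.org/ns/prov#> .
    @prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

    # The plain triple, visible to any ordinary RDF tool
    :douglas_adams :educatedAt :st_johns_college .

    # RDF* lets the quoted triple itself be a subject, so provenance
    # attaches to the statement rather than to either resource
    << :douglas_adams :educatedAt :st_johns_college >>
        prov:wasDerivedFrom :english_wikipedia ;
        :retrieved "2014-01-01"^^xsd:date .

A plain RDF consumer just sees the first triple; an RDF*-aware one also sees where it came from.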
Another way of thinking about it is that RDF* formalizes the property graph model, which has always been ad hoc in products like Neo4J. I can say that knowing the algebra you are implementing helps a lot in getting the tools to work right. So you not only get SPARQL queries but also languages like Gremlin and Cypher, which is all pretty exciting. It is also exciting that vendors are getting on board with this, and we are going to see some stuff that is crazy scalable (way past 10^12 facts on commodity hardware) very soon.
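As a sketch of what querying that metadata looks like, a SPARQL* query over the data above (again assuming the hypothetical example.org vocabulary) can match a statement and its recorded source in one pattern:

    PREFIX :     <http://example.org/>
    PREFIX prov: <http://www.w3.org/ns/prov#>

    # Find where Douglas Adams was educated, along with the source
    # recorded for that particular statement
    SELECT ?school ?source WHERE {
        :douglas_adams :educatedAt ?school .
        << :douglas_adams :educatedAt ?school >> prov:wasDerivedFrom ?source .
    }

A store that maps quoted triples onto edge properties could expose the same graph to Gremlin or Cypher traversals.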