I'm writing at the recommendation of Mairelys Lemus-Rojas after I
approached her with the below inquiry and exchanged some emails about it.
I was wondering if anyone is familiar with a semantic/linked-data-capable
content management system or blog platform that has autofill or nanotation
capabilities. What I mean by that is: say I'm writing a blog post about
Paris, I'm looking for something that would autofill linked data 'under the
hood', either via a dropdown (a la Omeka's Value Suggest
<https://omeka.org/s/modules/ValueSuggest/>), an autofill (a la
Wikidata/Wikipedia), or something that creates semantic blog tags.
I've seen a (very) bleeding-edge technology/proof of concept called
nanotation <http://kidehen.blogspot.com/2014/07/nanotation.html> that looks
about right, but it might be completely different from what I actually want,
which is something that incorporates linked data, autofills URIs,
and works like a blog/content management system.
So far I've explored:
*Recogito* (https://recogito.pelagios.org/) is lovely but focused on
annotating images, maps, and preexisting items.
*Catma* (https://catma.de/) is lovely-looking but builds off preexisting
texts rather than creating new ones (i.e., you'd have to write the text and
then annotate it all). It seems to be a Voyant on steroids. Nonetheless, if
I could combine Recogito and Catma, that'd be neat. The same project also
puts out forText (https://fortext.net/), which I just include here as it's
also nice.
*dokie.li* (https://dokie.li/) seems the closest, as it's focused on
article publishing, annotations, and social interactions, but unfortunately
setting up a Solid server remains quite the technical hurdle for me.
*Atomgraph* (https://atomgraph.com/) is knowledge-graph oriented and
installed on top of previously existing data, not focused on content
management. Gephi on steroids.
*Webanno* (https://webanno.github.io/webanno/), which is specifically
targeted at linguistically annotating existing web content, not really
creating it.
*Wikibase*: A heavily modified wikibase might be what I'm left with. In
this scenario I'd make a Mediawiki, turn it into Wikibase, and kinda hack a
blog out of it. Less than satisfying but would work if needed.
I also tried *wiki.js* (SUCH A NICE INTERFACE, but it doesn't support
linked data yet) and *OntoWiki* (which looks like it also builds off a
preexisting knowledge graph).
*Anthologize* (https://anthologize.org/) also looks very close as a
WordPress plugin, but it is not linked-data specific, so I didn't explore
ways to make it so.
I've also explored *WordPress*
<https://wordpress.org/plugins/wp-linked-data/> and Drupal plugins (one
<https://www.drupal.org/project/ldt>) that are all obsolete or unmaintained.
My long-term goal with this is to create semantic libguides and blogs. I
really do think semantic libguides are NEARLY possible: maybe an API that
pulls in knowledge graphs and Wikidata visualizations, along with some
blog-type software... I think it could be done, and I have some bits and
pieces of it, but not quite the whole sandwich (so to speak).
I'm partially doing this with an ALA grant I received for www.histsex.com
(soon to be www.histsex.org, just in case you're clicking that in a week or
so!). This "bibliography" is all in Omeka and works effectively *like* a
libguide, but it will need further plugins to make it all work as desired,
so I continue to investigate alternatives.
Perhaps this is something that a grant will be needed to do in a broader
way? Or is there something obvious I've missed here?
Thank you all for your time!
*BRIAN M. WATSON *they/them
twitter <https://twitter.com/brimwats> - website <https://brimwats.com/>
PhD: UBC SLAIS <https://slais.ubc.ca/>
Director: HistSex.org <https://histsex.com/>
Editorial Board: Homosaurus <http://homosaurus.org/about>
As it does every quarter, the Wikidata development team will host an Office
Hour on July 21st at 16:00 UTC (18:00 CEST), on the Wikidata Telegram
channel.
This session will be a bit special because we will have a guest: Guillaume
Lederrey from the WMF Search Team, who will present what they are currently
working on related to the Wikidata Query Service: the research they have
been doing around the use of the WDQS, the reasons behind the issues we
have encountered over the past months with keeping the data up to date, and
different future paths for the service.
So if you're interested in the topic, you can prepare your questions until
As usual, notes of the discussions will be published on-wiki
<https://www.wikidata.org/wiki/Category:Office_hour_notes> after the
Project Manager Community Communication for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
Wikimedia Deutschland - Society for the Promotion of Free Knowledge
(registered association). Registered in the register of associations of the
Amtsgericht Berlin-Charlottenburg under number 23855 Nz. Recognized as a
charitable organization by the Finanzamt für Körperschaften I Berlin, tax
number 27/029/42207.
The upcoming domain name migration on the Wikimedia Toolforge means that
OpenRefine users need to update their Wikidata reconciliation
service to the new endpoint:
or by replacing "en" by any other Wikimedia language code.
The new home page of the service is at:
This new endpoint will be available by default in the upcoming release
of OpenRefine (3.4).
For details about why an automatic migration via redirects is sadly not
possible, see this Phabricator ticket:
I'm looking into ways to use tabular data like
in SPARQL queries but could not find anything on that.
My motivation here comes in part from the timeout limits: the basic idea
would be to split queries that typically time out into sets of queries that
do not, such that aggregating their results yields what the original query
would return if it did not time out.
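The splitting-and-aggregating idea might be sketched roughly like this in Python (a minimal sketch only; the example query, page size, and the `?item` ordering variable are illustrative assumptions, and stable paging requires the query to have a deterministic ORDER BY):

```python
def paginated_queries(base_query: str, page_size: int, pages: int) -> list[str]:
    """Split a SPARQL SELECT query into LIMIT/OFFSET slices whose
    results can be aggregated client-side to approximate the full
    result set without any single slice timing out."""
    return [
        # ORDER BY makes the slices deterministic; ?item is assumed
        # to be a variable selected by base_query.
        f"{base_query} ORDER BY ?item LIMIT {page_size} OFFSET {i * page_size}"
        for i in range(pages)
    ]

# Illustrative query: all humans (wd:Q5), fetched in three slices of 1000.
slices = paginated_queries(
    "SELECT ?item WHERE { ?item wdt:P31 wd:Q5 }", 1000, 3
)
```

Each slice would then be run separately against the endpoint and the results merged, though of course this assumes the underlying data does not change between slices.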
The second line of motivation is keeping track of how things develop over
time, which would be interesting both for content and maintenance queries
and for the usage of things like classes, references, lexemes, or
properties.
I would appreciate any pointers or thoughts on the matter.