Hello all,
I'm writing at the recommendation of Mairelys Lemus-Rojas, after I
approached her with the inquiry below and we exchanged some emails about it.
I was wondering if anyone is familiar with a semantic/linked-data-capable
content management system or blog platform that has autofill or nanotation
capabilities. What I mean by that is: say I'm writing a blog post about
Paris; I'm looking for something that would fill in linked data 'under the
hood', whether via a dropdown (a la Omeka's Value Suggest
<https://omeka.org/s/modules/ValueSuggest/>), an autofill (a la
Wikidata/Wikipedia), or something that creates semantic blog tags.
I've seen a (very) bleeding-edge technology/proof of concept called
nanotation <http://kidehen.blogspot.com/2014/07/nanotation.html> that looks
about right, but it might be completely different than what I actually
want, which is something that incorporates linked data, autofills URIs,
and works like a blog/content management system.
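(To make the 'autofill' idea concrete: the Wikidata-style dropdown is
backed by an entity-search API, and a CMS could store the returned URIs
under the hood. A minimal Python sketch of that lookup against Wikidata's
wbsearchentities endpoint; the function name is just illustrative:)

    import json
    import urllib.parse
    import urllib.request

    def suggest_entities(text, language="en", limit=5):
        """Fetch autofill suggestions from Wikidata's wbsearchentities API."""
        params = urllib.parse.urlencode({
            "action": "wbsearchentities",
            "search": text,
            "language": language,
            "limit": limit,
            "format": "json",
        })
        url = "https://www.wikidata.org/w/api.php?" + params
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        # Each hit carries a concept URI a CMS could store 'under the hood'.
        return [(hit["id"], hit.get("label", ""), hit["concepturi"])
                for hit in data.get("search", [])]

    # The top hit for "Paris" should be Q90,
    # http://www.wikidata.org/entity/Q90
    print(suggest_entities("Paris")[0])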
So far I've explored:
- *Recogito* (https://recogito.pelagios.org/) is lovely but focused on
annotating images/maps/preexisting items.
- *Catma* (https://catma.de/) is lovely-looking but builds off preexisting
texts rather than creating new ones (i.e., you'd have to write the text
and then annotate it all); it seems to be Voyant on steroids. Nonetheless,
if I could combine Recogito and Catma, that'd be neat. The same project
also puts out forText (https://fortext.net/), which I include here just
because it's also nice.
- *dokie.li* (https://dokie.li/) seems the closest, as it's focused on
article publishing, annotations, and social interactions, but
unfortunately setting up a Solid server remains quite the technical hurdle
for me.
- *Atomgraph* (https://atomgraph.com/) is knowledge-graph oriented and
installed on top of preexisting data, not focused on content management.
Gephi on steroids.
- *Webanno* (https://webanno.github.io/webanno/), which is specifically
targeted at linguistic annotation of existing web content, not really at
creating content.
- *Wikibase*: a heavily modified Wikibase might be what I'm left with. In
this scenario I'd set up a MediaWiki, turn it into a Wikibase instance,
and kinda hack a blog out of it. Less than satisfying, but it would work
if needed.
- I also tried *wiki.js* (SUCH A NICE INTERFACE, but it doesn't support
linked data yet) and *OntoWiki* (which also looks like it builds off a
preexisting knowledge graph).
- *Anthologize* (https://anthologize.org/) also looks very close as a
WordPress plugin, but it is not linked-data specific, so I didn't explore
ways to make it so.
- I've also explored a *WordPress* plugin
<https://wordpress.org/plugins/wp-linked-data/> and Drupal plugins (one
<https://www.drupal.org/project/ldp>, two
<https://www.drupal.org/project/linked_data>, three
<https://www.drupal.org/project/ldt>), all of which are obsolete or no
longer maintained.
My long-term goal with this is to create semantic LibGuides and blogs. I
really do think semantic LibGuides are NEARLY possible: maybe an API that
pulls in knowledge graphs and Wikidata visualizations, along with some
blog-type software... I think it could be done, and I have some bits and
pieces of it, but not quite the whole sandwich (so to speak).
I'm partially doing this with an ALA grant I received for www.histsex.com
(soon to be www.histsex.org, just in case you're clicking that in a week
or so!). This "bibliography" is all in Omeka and works effectively *like*
a libguide, but it will need further plugins to make it all work as
desired, so I continue to investigate alternatives.
Perhaps this is something that will require a grant to do in a broader
way? Or is there something obvious I've missed here?
Thank you all for your time!
--
*BRIAN M. WATSON *they/them
twitter <https://twitter.com/brimwats> - website <https://brimwats.com/>
PhD: UBC SLAIS <https://slais.ubc.ca/>
Director: HistSex.org <https://histsex.com/>
Editorial Board: Homosaurus <http://homosaurus.org/about>
Hello all,
As it does every quarter, the Wikidata development team will host an
Office Hour on July 21st at 16:00 UTC (18:00 CEST) on the Wikidata
Telegram channel <https://t.me/joinchat/AZriqUj5UagVMHXYzfZFvA>.
This session will be a bit special because we will have a guest: Guillaume
Lederrey from the WMF's Search Team, who will present what they are
currently working on related to the Wikidata Query Service: research they
have been doing around the use of the WDQS, the reasons behind the issues
we have encountered over the past months with keeping the data up to date,
and different future paths for the service.
So if you're interested in the topic, you can prepare your questions for
July 21st!
As usual, notes from the discussion will be published on-wiki
<https://www.wikidata.org/wiki/Category:Office_hour_notes> after the
meeting.
Cheers,
--
Léa Lacroix
Project Manager Community Communication for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Society for the Promotion of Free Knowledge
(Gesellschaft zur Förderung Freien Wissens) e. V. Registered in the
register of associations of the Amtsgericht Berlin-Charlottenburg under
number 23855 Nz. Recognized as a charitable organization by the Finanzamt
für Körperschaften I Berlin, tax number 27/029/42207.
Hi,
The upcoming domain name migration on Wikimedia Toolforge means that
OpenRefine users need to point their Wikidata reconciliation service to
the new endpoint:
https://wdreconcile.toolforge.org/en/api
(replacing "en" with any other Wikimedia language code as needed).
The new home page of the service is at:
https://wdreconcile.toolforge.org/
This new endpoint will be available by default in the upcoming release
of OpenRefine (3.4).
For details about why an automatic migration via redirects is sadly not
possible, see this Phabricator ticket:
https://phabricator.wikimedia.org/T254172
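If you would like to check that the new endpoint responds before
switching, a quick Python sketch along these lines should work (a single
reconciliation query for "Paris", with request and response shapes
following the reconciliation API that OpenRefine speaks):

    import json
    import urllib.parse
    import urllib.request

    endpoint = "https://wdreconcile.toolforge.org/en/api"
    # A single reconciliation query, form-encoded as OpenRefine sends it.
    payload = urllib.parse.urlencode(
        {"queries": json.dumps({"q0": {"query": "Paris"}})}).encode()
    with urllib.request.urlopen(endpoint, data=payload) as resp:
        result = json.load(resp)
    # Print the top candidates: Wikidata ID, label, and match score.
    for candidate in result["q0"]["result"][:3]:
        print(candidate["id"], candidate["name"], candidate["score"])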
Cheers,
Antonin
Hi,
I'm looking into ways to use tabular data like
https://commons.wikimedia.org/wiki/Data:Zika-institutions-test.tab
in SPARQL queries, but I could not find anything about that.
Part of my motivation comes from the query time-out limits: the basic idea
would be to split a query that typically times out into a set of queries
that do not, such that aggregating their results yields what the original
query would have returned had it not timed out.
The second line of motivation is keeping track of how things develop over
time, which would be interesting both for content and maintenance queries
and for the usage of things like classes, references, lexemes or
properties.
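(As a rough Python illustration of the first idea, with an arbitrary
slicing key: count humans per decade of birth in separate small queries
and sum the slice counts client-side, where a single count over everything
might time out:)

    import json
    import urllib.parse
    import urllib.request

    ENDPOINT = "https://query.wikidata.org/sparql"
    # One small count per decade of birth (P569); the slicing key is only
    # illustrative -- any filter that partitions the results would do.
    QUERY = """
    SELECT (COUNT(?item) AS ?count) WHERE {{
      ?item wdt:P31 wd:Q5 ;
            wdt:P569 ?dob .
      FILTER("{0}-01-01T00:00:00Z"^^xsd:dateTime <= ?dob &&
             ?dob < "{1}-01-01T00:00:00Z"^^xsd:dateTime)
    }}
    """

    total = 0
    for year in range(1900, 1960, 10):
        url = ENDPOINT + "?" + urllib.parse.urlencode(
            {"query": QUERY.format(year, year + 10), "format": "json"})
        req = urllib.request.Request(
            url, headers={"User-Agent": "query-slicing-sketch/0.1"})
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        total += int(data["results"]["bindings"][0]["count"]["value"])
    print(total)  # aggregate of the six slice counts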
I would appreciate any pointers or thoughts on the matter.
Thanks,
Daniel
Hi, I would like to implement a new property type for my project. Are
there any examples of extensions that add new property types to Wikibase?
I have already implemented most of what I need by changing Wikibase code
directly, but I doubt a property type for storing multiline code snippets
would be accepted into Wikibase at this stage, so it has to be done as an
extension. Any suggestions would be appreciated.
Note that I had to change two repos:
https://github.com/nyurik/mediawiki-extensions-Wikibase/pull/1/files
https://github.com/nyurik/data-values-value-view/pull/1/files