The recently updated meeting time is: 15:00-16:00 UTC / 08:00 PDT / 11:00
EDT / 17:00 CEST
That's in about half an hour.
Daylight saving time is a scourge!
On Wed, May 3, 2023 at 10:10 AM Joris Darlington Quarshie <
> It’s 14:08 UTC, so 2 hours from now.
> On Wed, 3 May 2023 at 2:02 PM, Guillaume Lederrey <glederrey(a)wikimedia.org>
>> This is happening 1 hour from now.
>> On Fri, 28 Apr 2023 at 17:02, Guillaume Lederrey <glederrey(a)wikimedia.org>
>>> Hello all!
>>> The Search Platform Team usually holds an open meeting on the first
>>> Wednesday of each month. Come talk to us about anything related to
>>> Wikimedia search, Wikidata Query Service (WDQS), Wikimedia Commons Query
>>> Service (WCQS), etc.!
>>> Feel free to add your items to the Etherpad Agenda for the next meeting.
>>> Details for our next meeting:
>>> Date: Wednesday, May 3, 2023
>>> Time: 16:00-17:00 UTC / 08:00 PDT / 11:00 EDT / 17:00 CET
>>> Google Meet link: https://meet.google.com/vgj-bbeb-uyi
>>> Join by phone: https://tel.meet/vgj-bbeb-uyi?pin=8118110806927
>>> Have fun and see you soon!
>>> *Guillaume Lederrey* (he/him)
>>> Engineering Manager
>>> Wikimedia Foundation <https://wikimediafoundation.org/>
> Joris Darlington Quarshie
In case you haven't seen it already, I wrote a blog post about "unpacking"
and updating our default language analyzers used for search. It's a project
(made of many little projects) that I've been working on over the last year
or two. The blog post is a review of the project and some of the fun
language facts and computational complexities I've encountered.
Hope you enjoy it.*
Staff Computational Linguist, Search Platform
UTC–4 / EDT
* Read the footnotes!
Let me introduce myself first. My name is Ivan Heibi; I am a researcher at the University of Bologna working at OpenCitations (directed by Silvio Peroni), where I am responsible for the technical infrastructure.
We are currently facing a technical issue with our triplestore that I wanted to share with you, hoping that your expertise with similar problems might give us some new insights. Thank you in advance for your time and support; I will briefly explain the issue below.
Currently OpenCitations stores and maintains its data (citations and bibliographic metadata) in one big triplestore (a JNL file) using the Blazegraph database. The JNL file has reached almost 1.5 TB and is updated regularly (roughly every two months) with new triples (data about new citations). However, the file no longer accepts any further additions, even though its size and total number of triples (almost 8 billion) are well below the limit that Blazegraph states (50 billion triples). Any attempt to DATA LOAD additional triples now hangs forever, with no effect on the triplestore.
We tried to LOAD new data into the JNL file using different properties when launching the Blazegraph triplestore, yet all of the tests we tried gave the same negative result.
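To make the update step concrete, it is roughly equivalent to the following Python sketch (the endpoint URL and file path are placeholders rather than our real configuration; it assumes Blazegraph's standard SPARQL endpoint accepting SPARQL 1.1 Update requests):

import requests

# Placeholder endpoint and dump location, not our real configuration.
ENDPOINT = "http://localhost:9999/blazegraph/namespace/kb/sparql"
DUMP_URI = "file:///data/new_citations.nt"

# SPARQL 1.1 Update: ask Blazegraph to load the file into the default graph.
# This is the step that currently hangs once the journal has grown to ~1.5 TB.
response = requests.post(
    ENDPOINT,
    data={"update": f"LOAD <{DUMP_URI}>"},
    timeout=3600,  # generous client-side timeout; the load itself never returns
)
response.raise_for_status()
print(response.text)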
Have you ever faced similar behaviour? Are you aware of any Blazegraph limits that we might be unaware of? What solutions have you adopted, or would you suggest, for dealing with such issues (in case you have faced them)?
Thank you in advance for your support and help,
Have a nice day,
Ivan Heibi, Ph.D.
Digital Humanities Advanced Research Centre (DHARC),
Department of Classical Philology and Italian Studies,
University of Bologna, Bologna (Italy)
Personal web site: ivanhb.it <http://ivanhb.it>
University web page: unibo.it/sitoweb/ivan.heibi2 <https://www.unibo.it/sitoweb/ivan.heibi2/>
I am getting frequent timeouts trying to use the SPARQL endpoint GUI at https://query.wikidata.org/ .
I'll admit, I have some complex queries, but I really feel this is something the system should be able to handle, or at least allow me to request a longer timeout.
For example, this query:
SELECT ?item ?item2 WHERE {
  ?item wdt:P625 ?location .
  ?item <http://www.w3.org/2002/07/owl#sameAs> ?item2 .
}
or this query:
SELECT DISTINCT ?item ?itemname ?location WHERE {
  ?item wdt:P625 ?location ;
        wdt:P31 ?type ;
        rdfs:label ?itemname .   # binding for ?itemname (presumably rdfs:label)
  ?type wdt:P279 ?supertype .
  FILTER(LANG(?itemname) = "en" &&
         ?supertype NOT IN (wd:Q5, wd:Q4991371, wd:Q7283, wd:Q36180, wd:Q7094076, wd:Q905511, wd:Q1063801,
                            wd:Q1062856, wd:Q35127, wd:Q68, wd:Q42848, wd:Q2858615, wd:Q241317, wd:Q1662611, wd:Q7397, wd:Q151885,
                            wd:Q1301371, wd:Q1068715, wd:Q7366, wd:Q18602249, wd:Q16521, wd:Q746549, wd:Q13485782, wd:Q36963))
}
When I use the Python SPARQLWrapper library things improve somewhat, but some of my queries still time out.
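For reference, this is roughly how I call the endpoint with SPARQLWrapper (the user-agent string is just a placeholder, and the query shown is the first one above):

from SPARQLWrapper import SPARQLWrapper, JSON

# Public Wikidata Query Service endpoint.
sparql = SPARQLWrapper(
    "https://query.wikidata.org/sparql",
    agent="example-geo-script/0.1 (contact: someone@example.org)",  # placeholder agent
)
sparql.setQuery("""
SELECT ?item ?item2 WHERE {
  ?item wdt:P625 ?location .
  ?item <http://www.w3.org/2002/07/owl#sameAs> ?item2 .
}
""")
sparql.setReturnFormat(JSON)
sparql.setTimeout(300)  # client-side timeout; the server still enforces its own limit
results = sparql.query().convert()
print(len(results["results"]["bindings"]))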
I tried the first query above on an old Wikidata dump from 2021 that we loaded into Jena TDB, and it managed to complete (0 results, but I had to run it to figure that out...).
Seems strange to get such poor performance.