Hello,
I am writing a Java program to extract the abstract of a Wikipedia page, given the page's title. I have done some research and found
out that the abstract will be in rvsection=0.
So, for example, if I want the abstract of the "Eiffel Tower" wiki page, I
query the API in the following way:
http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Eiffel…
I then parse the XML data we get back and take the wikitext inside the <rev
xml:space="preserve"> tag, which represents the abstract of the Wikipedia page.
But this wikitext also contains the infobox data, which I do not need. I
would like to know if there is any way to remove the infobox data
and get only the wikitext of the page's abstract, or if there is an
alternative method by which I can get the abstract of the page directly.
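For reference, here is roughly how I am fetching and extracting that wikitext (a minimal sketch using java.net.http; the class name and User-Agent string are placeholders, and the <rev> extraction is deliberately naive, so the result still contains XML-escaped entities):

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class AbstractFetcher {
    public static void main(String[] args) throws Exception {
        String title = "Eiffel Tower";
        // Ask for the latest revision's content, restricted to section 0 (the lead).
        String url = "https://en.wikipedia.org/w/api.php?action=query&prop=revisions"
                + "&rvprop=content&rvsection=0&format=xml"
                + "&titles=" + URLEncoder.encode(title, StandardCharsets.UTF_8);

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("User-Agent", "AbstractFetcher/0.1 (example)")
                .build();
        String xml = client.send(request, HttpResponse.BodyHandlers.ofString()).body();

        // Naive extraction of the text inside <rev xml:space="preserve">...</rev>;
        // the result is the raw wikitext with XML entities still escaped.
        int start = xml.indexOf("xml:space=\"preserve\">");
        int end = xml.indexOf("</rev>", start);
        if (start >= 0 && end > start) {
            System.out.println(xml.substring(start + "xml:space=\"preserve\">".length(), end));
        }
    }
}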
Looking forward to your help.
Thanks in advance,
Aditya Uppu
Hello. I would like to learn if this is possible.
For example, let's say the article is gelmek;
here is the link: https://en.wiktionary.org/wiki/gelmek
That page has conjugations.
When we click edit, we see that it uses the following template/module:
{{tr-conj|gel|e|gelir|i|d}}
So can I parse this?
That is, could I provide the page ID and this template via the API and get
back the parsed results?
Or is there any other way?
e.g.
https://en.wiktionary.org/w/api.php?action=query&titles=gelmek&parseTemplat…
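To make the question concrete, the kind of call I am imagining looks roughly like this (only a sketch: I am guessing that action=expandtemplates is the right module, and the response is just printed rather than parsed):

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class TemplateExpander {
    public static void main(String[] args) throws Exception {
        String template = "{{tr-conj|gel|e|gelir|i|d}}";
        // Expand the template as if it appeared on the "gelmek" page.
        String url = "https://en.wiktionary.org/w/api.php?action=expandtemplates"
                + "&prop=wikitext&format=json"
                + "&title=" + URLEncoder.encode("gelmek", StandardCharsets.UTF_8)
                + "&text=" + URLEncoder.encode(template, StandardCharsets.UTF_8);

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON response carries the expanded wikitext of the conjugation table,
        // which would still need to be parsed into individual forms.
        System.out.println(response.body());
    }
}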
Dear friends,
We have been working for some months on a Wikidata project, and we have run into an issue with edit performance. I began with the Wikidata Java API, and when I tried to increase the edit speed, the Java library held back edits and inserted delays, which reduced edit throughput as well.
I then tried the option of editing with Pywikibot, but in my experience this reduced the edit rate even further.
In the end we used the procedure indicated here:
https://www.mediawiki.org/wiki/API:Edit#Example
with multithreading, and we reached a maximum of 10.6 edits per second.
My question is: does anyone have experience achieving a higher speed?
Currently we need to write 1,500,000 items, and at this rate we would require 5 working days for such a task.
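For context, the shape of our multithreaded client is roughly the following (a stripped-down sketch: login and CSRF-token handling are omitted, the endpoint, titles and page text are dummies, and the thread count is only illustrative):

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelEditor {
    // Placeholder endpoint; the real code targets our production wiki.
    private static final String API = "https://test.wikidata.org/w/api.php";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // CSRF token must be fetched beforehand via action=query&meta=tokens
        // in a logged-in session (not shown here).
        String csrfToken = "...";
        List<String> titles = List.of("Sandbox/Item1", "Sandbox/Item2"); // dummy work list

        ExecutorService pool = Executors.newFixedThreadPool(8); // thread count is illustrative
        for (String title : titles) {
            pool.submit(() -> {
                try {
                    String form = "action=edit&format=json"
                            + "&title=" + URLEncoder.encode(title, StandardCharsets.UTF_8)
                            + "&text=" + URLEncoder.encode("dummy content", StandardCharsets.UTF_8)
                            + "&token=" + URLEncoder.encode(csrfToken, StandardCharsets.UTF_8);
                    HttpRequest request = HttpRequest.newBuilder(URI.create(API))
                            .header("Content-Type", "application/x-www-form-urlencoded")
                            .POST(HttpRequest.BodyPublishers.ofString(form))
                            .build();
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.println(title + " -> " + response.statusCode());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
    }
}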
Best regards
Luis Ramos
Senior Java Developer
(Semantic Web Developer)
PST.AG
Jena, Germany.
Hello,
in 2016 I wrote a small Android app that uses the Wikipedia Action API to search for articles at the user's current location.
Due to legal considerations I am currently trying to take the app down.
It is no longer available in the Google Play Store, but there are still installations out there.
That's why I want to make these installations unusable by deactivating all of the backend services the app uses.
Unfortunately, the app (partially) communicates directly with Wikipedia servers rather than via a proxy under my control.
The app sends a special User-Agent HTTP header with every request to identify itself:
tagorama/v1.0.0.283-release (http://tagorama.rocks/; info@tagorama.rocks)
Is there any way for you to block requests from this app?
Who would I contact?
Thanks for your help,
Frank Wunderlich