If you know what the external link looks like (does it always start with
"http://www.europeana.eu/"?) and the page(s) you're interested in, you can use
'extlinks' to find all external links on a set of pages:
-
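As a sketch, such an extlinks query could be built like this with the standard Commons API endpoint; the file title and filter values here are just illustrative assumptions:

```python
from urllib.parse import urlencode

API = "https://commons.wikimedia.org/w/api.php"

def extlinks_query(titles, query_filter=None):
    # prop=extlinks lists the external links on each given page;
    # elprotocol/elquery optionally restrict results to matching URLs.
    params = {
        "action": "query",
        "prop": "extlinks",
        "titles": "|".join(titles),
        "ellimit": "max",
        "format": "json",
    }
    if query_filter:
        params["elprotocol"] = "http"
        params["elquery"] = query_filter
    return API + "?" + urlencode(params)

# Hypothetical page title, for illustration only.
url = extlinks_query(["File:Example.jpg"], "www.europeana.eu")
print(url)
```

Fetching that URL (e.g. with urllib) returns JSON with an "extlinks" array per page.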
You can also get a list of every page on the Commons that has a URL containing
"europeana.eu/portal/record", as in Special:Linksearch:
-
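The API counterpart of Special:Linksearch is list=exturlusage; a minimal sketch, again assuming the standard Commons endpoint:

```python
from urllib.parse import urlencode

API = "https://commons.wikimedia.org/w/api.php"

def linksearch_query(pattern):
    # list=exturlusage searches all pages for external links matching
    # the given host/path pattern (euquery), like Special:Linksearch.
    params = {
        "action": "query",
        "list": "exturlusage",
        "euprotocol": "http",
        "euquery": pattern,
        "eulimit": "max",
        "format": "json",
    }
    return API + "?" + urlencode(params)

url = linksearch_query("europeana.eu/portal/record")
print(url)
```

The JSON response lists each page together with the matching URL; for large result sets you would follow the "continue" token the API returns.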
I don’t think there’s an API to parse the Information template yet. DBpedia tries
to do this (e.g. ), but I couldn’t find the file you were interested in on their
website.
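Since wiki markup is enough, one workaround is to fetch the raw wikitext via prop=revisions and pull the Europeana link out with a regex. A sketch; the sample wikitext and the regex are assumptions for illustration, and a real template may format the field differently:

```python
import re
from urllib.parse import urlencode

API = "https://commons.wikimedia.org/w/api.php"

def wikitext_query(title):
    # prop=revisions with rvprop=content returns the raw wikitext of the
    # latest revision, avoiding any HTML parsing.
    params = {
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "titles": title,
        "format": "json",
    }
    return API + "?" + urlencode(params)

# Invented sample wikitext standing in for the fetched page content.
sample = "|source={{Europeana|http://www.europeana.eu/portal/record/123/abc.html}}"

# Fragile but simple: grab Europeana URLs up to the next pipe, brace,
# or whitespace.
links = re.findall(r"http://www\.europeana\.eu/\S+?(?=[|}\s])", sample)
print(links)
```

This is brittle against template variations, but for a known, consistent field it may be all you need.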
Hope that helps!
cheers,
Gaurav
On 25 Nov 2016, at 9:21 AM, Magnus Manske
<magnusmanske(a)googlemail.com> wrote:
One option (old, unmaintained code, no support, no warranty, good luck) would be my
attempt at parsing this:
https://tools.wmflabs.org/magnustools/commonsapi.php
On Fri, Nov 25, 2016 at 2:11 PM Hugo Manguinhas <Hugo.Manguinhas(a)europeana.eu>
wrote:
Hi everyone,
I am new to the Commons API and would like to know how to get (in a machine-readable
way) the metadata found within the Summary section of a page.
In particular, given a File page like this one:
https://commons.wikimedia.org/wiki/File:African_Dusky_Nightjar_(Caprimulgus…
I would like to get the "Europeana link" part... it is enough for me to get the
data as Wiki markup, but parsing the whole HTML would be too much :S
... btw, is there any way to query for such data? I have been using the API Sandbox
(https://en.wikipedia.org/wiki/Special:ApiSandbox) but could not find a method that
could do this...
Your help is really appreciated! Thank you in advance!
Best regards,
Hugo
_______________________________________________
Commons-l mailing list
Commons-l(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/commons-l