Hi Gabriel!
Please see my comments below. The remaining questions are:
- how to extract the page _ID from the ETag in the headers of GET /page/title/{title}
- how to ensure that GET /page/title/{title} with a different character encoding, or with an old title, always resolves to the latest canonical version
On Fri, Jan 29, 2016 at 6:42 PM, Gabriel Wicke gwicke@wikimedia.org wrote:
Luigi,
On Thu, Jan 28, 2016 at 2:09 AM, XDiscovery Team info@xdiscovery.com wrote:
I tried the /rest_v1/ endpoint and it is impressively fast.
That is great to hear. A major goal is indeed to provide high-volume, low-latency access to our content.
@Strainu / @Gabriel, what does the 'graph' extension do?
If you refer to https://en.wikipedia.org/api/rest_v1/?doc#!/Page_content/get_page_graph_png_..., this is an end point exposing rendered graph images for https://www.mediawiki.org/wiki/Extension:Graph (as linked in the end point documentation).
Oh, very interesting! So basically HTML markup can be extended? Would it be possible to share JSON objects as HTML5 markup and embed them in wiki pages?
I have a few questions about using the proxy cache: #1: Is it possible to query a page by page_ID and have redirects included?
We don't currently provide access by page ID. Could you describe your use case a bit to help us understand how access by page id would help you?
Oh yes! Thank you so much for asking. I've been working with complex networks and knowledge graphs; I can compute a knowledge graph for each dump, and match graph entities with wiki articles.
Part of the UX goes like this: a user queries my knowledge graph and clicks on entity A; I show the context of A, and I query Commons to decorate A by _ID(A).
If I had to query the wiki by title _TITLE(A), I run into these issues:
#1: My title could contain brackets or unusual characters in other languages, and in any case the title could change at any time! How do I ensure it is resolved to the corresponding canonical page?
#2: A page is named 'Dna' at time0, changed to 'DeoxRiboAcid' at time1, and changed to 'DNA' at time2. I want to avoid updating my graph just because titles change: the entities are always the same.
So, assuming I have titles dated at time0, how do I ensure that a query will always land on the latest revision of the article at timeN? If I have to keep my graph in sync, for how long will old wiki titles still resolve correctly rather than be deleted?
In general, for research projects working with Commons, I think that querying by _ID is very handy (thinking of all the oddities of non-Unicode characters, fewer exceptions to handle, portability for localising content...).
/page/title/{title} allows getting metadata per page, including the pageID, but I would like to have the final redirect target (e.g. 'dna' returns 7956, and I would like to fetch 7955 for the redirect target 'DNA').
We are looking into improving our support for redirects: https://phabricator.wikimedia.org/T118548. Your input on this topic would be much appreciated.
Just did it!
/page/html/{title} gets the article, but the page_ID / curid is missing from the source. I would like to get the two combined.
This information is actually included in the response, both in the `ETag` header and in the <head> of the HTML itself. I have updated the documentation to spell this out more clearly in [1]. The relevant addition is this:
The response provides an `ETag` header indicating the revision and render timeuuid separated by a slash (ex: `ETag: 701384379/154d7bca-c264-11e5-8c2f-1b51b33b59fc`). This ETag can be passed to the HTML save end point (as `base_etag` POST parameter), and can also be used to retrieve the exact corresponding data-parsoid metadata, by requesting the specific `revision` and `tid` indicated by the `ETag`.

[1]: https://github.com/wikimedia/restbase/pull/488/files#diff-2b6b60416eaafdf0ab...
Sorry, this is not clear to me! I still don't know what Parsoid is.
Please help me understand: how do I transform the ETag <meta property="mw:TimeUuid" content="56f23674-6252-11e5-a0b4-fd306fc438f5" /> into the corresponding page_ID from a client? Should I do a second query? Should I pass another parameter? Is the pageID encoded in the ETag? Could you provide an example...?
#2: The rest are experimental: what could happen if a query fails? Does it raise an error, return a 404 page, or something else?
The stability markers are primarily about request and response formats, and not about technical availability. Experimental end points can change at any time, which can result in errors (if the request interface changed), or return a different response format.
Of the two (end points and response formats), I would feel most comfortable if the latter did not change frequently or abruptly.
We are currently discussing the use of `Accept` headers for response format versioning at https://www.mediawiki.org/wiki/Talk:API_versioning. This will allow us to more aggressively stabilize end points by giving us the option of tweaking response formats without breaking existing clients.
Ok.
I am thinking, if possible, of using api.wikipedia as a fallback and the proxy cache as the primary source. Any AJAX example for doing that, to handle possible failures?
Yes, this is certainly possible. However, you can rely on end points currently marked as "unstable" in the REST API.
OK.
Basically all of them are used by a lot of production clients at this point, and are very reliable. Once we introduce general `Accept` support, basically all of the unstable end points will likely become officially "stable", and several `experimental` end points will graduate to `unstable`.
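For what it's worth, a minimal sketch of such a primary/fallback client in TypeScript (browser-style fetch; the helper name and error handling are illustrative, not part of either API):

```typescript
// Try the REST API first; fall back to the classic action API
// (action=parse) if the REST request fails for any reason.
async function fetchPageHtml(lang: string, title: string): Promise<string> {
  // REST API titles use underscores and must be percent-encoded.
  const encoded = encodeURIComponent(title.replace(/ /g, "_"));
  try {
    const res = await fetch(
      `https://${lang}.wikipedia.org/api/rest_v1/page/html/${encoded}`
    );
    if (!res.ok) throw new Error(`REST API returned ${res.status}`);
    return await res.text();
  } catch {
    // Fallback: action=parse returns the rendered HTML as a JSON field.
    const res = await fetch(
      `https://${lang}.wikipedia.org/w/api.php?action=parse` +
        `&page=${encoded}&format=json&formatversion=2&origin=*`
    );
    if (!res.ok) throw new Error(`action API returned ${res.status}`);
    return (await res.json()).parse.text;
  }
}
```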
#3: Does the /rest/ end point also exist for other languages?
Yes, it is available for all 800+ public Wikimedia projects at /api/rest_v1/.
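As an illustration, one way to enumerate those projects and derive their REST base URLs is the action API's sitematrix module (the response-shape handling below is an assumption based on the sitematrix documentation):

```typescript
// List REST API base URLs for all public Wikimedia projects, using
// the sitematrix module on meta.wikimedia.org. The sitematrix object
// mixes numeric language keys with 'count' and 'specials' entries.
async function listRestBases(): Promise<string[]> {
  const res = await fetch(
    "https://meta.wikimedia.org/w/api.php?action=sitematrix&format=json&origin=*"
  );
  const data = await res.json();
  const bases: string[] = [];
  for (const [key, group] of Object.entries(
    data.sitematrix as Record<string, any>
  )) {
    if (key === "count") continue; // a plain number, not a site group
    const sites = key === "specials" ? group : group.site ?? [];
    for (const site of sites) {
      bases.push(`${site.url}/api/rest_v1/`);
    }
  }
  return bases;
}
```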
Thank you.
--
Gabriel Wicke
Principal Engineer, Wikimedia Foundation
Hi,
On Mon, Feb 1, 2016 at 11:13 PM, Gabriel Wicke gwicke@wikimedia.org wrote:
Hi Luigi,
On Fri, Jan 29, 2016 at 12:31 PM, Luigi Assom itsawesome.yes@gmail.com wrote:
- how to extract the page _ID from the ETag in the headers of GET /page/title/{title}
The page id is indeed not directly exposed in the HTML response. However, the revision number is exposed as part of the ETag. This can then be used to request revision metadata, including the page id, at https://en.wikipedia.org/api/rest_v1/?doc#!/Page_content/get_page_revision_r.... This is admittedly not very convenient, so I created https://phabricator.wikimedia.org/T125453 for generally improved page id support in the REST API.
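To make that concrete, a sketch of the two-step lookup (the `items[0].page_id` field name is my reading of the revision end point's response; check the end point docs for the exact shape):

```typescript
// Step 1: fetch the HTML and read the revision out of the ETag
// ("<revision>/<render timeuuid>", possibly wrapped in quotes).
// Step 2: ask the revision end point for metadata, which includes
// the page id. Cross-origin clients can only read the ETag if the
// server exposes that header.
async function getPageId(lang: string, title: string): Promise<number> {
  const base = `https://${lang}.wikipedia.org/api/rest_v1`;
  const encoded = encodeURIComponent(title.replace(/ /g, "_"));

  const htmlRes = await fetch(`${base}/page/html/${encoded}`);
  const etag = htmlRes.headers.get("etag") ?? "";
  const revision = etag.replace(/"/g, "").split("/")[0];

  const revRes = await fetch(`${base}/page/revision/${revision}`);
  const meta = await revRes.json();
  return meta.items[0].page_id; // assumed field name; see end point docs
}
```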
Thank you.
- how to ensure that GET /page/title/{title} with a different character encoding, or with an old title, always resolves to the latest canonical version
The storage backing this end point is automatically kept up to date with edits and dependency changes. Edits in particular should be reflected within a few seconds.
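On the client side, one way to play it safe with odd encodings and renamed pages is to normalize the title via the action API (which follows redirects) before querying the REST API; a sketch with a hypothetical helper:

```typescript
// Resolve a possibly old or oddly-encoded title to its current
// canonical form: action=query with redirects=1 normalizes the title
// and follows any redirect chain.
async function canonicalTitle(lang: string, title: string): Promise<string> {
  const url =
    `https://${lang}.wikipedia.org/w/api.php?action=query&redirects=1` +
    `&titles=${encodeURIComponent(title)}&format=json&formatversion=2&origin=*`;
  const data = await (await fetch(url)).json();
  return data.query.pages[0].title; // canonical title after normalization
}
```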
If you refer to https://en.wikipedia.org/api/rest_v1/?doc#!/Page_content/get_page_graph_png_..., this is an end point exposing rendered graph images for https://www.mediawiki.org/wiki/Extension:Graph (as linked in the end point documentation).
Oh, very interesting! So basically HTML markup can be extended? Would it be possible to share JSON objects as HTML5 markup and embed them in wiki pages?
The graph extension uses the regular MediaWiki tag extension mechanism: https://www.mediawiki.org/wiki/Manual:Tag_extensions
Graphs are indeed defined using JSON within this tag.
I want to avoid updating my graph just because titles change: the entities are always the same.
Makes sense. The current API is optimized for the common case of access by title, but we will consider adding access by page ID as well.
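In the meantime, a possible client-side workaround is to resolve the page id to its current title via the action API and then fetch content from the REST API; a sketch (helper name hypothetical):

```typescript
// Page ids are stable across renames, so resolving id -> current title
// on each request sidesteps the title-change problem described above.
async function htmlByPageId(lang: string, pageId: number): Promise<string> {
  const api =
    `https://${lang}.wikipedia.org/w/api.php?action=query` +
    `&pageids=${pageId}&format=json&formatversion=2&origin=*`;
  const data = await (await fetch(api)).json();
  const title: string = data.query.pages[0].title;
  const encoded = encodeURIComponent(title.replace(/ /g, "_"));
  const res = await fetch(
    `https://${lang}.wikipedia.org/api/rest_v1/page/html/${encoded}`
  );
  return res.text();
}
```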
Oh, that would be amazing. Another suggestion would be to expose the _ID and allow querying by Wikidata _ID. I am thinking of the Wikipedias as a subset of Wikidata.
I still don't know what Parsoid is.
Parsoid is the service providing semantic HTML and a bi-directional conversion between that & wikitext: https://www.mediawiki.org/wiki/Parsoid
Thank you!
Gabriel