Hello there,
I recently developed an Android application that communicates with the
wizard101central wiki API. A friend of mine wrote an unofficial API that
parses data from the official one, and I use it to calculate results in my
Android application.
I am now running into an unexplained error.
For example:
If I fetch data from this link:
https://wiki.wizard101central.com/wiki/api.php?action=query&list=search&srl…,
I get 200 OK.
But if I use my localhost API, which parses data from the same endpoint
(localhost:3000/creatures/lormaster, which fetches data from
https://wiki.wizard101central.com/wiki/api.php?action=query&list=search&srl…),
it returns error 520, an unknown error.
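In case it helps with debugging: 520 is typically a Cloudflare-level response meaning the origin rejected or mangled the request, and one common trigger is a server-side request with no descriptive User-Agent (a browser sends one; many HTTP clients do not). A minimal sketch in Python of what the localhost API could try; the header value and query parameters here are illustrative placeholders, not taken from your code:

import requests

def search_wiki(term):
    # Send an explicit User-Agent; default or empty agents are a common
    # cause of Cloudflare-level rejections of server-side requests.
    resp = requests.get(
        "https://wiki.wizard101central.com/wiki/api.php",
        params={"action": "query", "list": "search",
                "srsearch": term, "format": "json"},
        headers={"User-Agent": "MyWizardApp/1.0 (contact@example.com)"},
        timeout=10,
    )
    resp.raise_for_status()  # surfaces the 520 instead of a parsed error body
    return resp.json()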
Please let me know if there have been any recent changes on the wiki side.
Thank you for your consideration.
---------- Forwarded message ---------
From: Adam Baso <abaso(a)wikimedia.org>
Date: Wed, Mar 22, 2023 at 4:45 AM
Subject: Service Decommission Notice: Mobile Content Service - July 2023
To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
TL;DR: The legacy Mobile Content Service is going away in July 2023. Please
switch to Parsoid or another API before then to ensure service continuity.
Hello World,
I'm writing about a service decommission we hope to complete mid-July 2023.
The service to be decommissioned is the legacy Mobile Content Service
("MCS"), which is maintained by the Wikimedia Foundation's Content
Transform Team. We will be marking this service as deprecated soon.
We hope that this notice gives people ample time to move their systems to
other endpoints such as Parsoid [1] (n.b., MCS itself uses Parsoid HTML).
The MCS endpoints are the ones with the relative URL path pattern
/page/mobile-sections* on the Wikipedias. For example URLs, see the
"Mobile" section of the online Swagger (OpenAPI) specification
documentation here:
https://en.wikipedia.org/api/rest_v1/#/Mobile
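By way of illustration (the article title here is arbitrary), a deprecated
MCS URL and the Parsoid HTML endpoint for the same page look like this:

https://en.wikipedia.org/api/rest_v1/page/mobile-sections/Dog (MCS, deprecated)
https://en.wikipedia.org/api/rest_v1/page/html/Dog (Parsoid HTML)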
== History ==
The Mobile Content Service ("MCS") is the historical aggregate service that
originally supported the article reading experience in the Wikipedia for
Android native app, as well as some other experiences. We have noticed that
there are other consumers of the service, but we are not able to determine
all of them, as this is hard to tell with confidence from the web logs.
The Wikimedia Foundation transitioned the Wikipedia Android and iOS apps to
the newer Page Content Service ("PCS") several years ago. PCS is similar to
MCS in its mobile focus, but its request-response signatures differ in
practice. PCS, like MCS, is intended primarily to serve Wikimedia
Foundation-maintained user experiences only, and so it carries the
"unstable" designation.
== Looking ahead ==
Generally, as noted in the lead, we recommend that those using MCS (or PCS,
for that matter) switch to Parsoid for the most predictable programmatic
access to Wikipedia article content.
The HTML produced by Parsoid has a versioned specification [2], and because
Parsoid is accessed regularly by a number of components across the globe,
its responses tend to be fairly well cached. Note, however, that Parsoid may
be subject to stricter rate limits under certain traffic patterns.
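As a minimal migration sketch in Python (assuming the public REST endpoint
documented at the Swagger URL above; the article title and User-Agent string
are placeholders), fetching Parsoid HTML for a page could look like this:

import requests

title = "Dog"  # illustrative title
resp = requests.get(
    # Parsoid HTML endpoint; replaces the deprecated /page/mobile-sections*
    f"https://en.wikipedia.org/api/rest_v1/page/html/{title}",
    headers={"User-Agent": "MigrationSketch/0.1 (contact@example.com)"},
    timeout=10,
)
resp.raise_for_status()
parsoid_html = resp.text  # HTML conforming to the versioned spec [2]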
I also want to note that, in order to keep up with contemporary HTML
standards, particularly those favoring accessibility and machine-readability
enhancements, Parsoid HTML will change as we further converge the parsing
stacks [3]. Generally, you should expect iteration on the Parsoid HTML spec;
and, as you may have come to appreciate, the shape of the HTML in practice
can vary nontrivially from wiki to wiki as editing practices differ.
You may also want to consider the Wikimedia Enterprise API options, which
range from no-cost access to paid higher-volume tiers.
https://meta.wikimedia.org/wiki/Wikimedia_Enterprise#Access
== Forking okay, but not recommended ==
Because MCS acts as an aggregate service and makes multiple backend API
calls, caveats apply to those subresources - possible API changes,
deprecation, and the like. We do not recommend a plain fork of the MCS code
because of this subresource fetch behavior. That said, you are of course
welcome to fork in any way compatible with MCS's license.
== Help spread the word ==
Although we are aware of the top two remaining consumers of MCS, we are not
sure who else is accessing it, and we anticipate that some downstream tech
may break when MCS is turned off. We are cross-posting this message in the
hope that most people who have come to rely on MCS will see it. Please feel
free to forward this message to any contacts you know to be using MCS.
== Help ==
Although we intend to decommission MCS in July 2023, we would like to share
resources in case you need help. We plan to hold office hours if you would
like to meet with us to discuss this or other Content Transform Team
matters. We will host these events on Google Meet and will announce them on
the wikitech-l mailing list in the coming weeks and months.
Additionally, if you would like to discuss your MCS transition plans,
please visit the Content Transform Team talk page:
https://www.mediawiki.org/wiki/Talk:Content_Transform_Team
Finally, some Content Transform Team members will also be at the Wikimedia
Hackathon [4] if you would like some in-person support.
Thank you.
Adam Baso (he/him/his/Adam), on behalf of the Content Transform Team
Director of Engineering
Wikimedia Foundation
[1] https://www.mediawiki.org/wiki/Parsoid
[2] https://www.mediawiki.org/wiki/Specs/HTML
[3] https://www.mediawiki.org/wiki/Parsoid/Parser_Unification/Updates
[4] https://www.mediawiki.org/wiki/Wikimedia_Hackathon_2023
Hi,
Wikipedia now renders with a new design, so my previous tool, which relied
on obtaining the text simply by downloading the page and applying an XPath,
has to be adjusted. I am getting mixed results, so my questions are:
- Is there a plan to keep the old design available via some additional
parameter? Even if not forever, it would be useful to me for comparison
purposes.
- Is there a better way to get the text? Basically I do guesswork,
converting some of the classical tags like H1/H2 into pseudo-headings,
bullet tags into bullet characters, and so on. The issue with the new
design, for me, is that floating content now sits at the same level as all
the items of the //main[@id='content'] element, so I will have to do some
filtering to get the main content without the supplemental information (see
the sketch below).
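For reference, a minimal sketch of that filtering in Python with lxml. It
assumes the new skin still wraps the article body in
div[@id='mw-content-text'] under //main[@id='content']; the page URL,
User-Agent, and tag handling are illustrative:

import requests
from lxml import html

# Download a page and narrow to the article body, instead of taking every
# child of main#content, which in the new design also holds floating and
# supplemental elements. Title and User-Agent are placeholders.
raw = requests.get(
    "https://en.wikipedia.org/wiki/Example",
    headers={"User-Agent": "TextTool/0.1 (contact@example.com)"},
    timeout=10,
).text
doc = html.fromstring(raw)
nodes = doc.xpath("//main[@id='content']//div[@id='mw-content-text']")

lines = []
for node in (nodes[0].iter() if nodes else []):
    if not isinstance(node.tag, str):
        continue  # skip comments and processing instructions
    text = node.text_content().strip()
    if node.tag in ("h1", "h2", "h3") and text:
        lines.append("== " + text + " ==")  # pseudo-heading
    elif node.tag == "li" and text:
        lines.append("* " + text)  # bullet character
print("\n".join(lines))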
Thanks
Max
Hey,
This API call works when I make it from Python with requests:

import requests
requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={"action": "parse", "page": "Aphrodisiac",
            "prop": "wikitext", "format": "json"},
)
but when I do it with wget or curl like this:
wget
https://en.wikipedia.org/w/api.php?action=parse&page=Aphrodisiac&prop=wikit…
it does not return pure JSON; it returns some kind of HTML, and I do not
even see the desired data inside.
Yet that same URL works in the browser.
Why does it behave this way? How do curl/wget differ from a browser
requesting this URL?
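(For anyone hitting the same thing: the usual culprit is the shell, not the
tools. In a POSIX shell an unquoted & terminates the command and runs it in
the background, so wget receives only the URL up to the first & - that is,
action=parse alone - and the API answers with its HTML help page. The
browser and Python's requests involve no shell parsing, so they are
unaffected. Quoting the URL should make it behave like the requests call;
the full URL below is reassembled from the parameters of the Python call
above:

wget 'https://en.wikipedia.org/w/api.php?action=parse&page=Aphrodisiac&prop=wikitext&format=json'

or equivalently:

curl 'https://en.wikipedia.org/w/api.php?action=parse&page=Aphrodisiac&prop=wikitext&format=json')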
Thank you,
Julius