Hi all,
For my research (related to reverse engineering) I am interested in traffic
logs of an API.
It would help me a lot if I could get access to the traffic logs of a
web service. My question is: could I acquire such traffic logs of, for
example, Wikipedia.org? I'm interested in a collection of API requests,
for example a list of calls like this:
https://en.wikipedia.org/w/api.php?format=json&action=query&titles=Pigeon
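To be concrete, each logged call would map onto a small scripted request
like this (a minimal sketch in Python, assuming the `requests` library):

import requests

resp = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={"format": "json", "action": "query", "titles": "Pigeon"},
)
print(resp.json()["query"]["pages"])  # page metadata keyed by page ID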
This can simply be a list of URLs, but it could also be a server log or
a Wireshark capture.
I understand there are potential data-privacy issues, but a curated or
filtered list would already be very helpful.
Any pointers on who to contact for such a request? Or other pointers for
gathering such data?
Thank you in advance.
Kind regards,
Willem
Dear Sir or Madam:
Hope this email finds you well.
I am writing to consult you about your French dictionary services.
We are Fotoable (Beijing Fotoable Technology Limited; visit us at
https://www.fotoable.com/ and click the button at the top right to
switch the language to English), founded in 2011, a leading mobile
application developer located in Beijing, China. I am Yiran LI,
responsible for planning our French applications.
Currently, Fotoable plans to develop a French word game, Word Crossy
(https://www.fotoable.com/#/products/Word), of which the English and
German versions are already available online on both the iOS and Android
platforms. Therefore, in order to develop the French version, we would
need the right to cite a dictionary (a monolingual French one) as the
source in the application, so that our players can look up the
definitions and examples of the words in the game.
That is why we are sending you this email, to ask about the details
below:
1. In which form are the dictionary services available: an API, or a
database file of the words?
2. What are the rates (prices)?
3. Regarding volume, how many entries and lemmas are included in the
database?
4. Is the API or database available for advanced development
(customization)?
5. Once payment is made, how many applications will be entitled to use
the API or database reference (respectively)?
6. Will reproduction of your logo in the app be authorized when players
look up words?
7. Could you send us a sample of the API or database?
Many thanks in advance for your answers to these questions.
Looking forward to hearing back from you.
Kind Regards,
Yiran LI
Fotoable
Dear Madam or Sir,
Allow me to write to you to ask about your services.
We are Fotoable (Beijing Fotoable Technology Limited:
https://www.fotoable.com/#/Home; click the button at the top right to
switch the language to English), an internet company located in Beijing,
China, focused on mobile application development. I am Yiran LI,
responsible for planning our French applications.
Fotoable is currently planning to develop a French word game, Word
Crossy (https://www.fotoable.com/#/products/Word), whose English and
German versions are already available online on the iOS and Android
platforms. Consequently, to develop the French version, we would need
the right to cite a dictionary (a monolingual French one) as the source
in the application.
That is why we are sending you this letter, to ask about the details
below:
1. In which form is the dictionary service sold: an API, or a database
file of the words?
2. What are the rates?
3. Regarding volume, how many entries and lemmas are included in the
proposed database?
4. Is the API or database available for advanced development
(customization)?
5. Once payment is made, for how many applications will the API or
database reference be available (respectively)?
6. Will reproduction of the logo be authorized (so that players can know
the source of the definitions and examples of the words in the
application)?
7. Could you send us a sample of the API or database?
We thank you in advance for your answers to these questions.
We look forward to your reply; please accept, Madam or Sir, our most
sincere regards.
Kind regards,
Yiran LI
Fotoable
------------------
李颐然丨Yiran LI
Manager and Game designer at Fotoable
Mobile: +86 18610883603
E-mail: liyiran(a)fotoable.com
Address: 402, Building B6-C, Dongsheng Science and Technology Park, Haidian District, Beijing
Beijing Fotoable Technology Co., Ltd., founded in 2011, is one of the leading mobile application developers in China. It mainly focuses on developing and operating mobile image-processing and other utility applications, and has been recognized as a "Top Developer" by Google Play.
Hi Everyone,
Over the last few months, the Wikimedia Developer Advocacy team has been
working to improve technical documentation for the MediaWiki Action API
<https://www.mediawiki.org/wiki/API:Main_page>.
So far, we have:
- Started efforts to revise, simplify, and reorganize the MediaWiki
Action API pages on mediawiki.org using a new documentation template
for sub-pages: https://www.mediawiki.org/wiki/API:Documentation_template
- Updated the API navigation template:
https://www.mediawiki.org/wiki/Template:API
As we continue to make improvements to the technical documentation, we
could use your help to better guide our efforts!
Would you please take a few moments to complete the following survey and
share your opinions and experiences with us?
https://goo.gl/forms/Y5PGILb6b3awC3OJ2
*Notes about the MediaWiki Action API Survey:*
*Survey Period:* December 6, 2018 - January 6, 2019
*Privacy Policy:* This survey will be conducted via a third-party service,
which may subject it to additional terms. For more information on privacy
and data-handling, see the survey privacy statement:
https://foundation.wikimedia.org/wiki/MediaWiki_Action_API_Survey_Privacy_S…
Thanks for your participation!
Kindly,
Sarah R. Rodlund
Technical Writer, Developer Advocacy
<https://meta.wikimedia.org/wiki/Developer_Advocacy>
srodlund(a)wikimedia.org
Hi, I am trying to write a mechanism for fetching new data using your
API, and I am confused: the API response from the recent changes
endpoint includes an "rccontinue" value like this: "20181222184231|119590".
What is it? I mean the "|" and the second value.
Marek Czuma <marek.czuma(a)contractors.roche.com>
Good morning!
I'm at my wits' end, because I have a problem with the MediaWiki API and
I really can't find an answer.
I am a programmer trying to work with the allpages endpoint.
I fetch 500 pages, take apcontinue, and then fetch 500 more pages
(starting from that apfrom point).
Everything is fine until I want to fetch something like
Somenamespace:Page. I can't send a request with a colon in the title.
Response:
"error": {
"code": "invalidtitle",
"info": "Bad title \"Somenamespace:Page\".",
"*": "See http://syswiki.gene.com/syswiki/api.php for API usage.
Subscribe to the mediawiki-api-announce mailing list at <
https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce> for
notice of API deprecations and breaking changes."
}
Could you help me? I must fetch this page, but I don't know how to do it
properly.
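For reference, this is roughly my fetch loop (a minimal sketch in
Python, assuming the `requests` library). My suspicion, though I am not
sure, is that list=allpages enumerates one namespace at a time (selected
with apnamespace) and that apfrom/apcontinue expect the title without
its namespace prefix, so building apfrom by hand from
"Somenamespace:Page" triggers "invalidtitle":

import requests

API = "http://syswiki.gene.com/syswiki/api.php"  # endpoint from the error
params = {
    "format": "json",
    "action": "query",
    "list": "allpages",
    "apnamespace": 0,  # one namespace per query, selected by number
    "aplimit": 500,
}
while True:
    data = requests.get(API, params=params).json()
    for page in data["query"]["allpages"]:
        print(page["title"])
    if "continue" not in data:
        break
    # Pass apcontinue back exactly as returned (an unprefixed title)
    # instead of rebuilding it from a "Namespace:Title" string.
    params.update(data["continue"])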
Currently the codes for uncaught exceptions include the class name, for
example "internal_api_error_ReadOnlyError", or
"internal_api_error_DBQueryError", or possibly something like
"internal_api_error_MediaWiki\Namespace\FooBarException". As you can see in
that last example, that can get rather ugly and complicates recent attempts
to verify that all error codes use a restricted character set.
Thus, we are deprecating these error codes. In the future all such errors
will use the code "internal_api_error". The date for that change has not
yet been set.
If a client for some reason needs to see the class of the uncaught
exception, this is available in a new 'errorclass' data property in the API
error. This will be returned beginning in 1.33.0-wmf.8 or later, see
https://www.mediawiki.org/wiki/MediaWiki_1.33/Roadmap for a schedule. Note
that database errors will report the actual class, such as
"MediaWiki\rdbms\DBQueryError", rather than the old unprefixed name that
had been maintained for backwards compatibility.
Clients relying on specific internal error codes or detecting internal
errors by looking for a "internal_api_error_" prefix should be updated to
recognize "internal_api_error" and to use 'errorclass' in preference to
using any class name that might be present in the error code.
In JSON format with errorformat=bc, an internal error might look something
like this:
{
  "error": {
    "code": "internal_api_error_InvalidArgumentException",
    "info": "[61e9f71eedbe401f17d41dd2] Exception caught: Testing",
    "errorclass": "InvalidArgumentException",
    "trace": "InvalidArgumentException at ..."
  },
  "servedby": "hostname"
}
With modern errorformats, it might look like this:
{
  "errors": [
    {
      "code": "internal_api_error_InvalidArgumentException",
      "text": "[61e9f71eedbe401f17d41dd2] Exception caught: Testing",
      "data": {
        "errorclass": "InvalidArgumentException"
      }
    }
  ],
  "trace": "InvalidArgumentException at ...",
  "servedby": "hostname"
}
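For example, a client might centralize detection like this (a hedged
sketch in Python; parse_internal_error is a hypothetical helper for the
errorformat=bc shape shown above, not part of any library):

def parse_internal_error(response):
    """Return the exception class for an internal API error, else None."""
    error = response.get("error")
    if error is None:
        return None
    code = error.get("code", "")
    if code != "internal_api_error" and not code.startswith("internal_api_error_"):
        return None
    # Prefer the new 'errorclass' property; fall back to the legacy
    # class name embedded in the code, if any.
    errorclass = error.get("errorclass")
    if errorclass is None and code.startswith("internal_api_error_"):
        errorclass = code[len("internal_api_error_"):]
    return errorclass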
--
Brad Jorsch (Anomie)
Senior Software Engineer
Wikimedia Foundation
_______________________________________________
Mediawiki-api-announce mailing list
Mediawiki-api-announce(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce
FYI
---------- Forwarded message ----------
From: Subramanya Sastry <ssastry(a)wikimedia.org>
Date: 14 November 2018 at 21:48
Subject: [Wikitech-l] Content Negotiation Protocol for Parsoid HTML in the
REST API
To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
Hello everyone,
The Core Platform and Parsing teams at the Wikimedia Foundation are glad
to announce the implementation of a content negotiation protocol for
Parsoid HTML in the REST API [1]. This was deployed to the Wikimedia
cluster on October 1, 2018.
TL;DR
-----
Parsoid HTML clients can now use the Accept header to specify which
version of content they expect when requesting Parsoid HTML from the
REST API. If omitted, as before, they will get whatever version of the
HTML is in storage, regardless of any breaking changes it may contain.
Parsoid's HTML is versioned
---------------------------
An advantage of Parsoid’s HTML output is that it is both specced and
versioned [2]. By adhering to the principles of semantic versioning [3],
Parsoid can signal to clients what kinds of changes can be expected
in the output between versions.
However, until recently, Parsoid always returned the latest version
of its HTML. Naturally, this posed challenges when deploying breaking
changes since clients had to be prepared to consume the newer version.
Rolling out new HTML versions without breaking clients
------------------------------------------------------
Throughout its history, Parsoid developers have had close enough contact
with the developers of Parsoid clients (they are internal to the
Wikimedia Foundation for the most part) to coordinate deployment
of breaking changes to the HTML. This mainly involved ensuring all
known clients were forward and backwards compatible with the newer
HTML version before deploying the change. Needless to say, as more
clients came along, this informal process would no longer suffice;
a scalable and predictable version-upgrade solution was needed.
Content Negotiation Protocol
----------------------------
To solve this problem, a content negotiation protocol [4] relying on
HTTP Accept headers was implemented. See RESTBase’s documentation [5]
for the exact details of the protocol. What follows is just an
informal description.
Parsoid clients are expected to pass an Accept header that specifies
the HTML version they can handle. If the version present in storage
does not satisfy the request, RESTBase will attempt to resolve the
inconsistency. However, if the requested version cannot be satisfied,
an HTTP 406 error will be returned. The meaning of “satisfied” here
mostly follows semver’s caret semantics [6] (the main difference being
that the patch level is ignored).
If a client does not pass the Accept header, everything works exactly
like before, with all the downsides of the previous behaviour:
no protection from breaking changes; you get whatever HTML version
is currently in storage.
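Concretely, a request pinned to a major version might look like this (a
sketch in Python, assuming the `requests` library; the profile URI
format follows the HTML spec pages [7][8]):

import requests

url = "https://en.wikipedia.org/api/rest_v1/page/html/Pigeon"
headers = {
    # Ask for HTML compatible with version 2.x of the Parsoid HTML spec.
    "Accept": 'text/html; charset=utf-8; '
              'profile="https://www.mediawiki.org/wiki/Specs/HTML/2.0.0"',
}
resp = requests.get(url, headers=headers)
if resp.status_code == 406:
    print("Requested HTML version cannot be satisfied")
else:
    html = resp.text  # compatible with the requested major version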
Caveat emptors
--------------
The deployed Parsoid version generates HTML versions 1.8.0 [7] and
2.0.0 [8]. But, it is worth mentioning that the oldest acceptable
version supported is 1.6.0, so if you’re sending an Accept header with
a version less than 1.6.0, your application will break. The reason
for this odd constraint is that we mistakenly released that version
without bumping the major version [9] even though it introduced a
breaking change. Mea culpa!
Also, RESTBase only stores the latest version so, as content gets
rerendered and storage gets replaced, clients requesting older content
have to pay a latency penalty while the stored content is downgraded
to an appropriate version. Hence, we encourage Parsoid HTML clients to
pay attention to announcements about major version changes and upgrade
promptly. Going forward, we’ll send announcements about Parsoid HTML
version changes on the mediawiki-api-announce mailing list.
How does this impact 3rd party wikis?
-------------------------------------
Finally, astute readers will have noted that this announcement is
concerning the REST API. However, many 3rd party installs have VE
communicating directly with Parsoid and may be wondering how they’ll
be impacted by the change.
Parsoid has had a similar protocol (the difference is mainly in
respecting the patch level) implemented since the v0.9.0 release [7].
So, going forward, when upgrading Parsoid or VE, if the HTML version
requested by VE can be provided by Parsoid, the upgrade will be safe.
In Conclusion
-------------
Content negotiation now allows us to deploy new Parsoid features to the
Wikimedia cluster without needing prior coordination with all clients.
Clients can continue to request older versions until they are ready to
update (assuming they don’t fall too far behind since we only plan on
supporting two major versions concurrently). And, conversely, they can
request newer versions with the guarantee that they will not receive
incompatible content.
[1]: https://phabricator.wikimedia.org/T128040
[2]: https://www.mediawiki.org/wiki/Specs/HTML
[3]: https://semver.org/
[4]: https://tools.ietf.org/html/rfc7231#section-5.3
[5]: https://www.mediawiki.org/wiki/API_versioning#Content_format_stability_and_negotiation
[6]: https://www.npmjs.com/package/semver#caret-ranges-123-025-004
[7]: https://www.mediawiki.org/wiki/Specs/HTML/1.8.0
[8]: https://www.mediawiki.org/wiki/Specs/HTML/2.0.0
[9]: https://lists.wikimedia.org/pipermail/mediawiki-l/2018-March/047337.html
_______________________________________________
Wikitech-l mailing list
Wikitech-l(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Hi,
I'm trying to create a dataset of summaries vs full text bodies for
automatic text summarization models.
I was looking at the online API for retrieving the summary of a page, so
I could recreate it in my Spark code for parsing wiki dumps.
Specifically, I was looking at the regex in:
https://phabricator.wikimedia.org/diffusion/ETEX/browse/master/includes/Api…
$regexp = '/^(.*?)(?=' . ExtractFormatter::SECTION_MARKER_START . ')/s';
With SECTION_MARKER_START filled in:
$regexp = '/^(.*?)(?=\1\2)/s';
However, when I plug that expression into an online tester
(regex101.com), I see the error: "\2 This token references a
non-existent or invalid subpattern". I am wondering if this is a bug, or
if I'm placing it incorrectly?
The alternative branch is taken when plaintext is set to false; that's
for parsing HTML, correct, and therefore not applicable to the XML in
wiki dumps?
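For what it's worth, the lookahead does compile once I write the marker
as escaped control bytes; my assumption (based on the constant being a
PHP double-quoted string) is that SECTION_MARKER_START is chr(1).chr(2),
not the regex tokens \1 and \2. A sketch in Python with illustrative
input:

import re

# Assumed value of ExtractFormatter::SECTION_MARKER_START, i.e. the PHP
# string "\1\2" read as control characters, not backreferences.
SECTION_MARKER_START = "\x01\x02"

# Python equivalent of '/^(.*?)(?=...)/s'; re.S lets "." match newlines.
regexp = re.compile("^(.*?)(?=" + re.escape(SECTION_MARKER_START) + ")", re.S)

text = "Lead paragraph of the article.\x01\x02rest of the extract"
m = regexp.match(text)
print(m.group(1))  # -> "Lead paragraph of the article."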
Thanks for your help,
Dan Kramer
Now that MediaWiki has a pure-PHP tidying implementation, we are
deprecating non-tidy output.[1] Further, the future rewrite of Parsoid in
PHP[2] and its merge to core will have "tidying" as an integral feature.
Thus, the disabletidy parameter to action=parse is being deprecated and
will be removed at some point in the future. Clients should stop using the
parameter and begin using tidied HTML output.
This change should be deployed to Wikimedia wikis with 1.32.0-wmf.24 or
later, see https://www.mediawiki.org/wiki/MediaWiki_1.32/Roadmap for a
schedule.
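For clients, the change simply means dropping the parameter. For
example, an action=parse request that relies on the default tidied
output (a minimal sketch in Python, assuming the `requests` library):

import requests

API = "https://en.wikipedia.org/w/api.php"
params = {
    "format": "json",
    "action": "parse",
    "page": "Pigeon",
    "prop": "text",
    # No "disabletidy" here: tidied HTML output is the default.
}
html = requests.get(API, params=params).json()["parse"]["text"]["*"]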
[1]: https://phabricator.wikimedia.org/T198214
[2]: https://phabricator.wikimedia.org/tag/parsoid-php/
--
Brad Jorsch (Anomie)
Senior Software Engineer
Wikimedia Foundation
_______________________________________________
Mediawiki-api-announce mailing list
Mediawiki-api-announce(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce