I'm trying to log into a MediaWiki 1.28 site using the API and a standalone PHP script, but it keeps failing with the error "invalid token." I can successfully retrieve a login token:
$tokenRequest = array(
    'action' => 'query',
    'format' => 'json',
    'meta'   => 'tokens',
    'type'   => 'login',
);
But when I issue my "action=clientlogin" request, I always get the error "code=badtoken, info = invalid token":
$loginRequest = array(
    'action'         => 'clientlogin',
    'format'         => 'json',
    'logintoken'     => $token,
    'loginreturnurl' => 'https://example.com/',
    'username'       => $username,
    'password'       => $password,
    'domain'         => 'mydomain',
    'rememberMe'     => 1,
);
I suspect the problem is that the two requests are not explicitly being made in the same session. That is, I'm not adding the header "Cookie: <session cookie>" to my second HTTP POST. How do I retrieve the session cookie after issuing my meta=tokens request so I can hand it to the client login request? In earlier versions of MediaWiki, I could get the cookie information from an API call, "action=login". This has been deprecated but I haven't seen any examples of the new way to do it, just generic instructions like "Clients should handle cookies to properly manage session state."
I'm not operating inside the MediaWiki codebase with its WebRequest, SessionManager, etc. classes -- this is a standalone script.
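In case it helps, here is roughly how I imagine the cookie handling would look with cURL (a guess, not working code; the endpoint URL and cookie-jar path are placeholders):

// Point CURLOPT_COOKIEJAR and CURLOPT_COOKIEFILE at the same file for both
// requests so the session cookie set by the token request is sent back with
// the clientlogin request.
function apiPost($endpoint, array $params, $cookieJar) {
    $ch = curl_init($endpoint);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($params));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_COOKIEJAR, $cookieJar);   // cookies written here when the handle closes
    curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieJar);  // cookies read from here on the next request
    $response = curl_exec($ch);
    curl_close($ch);
    return json_decode($response, true);
}

$api       = 'https://example.com/w/api.php';          // placeholder endpoint
$cookieJar = tempnam(sys_get_temp_dir(), 'mwcookies'); // shared cookie jar

$tokenResponse = apiPost($api, $tokenRequest, $cookieJar);
$token         = $tokenResponse['query']['tokens']['logintoken'];

$loginRequest['logintoken'] = $token;
$loginResponse = apiPost($api, $loginRequest, $cookieJar);

Is a shared cookie jar like this the intended approach, or is there a supported way to pull the session cookie out of the API response itself?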
Thank you,
DanB
Hi everyone,
ATM I'm fiddling around with a little enigma (for me), and I hope someone
can help me … :-)
# what I want to do
Moving a page as a bot (written in PHP) with api.php without leaving a
redirect.
# the problem
I can move the page, but the parameter "noredirect" (value = 1) seems to be
ignored, so every moved page still has a redirect left behind after the move.
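For reference, this is roughly the move request my bot sends (simplified; $csrfToken is fetched earlier in the same session via action=query&meta=tokens&type=csrf):

$moveRequest = array(
    'action'     => 'move',
    'format'     => 'json',
    'from'       => 'Some old title',   // placeholder
    'to'         => 'Some new title',   // placeholder
    'reason'     => 'bot move',
    'noredirect' => 1,                  // this is what seems to be ignored
    'token'      => $csrfToken,
);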
# my guess
I guess something is not correct with the (implementation of the) user
right "suppressredirect", which is required for using the API parameter
"noredirect" (see [0]).
[0] <https://www.mediawiki.org/wiki/API:Move#Parameters>
# my conditions
## MW
* v1.27.3
## the bot
* I created a bot with my main account.
* the bot can do everything (edit, query, …) so far
* the bot is listed in the group "bot"
## /includes/DefaultSettings.php
L 5076: "$wgGroupPermissions['bot']['suppressredirect'] = true;"
# one more point
If I query the API (api.php) for the moved page with the parameters
* prop=contributors
* titles=$MYPAGENAME
* pcrights=suppressrevision
I get empty (= no) contributors, so it seems my bot does not have the correct right …
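i.e. the full request is roughly:

$rightsCheck = array(
    'action'   => 'query',
    'format'   => 'json',
    'prop'     => 'contributors',
    'titles'   => $MYPAGENAME,          // the moved page's title
    'pcrights' => 'suppressrevision',
);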
I hope you can give me a hint on how to accomplish such a basic task.
Thanks a lot in advance and best regards
Kai
Hi,
my software uses sub- and super- category enumeration while trying to avoid
hidden categories. There's no problem with sub-
(?action=query&prop=categories&titles=....&clshow=!hidden), but there's no
corresponding parameter for super-categories (
?action=query&list=categorymembers&cmtitle= ... ). Is there a way to avoid
contradictory states when both are used? Or is there some other way to avoid
hidden categories when enumerating super-categories?
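One workaround I've been wondering about (untested): after enumerating the categories, batch-check the returned category pages for the "hiddencat" page prop and drop the ones that have it, e.g.
?action=query&prop=pageprops&ppprop=hiddencat&titles=Category:Foo|Category:Bar
Any page in the response whose pageprops contain "hiddencat" should be a hidden category. But that costs an extra request per batch, so I'd prefer a built-in filter if one exists.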
I asked about this four years ago, though not on this list (
https://www.mediawiki.org/wiki/Topic:Ra34est5vsl7fxdc), and got no answers
at the time; perhaps something has changed since then, or someone can
suggest a workaround.
Thanks in advance,
Max
Previously, it was possible to access the api via urls like:
* https://en.wikipedia.org/w/api.php/Some_text_here?action=query&titles=...
* https://www.wikidata.org/w/api.php/ (Plus some POST data)
etc.
Due to a security issue in certain browsers, we are no longer allowing
any text between "api.php" and the "?". If anything is there, you will
get a 301 redirect to the URL with the extraneous text removed.
Hopefully this doesn't cause any inconvenience to anyone.
Sincerely,
Brian Wolff
Hullo,
The Reading Web team are currently considering working on T74546: Strip
<br> tags from extracts [0]. Ideally, we'd simply enable this
transformation by default and not put it behind a flag or require a new
parameter to disable it. Would this affect anyone's current usage of the
HTML extracts returned by the TextExtracts API (prop=extracts)?
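For context, the output in question is the HTML returned by queries roughly like
https://en.wikipedia.org/w/api.php?action=query&prop=extracts&format=json&titles=Dog
(an illustrative request, not any specific client's usage).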
-Sam
[0] https://phabricator.wikimedia.org/T168328
--
Software Engineer, Wikimedia Foundation
IRC (Freenode): phuedx
Matrix: @phuedx:matrix.org
Timezone: BST (UTC+1)
In the action API, there are two ways to parse a page/revision: using
action=parse, or using the rvparse parameter to action=query&prop=revisions.
Similarly, there are two ways to get a diff: using action=compare, or using
parameters such as rvdiffto to action=query&prop=revisions. And then
there's action=expandtemplates versus the rvexpandtemplates parameter to
prop=revisions. This is a somewhat annoying bit of code duplication.
Further, the prop=revisions versions of these features have somewhat
strange behavior. rvparse forces rvlimit=1. rvdiffto and related parameters
will sometimes output "notcached" with no way to directly handle the
situation.
Thus, the 'rvdifftotext', 'rvdifftotextpst', 'rvdiffto',
'rvexpandtemplates', 'rvgeneratexml', 'rvparse', and 'rvprop=parsetree'
parameters to prop=revisions are all deprecated, as are the similarly named
parameters to prop=deletedrevisions, list=allrevisions, and
list=alldeletedrevisions.
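For example, a request along the lines of
api.php?action=query&prop=revisions&rvprop=content&rvparse=1&revids=12345
can generally be replaced with
api.php?action=parse&oldid=12345
and rvdiffto-style requests with action=compare (fromrev, torev, and friends); the revision ID here is only illustrative.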
--
Brad Jorsch (Anomie)
Senior Software Engineer
Wikimedia Foundation
The docs say backlinks are sorted by title (
https://www.mediawiki.org/wiki/API:Backlinks: "Ordered by linking page
title"). I noticed that this is only partially true: most of them look
alphabetically sorted, but when the list is long there are usually sections
at the end that are not, as if those items were added later without obeying
the sorting rule. An example is the article
https://en.wikipedia.org/wiki/Teleology, where the first page of "What links
here" (non-redirect articles) contains non-ordered items. Is this
expected?
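For reference, the list I'm looking at is roughly what this query returns (parameters chosen to match the non-redirect "What links here" view):
https://en.wikipedia.org/w/api.php?action=query&list=backlinks&bltitle=Teleology&blnamespace=0&blfilterredir=nonredirects&bllimit=50&format=json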
But apart from figuring out whether there's a bug or not, another thought
is that the alphabetical sort (if it works) is debatable: in any form of
presentation, A-items are the most visible entries, while Z-items tend to be
obscured if the list is long. So would it be possible to implement another
kind of sort, for example something like PageRank for backlinks, so that the
links are sorted by the number of links pointing to them? Of course, there
is a technical side (whether the structure of the database will allow this)
and probably a legal one (I hope Larry Page won't mind and won't sue
Wikimedia for this).
I consider backlinks important because they tend to have a "see also" and
"category" flavor to them, so just by looking at a partial list I acquire
additional knowledge and expectations.
Thanks,
Max