Hi everyone,
At the moment I’m fiddling around with a little enigma (for me). − I hope
someone can help me … :-)
# what I want to do
Move a page as a bot (written in PHP) via api.php without leaving a
redirect behind.
# the problem
I can move the page − but the parameter "noredirect" (value = 1) seems
to be ignored, so every moved page still has the redirect after moving …
# my guess
I guess something is not correct with the (assignment of the) user right
"suppressredirect", which is required for using the API parameter
"noredirect" (see [0]).
[0] <https://www.mediawiki.org/wiki/API:Move#Parameters>
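For reference, here is a stripped-down sketch of the move request my bot
sends (names like $apiUrl, $csrfToken and $cookieJar are placeholders for
what the bot sets up during login):

// minimal sketch of the move request; assumes a logged-in session whose
// cookies are stored in $cookieJar and a CSRF token fetched beforehand
// via action=query&meta=tokens
$post = http_build_query([
    'action'     => 'move',
    'from'       => 'Old title',
    'to'         => 'New title',
    'noredirect' => 1,
    'token'      => $csrfToken,
    'format'     => 'json',
]);
$ch = curl_init($apiUrl);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $post);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieJar); // send the login session
curl_setopt($ch, CURLOPT_COOKIEJAR, $cookieJar);
$response = json_decode(curl_exec($ch), true);
curl_close($ch);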
# my conditions
## MW
* v1.27.3
## the bot
* I created a bot with my main account.
* the bot can do everything (edit, query, …) so far
* the bot is listed in the group "bot"
## /includes/DefaultSettings.php
L 5076: "$wgGroupPermissions['bot']['suppressredirect'] = true;"
# one more point
If I query the API (api.php) for the moved page with the parameters
* prop=contributors
* titles=$MYPAGENAME
* pcrights=suppressrevision
I get no contributors back − so it seems my bot does not have the
required right …
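Maybe a more direct check would be to query the bot's own rights with
* meta=userinfo
* uiprop=rights%7Cgroups
and look whether "suppressredirect" appears in the returned list.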
I hope you can give me a hint on how to accomplish such a basic task.
Thanks a lot in advance and best regards
Kai
Hi,
my software uses sub- and super-category enumeration while trying to avoid
hidden categories. There's no problem with sub-
(?action=query&prop=categories&titles=....&clshow=!hidden), but there's no
corresponding parameter for super-categories (
?action=query&list=categorymembers&cmtitle= ... ). Is there a way to avoid
contradictory states when both are used? Or maybe some other way to achieve
hidden-category avoidance for super-categories?
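The only workaround I can think of is post-filtering with a second
request, roughly like this sketch (assuming the 'hiddencat' page prop −
the one clshow=!hidden keys on − and PHP, just for illustration):

// given category titles ("Category:…") from list=categorymembers, drop
// those whose category page carries the 'hiddencat' page prop
// (max. 50 titles per request; $api points at api.php)
function filterHiddenCategories(string $api, array $titles): array {
    $url = $api . '?' . http_build_query([
        'action' => 'query',
        'prop'   => 'pageprops',
        'ppprop' => 'hiddencat',
        'titles' => implode('|', $titles),
        'format' => 'json',
    ]);
    $data = json_decode(file_get_contents($url), true);
    $visible = [];
    foreach ($data['query']['pages'] as $page) {
        // hidden categories carry the 'hiddencat' prop (__HIDDENCAT__)
        if (!isset($page['pageprops']['hiddencat'])) {
            $visible[] = $page['title'];
        }
    }
    return $visible;
}

But that doubles the number of requests, so a built-in parameter would be
much nicer.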
I asked about this four years ago, though not on this list
(https://www.mediawiki.org/wiki/Topic:Ra34est5vsl7fxdc), and got no
answers at the time; perhaps something has changed since then, or someone
can suggest a workaround.
Thanks in advance,
Max
Previously, it was possible to access the API via URLs like:
* https://en.wikipedia.org/w/api.php/Some_text_here?action=query&titles=...
* https://www.wikidata.org/w/api.php/ (Plus some POST data)
etc.
Due to a security issue in certain browsers, we no longer allow any text
between "api.php" and the "?". If anything is there, you will get a 301
redirect to the URL with the extraneous text removed.
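For example, a request for
  https://en.wikipedia.org/w/api.php/Some_text_here?action=query
now answers with
  HTTP/1.1 301 Moved Permanently
  Location: https://en.wikipedia.org/w/api.php?action=query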
Hopefully this doesn't cause any inconvenience to anyone.
Sincerely,
Brian Wolff
Hullo,
The Reading Web team are currently considering working on T74546: Strip
<br> tags from extracts [0]. Ideally, we'd simply enable this
transformation by default and not put it behind a flag or require a new
parameter to disable it. Would this affect anyone's current usage of the
HTML extracts returned by the TextExtracts API (prop=extracts)?
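(For context, those are the HTML snippets returned by requests such as
https://en.wikipedia.org/w/api.php?action=query&prop=extracts&titles=Pet_door&format=json
− the page title here is just an arbitrary example.)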
-Sam
[0] https://phabricator.wikimedia.org/T168328
--
Software Engineer, Wikimedia Foundation
IRC (Freenode): phuedx
Matrix: @phuedx:matrix.org
Timezone: BST (UTC+1)
In the action API, there are two ways to parse a page/revision: using
action=parse, or using the rvparse parameter to action=query&prop=revisions.
Similarly, there are two ways to get a diff: using action=compare, or using
parameters such as rvdiffto to action=query&prop=revisions. And then
there's action=expandtemplates versus the rvexpandtemplates parameter to
prop=revisions. This is a somewhat annoying bit of code duplication.
Further, the prop=revisions versions of these features have somewhat
strange behavior. rvparse forces rvlimit=1. rvdiffto and related parameters
will sometimes output "notcached" with no way to directly handle the
situation.
Thus, the 'rvdifftotext', 'rvdifftotextpst', 'rvdiffto',
'rvexpandtemplates', 'rvgeneratexml', 'rvparse', and 'rvprop=parsetree'
parameters to prop=revisions are all deprecated, as are the similarly named
parameters to prop=deletedrevisions, list=allrevisions, and
list=alldeletedrevisions.
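The dedicated modules cover the same use cases; for example (revision IDs
here are placeholders):
* action=parse&oldid=12345 instead of rvparse
* action=compare&fromrev=12345&torev=12346 instead of rvdiffto
* action=expandtemplates&text={{SomeTemplate}} instead of rvexpandtemplates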
--
Brad Jorsch (Anomie)
Senior Software Engineer
Wikimedia Foundation
The docs say backlinks are sorted by title
(https://www.mediawiki.org/wiki/API:Backlinks … "Ordered by linking page
title"). I noticed that this is only partially true: while most of them
appear to be sorted alphabetically, when there are plenty of items there
are usually sections at the end of the list that are not. They look as if
they were added later without obeying the sorting rule. An example is the
article https://en.wikipedia.org/wiki/Teleology, where the first page of
"What links here" (non-redirect articles) contains out-of-order items. Is
this expected?
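(The API equivalent of that listing would be something like
https://en.wikipedia.org/w/api.php?action=query&list=backlinks&bltitle=Teleology&blnamespace=0&blfilterredir=nonredirects .)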
But apart from figuring out whether there's a bug or not, another thought
is that the alphabetical sort (if it works) is questionable, since in any
form of presentation A-items are the most visible entities, while Z-items
tend to be obscured if the list is long. So is it possible to implement
another kind of sort, for example something like PageRank for backlinks,
so the links are sorted by the number of links coming to them? Sure, there
is a technical side (whether the structure of the database will allow
this) and probably a legal one (I hope Larry Page won't mind and won't sue
Wikimedia over this).
I consider backlinks important because they tend to have a "see also" and
"category" flavor to them, so just by looking at a partial list I acquire
additional knowledge and expectations.
Thanks,
Max
I'm trying to log in to MediaWiki automatically via the login API, but I
can't figure out how. Since I already got a "Success" status from the
login API in the sandbox, I think I'm doing something wrong when setting
the cookies. Getting a token and the login API were revamped in MediaWiki
1.27, and I can't find any example code that works on 1.28. Thanks for all
responses and suggestions − I've been stuck for so long and would really
appreciate some help here.
Here's my code, but it doesn't get the login token properly.
System info:
* MediaWiki 1.28.2
* PHP 5.6.30
* MariaDB 10.1.21
<?php
namespace mediawiki;
// Start session
session_start();
/**
* How to log in mediawiki using PHP cURL?
* -------------------------------------------------
*/
// set the username and password of an account that already exists in your MediaWiki database
$username = 'abc';
$password = '123';
// set up the URL (use an absolute URL, including the scheme)
$Root = 'http://localhost/mediawiki';
$API_Location = "${Root}/api.php";
//setup cookie
$CookieFilePath = tempnam("/tmp", "TMP0");
$expire = 60*60*24*14 + time();
$CookiePrefix = 'theprefix';
$Domain = 'localhost';
// set variables to use in curl_setopts
$PostFields = "action=query&meta=tokens&type=login&format=json";
// first http post to sign in to MediaWiki
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "$API_Location");
curl_setopt($ch, CURLOPT_TIMEOUT, 500);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
'Content-Type: application/x-www-form-urlencoded',
'Content-Length: ' .strlen($PostFields))
);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, "$PostFields");
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_COOKIEJAR, $CookieFilePath);
curl_setopt($ch, CURLOPT_COOKIEFILE, $CookieFilePath);
$Result = curl_exec($ch);
// note: call curl_exec() only once − calling it again repeats the POST
if ($Result === false) echo '<br>Curl error: ' . curl_error($ch) . '<br>';
curl_close($ch); // curl closed
$ResultSerialized = json_decode($Result,true);
$Token = $ResultSerialized["query"]["tokens"]["logintoken"];
// cookie must be set using the session id from the first response
$_SESSION["logintoken"] = $Token;
// How can I get the session id? (note: the wiki's session id arrives in a
// Set-Cookie response header and is stored in the cookie jar;
// session_id() below is only this script's own PHP session, not the wiki's)
$sessionid = session_id();
$_SESSION["sessionid"] = $sessionid;
setcookie("${CookiePrefix}_Session",$sessionid , $expire, '/', $Domain);
setcookie("${CookiePrefix}UserName",$username,$expire,'/',$Domain);
setcookie("${CookiePrefix}Token", $_SESSION["logintoken"], $expire, '/', $Domain);
// second http post to finish sign in
$ch = curl_init();
// the login token contains characters such as '+' and '\', so every value
// must be URL-encoded before being put into the POST body
$PostFields = "action=login&lgname=" . urlencode($username) . "&lgpassword=" . urlencode($password) . "&lgtoken=" . urlencode($Token) . "&format=json";
curl_setopt($ch, CURLOPT_URL, "$API_Location");
curl_setopt($ch, CURLOPT_TIMEOUT, 500);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
'Content-Type: application/x-www-form-urlencoded',
'Content-Length: ' .strlen($PostFields))
);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, "$PostFields");
curl_setopt($ch, CURLOPT_POST, 1);
// note: the cookie jar already carries the wiki's session cookie from the
// first request − overriding it with a manually built cookie would break
// the login, so no CURLOPT_COOKIE here
curl_setopt($ch, CURLOPT_COOKIEJAR, $CookieFilePath);
curl_setopt($ch, CURLOPT_COOKIEFILE, $CookieFilePath);
$Result = curl_exec($ch);
if ($Result === false) echo '<br>Curl error: ' . curl_error($ch) . '<br>';
curl_close($ch); // curl closed
$ResultSerialized = json_decode($Result,true);
// set persistent cookies
//$LgToken = $ResultSerialized["query"]["tokens"]["logintoken"];
$LgUserID = $ResultSerialized["login"]["lguserid"];
$LgUserName = $ResultSerialized["login"]["lgusername"];
$lgstatus=$ResultSerialized["login"]["result"];
var_dump($lgstatus);
setcookie("${CookiePrefix}UserName", $LgUserName, $expire, '/', $Domain);
setcookie("${CookiePrefix}UserID", $LgUserID, $expire, '/', $Domain);
//setcookie("${CookiePrefix}Token", $Token, $expire, '/', $Domain);
// Delete cURL cookie
unlink($CookieFilePath);
?>
I also tried clientlogin via Postman, with a POST request exactly like the
example on mediawiki.org/wiki/API:Login, but the result was
"authmanager-authn-no-primary".
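In case it helps to see the exact shape, the request I'm attempting
amounts to something like this sketch (field names as documented for
clientlogin; the loginreturnurl value is just my local guess):

// sketch of the clientlogin POST body; $Token would be a fresh login
// token from action=query&meta=tokens&type=login, reusing the same cookies
$PostFields = http_build_query([
    'action'         => 'clientlogin',
    'username'       => $username,
    'password'       => $password,
    'loginreturnurl' => 'http://localhost/mediawiki/',
    'logintoken'     => $Token,
    'format'         => 'json',
]);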
Reference:
* stackoverflow.com/questions/14107523/how-do-i-log-into-mediawiki-using-php-…
* mediawiki.org/wiki/User:Krinkle/API_PHP_cURL_example
* mediawiki.org/wiki/API:Login/de/1_Beispiel
* mediawiki.org/wiki/API:Login
Dear all
I have one point of confusion and would be extremely grateful if some
clarification is offered.
https://en.wikipedia.org/w/api.php?action=query&format=xml&prop=revisions&titles=Sachin_Tendulkar&rvprop=ids%7Ctimestamp%7Ccomment%7Cuser&rvlimit=30&rvdiffto=prev
This is an API request I prepared using the MediaWiki sandbox tool to get
30 sequential edits of the page Sachin Tendulkar. If you run the query in
your browser you will see a diff field for each edit, which presumably is
the difference between the previous and the current revision. Here the
revision content field is not requested.
https://en.wikipedia.org/w/api.php?action=query&format=xml&prop=revisions&titles=Sachin_Tendulkar&rvprop=ids%7Ctimestamp%7Ccomment%7Cuser%7Ccontent&rvlimit=30&rvdiffto=prev
This is the same request with the content field requested.
What I see is that for all 30 revisions the content field is more or less
the same, and right now I am not able to understand what the purpose of
the content field is. If you know anything regarding this, please let me
know; I am also trying to understand this field.
Regards
Soumya