Hi,
Yesterday, a user reported a problem with my tool, WPCleaner [1]: getting
error messages "assertuserfailed - Assertion that the user is logged in
failed" when trying to save pages.
I also got them a few minutes ago; I restarted the tool and the problem
was gone.
Has there been any modification to the API or MediaWiki that could
explain this error message?
I'm pretty sure I haven't changed anything in the way I use the API, so
I'm wondering what the cause is.
Nico
[1] https://en.wikipedia.org/wiki/Wikipedia:WPCleaner
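For what it's worth, a common defensive pattern for clients that send
assert=user is to re-login and retry once when this error comes back. A
sketch (login() and do_edit() are hypothetical stand-ins for the
client's own code):

    # Sketch: retry once after re-login when the API reports
    # assertuserfailed. login() and do_edit() are hypothetical helpers
    # standing in for the client's own login and edit code.
    def save_with_retry(session, do_edit, login):
        resp = do_edit(session)            # request sent with assert=user
        if resp.get("error", {}).get("code") == "assertuserfailed":
            login(session)                 # session expired; log in again
            resp = do_edit(session)        # retry once
        return resp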
Hi,
I'm new to the API and this list, but I really appreciate that it's there.
I'm trying to create what I thought would be a simple query: "List all
recent changes of all pages, but don't show anon, minor or bot changes"
In particular, I want the pageid, page title and userid associated with
each change. Nice to have pre/post page size too. I've got the query
working in my codebase, and on the wikimedia api sandbox:
https://www.mediawiki.org/wiki/Special:ApiSandbox#action=query&list=recentc…
But even with what I think are the correct "rcshow" values, I still get
anon edits - both in the sandbox and on the live site
<https://en.wikipedia.org/w/api.php?action=query&list=recentchanges&format=j…>.
In fact, in some ways it seems like I'm *only* seeing anon edits. Any
thoughts as to what's going on? Am I misunderstanding how the API works?
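(For reference: rcshow=anon selects *only* anonymous edits, while
rcshow=!anon excludes them, which would match the symptom above. A sketch
of the intended query, with the rcprop values assumed from the
description:)

    # Sketch: recent changes excluding anonymous, minor, and bot edits.
    import requests

    params = {
        "action": "query",
        "list": "recentchanges",
        "rcshow": "!anon|!minor|!bot",      # "!" negates; plain "anon" selects only anon edits
        "rcprop": "ids|title|userid|sizes", # pageid, title, userid, old/new byte size
        "rclimit": "50",
        "format": "json",
    }
    resp = requests.get("https://en.wikipedia.org/w/api.php", params=params).json()
    for rc in resp["query"]["recentchanges"]:
        print(rc["pageid"], rc["title"], rc["userid"], rc["oldlen"], rc["newlen"])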
Thanks for any input!
Steve
The API has long had difficulty in reporting the redirects to a title:
the best that could be done was to use
list=backlinks&blfilterredir=redirects, but that has issues if the
redirect has additional wikilinks in the non-redirect content (e.g.
bug 57057[1]). It also had issues with file and category redirects
(e.g. bug 27621[2]).
With the merge of Gerrit change 104764,[3] the API has a new
prop=redirects to query the redirects table directly. There is also a
new list=allredirects, corresponding to existing modules such as
list=alllinks. This should be deployed to WMF sites with 1.23wmf16.[4]
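As a sketch (the title and parameter values here are just illustrative),
querying the new module might look like:

    # Sketch: list redirects pointing to a page via the new prop=redirects.
    import requests

    params = {
        "action": "query",
        "titles": "Main Page",   # illustrative target title
        "prop": "redirects",
        "rdlimit": "max",
        "format": "json",
    }
    resp = requests.get("https://en.wikipedia.org/w/api.php", params=params).json()
    for page in resp["query"]["pages"].values():
        for rd in page.get("redirects", []):
            print(rd["title"])   # a redirect whose target is this page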
In addition, since Gerrit change 105829[5] (deployed with 1.23wmf10[4]),
file and category redirects will no longer show up when using
list=imageusage or list=categorymembers, unless the target file or
category is also included in the non-redirect content. They will be
reported in list=backlinks instead.
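So to find file or category redirects, a query along these lines should
work (a sketch; the file title is hypothetical):

    # Sketch: file/category redirects are now reported via list=backlinks.
    import requests

    params = {
        "action": "query",
        "list": "backlinks",
        "bltitle": "File:Example.jpg",  # hypothetical file title
        "blfilterredir": "redirects",   # only redirects to this title
        "format": "json",
    }
    resp = requests.get("https://en.wikipedia.org/w/api.php", params=params).json()
    for bl in resp["query"]["backlinks"]:
        print(bl["title"])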
[1]: https://bugzilla.wikimedia.org/show_bug.cgi?id=57057
[2]: https://bugzilla.wikimedia.org/show_bug.cgi?id=27621
[3]: https://gerrit.wikimedia.org/r/#/c/104764/
[4]: https://www.mediawiki.org/wiki/MediaWiki_1.23/Roadmap
[5]: https://gerrit.wikimedia.org/r/#/c/105829/
--
Brad Jorsch (Anomie)
Software Engineer
Wikimedia Foundation
_______________________________________________
Mediawiki-api-announce mailing list
Mediawiki-api-announce(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce
Hello everyone,
I am Diwanshi Pandey, an OPW intern. I'd like your feedback on the course
I have created on Codecademy for the MediaWiki API, with the help of my
mentor Yuri Astrakhan.
A little insight:
The course is about parsing and querying the MediaWiki API.
Initially we created one course of 44 exercises, but according to
Codecademy's guidelines their courses are for beginners and should have
a maximum of 30 exercises each.
So we split it up into two courses:
One is Introduction to Wikipedia API
<http://www.codecademy.com/courses/web-beginner-en-vj9nh/0/1> and the
other is Wikipedia:Query API
<http://www.codecademy.com/courses/web-beginner-en-yd3lp/0/1>.
Also, due to API security and restrictions, we couldn't yet implement a
tutorial on "editing wiki pages through an API call" from a non-wiki
site. We are waiting until we find a good and easy way to demo that.
Feedback may include:
* Are the exercises easy to understand for novice users/developers?
* Are changes needed in the look of exercises?
* Are there any exercises that are unnecessary or go into too much depth?
* Anything else?
Thanks,
--
*Regards,*
*Diwanshi Pandey*
>
> On Feb 5, 2014 8:38 AM, "Zachary Harris" <zacharyharris(a)hotmail.com>
> wrote:
> >
> > 1) Is there a way to tell the Parser that you don't want the initial DOM
> > element wrapped in paragraph tags? (For example, if you are client-side
> > injecting the mw-api contents into already existing content and you
> > don't want a paragraph break there?) It seems this same question was
> > effectively asked at
> > http://www.mediawiki.org/wiki/API_talk:Parsing_wikitext and hasn't been
> > answered yet.
> > If the parser can't be configured in this way and I have to strip
> > possibly undesired opening tags myself, do I have any guarantees on what
> > to expect?
Not really. Even internally in core (in OutputPage and in the Message
class) we regex out the <p> when it's not wanted. (Icky!)
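If you need to do the same client-side, something like this sketch works
for the common single-paragraph case (the regex is an assumption, not the
exact expression core uses):

    # Sketch: strip a single wrapping <p>...</p> from parsed HTML.
    # Illustrative regex, not the one MediaWiki uses internally.
    import re

    def strip_outer_p(html: str) -> str:
        m = re.fullmatch(r'<p>(.*)</p>\s*', html.strip(), flags=re.DOTALL)
        # Only unwrap when there is no other top-level <p>, to avoid
        # producing unbalanced markup for multi-paragraph output.
        if m and '<p>' not in m.group(1):
            return m.group(1)
        return html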
>
> >
> > 2) Why do JSON results come back, for example, in the form of
> > parse.text['*'] rather than just parse.text? With format=XML the result
> > is just <parse><text>the stuff</text></parse>. Why the "*"? Sorry if
> > this question is dumb.
> >
From what I've heard, historical reasons mostly. I believe it's so that
the XML can be mechanically turned into JSON: the text body of an element
is treated like an attribute named "*", so that if the element had
attributes they could also be represented.
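So in a JSON client you end up with something like this (a sketch; the
page name is illustrative):

    # Sketch: the parsed HTML sits under the '*' key in the JSON result.
    import requests

    params = {"action": "parse", "page": "Main Page", "format": "json"}
    resp = requests.get("https://en.wikipedia.org/w/api.php", params=params).json()
    html = resp["parse"]["text"]["*"]   # text body exposed as the "*" "attribute"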
-bawolff
---------- Forwarded message ----------
From: Yuvi Panda <yuvipanda(a)gmail.com>
Date: Thu, Feb 20, 2014 at 9:05 PM
Hello!
Account creation[1] was enabled on Wikimedia wikis a few weeks ago.
There is a bug[2] in the implementation that returns status codes in
all lower case ('success', 'needtoken') rather than in CamelCase
('Success', 'NeedToken'), which is what all other parts of the API
use. There is a patch[3] that will make action=createaccount behave
similarly to the other actions.
If you have code that uses action=createaccount, you might have to
tweak your client to uppercase a few characters.
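Or compare case-insensitively, so the client works both before and after
the patch is deployed - a sketch (the field names are assumptions for
illustration):

    # Sketch: handle createaccount status codes regardless of casing.
    def handle_createaccount(resp: dict) -> str:
        status = resp.get("createaccount", {}).get("result", "")
        if status.lower() == "needtoken":
            return "fetch the token from the response and retry"
        if status.lower() == "success":
            return "account created"
        return "unexpected status: " + status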
[1]: https://www.mediawiki.org/wiki/API:Account_creation
[2]: https://bugzilla.wikimedia.org/show_bug.cgi?id=61663
[3]: https://gerrit.wikimedia.org/r/#/c/114473/
--
Yuvi Panda T
http://yuvi.in/blog
--
Brad Jorsch (Anomie)
Software Engineer
Wikimedia Foundation
_______________________________________________
Mediawiki-api-announce mailing list
Mediawiki-api-announce(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce
(Sorry for the double post, mediawiki-api subscribers)
---------- Forwarded message ----------
From: This, that and the other <at.light(a)live.com.au>
Date: Thu, Feb 13, 2014 at 5:46 AM
Subject: Modification to FlaggedRevs timestamp outputs
A patch is currently in Gerrit [1] to alter the way the FlaggedRevs
extension formats some timestamps in its API output.
Specifically, the "expiry" attribute generated by action=stabilize, and the
"protection_expiry" attribute generated by action=query&prop=flagged, are
changed to use an ISO-formatted timestamp (2014-02-13T11:00:00Z) in the
output, instead of the current internal format (20140213110000).
This change brings the timestamp formatting of FlaggedRevs into line with
the rest of the MediaWiki API, and allows external users to parse the
timestamp more easily.
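Clients that want to keep working across the change could accept both
forms - a sketch:

    # Sketch: parse either the ISO form (2014-02-13T11:00:00Z) or the
    # old internal form (20140213110000) during the transition.
    # ("infinity" values, if any occur, would need separate handling.)
    from datetime import datetime, timezone

    def parse_expiry(ts: str) -> datetime:
        fmt = "%Y-%m-%dT%H:%M:%SZ" if ts.endswith("Z") else "%Y%m%d%H%M%S"
        return datetime.strptime(ts, fmt).replace(tzinfo=timezone.utc)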
(This may well affect nobody at all: [2] comes to mind!)
TTO
--
[1] https://gerrit.wikimedia.org/r/#/c/76460/
[2] https://bugzilla.wikimedia.org/show_bug.cgi?id=44468#c5
_______________________________________________
Mediawiki-api-announce mailing list
Mediawiki-api-announce(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce
1) Is there a way to tell the Parser that you don't want the initial DOM
element wrapped in paragraph tags? (For example, if you are client-side
injecting the mw-api contents into already existing content and you
don't want a paragraph break there?) It seems this same question was
effectively asked at
http://www.mediawiki.org/wiki/API_talk:Parsing_wikitext and hasn't been
answered yet.
If the parser can't be configured in this way and I have to strip
possibly undesired opening tags myself, do I have any guarantees on what
to expect?
2) Why do JSON results come back, for example, in the form of
parse.text['*'] rather than just parse.text? With format=XML the result
is just <parse><text>the stuff</text></parse>. Why the "*"? Sorry if
this question is dumb.
-Zach