Hi all!
I've been studying the API and had a look at the extensions; I have some
questions about the proper way to style the HTML obtained for an article.
I am looking at the parse API.
I wonder whether contentformat or contentmodel can do the trick to obtain an
article with "cleaner" HTML or the matching CSS, so that it can be re-styled.
Unfortunately, I have not been able to use these parameters:
http://en.wikipedia.org/w/api.php?action=parse&page=New%20Jersey&contentfor…
Could you please give me an example so that I understand what they do?
Alternatively, I could use the mobileformat parameter, which seems to return
an adapted and cleaner HTML.
But still, I would need to find out the scheme of ids and classes used in
the HTML in order to re-style it.
Where could I find such a scheme?
I would also like to get rid of the [edit] or [update] elements: I want the
article in "read mode" only.
Is there perhaps a parameter providing this output?
If any of you have ever seen the "Dictionary" application on a MacBook
(pointing at Wikipedia), that is the style I am aiming for.
Am I going in the right direction?
Do you have any remarks or suggestions?
I was thinking of applying a CSS to the obtained HTML; if there were a JSON
output with just "bare" elements (such as tables, images, paragraphs,
external links, internal links, ...) it would be awesome...
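To be concrete, this is the kind of call I am experimenting with (just a rough
sketch: I am assuming prop=text|links|externallinks|images of action=parse,
that mobileformat comes from the MobileFrontend extension and may not always
be available, and that the [edit] links can be hidden through their CSS class,
which I would still have to check against the returned markup):

    import json
    import urllib.parse
    import urllib.request

    API = "http://en.wikipedia.org/w/api.php"

    def parse_page(title):
        # Ask action=parse for the rendered HTML plus "bare" element lists.
        params = {
            "action": "parse",
            "page": title,
            "prop": "text|links|externallinks|images|sections",
            "format": "json",
            # "mobileformat": "html",  # MobileFrontend only; the shape of the
            #                          # returned text may differ when it is set
        }
        url = API + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            return json.loads(resp.read().decode("utf-8"))["parse"]

    data = parse_page("New Jersey")
    html = data["text"]["*"]          # rendered HTML, to be re-styled with my own CSS
    internal_links = data["links"]    # internal links as "bare" JSON entries
    external_links = data["externallinks"]
    # The [edit] links could then be hidden with CSS on the wrapping span
    # (class "editsection" or "mw-editsection", depending on the MediaWiki version).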
As always, thank you very much for sharing ideas.
--
Luigi Assom
Skype contact: oggigigi
Hello,
I read that there is no API to obtain the main image of Wikipedia articles,
but I also noticed that many of the topics about this on Stack Overflow are a
year old. Is there any progress on this?
Also, is it possible to obtain thumbnails of images (at a specified size) for
multiple pages?
If there is no way to obtain the first image appearing in the article, which
way would you suggest to pick an image from the set of images in a page,
while avoiding parsing the whole page, and still get the thumbnail?
This is important because I am trying to make sequential calls and I don't
want to query in parallel.
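For the thumbnails, this is roughly what I have tried so far (a rough sketch:
prop=images lists the files used on each page, and prop=imageinfo with
iiurlwidth returns a thumbnail URL; note that prop=images is not ordered by
position in the article, so taking the first entry is only a guess at the
"first" image):

    import json
    import urllib.parse
    import urllib.request

    API = "http://en.wikipedia.org/w/api.php"

    def api_get(params):
        params["format"] = "json"
        url = API + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            return json.loads(resp.read().decode("utf-8"))

    # One request lists the files used on several pages at once.
    pages = api_get({
        "action": "query",
        "titles": "New Jersey|Stonehenge",
        "prop": "images",
        "imlimit": "max",
    })["query"]["pages"]

    # Then ask imageinfo for a 100px-wide thumbnail URL of one file per page.
    for page in pages.values():
        files = page.get("images", [])
        if not files:
            continue
        info = api_get({
            "action": "query",
            "titles": files[0]["title"],
            "prop": "imageinfo",
            "iiprop": "url",
            "iiurlwidth": "100",
        })["query"]["pages"]
        for p in info.values():
            print(page["title"], p["imageinfo"][0]["thumburl"])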
Thank you for sharing ideas and state of the art!
Luigi
--
Luigi Assom
Skype contact: oggigigi
Hi, I am requesting a number of pages to be parsed on Wikipedia. Generally I
am being returned the text, as expected. For some higher-traffic pages, I am
instead receiving a notice about it being cached. Could someone give me some
pointers as to how I am meant to handle this? I'm guessing I need to make a
new request to the parser cache, using the provided key (see below). Does
anyone have a link to a page that explains this behaviour? Notice text is
below…
Cheers
Chris Thomas
"<ol>
<li>REDIRECT <a href="/wiki/Stonehenge"
title="Stonehenge">Stonehenge</a></li>
</ol>
<!--
NewPP limit report
Preprocessor visited node count: 1/1000000
Preprocessor generated node count: 4/1500000
Post‐expand include size: 0/2048000 bytes
Template argument size: 0/2048000 bytes
Highest expansion depth: 1/40
Expensive parser function count: 0/500
-->
<!-- Saved in parser cache with key enwiki:pcache:idhash:518934-0!*!0!*!*!*!
* and timestamp 20130427125848 -->
"
Language links added by Wikidata are currently stored in the parser
cache and in the langlinks table in the database, which means they
work the same as in-page langlinks but also that the page must be
reparsed if these wikidata langlinks change. The Wikidata team has
proposed to remove the necessity for the page reparse, at the cost of
changing the behavior of the API with regard to langlinks.
Gerrit change 59997[1] (still in review) will make the following
behavioral changes:
* action=parse will return only the in-page langlinks by default.
Inclusion of Wikidata langlinks may be requested using a new
parameter.
* list=allpages with apfilterlanglinks will only consider in-page langlinks.
* list=langbacklinks will only consider in-page langlinks.
* prop=langlinks will only list in-page langlinks.
Gerrit change 60034[2] (still in review) will make the following
behavioral changes:
* prop=langlinks will have a new parameter to request inclusion of the
Wikidata langlinks in the result.
A future change, not coded yet, will allow for Wikidata to flag its
langlinks in various ways. For example, it could indicate which of the
other-language articles are Featured Articles.
At this time, it seems likely that the first change will make it into
1.22wmf3.[3] The timing of the second and third changes is less
certain.
[1]: https://gerrit.wikimedia.org/r/#/c/59997
[2]: https://gerrit.wikimedia.org/r/#/c/60034
[3]: https://www.mediawiki.org/wiki/MediaWiki_1.22/Roadmap
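For reference, a query like the sketch below currently returns every langlink
stored for a page, whether it was added in-page or by Wikidata (both end up in
the langlinks table today); once the first change above is deployed it will
return only the in-page links unless the new parameter is supplied:

    import json
    import urllib.parse
    import urllib.request

    API = "https://en.wikipedia.org/w/api.php"

    params = {
        "action": "query",
        "titles": "New Jersey",
        "prop": "langlinks",
        "lllimit": "max",
        "format": "json",
    }
    url = API + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        data = json.loads(resp.read().decode("utf-8"))

    for page in data["query"]["pages"].values():
        for ll in page.get("langlinks", []):
            print(ll["lang"], ll["*"])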
--
Brad Jorsch
Software Engineer
Wikimedia Foundation
_______________________________________________
Mediawiki-api-announce mailing list
Mediawiki-api-announce(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce
Hello, All.
I have a request here:
I am listening to the recent changes list, and for the "page restore" action I
receive a log entry like this:
<rc type="log" ns="0" title="William Pierce, Jr." rcid="565524106"
pageid="38932317" revid="0" old_revid="0" user="Secret" oldlen="0"
newlen="0" timestamp="2013-03-27T04:02:12Z" comment="5 revisions restored:
keep the original redirect" logid="48107996" logtype="delete"
logaction="restore"/>
It says that 5 revisions were restored, but it does not tell me which 5
revisions they were.
However, in the log for "revision restore/delete", we can get the list of the
affected revisions.
Would it be possible to change the API so that it supports this as well?
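For what it's worth, this is how I currently look at those log entries through
the API (a rough sketch with list=logevents; the restore entry only carries the
revision count in its comment, which is exactly the limitation I am asking
about):

    import json
    import urllib.parse
    import urllib.request

    API = "https://en.wikipedia.org/w/api.php"

    params = {
        "action": "query",
        "list": "logevents",
        "letype": "delete",                  # covers both delete and restore entries
        "letitle": "William Pierce, Jr.",
        "lelimit": "10",
        "format": "json",
    }
    url = API + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        data = json.loads(resp.read().decode("utf-8"))

    for entry in data["query"]["logevents"]:
        # For logaction="restore" the comment says e.g. "5 revisions restored: ..."
        # but the individual revision ids are not listed anywhere in the entry.
        print(entry["action"], entry["timestamp"], entry.get("comment", ""))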
Thanks.
Hi,
Is it possible to use the API on Wikidata, for example to remove or update
some interwiki links? How do you do that?
I'd like to create a few bot tools for deleting / renaming categories, and
I'd need to update the interwikis on Wikidata.
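Something like this is what I imagine (only a rough sketch, assuming the
Wikibase wbsetsitelink module; I have not tried it yet, and the exact
parameters plus the login / token handling would need checking against the
Wikidata API documentation):

    import json
    import urllib.parse
    import urllib.request

    API = "https://www.wikidata.org/w/api.php"

    def wikidata_post(params):
        data = urllib.parse.urlencode(params).encode("utf-8")
        with urllib.request.urlopen(API, data) as resp:
            return json.loads(resp.read().decode("utf-8"))

    # Update (or remove, by sending an empty title) the enwiki sitelink of an
    # item. A real bot would first log in and fetch an edit token via the API;
    # "TOKEN" below is just a placeholder for that.
    result = wikidata_post({
        "action": "wbsetsitelink",
        "site": "frwiki",                  # identify the item by one of its sitelinks...
        "title": "Catégorie:Exemple",      # ...e.g. the category being renamed
        "linksite": "enwiki",              # the sitelink to change
        "linktitle": "Category:New name",  # an empty string would remove it
        "token": "TOKEN",
        "format": "json",
    })
    print(result)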
Thanks
Nico
Hello!
I have a problem here:
Can the "restore page" and "delete page" actions cancel each other out?
I.e., deleting a page and then restoring it should be equivalent to doing
nothing to the page.
I was listening to the recent changes and had seen more than 5 revisions of
this page:
http://en.wikipedia.org/w/index.php?title=William_Pierce,_Jr.&redirect=no
Then I got a "delete page" action, and later a "restore page" action, as you
can see from the log:
<rc type="log" ns="0" title="William Pierce, Jr." rcid="565524106"
pageid="38932317" revid="0" old_revid="0" user="Secret" oldlen="0"
newlen="0" timestamp="2013-03-27T04:02:12Z" comment="5 revisions restored:
keep the original redirect" logid="48107996" logtype="delete"
logaction="restore"/>
It only says that 5 revisions were restored, but does not specify which
revisions they were.
In my opinion, the "restore page" and "delete page" actions should cancel each
other out, just like the restore and delete revision actions do.
Ok, I've redone the whole shebang, hoping this'll pass muster. If not, I
can do a larger refactor of the whole ConfirmEdit extension, maybe make a
fresh one and introduce better machine-readable hook points into the API
and form.
But I hope this'll do for now. :)
> MediaWiki core: https://gerrit.wikimedia.org/r/53793
> ConfirmEdit ext: https://gerrit.wikimedia.org/r/53794
>
> test rig: https://github.com/brion/mw-createaccount-test
Major changes from previous approach:
* instead of falling through if in API mode (dangerous if using the new
extension on old core), we check the captcha in the same consistent place
in a hook from LoginForm.
* We add a fairly generic hook that allows extensions to add return data
via the createaccount API if they threw an abort instead of sending a
generic error. This is used to append the captcha data.
* We return result='needcaptcha' explicitly if we need to pass a captcha.
* Captcha data is not available until after you have a token, so this
requires making two requests if you want to show a captcha before prompting
for username/password (see the sketch below).
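Roughly, the client flow would look like this (just a sketch of the proposed
behaviour, not final: I'm using the wpCaptchaId / wpCaptchaWord field names
from the test rig, and the shape of the returned token / captcha data may
still change in review):

    import json
    import urllib.parse
    import urllib.request

    API = "http://localhost/w/api.php"   # e.g. the test wiki from the rig above

    def createaccount(extra):
        fields = {"action": "createaccount", "name": "TestUser",
                  "password": "secret123", "format": "json"}
        fields.update(extra)
        data = urllib.parse.urlencode(fields).encode("utf-8")
        with urllib.request.urlopen(API, data) as resp:
            return json.loads(resp.read().decode("utf-8"))["createaccount"]

    # Request 1: no token yet, so the API hands one back.
    first = createaccount({})
    token = first["token"]

    # Request 2: retry with the token; if ConfirmEdit wants a captcha we get
    # result='needcaptcha' plus the captcha data appended via the new hook.
    second = createaccount({"token": token})
    if second["result"] == "needcaptcha":
        captcha = second["captcha"]               # id plus question or image URL
        answer = input(captcha.get("question", "captcha answer: "))
        # Request 3: same call again, now including the captcha answer
        # (field names follow the test rig: wpCaptchaId / wpCaptchaWord).
        third = createaccount({"token": token,
                               "wpCaptchaId": captcha["id"],
                               "wpCaptchaWord": answer})
        print(third["result"])
    else:
        print(second["result"])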
If y'all would prefer totally fresh interfaces with a consistent
machine-readable API... I can do that too, but it'd be spiffy if we could
do the less invasive change first. :)
-- brion
On Mon, Mar 25, 2013 at 1:38 AM, S Page <spage(a)wikimedia.org> wrote:
> On Thu, Mar 14, 2013 at 3:55 PM, Brion Vibber <bvibber(a)wikimedia.org>
> wrote:
>
> > MediaWiki core: https://gerrit.wikimedia.org/r/53793
> > ConfirmEdit ext: https://gerrit.wikimedia.org/r/53794
> >
> > So far I've tested it with the default 'math captcha' mode, with this
> test
> > rig: https://github.com/brion/mw-createaccount-test
>
> This is great to see.
>
> Using your test rig or Special:APISandbox, the API response warns about
> "Unrecognized parameters: 'wpCaptchaId', 'wpCaptchaWord'" when I get
> the captcha wrong.
>
> It seems if the user gets the captcha wrong, there's no explicit
> indication like captcha-createaccount-fail ('Incorrect or missing
> confirmation code.'). Instead the API reports a generic Failure
> result, and the UI presents a new captcha.
>
> ConfirmEdit has a getMessage() to provide action-specific text like
> fancycaptcha-createaccount. Perhaps the API should pass that back as
> well. Otherwise the UI has to know the details of the captcha in use
> so it can get a message for it.
>
> The current CreateAccount form submission to Special:UserLogin reports
> many form errors like username exists, password wrong, etc. before it
> runs the AbortNewAccount hook where ConfirmEdit checks the captcha.
> But APICreateAccount runs the APICreateAccountBeforeCreate hook early,
> before it dummies up a login form and calls the same validation. So
> users will go through the frustration of getting the captcha right
> before being told their username isn't available or their password
> isn't long enough.
>
> There's also the weirdness that ApiCreateAccount winds up checking the
> CAPTCHA twice. AIUI, here's the program flow:
>
> ApiCreateAccount()
> Runs APICreateAccountBeforeCreate hook (captcha may abort)
> Creates a login form and calls $loginForm->addNewaccountInternal();
> addNewaccountInternal():
> Does a bunch of form validation
> Runs AbortNewAccount hook (captcha may abort, also
> TitleBlacklist, AntiSpoof, etc. may abort)
>
> If ApiCreateAccount() could tell there was a captcha failure within
> addNewaccountInternal and could ask the captcha to addCaptchaAPI() to
> the result, then we wouldn't need the new APICreateAccountBeforeCreate
> hook.
>
> It would be nice if captcha was always checked on its own hook instead
> of sharing a hook with other extensions. That would let a future
> validation API run the username past TitleBlacklist and AntiSpoof
> without getting shot down by the captcha.
>
> Cheers,
> --
> =S Page software engineer on E3
>