Hello,
I am writing a Java program to extract the abstract of a Wikipedia page
given the title of the page. I have done some research and found out that
the abstract will be in rvsection=0.
So, for example, if I want the abstract of the 'Eiffel Tower' wiki page, I
query the API in the following way:
http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Eiffel…
and parse the XML data we get back, taking the wikitext in the tag <rev
xml:space="preserve">, which represents the abstract of the Wikipedia page.
But this wikitext also contains the infobox data, which I do not need. I
would like to know if there is any way I can remove the infobox data and
get only the wikitext related to the page's abstract, or if there is an
alternative method by which I can get the abstract of the page directly.
Looking forward to your help.
Thanks in advance,
Aditya Uppu
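
(For illustration, a minimal Python sketch of one way to drop a leading
infobox from the section-0 wikitext, by skipping over any {{...}} template
blocks at the start of the text. This is a heuristic, not an API feature,
and the function name is made up:)

def strip_leading_templates(wikitext):
    # Skip over any {{...}} blocks (infoboxes etc.) at the start of the
    # wikitext, counting nested braces so {{Infobox ... {{nested}} ... }}
    # is consumed as one block.
    text = wikitext.lstrip()
    while text.startswith('{{'):
        depth = 0
        i = 0
        while i < len(text):
            if text.startswith('{{', i):
                depth += 1
                i += 2
            elif text.startswith('}}', i):
                depth -= 1
                i += 2
                if depth == 0:
                    break
            else:
                i += 1
        text = text[i:].lstrip()
    return text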
Hi,
Since a few hours ago, the following kind of API request has been producing
an internal API error:
http://en.wikipedia.org/w/api.php?gbllimit=max&gbltitle=American&prop=info&…
resulting in
<error code="internal_api_error_MWException" info="Exception Caught:
Internal error in ApiResult::setElement: Attempting to add element
backlinks=500, existing value is 500" xml:space="preserve">
If you remove the "gbllimit=max" parameter or replace it with a fixed value
(gbllimit=200), it works.
Would it be possible to fix this bug?
Thanks
Nicolas
I'm trying to list all the pages in the user namespace on a wiki, NOT the
main namespace, but I can't figure this out. Here's what I'm using to query
a generator for "allpages":
{
'action':'query',
'generator':'allpages',
'gaplimit':'5',
'gapfrom':'Foo',
'prop':'links|categories',
'plnamespaces':'2'
}
but this doesn't include pages in the user namespace. How do I query pages
in a specific namespace using a generator?
Thank you!
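
(For what it's worth: when allpages runs as a generator, its own apnamespace
parameter becomes gapnamespace, and that is what selects which pages the
generator yields; plnamespace only filters which links prop=links lists.
A sketch of the query with that parameter added, keeping the same
placeholder values:)
{
'action':'query',
'generator':'allpages',
'gapnamespace':'2',
'gaplimit':'5',
'gapfrom':'Foo',
'prop':'links|categories'
}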
Hi dear support email group!
My name is Itamar. I'm trying to automatically log in users of our main
system, who are also registered with the same details on our MediaWiki,
so they don't have to log in twice: some sort of single sign-on.
Our installation is at http://wiki.softwareprojects.com/
So here goes:
I am trying to log in using the API and PHP.
I POST with cURL to api.php on our domain, and I get back the NeedToken XML.
Then I send the same login request again, this time with lgtoken set to
the token I got.
I still get the same NeedToken response, only with a different token, as
if the token I sent were wrong.
I have urlencoded all POST vars and values.
Later on I learned that I might also need to transfer the session ID, so I
took the session ID out of the first response's headers and sent it with
the second request, both as lgsessionid and as the session cookie.
And still I get the same NeedToken response instead of Success.
If you can help me, or point me to where I should take this question,
I'll appreciate it a lot!
Thank you very much!
Itamar
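
(A minimal Python 2 sketch of the two-step login, for comparison; the key
point is that the session cookie from the first response must be sent back
with the second request, which a shared cookie jar does automatically. The
URL and credentials are placeholders, and extract_lgtoken is a hypothetical
helper. In PHP/cURL the equivalent is pointing CURLOPT_COOKIEJAR and
CURLOPT_COOKIEFILE at the same file for both requests:)

import urllib, urllib2, cookielib

api = 'http://wiki.softwareprojects.com/api.php'  # placeholder path
jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

# Step 1: request a login token; the session cookie lands in the jar.
params = {'action': 'login', 'format': 'xml',
          'lgname': 'SomeUser', 'lgpassword': 'secret'}
reply = opener.open(api, urllib.urlencode(params)).read()
token = extract_lgtoken(reply)  # hypothetical: parse token="..." from the NeedToken XML

# Step 2: resend the same request with lgtoken added; the SAME opener
# automatically sends the session cookie from step 1 back to the wiki.
params['lgtoken'] = token
reply = opener.open(api, urllib.urlencode(params)).read()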
The root of my installation directory is http://www.WEBSITE.com/wiki/, and
index.php is located in that directory. Since I have other software packages
installed on my web server, they are all in their respective subdirectories;
/wiki/ is the one for MediaWiki.
On Tue, Jul 27, 2010 at 1:44 AM,
<mediawiki-api-request(a)lists.wikimedia.org> wrote:
>
> the api.php file is located in the same place as your index.php, the
> so-called "root"; usually not a URL with /wiki/.
>
> example:
> http://foo/api.php
>
> not
> http://foo/wiki/api.php
>
I'm using MediaWiki 1.15.4 on my wiki, but I can't access the API from a
script for some reason. I'm using the Python wikitools library, and I can
access Wikipedia's API just fine, and several other wikis'... but not the one
for my wiki. I get this error (it's a Python error, so it might not make much
sense):
Traceback (most recent call last):
  File "<pyshell#2>", line 1, in <module>
    site = wiki.Wiki("http://www.WIKISITE/wiki/api.php")
  File "C:\Python26\lib\site-packages\wikitools\wiki.py", line 79, in __init__
    self.setSiteinfo()
  File "C:\Python26\lib\site-packages\wikitools\wiki.py", line 97, in setSiteinfo
    info = req.query()
  File "C:\Python26\lib\site-packages\wikitools\api.py", line 140, in query
    data = self.__parseJSON(rawdata)
  File "C:\Python26\lib\site-packages\wikitools\api.py", line 254, in __parseJSON
    data.seek(0)
AttributeError: addinfourl instance has no attribute 'seek'
It's like it can't find the API page. When I browse to my wiki, I can
reach the API page, and like I said, I can access other wikis from that
same machine, same script, etc., just not mine.
Any help here? Thanks!
I know the link above is malformed, too; I edited it because I don't want to
publicize my wiki's URL right now.
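
(One debugging step worth trying, as a guess: fetch the API URL directly
from the same machine and inspect the raw bytes, to see whether the server
answers with JSON at all or with, say, an HTML error page or a redirect:)

import urllib2

url = 'http://www.WIKISITE/wiki/api.php?action=query&meta=siteinfo&format=json'
raw = urllib2.urlopen(url).read()
print raw[:300]  # should start with '{' if the API really answered in JSON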
Hello,
The only CentralAuth cookies I'm getting from the 2nd response referred
to at [0] have domain=.wikipedia.org, so the bot will be logged in on all
Wikipedia domains. However, this doesn't work for non-Wikipedia projects.
How do I get cookies for those domains as well?
Thanks,
-Mike
[0] http://www.mediawiki.org/wiki/API:Login#Construct_cookies
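
(One way to check which domains the bot's cookies actually cover, assuming
the login code fills a cookielib jar; this is just an inspection sketch:)

import cookielib

jar = cookielib.CookieJar()  # the jar your login code already fills
# ... log in through an opener built around this jar, then:
for cookie in jar:
    print cookie.domain, cookie.name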
Setting "bot":1 is working now. Thanks!
On Thu, Jul 22, 2010 at 4:28 PM,
<mediawiki-api-request(a)lists.wikimedia.org> wrote:
>
> Today's Topics:
>
> 1. Problem with mwclient page.save and large text files (Sal976)
> 2. Re: Problem with mwclient page.save and large text files
> (Roan Kattouw)
> 3. Re: Problem with mwclient page.save and large text files
> (Salvatore Loguercio)
> 4. Re: Problem with mwclient page.save and large text files
> (Roan Kattouw)
> 5. login from multiple (two) places? (Robert Ullmann)
> 6. Re: login from multiple (two) places? (Roan Kattouw)
> 7. Re: login from multiple (two) places? (Robert Ullmann)
> 8. Re: login from multiple (two) places? (Carl (CBM))
> 9. Mediawiki adds extra lines (rashi dhing)
> 10. how to set flags to on in the api? (Python Script)
> 11. Re: how to set flags to on in the api? (Brad Jorsch)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 13 Jul 2010 08:06:19 -0700 (PDT)
> From: Sal976 <salvatore.loguercio(a)googlemail.com>
> Subject: [Mediawiki-api] Problem with mwclient page.save and large
> text files
> To: mediawiki-api(a)lists.wikimedia.org
>
>
> Hi,
>
> I am using mwclient 0.6.4 (r93) to import some wiki pages from en.wikipedia
> to another wiki installation (presumably running MediaWiki 1.15).
>
> Everything works fine, except when I try to import 'big' pages, e.g.:
>
> http://en.wikipedia.org/wiki/Grb2
>
> content = the MediaWiki text to be imported (111,579 characters in this
> case)
>
> When I try to write this page (or pages of similar size), I get the
> following error:
>
> page = site.Pages['Grb2']
> page.save(content)
>
> Traceback:
>   File "<stdin>", line 1, in <module>
>   File "/usr/lib64/python2.6/site-packages/mwclient/page.py", line 142, in save
>     result = do_edit()
>   File "/usr/lib64/python2.6/site-packages/mwclient/page.py", line 137, in do_edit
>     **data)
>   File "/usr/lib64/python2.6/site-packages/mwclient/client.py", line 165, in api
>     info = self.raw_api(action, **kwargs)
>   File "/usr/lib64/python2.6/site-packages/mwclient/client.py", line 250, in raw_api
>     return json.loads(json_data)
>   File "/usr/lib64/python2.6/json/__init__.py", line 307, in loads
>     return _default_decoder.decode(s)
>   File "/usr/lib64/python2.6/json/decoder.py", line 319, in decode
>     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
>   File "/usr/lib64/python2.6/json/decoder.py", line 338, in raw_decode
>     raise ValueError("No JSON object could be decoded")
> ValueError: No JSON object could be decoded
>
>
> I wonder if this error is due to a server timeout or to exceeding a
> character limit.
> Any suggestions?
>
>
> Thank you,
> Sal
>
>
>
>
>
> ------------------------------
>
> Message: 2
> Date: Tue, 13 Jul 2010 17:20:23 +0200
> From: Roan Kattouw <roan.kattouw(a)gmail.com>
> Subject: Re: [Mediawiki-api] Problem with mwclient page.save and large
> text files
> To: "MediaWiki API announcements & discussion"
> <mediawiki-api(a)lists.wikimedia.org>
>
> 2010/7/13 Sal976 <salvatore.loguercio(a)googlemail.com>:
> > I wonder if this error is due to a server timeout or to exceeding a
> > character limit.
> > Any suggestions?
> >
> The server spends so much time parsing the wikitext you supplied that PHP's
> max execution time limit is exceeded, which results in an empty (0-byte)
> response. You can fix the error message by checking for a zero-length
> response before you try to JSON-decode it.
>
> Roan Kattouw (Catrope)
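>
> (A minimal sketch of that zero-length check in Python; 'response' stands
> for whatever file-like object your HTTP layer returned:)
>
> import json
>
> raw = response.read()
> if not raw:
>     raise RuntimeError('Empty API response, likely a PHP timeout')
> data = json.loads(raw)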
>
>
>
> ------------------------------
>
> Message: 3
> Date: Wed, 14 Jul 2010 01:39:12 -0700 (PDT)
> From: Salvatore Loguercio <salvatore.loguercio(a)googlemail.com>
> Subject: Re: [Mediawiki-api] Problem with mwclient page.save and large
> text files
> To: mediawiki-api(a)lists.wikimedia.org
>
>
> Thanks very much for the quick answer; I see the problem now.
> I wonder how these large wikitexts could be written to my target wiki.
> Is there a way to 'force' the PHP max execution time limit through the API?
> If not, I guess I will have to contact a sysop...
>
>
> Roan Kattouw wrote:
> >
> > 2010/7/13 Sal976 <salvatore.loguercio(a)googlemail.com>:
> >> I wonder if this error is due to a server timeout or to exceeding a
> >> character limit.
> >> Any suggestions?
> >>
> > The server spends so much time parsing the wikitext you supplied that
> > PHP's max execution time limit is exceeded, which results in an empty
> > (0-byte) response. You can fix the error message by checking for a
> > zero-length response before you try to JSON-decode it.
> >
> > Roan Kattouw (Catrope)
> >
> >
>
>
>
>
>
> ------------------------------
>
> Message: 4
> Date: Wed, 14 Jul 2010 12:22:58 +0200
> From: Roan Kattouw <roan.kattouw(a)gmail.com>
> Subject: Re: [Mediawiki-api] Problem with mwclient page.save and large
> text files
> To: "MediaWiki API announcements & discussion"
> <mediawiki-api(a)lists.wikimedia.org>
>
> 2010/7/14 Salvatore Loguercio <salvatore.loguercio(a)googlemail.com>:
> >
> > Thanks very much for the quick answer; I see the problem now.
> > I wonder how these large wikitexts could be written to my target wiki.
> > Is there a way to 'force' the PHP max execution time limit through the
> > API?
> > If not, I guess I will have to contact a sysop...
> >
> That would kind of defeat the purpose of the max execution time. AFAIK
> the page should still have been saved, just not parsed completely.
> You can check this in the history view.
>
> You can ask a sysop to raise the max exec time, or to import these
> large pages using the importTextFile.php maintenance script.
>
> Roan Kattouw (Catrope)
>
>
>
> ------------------------------
>
> Message: 5
> Date: Wed, 14 Jul 2010 14:39:58 +0300
> From: Robert Ullmann <rlullmann(a)gmail.com>
> Subject: [Mediawiki-api] login from multiple (two) places?
> To: "MediaWiki API announcements & discussion"
> <mediawiki-api(a)lists.wikimedia.org>
>
> Hi,
>
> I haven't been able to figure this out from the doc ...
>
> Can I log in a user (bot) from more than one place (IP address) at the
> same time? I have an impending need to run Interwicket from more than
> one place, as the primary will be unavailable at times. Is logging in
> on the other system going to log out the first, or some such? Any bad
> effects?
>
> Robert
>
>
>
> ------------------------------
>
> Message: 6
> Date: Wed, 14 Jul 2010 13:55:46 +0200
> From: Roan Kattouw <roan.kattouw(a)gmail.com>
> Subject: Re: [Mediawiki-api] login from multiple (two) places?
> To: "MediaWiki API announcements & discussion"
> <mediawiki-api(a)lists.wikimedia.org>
>
> 2010/7/14 Robert Ullmann <rlullmann(a)gmail.com>:
> > Hi,
> >
> > I haven't been able to figure this out from the doc ...
> >
> > Can I log in a user (bot) from more than one place (IP address) at the
> > same time? I have an impending need to run Interwicket from more than
> > one place, as the primary will be unavailable at times. Is logging in
> > on the other system going to log out the first, or some such? Any bad
> > effects?
> >
> I believe it will cause such an effect, yes. AFAIK the only reliable
> way to be logged in in two places at once is to log in in one place
> and transfer the information in the login cookies to the second place.
>
> I think you should just try it and see what happens; worst case you
> can detect you've been logged out and log in again, although that
> might not be very nice if you run stuff simultaneously from two
> places, as they'll spend a lot of time competing for logged-in status.
>
> Roan Kattouw (Catrope)
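>
> (A sketch of transferring the login cookies with Python 2's cookielib;
> the file name is a placeholder:)
>
> import cookielib, urllib2
>
> # On machine A: build the opener around a file-backed jar, log in, save.
> jar = cookielib.LWPCookieJar('login-cookies.txt')
> opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
> # ... perform the normal two-step API login with this opener, then:
> jar.save(ignore_discard=True)  # session cookies are marked discard, so keep them
>
> # Copy login-cookies.txt to machine B, then load it there before editing:
> jar2 = cookielib.LWPCookieJar('login-cookies.txt')
> jar2.load(ignore_discard=True)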
>
>
>
> ------------------------------
>
> Message: 7
> Date: Wed, 14 Jul 2010 15:54:00 +0300
> From: Robert Ullmann <rlullmann(a)gmail.com>
> Subject: Re: [Mediawiki-api] login from multiple (two) places?
> To: "MediaWiki API announcements & discussion"
> <mediawiki-api(a)lists.wikimedia.org>
>
> Interesting.
>
> I log in from two places and I get different session identifiers (not
> surprising), but the *same* token.
>
> Apparently the problem is that if one logs out from anywhere, it
> invalidates the session everywhere. Not too sure about the exact
> conditions, because testing this is rather painful. (;-) But bots don't
> usually log out, so that isn't an issue; they stay logged in for years.
> Might be okay.
>
> On Wed, Jul 14, 2010 at 2:55 PM, Roan Kattouw <roan.kattouw(a)gmail.com>
> wrote:
> > 2010/7/14 Robert Ullmann <rlullmann(a)gmail.com>:
> >> Hi,
> >>
> >> I haven't been able to figure this out from the doc ...
> >>
> >> Can I log in a user (bot) from more than one place (IP address) at the
> >> same time? I have an impending need to run Interwicket from more than
> >> one place, as the primary will be unavailable at times. Is logging in
> >> on the other system going to log out the first, or some such? Any bad
> >> effects?
> >>
> > I believe it will cause such an effect, yes. AFAIK the only reliable
> > way to be logged in in two places at once is to log in in one place
> > and transfer the information in the login cookies to the second place.
> >
> > I think you should just try it and see what happens; worst case you
> > can detect you've been logged out and log in again, although that
> > might not be very nice if you run stuff simultaneously from two
> > places, as they'll spend a lot of time competing for logged-in status.
> >
> > Roan Kattouw (Catrope)
> >
> >
>
>
>
> ------------------------------
>
> Message: 8
> Date: Thu, 15 Jul 2010 23:37:57 -0400
> From: "Carl (CBM)" <cbm.wikipedia(a)gmail.com>
> Subject: Re: [Mediawiki-api] login from multiple (two) places?
> To: "MediaWiki API announcements & discussion"
> <mediawiki-api(a)lists.wikimedia.org>
>
> On Wed, Jul 14, 2010 at 7:39 AM, Robert Ullmann <rlullmann(a)gmail.com>
> wrote:
> > Can I log in a user (bot) from more than one place (IP address) at the
> > same time?
>
> Yes; it works fine. I do it all the time with a script running on a
> remote server at the same time I am logged in on my local computer. I
> have never encountered any difficulties.
>
> - Carl
>
>
>
> ------------------------------
>
> Message: 9
> Date: Sun, 18 Jul 2010 14:36:51 +0300
> From: rashi dhing <rashi.dhing(a)gmail.com>
> Subject: [Mediawiki-api] Mediawiki adds extra lines
> To: mediawiki-api(a)lists.wikimedia.org
>
> Does anyone know how to remove the line breaks and the <p> and <pre> tags
> that MediaWiki seems to add internally while rendering a page? I am working
> on a project where any text added by the user in the wiki is processed and
> transferred to an editor, and while I am trying to maintain the alignment,
> MediaWiki tends to add extra spaces and lines.
> How can I take care of this?
>
> Thanks,
>
> Rashi D.
>
I'm trying to edit a page, and the API documentation says to "set the flag
to on" for items like the bot flag, minor flag, etc. How do I set these to
on? I'm using a Python interface to the API that sends parameters like
"action":"edit" in POST requests. I've tried "bot":"true", "bot":"on",
"bot":"1", etc., and I can't seem to get it to work. Thanks for the help!
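
(For reference, API boolean parameters are switched on simply by being
present in the request; the value is mostly ignored, and the only way to
turn a flag off is to omit the key entirely. A sketch of an edit POST,
where the title, text, and token are placeholders:)

params = {
    'action': 'edit',
    'title': 'Sandbox',
    'text': 'new page text',
    'token': edit_token,  # fetched beforehand, e.g. via action=query&prop=info&intoken=edit
    'bot': '1',    # present, so the edit is flagged as a bot edit
    'minor': '1',  # present, so the edit is flagged as minor
    'format': 'json'
}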