Due to a security issue,[1] the deprecated "gettoken" parameter to
action=block and action=unblock has been removed. Clients should use
action=tokens to fetch tokens of types "block" or "unblock" instead.
This also applies to the security release 1.21.2.
[1]: https://bugzilla.wikimedia.org/show_bug.cgi?id=49090
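For example, a client migrating off "gettoken" might do something along these
lines. This is only an illustrative sketch (the wiki URL, target user, and use
of Python's "requests" library are placeholders, not part of the change):

  import requests

  API_URL = "https://example.org/w/api.php"   # placeholder wiki
  session = requests.Session()
  # Note: login is omitted; the session must belong to a user with block rights.

  # Fetch a "block" token via action=tokens (this replaces the removed gettoken)
  data = session.get(API_URL, params={
      "action": "tokens",
      "type": "block",
      "format": "json",
  }).json()
  token = data["tokens"]["blocktoken"]

  # Use the token with action=block (sent by POST)
  result = session.post(API_URL, data={
      "action": "block",
      "user": "ExampleUser",   # placeholder target
      "reason": "example",
      "token": token,
      "format": "json",
  }).json()
  print(result)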
--
Brad Jorsch (Anomie)
Software Engineer
Wikimedia Foundation
_______________________________________________
Mediawiki-api-announce mailing list
Mediawiki-api-announce(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce
Hello.
I'm Jong Beom Kim, web search product manager at Naver Corporation (www.naver.com).
Please check the robots rules issue described below.
=====================================================
We offer search results based on data collected from Wikipedia.
However, data obtained from the dumps does not satisfy our freshness requirements,
so we would like to collect the data through the API (https://www.mediawiki.org/wiki/API) instead.
Your sites' robots rules restrict our API access (/w/api.php).
Therefore, YETI (Naver Corporation's web robot crawler) would collect the data via the API, ignoring robots.txt.
If this approach is not allowed, could you tell us the correct process and policy for access?
We will wait for your guidance on the policy and process for collecting data.
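To be concrete, the kind of request we have in mind would look roughly like the
sketch below (the endpoint, timestamps, and use of Python's "requests" library
are only illustrative; we would of course follow whatever access policy you specify):

  import requests

  API_URL = "https://en.wikipedia.org/w/api.php"   # illustrative endpoint

  # Poll recent changes so we only re-fetch pages that actually changed
  changes = requests.get(API_URL, params={
      "action": "query",
      "list": "recentchanges",
      "rcend": "2013-09-01T00:00:00Z",   # oldest change we still need (illustrative)
      "rclimit": "50",
      "format": "json",
  }).json()

  # Fetch the current wikitext of each changed page
  for rc in changes["query"]["recentchanges"]:
      page = requests.get(API_URL, params={
          "action": "query",
          "prop": "revisions",
          "rvprop": "content",
          "titles": rc["title"],
          "format": "json",
      }).json()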
==========================================
Best regards.
Thanks
-----Original Message-----
From: "Wikipedia information team"<info-en(a)wikimedia.org>
To: <jongbeom.kim(a)nhn.com>;
Cc: <answers(a)wikimedia.org>;
Sent: 2013-09-03 (Tue) 11:33:13
Subject: Re: [Ticket#2013090310000891] Questions about Wiki robots rule Policy
Dear 김종범,
For the best chance of a quick resolution to the issue you are having, you should email the mailing list whose team of volunteers looks after technical matters relating to the MediaWiki software and interface. This team can be reached at mediawiki-l(a)lists.wikimedia.org.
I should note at this point that while this correspondence is private, emails to most Wikimedia mailing lists (including mediawiki-l) are public.
Yours sincerely,
Kosten Frosch
--
Wikipedia - https://en.wikipedia.org/
---
Disclaimer: all mail to this address is answered by volunteers, and responses are not to be considered an official statement of the Wikimedia Foundation. For official correspondence, please contact the Wikimedia Foundation by certified mail at the address listed on https://www.wikimediafoundation.org/
09/03/2013 02:04 - 김종범 wrote:
> Hello.
>
> I'm Jong Beom Kim, web search product manager at Naver Corporation (www.naver.com).
>
> We offer search results based on data collected from Wikipedia.
> However, data obtained from the dumps does not satisfy our freshness requirements,
> so we would like to collect the data through the API (https://www.mediawiki.org/wiki/API)
> instead.
>
> Your sites' robots rules restrict our API access (/w/api.php).
> Therefore, YETI (Naver Corporation's web robot crawler) would collect the data via the
> API, ignoring robots.txt.
> If this approach is not allowed, could you tell us the correct process and policy
> for access?
>
> We will wait for your guidance on the policy and process for collecting data.
>
> Thank you.
>
>
>
> Kim Jong Beom
> User DB Search Team / Assistant Manager
>
> 4th FL., NAVER Green Factory, 178-1 Jeongja-dong, Bundang-gu, Seongnam-si,
> Gyeonggi-do, KOREA
> Tel 031-784-2718
> Email jongbeom.kim(a)nhn.com
>