Hello. I'm Jong Beom Kim, web search product manager at Naver Corporation (www.naver.com). Please review the robots rules issue below.
=====================================================
We offer search results by collecting data from Wikipedia. However, transferring data via dumps does not satisfy our freshness requirements, so we want to collect the data through the API (https://www.mediawiki.org/wiki/API) instead.
Your site's robots rules currently restrict our API access (/w/api.php). Therefore, YETI (Naver Corporation's web robot crawler) would collect the data through the API while ignoring robots.txt (see the sketch below for the kind of access we mean). If this method is not allowed, can you tell us the correct process and policy for access?
I will wait for your guidance on the policy and process for collecting data.
==========================================
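For reference, here is a minimal sketch in Python of the kind of API access we mean, assuming the standard MediaWiki recentchanges module; the user-agent string and query parameters are illustrative placeholders, not YETI's real configuration:

    # Minimal sketch, not YETI's actual implementation: check robots.txt
    # before calling the standard MediaWiki API. The user-agent string and
    # query parameters are illustrative assumptions.
    import json
    import urllib.request
    import urllib.robotparser

    BASE = "https://en.wikipedia.org"
    USER_AGENT = "YETI-sketch/0.1 (illustrative example, not the real crawler)"
    API_URL = (BASE + "/w/api.php?action=query&list=recentchanges"
               "&rcprop=title%7Ctimestamp&rclimit=5&format=json")

    # Honour robots.txt first. On this site the /w/ path is disallowed,
    # which is exactly the restriction described in the message above.
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(BASE + "/robots.txt")
    rp.read()
    if not rp.can_fetch(USER_AGENT, API_URL):
        raise SystemExit("robots.txt disallows this URL; ask the operators first")

    # If permitted, fetch the recent changes list for freshness.
    req = urllib.request.Request(API_URL, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    for change in data["query"]["recentchanges"]:
        print(change["timestamp"], change["title"])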
Best regards, and thanks.
-----Original Message-----
From: "Wikipedia information team" <info-en@wikimedia.org>
To: <jongbeom.kim@nhn.com>
Cc: <answers@wikimedia.org>
Sent: 2013-09-03 (Tue) 11:33:13
Subject: Re: [Ticket#2013090310000891] Questions about Wiki robots rule Policy
Dear 김종범,
For the best chance of a quick resolution to the issue you are having, you should email the mailing list whose team of volunteers handles technical matters relating to the MediaWiki software and interface. This team can be reached at mediawiki-l@lists.wikimedia.org.
I should note at this point that while this correspondence is private, emails to most Wikimedia mailing lists (including mediawiki-l) are public.
Yours sincerely,
Kosten Frosch