Hi, I just made a proposal on how to change the API to better support simple "continue" scenarios -- see http://lists.wikimedia.org/pipermail/mediawiki-api/2012-December/002768.html -- and would like to get some feedback from the pywiki community. Would this simplify internal API use inside pywiki? What are the biggest issues scripts run into when using the API?
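
For those who did not read the linked thread: the goal is that continuation should need no per-module logic on the client side. Roughly, the whole loop becomes something like the standalone sketch below -- this is not pywiki code, and the endpoint, the use of the requests library, and the exact shape of the 'continue' element are only for illustration:

import requests

API_URL = 'https://en.wikipedia.org/w/api.php'

def query_all(params):
    """Yield every raw result block until the server stops continuing."""
    params = dict(params, action='query', format='json')
    while True:
        data = requests.get(API_URL, params=params).json()
        yield data
        if 'continue' not in data:
            break
        # No module-specific logic needed -- just merge and repeat.
        params.update(data['continue'])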

On the pywiki side, I am thinking of reworking the query module in this direction and helping migrate all API requests through it. There would always be two levels for script authors: a low level, where the individual API parameters are known to the script writer and the result is returned as a dict, and a high level, where the most common API features are wrapped in methods, yet the speed stays almost the same as the low level (multiple data items are returned per web request).


pg1 = Page(u'Python')
pg2 = Page(u'Cobra')
pg3 = Page(u'Viper')
pages = [pg1, pg2, pg3]
params = {'prop': 'links', 'pllimit': 'max', 'titles': pages}

# QueryBlocks -- runs the query until there is no more "continue", returning
# the dictionary as-is from each web call.
for block in pywiki.QueryBlocks(params):
    pass  # process each raw result block (a plain dict)
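
Internally, QueryBlocks could be little more than a generator wrapped around the existing request code. A rough sketch -- do_request stands in for whatever low-level "send one web request, return the parsed dict" call pywiki already has, and the flat 'continue' element is assumed per the proposal:

def QueryBlocks(params, do_request):
    """Yield each raw API response dict until there is no more 'continue'."""
    params = dict(params)
    while True:
        data = do_request(params)        # one web call -> parsed JSON dict
        yield data
        if 'continue' not in data:
            break
        params.update(data['continue'])  # merge continuation values, repeat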


# QueryPages -- takes any query that returns a list of pages and yields one
# page at a time. The individual page data is merged across multiple API calls
# if it exceeds the limit. This method could also return pre-populated Page objects.

for page in pywiki.QueryPages(params):
    # Process one page at a time; the Page object will have its
    # links() property populated.
    pass
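
A possible sketch of the merging QueryPages would do on top of QueryBlocks -- the 'query'/'pages' layout is the standard API output, while the function and parameter names are hypothetical. The simplest version can only yield once all blocks have arrived, since a page's data may still grow on the next call; a smarter implementation could yield each page as soon as the server is done with it:

def QueryPages(params, do_request):
    """Yield one dict per page, with list props merged across result blocks."""
    pages = {}
    for block in QueryBlocks(params, do_request):
        for pageid, pagedata in block.get('query', {}).get('pages', {}).items():
            merged = pages.setdefault(pageid, {})
            for key, value in pagedata.items():
                if isinstance(value, list):
                    merged.setdefault(key, []).extend(value)   # e.g. 'links'
                else:
                    merged[key] = value                        # e.g. 'title'
    for pagedata in pages.values():
        yield pagedata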


# List* methods work with the list= API to request all available items matching the given parameters:

for page in pywiki.ListAllPages(start=u'T', getContent=True, getLinks=True):
    # Each page object will be pre-populated with its links and page content.
    # ('from' is a reserved word in Python, hence start= as the parameter name.)
    pass
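
Under the hood, a call like the one above would probably translate into a single parameter set and go through the same QueryPages machinery; when prop data such as links or content is requested, list=allpages would likely be switched to generator=allpages. The parameter names below follow the live API, but the mapping itself is only my guess at the implementation:

params = {
    'generator': 'allpages',
    'gapfrom': u'T',            # start=u'T'
    'gaplimit': 'max',
    'prop': 'links|revisions',  # getLinks=True / getContent=True
    'pllimit': 'max',
    'rvprop': 'content',
}
for page in pywiki.QueryPages(params):
    pass  # same merging as above, wrapped into pre-populated Page objects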


Thanks! Any feedback is helpful =)
--Yuri