Adam Meyer wrote:
Is this doable?
There may be a better way to do this, but...
In PHP, you can get the wiki page as if it was a file:
// URL-encode the transcluded title and TOC markup in the query string.
$myPageUrl = 'http://example.com/api.php?action=parse&text='
    . urlencode( '{{:Main Page}}__TOC__' )
    . '&prop=sections&format=json';

// Set a 20-second socket timeout for the read, then restore the old value.
$currentTimeout = ini_set( 'default_socket_timeout', '20' );
$myPage = file( $myPageUrl );
ini_set( 'default_socket_timeout', $currentTimeout );
The response lists the page's section headers; count them to find the
last one.
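As a rough sketch of pulling the last section out of the API's JSON response: the reply contains a parse.sections array with one entry per header. The sample response below is a trimmed, illustrative stand-in for what the real API returns, not actual output from any wiki:

```php
<?php
// Illustrative sample of a parse/sections JSON response (not real API output).
$response = '{"parse":{"sections":['
    . '{"line":"Intro","index":"1"},'
    . '{"line":"See also","index":"2"}]}}';

// Decode to an associative array and walk down to the sections list.
$data = json_decode( $response, true );
$sections = $data['parse']['sections'];

// Count the entries; the last one is the final section header.
$count = count( $sections );
$last  = $sections[$count - 1]['line'];

echo "Last of $count sections: $last\n";   // prints "Last of 2 sections: See also"
```

In a real script, $response would be whatever the HTTP request above returned.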
I've found it useful to trap any timeouts from the web page access here
so they don't run into the overall program timeout, and to restore the
old timeout once the page has been read.
If you need to set up a session (e.g. to log in and stay logged in while
you fetch data from several pages), look up the cURL functions in the
online PHP manual.
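A minimal sketch of the cURL side, showing only the cookie-jar mechanics that keep a session alive between requests. The URL and the cookie-file path are placeholders, and an actual MediaWiki login needs an action=login token round-trip that is omitted here:

```php
<?php
// Keep a session across requests by saving and resending cookies.
$cookieFile = '/tmp/wiki_cookies.txt';   // placeholder path

$ch = curl_init();
curl_setopt( $ch, CURLOPT_URL,
    'http://example.com/api.php?action=query&meta=userinfo&format=json' );
curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );  // return the body as a string
curl_setopt( $ch, CURLOPT_COOKIEJAR, $cookieFile ); // write cookies here on close
curl_setopt( $ch, CURLOPT_COOKIEFILE, $cookieFile ); // send stored cookies back

$result = curl_exec( $ch );  // string on success, false on failure
curl_close( $ch );
```

Reusing the same handle (or the same cookie file) for later requests is what carries the logged-in session forward.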
Mike