Thanks for the suggestions, everyone. I figured out my own approach
that works pretty well, adapted from the JavaScript in the Wikipedia
widget for Mac OS X. It's a data-scraping approach that pulls in a
page from $url and strips off the header and footer. The advantage to
this approach is that it doesn't require any knowledge of the
MediaWiki codebase. Here's the basic PHP code:
// Fetch the raw page, then print just the article body.
$article = file_get_contents($url);
echo processRawHtml($article);

function processRawHtml($article) {
    // Keep everything from the page heading through the end-of-content
    // marker that MediaWiki emits, then re-close the content <div>.
    $start = strpos($article, '<h1 class="firstHeading">');
    $end = strpos($article, '<!-- end content -->');
    $article = substr($article, $start, $end - $start) . '</div>';
    return $article;
}
Of course, this needs some further elaboration, such as a
search-and-replace to convert local hyperlinks to their full URLs.
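The search-and-replace step might look something like the sketch below. It's only an illustration, not part of the original code: the `expandLocalLinks` helper name and the `$base` URL are assumptions, and it only handles the common `href="/wiki/..."` form of local links.

```php
<?php
// Hypothetical helper: rewrite local wiki links (href="/wiki/...") into
// absolute URLs. $base is an assumption -- substitute your wiki's host.
function expandLocalLinks($html, $base) {
    return preg_replace(
        '/href="\/(wiki\/[^"]*)"/',   // match a root-relative wiki link
        'href="' . $base . '/$1"',    // prepend the site's base URL
        $html
    );
}

echo expandLocalLinks('<a href="/wiki/PHP">PHP</a>', 'http://en.wikipedia.org');
// -> <a href="http://en.wikipedia.org/wiki/PHP">PHP</a>
```

You would call this on the string returned by processRawHtml() before echoing it.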
--Sheldon Rampton