On Sat, November 22, 2008 11:23 pm, Chris Watkins wrote:
Is there a way I could run the bot, changing the regular pages, and
capturing the names of all the redirect pages in a file? There might only
be 50 or 100 such pages, and I could find another way to handle those
(make it into a list of links, open every page, and copy the names of
the target pages to a file one by one).
The easiest way is to add
-log:abc.txt
to the command. After running, open logs/abc.txt, which will contain
something like:
valhallasw@elladan:~/pywikipedia/trunk/pywikipedia/logs$ cat uit.txt
Getting 3 pages from wikipedia:nl...
>>> Gebruiker:Valhallasw/dp <<<
Current categories:
Adding [[Categorie:Test]]
Changing page [[nl:Gebruiker:Valhallasw/dp]]
WARNING: Gebruiker:Valhallasw-bot is redirect to Gebruiker:Valhallasw. Ignoring.
Dumping to category.dump.bz2, please wait...
Then search for all lines containing 'is redirect to', or use (for example)
grep and sed to get a tab-separated list:
valhallasw@elladan:~/pywikipedia/trunk/pywikipedia/logs$ grep uit.txt -e
'is redirect to' | sed -e 's/WARNING: \(.*\?\) is redirect to \(.*\?\).
Ignoring./\1\t\2/'
Gebruiker:Valhallasw-bot Gebruiker:Valhallasw
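If grep and sed aren't handy (e.g. on Windows), the same extraction can be
done with a short Python script. This is just a sketch: the regular
expression mirrors the sed expression above, and reading the actual
logs/abc.txt file is left to you.

```python
import re

# Pull (redirect page, target page) pairs out of the bot's log text.
# The pattern matches the same WARNING lines that the grep/sed
# one-liner above does.
REDIRECT_RE = re.compile(r'WARNING: (.*?) is redirect to (.*?)\. Ignoring\.')

def extract_redirects(log_text):
    """Return a list of (redirect_page, target_page) tuples."""
    return REDIRECT_RE.findall(log_text)

# Example with one log line, printed tab-separated like the sed output:
sample = ("WARNING: Gebruiker:Valhallasw-bot is redirect to "
          "Gebruiker:Valhallasw. Ignoring.")
for source, target in extract_redirects(sample):
    print('%s\t%s' % (source, target))
```

To process the real log, read logs/abc.txt into a string and pass it to
extract_redirects().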
Good luck!
--valhallasw