I've had an idea for an anti-spam system for a while. Blocks, CAPTCHAs, local filters: all the tricks we've been using end up not working well enough to deal with the spam on a lot of wikis.
I know this because I've been continually dealing with spam on a small, mostly dead wiki. SimpleAntiSpam, AntiBot, CAPTCHAs, TorBlock, AbuseFilter... Time after time I expand my filters more and more. But inevitably, a few days later, spam my filters don't cover comes through and I have to do it all over again.
I ended up having to deal with it again today, and then started writing out the details I've been mulling over for a while on a machine-learning based anti-spam system.
https://www.mediawiki.org/wiki/User:Dantman/Anti-spam_system
Of course, while I have the whole idea worked out for the UI, the backend, how to handle the service, etc., I haven't actually done the machine-learning part before. And naturally, just like Gareth, OAuth, and other things, this is just another one of my ideas I don't have the time and resources to pursue, and wish I had the financial backing to work on.
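For what it's worth, the classification core of a system like this doesn't have to be exotic. A minimal sketch, assuming nothing about the actual design on the wiki page above: a naive Bayes text classifier over edit text, trained on labeled spam and legitimate ("ham") edits. The training data, labels, and tokenization here are all illustrative placeholders, not anything from the real proposal.

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label) pairs.
    Returns per-class word counts and per-class document totals."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in docs:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the class maximizing log prior + log likelihoods,
    with add-one (Laplace) smoothing for unseen words."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    best, best_score = None, float("-inf")
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Toy training data, purely illustrative
training = [
    ("buy cheap pills now", "spam"),
    ("click here for free money", "spam"),
    ("updated the article on local history", "ham"),
    ("fixed a typo in the infobox", "ham"),
]
counts, totals = train(training)
print(classify("free pills click now", counts, totals))       # → spam
print(classify("fixed the history article", counts, totals))  # → ham
```

A real version would obviously need far richer features than bag-of-words (edit metadata, link counts, account age), but even this much already generalizes past the brittle pattern-matching that hand-maintained filters are stuck with.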