I just wondered if anyone doing MediaWiki development had any experience in catching CSS regressions?
We have had a few issues recently in mobile land where we've made big CSS changes and broken buttons on hidden-away special pages - particularly now that we are involved in the development of MediaWiki UI and are moving mobile towards using it.
My vision of how this might work is an automated tool that visits a list of given pages in various browsers, takes screenshots of how they look, and then compares the images with the last known state. The tool checks how similar the images are and complains if they are not the same - this might be a comment on the Gerrit patch or an e-mail saying something user-friendly like "The Special:Nearby page on Vector looks different from how it used to. Please check everything is okay."
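To make that concrete, here's a rough sketch of the screenshot-and-compare loop in Python, assuming Selenium (driving a local Firefox) and Pillow are available - the page list, directory names, and similarity threshold are all made up for the example, not anything we have running:

import os

from PIL import Image, ImageChops
from selenium import webdriver

# Hypothetical page list and paths, purely for illustration.
PAGES = {
    "Special_Nearby_vector": "https://en.m.wikipedia.org/wiki/Special:Nearby",
}
BASELINE_DIR = "baselines"   # last known-good screenshots
CURRENT_DIR = "current"      # screenshots from this run
THRESHOLD = 0.01             # fraction of pixel change we tolerate


def diff_ratio(baseline, current):
    """Mean per-pixel difference between two screenshots, normalised to 0..1."""
    if baseline.size != current.size:
        return 1.0  # the layout changed enough to resize the page: flag it
    diff = ImageChops.difference(baseline.convert("L"), current.convert("L"))
    pixels = list(diff.getdata())
    return sum(pixels) / (255.0 * len(pixels))


def main():
    os.makedirs(CURRENT_DIR, exist_ok=True)
    driver = webdriver.Firefox()
    try:
        for name, url in PAGES.items():
            shot = os.path.join(CURRENT_DIR, name + ".png")
            driver.get(url)
            driver.save_screenshot(shot)

            baseline_path = os.path.join(BASELINE_DIR, name + ".png")
            if not os.path.exists(baseline_path):
                print("No baseline for %s; treating this run as the baseline." % name)
                continue
            ratio = diff_ratio(Image.open(baseline_path), Image.open(shot))
            if ratio > THRESHOLD:
                # In a real setup this would comment on the Gerrit patch or
                # send the friendly e-mail, rather than printing.
                print("%s looks different from how it used to (%.1f%% pixel "
                      "change). Please check everything is okay." % (name, ratio * 100))
    finally:
        driver.quit()


if __name__ == "__main__":
    main()

In practice the baselines would need updating whenever an intentional change lands, and the browser window size would have to be pinned per browser so that comparisons stay stable - but that's the basic shape of it.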
This would catch a host of issues and prevent a lot of CSS regression bugs.
Any experience in catching this sort of thing? Any ideas on how we could make this happen?