There are times when this works better, but I disagree that it's a generally superior process. Patches are often not atomic; other patches can depend on them, sometimes without the author of either patch being aware. They might touch the same lines of code, in which case the revert will simply fail with a conflict. One patch can supply a function that the other patch calls, in which case the revert will break the site even worse (although a proper test suite should catch this). Or they might include interacting CSS rules or HTML structure, in which case the revert introduces a new regression that can only be detected by regression testing. Regression-testing reverts takes time - time you could instead spend regression-testing the fix. As for the commit history, reverts make it dirty in a different way, one that confuses git bisect and git log's pickaxe search.
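To make the conflict case concrete, here's a minimal sketch in a throwaway repository (the file name, commit messages, and identity are all illustrative): patch B edits the same line as patch A, so an attempt to revert A fails outright.

```shell
#!/bin/sh
# Illustrative only: build a scratch repo where patch B touches the same
# line as patch A, then try to revert patch A.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.org"
git config user.name "dev"

echo "original line" > module.js
git add module.js
git commit -qm "initial"

echo "patch A's change" > module.js
git commit -qam "patch A"

echo "patch B's change, on top of A" > module.js
git commit -qam "patch B"

# Reverting patch A now conflicts, because patch B rewrote the same line.
if git revert --no-edit HEAD~1 2>/dev/null; then
  result="revert applied cleanly"
else
  result="revert conflicted"
  git revert --abort
fi
echo "$result"
```

This prints "revert conflicted": git cannot find patch A's version of the line to undo, so someone has to resolve the conflict by hand - at which point the revert is no cheaper than a follow-on fix.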

On Friday, June 12, 2015, Sam Smith <samsmith@wikimedia.org> wrote:
If there is a regression introduced by a patch, then please revert that patch as soon as you've identified it and let the team know via Phabricator, email, or both. Reverting the commit will often be cheaper to do than fixing the regression in a follow-on patch, but there'll undoubtedly be exceptions, which we'll deal with (and learn from) as a team.

Fixing the regression in a follow-on patch means that:
• master won't be deployable until the patch has been reviewed, tested, and merged, which should be communicated to the Release Engineering team
• reviewers might have to drop what they're working on in order to get it reviewed
  • what if the original patch was lower priority?
  • we should be cognisant of the cost of context switching
• the commit history will be dirty
Master should always be deployable.
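For concreteness, the revert-first workflow described above can be sketched in a throwaway repository (all names here are illustrative). A revert is a new commit that restores the previous content, so history is preserved and master stays deployable while the real fix is prepared:

```shell
#!/bin/sh
# Illustrative only: revert a regressing commit and confirm the working
# tree is back to the known-good state.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.org"
git config user.name "dev"

echo "known-good behaviour" > feature.js
git add feature.js
git commit -qm "good state"

echo "regression" > feature.js
git commit -qam "patch that introduced a regression"

# git revert creates a new commit undoing the regressing one.
git revert --no-edit HEAD
content=$(cat feature.js)
echo "$content"
```

This prints "known-good behaviour": the file is back to its pre-regression content, and the regressing commit remains in history for Phabricator and the follow-up discussion.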