It takes time to evaluate all this, so in practice a thorough evaluation is done only for the handful of other editors one interacts with regularly.
But imagine you had an assistant who could do this for you: come up with a number and display it next to every signature you see while browsing RC, RFA, RFC and everywhere else.
It's just like eBay - nobody can spend minutes evaluating each seller, so a summary number has to do the job.
The big challenge is to come up with a system that doesn't bog down the servers and cannot be easily gamed. It doesn't need to be perfect the first time, just better than nothing. Slashdot's rating system for example evolved for several years.
I totally agree with the need for this. It's not about replacing human judgment; it's about giving people tools to improve their judgment. For any proposal to work, though, I think we need a way of indicating that an edit was "bad" in some sense - in eBay parlance, negative feedback. With that in place, you could start to count the number of good/bad edits per user. It would be incredibly handy to know that one user was at +11,000 (56% good) - in other words, very active but controversial, probably a pain in the arse - as compared to another at +300 (99% good) - new, but doing a great job.
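To make the idea concrete, here's a minimal sketch of that tally, assuming each piece of feedback is just a (user, good-or-bad) pair and that the headline number is good minus bad, eBay-style; the function name and data shape are my own invention, not any existing tool:

```python
from collections import Counter

def reputation_summary(feedback):
    """feedback: iterable of (user, is_good) pairs.

    Returns {user: (net_score, percent_good)} where net_score is
    good minus bad edits, eBay-style, and percent_good is the share
    of feedback that was positive.
    """
    good = Counter()
    total = Counter()
    for user, is_good in feedback:
        total[user] += 1
        if is_good:
            good[user] += 1

    summary = {}
    for user, n in total.items():
        bad = n - good[user]
        net = good[user] - bad
        pct = 100.0 * good[user] / n
        summary[user] = (net, pct)
    return summary

# Illustrative data only: three good edits and one bad one
# gives a net score of +2 and 75% good.
print(reputation_summary([("alice", True), ("alice", True),
                          ("alice", True), ("alice", False)]))
```

The real challenge, of course, is where the good/bad signal comes from and how to keep it from being gamed - the arithmetic is the easy part.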
Any system can be gamed - you just have to make it not worth anyone's while. Google's PageRank can be gamed, but it takes a lot of effort and is very difficult to do cheaply.
Steve