Okay. Methodology:
* Take the last 5 days of request logs;
* Filter them down to text/html requests, as a heuristic for non-API requests;
* Run them through the UA parser we use;
* Exclude spiders and anything that reported a valid browser;
* Aggregate the user agents that are left (rough sketch of these steps below);
* ???
* Profit
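
For concreteness, here's roughly what the filter/parse/exclude/aggregate steps look like in code. This is a sketch, not the actual job: it assumes a 1:1000-sampled, tab-separated log where the content type and raw user agent sit at guessed column positions, and it uses the uap-python ua-parser package, which may or may not be the parser we actually run.

import csv
import sys
from collections import Counter

from ua_parser import user_agent_parser  # pip install ua-parser

# Column positions below are guesses; adjust to the actual sampled-log schema.
MIME_COL = 10  # response content type
UA_COL = 13    # raw user agent string

counts = Counter()

with open(sys.argv[1], newline="") as f:
    for row in csv.reader(f, delimiter="\t"):
        if "text/html" not in row[MIME_COL]:
            continue  # heuristic: text/html ~= web UI, not API
        ua_string = row[UA_COL]
        parsed = user_agent_parser.Parse(ua_string)
        if parsed["device"]["family"] == "Spider":
            continue  # tagged as a crawler by the parser's regexes
        if parsed["user_agent"]["family"] != "Other":
            continue  # parsed as a recognised browser; not interesting here
        counts[ua_string] += 1

# Aggregate: the most common unrecognised, non-spider user agents.
for ua, n in counts.most_common(50):
    print(f"{n}\t{ua}")

The "Other" check is what should keep frameworks like DotNetWikiBot in the output, since the parser doesn't recognise them as browsers.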
It looks like there are a relatively small number of bots that browse/interact via the web interface. Ones I can identify include WPCleaner[0], which is semi-automated; something called "DigitalsmithsBot" that I can't find through WP or Google (could be internal, could be external); and Hoo Bot (run by User:Hoo man). My biggest concern is DotNetWikiBot, which is a general framework that could be masking multiple underlying bots and accounts for ~7.4m requests through the web interface in that time period.
Obvious caveat is obvious: the edits from these tools may actually come through the API, and they're just choosing to request content through the web interface for some weird reason. I don't know enough about the software behind each bot to comment on that. I can try explicitly looking for web-based edit attempts, but there would be far fewer observations in which the bots might appear, because the underlying dataset is sampled at a 1:1000 rate.
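
If I do go looking for web-based edit attempts explicitly, the cheapest proxy I can think of is URLs containing action=submit, since that's where MediaWiki's web editor posts the edit form. Again just a sketch against the same assumed log format as above; the final multiplication only undoes the 1:1000 sampling, which is also why the resulting counts will be noisy.

from collections import Counter
import csv
import sys

URL_COL = 8   # assumed column index of the requested URL
UA_COL = 13   # assumed column index of the raw user agent

edit_attempts = Counter()

with open(sys.argv[1], newline="") as f:
    for row in csv.reader(f, delimiter="\t"):
        if "action=submit" in row[URL_COL]:
            edit_attempts[row[UA_COL]] += 1

for ua, n in edit_attempts.most_common(20):
    print(f"~{n * 1000}\t{ua}")  # scale up from 1:1000 sampling; rough estimate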
[0] https://en.wikipedia.org/wiki/User:NicoV/Wikipedia_Cleaner/Documentation