The Security Team has recently spent some cycles investigating better anti-automation solutions (bad bots, high-volume spammers, etc.), particularly an improved Wikimedia CAPTCHA. We were curious whether your team has any methods or advice for analyzing nefarious automated traffic in raw web requests or other relevant analytics data.

If the answer is "not really", that's fine. But if there are relevant tools, methods, or research your team has produced that you'd like to share with us, that would be much appreciated. If it makes sense to discuss this further on a quick call, I can try to find some time for a few of us over the next couple of weeks.

We also have an extremely barebones task where we are attempting to document various methods of measurement which might be helpful:
https://phabricator.wikimedia.org/T255208.
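To give a concrete sense of the kind of measurement we have in mind, here is a minimal sketch (not anyone's actual method) that flags clients by raw request rate from a log export. The JSON-lines input, the field names ("ip", "user_agent", "dt"), the window size, and the threshold are all assumptions for illustration; a real analysis would presumably run against the webrequest data in Hadoop rather than a flat file.

```
# Minimal sketch: flag clients whose request rate exceeds a threshold.
# Assumptions (hypothetical, for illustration only): webrequest data
# exported as JSON lines with "ip", "user_agent", and ISO-8601 "dt"
# fields; a fixed 60-second window; a toy threshold of 100
# requests per window.
import json
from collections import Counter
from datetime import datetime

WINDOW_SECONDS = 60   # bucket size for rate measurement
THRESHOLD = 100       # requests per window before we flag a client

def flag_high_rate_clients(path):
    """Count requests per (ip, user_agent) per time bucket and
    return any bucket that exceeds THRESHOLD."""
    counts = Counter()
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            # Normalize a trailing "Z" so fromisoformat() accepts it
            # on older Python versions.
            ts = datetime.fromisoformat(rec["dt"].replace("Z", "+00:00"))
            bucket = int(ts.timestamp()) // WINDOW_SECONDS
            counts[(rec["ip"], rec.get("user_agent", ""), bucket)] += 1
    return {key: n for key, n in counts.items() if n > THRESHOLD}

if __name__ == "__main__":
    flagged = flag_high_rate_clients("webrequest.json")
    for (ip, ua, bucket), n in flagged.items():
        print(f"{ip} ({ua!r}) made {n} requests in window {bucket}")
```

Rate thresholding is deliberately the crudest baseline, and sophisticated automation evades it easily; part of the ask above is which more robust signals, if any, your team has found useful in practice.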