I'm afraid I have to agree with what AntiCompositeNumber wrote. When you set up infrastructure to fight abuse – whether that infrastructure is a technical barrier like a captcha, a tool that "blames" people for being sock puppets, or a law – it will affect *all* users, not only the abusers. What you need to think about is not whether what you are doing is right or wrong, but whether there is still an acceptable balance between the intended positive effects and the unavoidable negative effects.
That said, I'm very happy to see something like this being discussed this early. That doesn't always happen. Does anyone still remember discussing the "Deep User Inspector"[1][2] back in 2013?
Having read what was already said about "harm", I feel something is missing: AI-based tools always have the potential to cause harm simply because people don't really understand what it means to work with such a tool. For example, when the tool says "there is a 95% certainty this is a sock puppet", people will treat this as "proof", completely ignoring the fact that the particular case they are looking at could just as well be within the 5%. This is why I believe such a tool cannot be a toy that anyone can play around with, but needs trained users.
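To make that concrete, here is a small back-of-the-envelope sketch of the point (the numbers are invented purely for illustration, and the real tool's behaviour may differ): even if the tool were right 95% of the time in both directions, a flag on an individual account would still be far from proof whenever actual sock puppets are rare among the accounts being checked.

```python
# Illustrative only: why a "95% certainty" score is not proof.
# All numbers below are assumptions made up for this example.

prior = 0.01  # assumed share of genuine sock puppets among checked accounts
tpr = 0.95    # assumed true positive rate ("95% certainty")
fpr = 0.05    # assumed false positive rate (the 5% the text mentions)

# Bayes' theorem: probability the account really is a sock puppet,
# given that the tool flagged it.
p_flagged = tpr * prior + fpr * (1 - prior)
p_sock_given_flag = (tpr * prior) / p_flagged

print(f"P(sock puppet | flagged) = {p_sock_given_flag:.2f}")  # ≈ 0.16
```

Under these assumed numbers, roughly five out of six flagged accounts would be innocent – which is exactly why untrained users reading the score as "proof" is dangerous.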
TL;DR: Closed source? No. Please avoid at all costs. Closed databases? Sure.
Best
Thiemo
[1] https://ricordisamoa.toolforge.org/dui/
[2] https://meta.wikimedia.org/wiki/User_talk:Ricordisamoa#Deep_user_inspector