Thanks for that note, Phil! To be perfectly honest, I’ve found this particular rulemaking convoluted even by my low standards for EU rulemaking, and haven’t had time for a proper, end-to-end sit-down with the docs, so this is a very useful clarification.

On Fri, Sep 30, 2022 at 9:04 AM Phil Bradley-Schmieg <pbradley@wikimedia.org> wrote:
Hi Luis,

Before diving down that rabbit hole, at least so far as WMF ML is concerned, we should remember that the Directive would apply to high-risk AI systems. This will be a legally defined category of systems. Exactly what that will encompass is still up for debate, but you can at least see the sort of thing the European Commission's proposers have in mind - see Annexes II and III here: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206#document2

Regards,
Phil

On Fri, 30 Sept 2022 at 16:28, Luis Villa <luis@lu.is> wrote:
On Fri, Sep 30, 2022 at 5:39 AM Dimitar Parvanov Dimitrov <dimitar.parvanov.dimitrov@gmail.com> wrote:

======

AI Liability

======

The European Commission presented a new AI Liability Directive. [4] The stated goal is to complement the AI Act by making sure that people and companies harmed by high-risk AI systems (think recruitment, admissions, autonomous drones, self-driving cars) are able to seek damages. Under the proposed text, the burden of proof on the claimant would be reversed under certain conditions, since it would be very hard for an outsider to understand how an AI algorithm works. Courts would also have the explicit right to order companies to disclose technical information about their algorithms.


I would love to see any smart commentary on this; so far I’ve seen very little. Bonus points if it’s from European attorneys trying to explain the EU product liability context to American audiences :)

A potentially important wrinkle for this audience, per the explanatory documents:

“In order not to hamper innovation or research, this Directive should not apply to free and open-source software developed … *outside the course of a commercial activity*.”

Emphasis mine. Does this mean that open source developed by companies (such as, at this point, much of the PHP stack WMF relies on, and at least part of the Python ML stack that I assume WMF ML uses) is not exempted from this directive?

“This is in particular the case for software...that is openly shared and freely accessible, usable, modifiable and redistributable.”

Similarly: much ML software is not freely usable in the OSI sense, because of ethical field-of-use restrictions. Is ethical ML more exposed to liability than fully free ML?

Related: this is just an experiment, and I don’t know how long I can keep it up, but I’m writing a newsletter on the overlap of open and ML that I suspect might be of interest to some folks here: https://openml.fyi

Yours in open-
Luis

_______________________________________________
Publicpolicy mailing list -- publicpolicy@lists.wikimedia.org
To unsubscribe send an email to publicpolicy-leave@lists.wikimedia.org