Interesting discussion of AI and open source on this thread. A few thoughts and clarifications.

The Product Liability Directive proposal includes a limited OSS exemption, as @Luis points out, in Recital 13. The second half defines “commercial” so as to apply liability as one would for other products marketed commercially; it doesn't strike me as overly concerning, but I'd welcome reactions to the contrary.
“In order not to hamper innovation or research, this Directive should not apply to free and open-source software developed or supplied outside the course of a commercial activity. This is in particular the case for software, including its source code and modified versions, that is openly shared and freely accessible, usable, modifiable and redistributable. However, where software is supplied in exchange for a price or personal data is used other than exclusively for improving the security, compatibility or interoperability of the software, and is therefore supplied in the course of a commercial activity, the Directive should apply.”

It's worth noting, @Phil, that the AI Liability Directive proposal can apply to systems beyond those that are high-risk. See Recital 28: “The presumption of causality could also apply to AI systems that are not high-risk AI systems because there could be excessive difficulties of proof for the claimant..."

My greatest concern lies in what @Alek identifies regarding OSS GPAI. The AI Act as scoped may pull in open source development that doesn't rise to the level of AI systems deployed on the market, with general purpose pre-trained models as the primary example (Article 4a(2)). Developers should be free to build AI-related code and do R&D on AI models without being subject to AI Act (and AI Liability Directive) obligations that are suited to consumer product safety. Obligations should instead fall on providers who intend to build (or integrate) fully fledged AI systems, or on users deploying them in a professional setting.

On Wed, Oct 5, 2022 at 4:14 AM Alek Tarkowski <alek@openfuture.eu> wrote:
Hi,

(Coming back to this topic after the weekend)

I am not sure this will apply only to high-risk cases. At least with regard to the AIA, the Council seems to be proposing to define ‘general purpose AI’ (GPAI) (basically, large models with the capacity to do multiple tasks) and regulate them as such. That’s what’s suggested in this Brookings Institution analysis:
https://www.brookings.edu/blog/techtank/2022/08/24/the-eus-attempt-to-regulate-open-source-ai-is-counterproductive/

The piece argues, by the way, against regulating such GPAIs if they are open source. But while it makes a good argument about the value of such open source solutions, it does not explain why they should not be regulated, beyond the general “regulation stifles innovation” argument.

I wonder whether anyone here has more of an opinion about this, or knows of some analyses? It would be helpful to see an argument that the specifics of open source development / deployment address some of the issues the regulation is meant to solve. But it also seems to me that there are reasons to introduce stronger governance of open source AI solutions as well (though whether to do that through regulation is a different question).

Best,
Alek



--
Director of Strategy, Open Future | openfuture.eu | +48 889 660 444
At Open Future, we tackle the Paradox of Open: paradox.openfuture.eu/

On 30 Sep 2022, at 19:06, Luis Villa <luis@lu.is> wrote:

On Fri, Sep 30, 2022 at 9:32 AM Phil Bradley-Schmieg <pbradley@wikimedia.org> wrote:
I feel the same way.  Also, despite my earlier email, it's still something to keep an eye on; e.g. MEPs are pondering adding
- internet security,
- text-generation (generating things "such as news articles, opinion articles, novels, scripts, and scientific articles"),
- image/video deepfakes, and
- "AI systems intended to be used by children in ways that have a significant impact on their personal development, including through personalised education or their cognitive or emotional development."

as high-risk AI.
I’ve also seen several arguments that the same framework should essentially extend to all software, not just AI. And that’s not completely unreasonable: much software is quite opaque/black-box-y (by nature of its extreme complexity) even before layering AI into the mix.

E.g., Example 1 in this analysis of the directive is about ML in cars, but ‘simple’ braking software in cars already has a black-box, cost-shifting problem; see this analysis of Toyota’s pre-AI braking software: 
_______________________________________________
Publicpolicy mailing list -- publicpolicy@lists.wikimedia.org
To unsubscribe send an email to publicpolicy-leave@lists.wikimedia.org



--
Peter Cihon
Senior Policy Manager, GitHub
pcihon@github.com
+1-315-399-9207