Interesting discussion on this thread about AI and open source. A few thoughts and clarifications.
The Product Liability Directive proposal includes a limited OSS exemption, as @Luis points out, in Recital 13. The second half of the recital defines "commercial activity" so that liability applies as it would for other commercially marketed products; that doesn't strike me as overly concerning, but I'd welcome reaction to the contrary.
"In order not to hamper innovation or research, this Directive should not apply to free and open-source software developed or supplied outside the course of a commercial activity. This is in particular the case for software, including its source code and modified versions, that is openly shared and freely accessible, usable, modifiable and redistributable. However where software is supplied in exchange for a price or personal data is used other than exclusively for improving the security, compatibility or interoperability of the software, and is therefore supplied in the course of a commercial activity, the Directive should apply."
It's worth noting, @Phil, that the AI Liability Directive proposal can apply to systems beyond those that are high-risk. See Recital 28: “The presumption of causality could also apply to AI systems that are not high-risk AI systems because there could be excessive difficulties of proof for the claimant..."
My greatest concern lies in what @Alek identifies with OSS GPAI. The AI Act as scoped may pull in open source development that doesn't rise to the level of AI systems deployed on the market, with general purpose pre-trained models as the primary example (Article 4a(2)). Developers should be free to build AI-related code and do R&D on AI models without being subject to Act (and AI Liability Directive) obligations that are suited to consumer product safety. Obligations should instead fall on providers who intend to build (or integrate) fully fledged AI systems, or on users deploying them in a professional setting.