Hi Alek,

I was saying that the new proposed AI Liability Directive is intended to apply only to high-risk AI.
I wasn't saying that the AIA itself (a Regulation, not a Directive), i.e. the focus of the Brookings analysis, has that limitation.

Regards,
Phil

On Wed, 5 Oct 2022 at 11:43, Alek Tarkowski <alek@openfuture.eu> wrote:
Hi everyone,

(Coming back to this topic after the weekend)

I am not sure this will apply only to high-risk cases. At least with regard to the AIA, the Council seems to be proposing to define ‘general purpose AI’ (GPAI) (basically, large models with the capacity to perform multiple tasks) and regulate them as such. That’s what’s suggested in this Brookings Institution analysis:
https://www.brookings.edu/blog/techtank/2022/08/24/the-eus-attempt-to-regulate-open-source-ai-is-counterproductive/

The piece argues, by the way, against regulating such GPAIs if they are open source.
But while it makes a good argument about the value of such open source solutions, it does not explain why they should not be regulated, beyond the general “regulation stifles innovation” argument.

I wonder whether anyone here has more opinions about this or knows of some analyses? It would be helpful to see an argument showing which characteristics of open source development / deployment solve some of the issues that would be regulated. But it also seems to me that there are reasons to introduce stronger governance for open source AI solutions as well (though whether to do that through regulation is a different question).

It would also be good to understand whether, in principle, a policy ask for a carveout on this issue would be similar to previous carveouts (for example, for openly licensed content in copyright regulation). I think there are differences between the two scenarios.

Best,
Alek
--
Director of Strategy, Open Future | openfuture.eu | +48 889 660 444
At Open Future, we tackle the Paradox of Open: paradox.openfuture.eu/

On 30 Sep 2022, at 19:06, Luis Villa <luis@lu.is> wrote:

On Fri, Sep 30, 2022 at 9:32 AM Phil Bradley-Schmieg <pbradley@wikimedia.org> wrote:
I feel the same way.  Also, despite my earlier email, it's still something to keep an eye on; e.g. MEPs are pondering adding
- internet security,
- text-generation (generating things "such as news articles, opinion articles, novels, scripts, and scientific articles"),
- image/video deepfakes, and
- "AI systems intended to be used by children in ways that have a significant impact on their personal development, including through personalised education or their cognitive or emotional development."
as high-risk AI.
I’ve also seen several arguments that the same framework should essentially extend to all software, not just AI. And that’s not completely unreasonable: much software is quite opaque/black-box-y (by nature of its extreme complexity) even before layering AI into the mix.

E.g., Example 1 in this analysis of the directive is about ML in cars, but ‘simple’ braking software in cars already has a black-box, cost-shifting problem; see this analysis of Toyota’s pre-AI braking software:
_______________________________________________
Publicpolicy mailing list -- publicpolicy@lists.wikimedia.org
To unsubscribe send an email to publicpolicy-leave@lists.wikimedia.org
