On Fri, Sep 30, 2022 at 9:32 AM Phil Bradley-Schmieg <pbradley@wikimedia.org> wrote:
I feel the same way. Also, despite my earlier email, it's still something to keep an eye on; e.g., MEPs are pondering adding the following as high-risk AI:
- internet security,
- text generation ("such as news articles, opinion articles, novels, scripts, and scientific articles"),
- image/video deepfakes, and
- "AI systems intended to be used by children in ways that have a significant impact on their personal development, including through personalised education or their cognitive or emotional development."

I’ve also seen several arguments that the same framework should essentially extend to all software, not just AI. And that’s not completely unreasonable: much software is quite opaque/black-box-y (by nature of its extreme complexity) even before layering AI into the mix.

E.g., Example 1 in this analysis of the directive is about ML in cars, but ‘simple’ braking software in cars already has a black-box, cost-shifting problem; see this analysis of Toyota’s pre-AI braking software:

AI analysis: https://www.adalovelaceinstitute.org/wp-content/uploads/2022/09/Ada-Lovelace-Institute-Expert-Explainer-AI-liability-in-Europe.pdf

Toyota: https://users.ece.cmu.edu/~koopman/pubs/koopman14_toyota_ua_slides.pdf

And in the Ada Lovelace Institute analysis, p. 16 notes that some of the same reasoning should extend to all software.