The European Commission is flooding Brussels with its last batch of legislative proposals, including updated liability rules for software and AI tools. Meanwhile, online platforms are beginning to look into the implementation of the Digital Services Act.
====
DSA Implementation
---
The final version of the text of the Digital Services Act (a corrigendum) is available [1] and will be nodded through by the Council and Parliament in the coming days. After that it will be published in the Official Journal of the EU and enter into force 20 days after publication. Most obligations will apply 15 months after entry into force or from 1 January 2024, whichever is later. However, some obligations will apply from three months after entry into force, such as the obligation on providers of online platforms to publish their average number of active recipients and the obligations on Very Large Online Platforms (VLOPs) once designated.
---
The European Commission is expected to lay out its plans for the designation and supervision of VLOPs by the end of this year. This will take the form of so-called delegated and implementing acts. Apart from the designation process, we also expect the Commission to publish a supervisory fee structure and details of how it intends to enforce the rules.
====
Data Act
====
The lead rapporteur for the Data Act in the Industry Committee, Pilar del Castillo Vera (EPP, ES), has published her draft report with proposed amendments. [2] One area where she suggests clarification is the provision under which governments may request data from services during a “public emergency”. She tries to introduce a notion of urgency and explicitly mentions public health emergencies and major natural disasters. Her suggestions would also exclude all SMEs from the scope. Another limitation she proposes to write into the text is to explicitly exclude “personal data or data covered by professional secrecy”. Our position on this is the same as during the DSA: the term “public emergency” and the procedures that trigger it must be clearly defined.
---
Meanwhile, the Czech Presidency is moving ahead with negotiations among Member States in the Council. There is a proposal to extend Article 35, which limits the sui generis database right (SGDR), even further. Under the Czech proposal the SGDR would never apply to machine-generated data. Under the Commission proposal it would be disapplied only when a user asks for access to data generated by a product or service they use. We welcome the proposal and have also spoken in favour of it with relevant MEPs. Some background: [3]
======
AI Liability
======
The European Commission presented a new AI Liability Directive. [4] The stated goal is to complement the AI Act in making sure that people and companies harmed by high-risk AI systems (think recruitment, admissions, autonomous drones, self-driving cars) are able to seek damages. Under the proposed text, the burden of proof would under certain conditions be reversed in favour of the claimant, as it would be very hard for an outsider to understand how an AI algorithm works. Courts would also have the explicit power to order companies to disclose technical information about their algorithms.
---
In a related move, the Commission also presented its proposal for an updated Product Liability Directive. [5] This Directive covers all unsafe products, including software and digital services, meaning it also covers machine learning algorithms. Under the new rules providers would be responsible for software updates and patches. The proposal also makes one huge exception: free and open source software provided outside the course of a commercial activity would not be covered. This provision appears in a recital only; perhaps it would be best to move it to a proper article.
========
EMFA
========
The European Media Freedom Act was proposed by the European Commission. [6] It aims to help journalists and media protect their independence across the EU, but reads a little like a smörgåsbord of soft-law measures. It prohibits Member States from surveilling media or journalists, except under a “national security” clause. It obliges Member States to have “open and non-discriminatory” procedures for appointing the heads of public broadcasters and to distribute public advertising funds fairly. The idea seems to be that by stating these principles in EU law, citizens and media would be able to enforce them in court.
---
As expected, a variation of the so-called “media exception” resurfaced in the proposal. Media companies had unsuccessfully tried, during the Digital Services Act negotiations, to make it harder for online platforms to remove or restrict the visibility of their content. The new provision asks some online platforms to send registered media outlets a prior warning before removing or restricting their content. In essence, the provision will only apply to Very Large Online Platforms (as per the DSA) that allow business users to offer goods or services to consumers (as per the Regulation on fairness for business users of online intermediation services).
==============================
Combatting Violence Against Women
==============================
A directive on combatting violence against women and domestic violence [7] is currently being worked on by the European Parliament and the Council. While the legislation doesn’t focus on the online world, it nonetheless contains provisions against the non-consensual sharing of intimate or manipulated material, cyber harassment, cyber stalking and cyber incitement to violence and hatred. It would mandate that all EU Member States make these actions punishable as criminal offences. Member States would also have to ensure that competent judicial authorities can issue binding legal orders requiring online platforms to remove or disable access to such material.
=====================
Political Advertising Online
=====================
The EU is trying to come up with universal rules on political advertising online. [8] The definition of what constitutes political advertising is quite broad, which is one major point of discussion. It is not directly tied to payment, which causes quite a bit of confusion: potentially, even a Wikipedia article about a candidate could fall into this category. However, the obligations are mainly addressed at “providers of advertising services”, which effectively leaves Wikimedia projects out of scope.
---
Another point of tension is whether political advertisements can be targeted. The DSA will already prohibit the use of sensitive personal data (e.g. political preferences, sexual orientation, religious beliefs), but some lawmakers would like to go further. Others say that politicians should be able to target voters online only if those voters have explicitly agreed to share data such as gender, age and location. A clear majority is not in sight.
=====================
Big Fat Brussels Meeting
=====================
We have published the dates, location and a draft agenda for this year’s Big Fat Brussels Meeting (2 & 3 December). Feel free to add your name to the participants list if you plan to come:
https://meta.wikimedia.org/wiki/EU_policy/Big_Fat_Brussels_Meeting_VIII
====
END
====
[1] https://drive.google.com/file/d/1HaPpOkD5DdMsYlJXt-tBQtax7yduCcek/view?usp=s...
[2] https://www.europarl.europa.eu/doceo/document/ITRE-PR-732704_EN.pdf
[3] http://copyrightblog.kluweriplaw.com/2022/03/04/a-vanishing-right-the-sui-ge...
[4] https://ec.europa.eu/commission/presscorner/detail/en/ip_22_5807
[5] https://single-market-economy.ec.europa.eu/document/3193da9a-cecb-44ad-9a9c-...
[6] https://drive.google.com/file/d/18YDbhYiSQVa2x2upKW4nBipgpgQdI2Xc/view?usp=s...
[7] https://www.europarl.europa.eu/RegData/docs_autres_institutions/commission_e...
[8] https://oeil.secure.europarl.europa.eu/oeil/popups/ficheprocedure.do?referen...
On Fri, Sep 30, 2022 at 5:39 AM Dimitar Parvanov Dimitrov <dimitar.parvanov.dimitrov@gmail.com> wrote:
======
AI Liability
======
I would love to see any smart commentary on this that people have seen — so far I’ve seen very little. Bonus if it’s from European attorneys who are trying to explain EU product liability context to American audiences :)
Potentially an important wrinkle on this for this audience: per the explanatory documents:
“In order not to hamper innovation or research, this Directive should not apply to free and open-source software *developed … outside the course of a commercial activity*.”
Emphasis mine - does this mean that open source developed by companies (such as, at this point, much of the php stack WMF relies on, and at least part of the python ML stack that I assume WMF ML uses) is *not exempted* from this directive?
“This is in particular the case for software...that is openly shared and freely accessible, usable, modifiable and redistributable.”
Similarly: much ML software is *not* freely usable in the OSI sense, because of ethical field-of-use restrictions. Is ethical ML *more* liable than fully free ML?
Related: this is just an experiment, and I don’t know how long I can keep it up, but I’m writing a newsletter on the overlap of open and ml that I suspect might be of interest to some folks here: https://openml.fyi
Yours in open- Luis
Hi Luis,
Before diving down that rabbit hole, at least so far as WMF ML is concerned, we should remember that the Directive would apply to *high risk* AI systems. This will be a legally-defined category of systems. Exactly what that will encompass is still up for debate, but you can at least see the sort of thing the European Commission proposers have in mind - see Annexes II and III here: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206#docume...
Regards, Phil
Thanks for that note, Phil! To be perfectly honest I’ve found this particular rulemaking convoluted even by my low standards for EU rulemaking, and have not had the time to do a proper, end-to-end sit down with the docs, so this is a very useful clarification.
I feel the same way. Also, despite my earlier email, it's still something to keep an eye on; e.g. MEPs are pondering adding the following as high-risk AI (https://www.europarl.europa.eu/doceo/document/CJ40-PR-731563_EN.html):
- internet security,
- text-generation (generating things "such as news articles, opinion articles, novels, scripts, and scientific articles"),
- image/video deepfakes, and
- "AI systems intended to be used by children in ways that have a significant impact on their personal development, including through personalised education or their cognitive or emotional development."
I’ve also seen several arguments that the same framework should essentially extend to all software, not just AI. And that’s not completely unreasonable—much software is quite opaque/black-box-y (by nature of its extreme complexity) even before layering AI into the mix.
e.g., Example 1 in this analysis of the Directive is about ML in cars, but ‘simple’ braking software in cars already has a black-box, cost-shifting problem; see this analysis of Toyota’s pre-AI braking software:
AI analysis: https://www.adalovelaceinstitute.org/wp-content/uploads/2022/09/Ada-Lovelace-Institute-Expert-Explainer-AI-liability-in-Europe.pdf
Toyota: https://users.ece.cmu.edu/~koopman/pubs/koopman14_toyota_ua_slides.pdf
And in the Lovelace Institute analysis, p. 16 notes that some of the analysis should extend to all software.
Hi,
(Coming back to this topic after the weekend)
I am not sure this will apply only to high-risk cases. At least with regard to the AIA, the Council seems to be proposing to define ‘general purpose AI’ (GPAI) (basically, large models with the capacity to perform multiple tasks) and regulate them as such. That’s what’s suggested in this Brookings Institution analysis: https://www.brookings.edu/blog/techtank/2022/08/24/the-eus-attempt-to-regulate-open-source-ai-is-counterproductive/
The piece argues, by the way, against regulating such GPAIs if they are open source. But while it makes a good argument about the value of such open source solutions, it does not explain why they should not be regulated, beyond the general “regulation stifles innovation” argument.
I wonder whether anyone here has more thoughts on this or knows of some analyses? It would be helpful to see an argument that the specifics of open source development/deployment solve some of the issues that would be regulated. But it also seems to me that there are reasons to introduce stronger governance of open source AI solutions as well (though whether to do that through regulation is a different question).
Best, Alek
-- Director of Strategy, Open Future | openfuture.eu | +48 889 660 444 At Open Future, we tackle the Paradox of Open: paradox.openfuture.eu/
Interesting discussion on this thread of AI and open source. A few thoughts and clarifications.
The Product Liability Directive proposal includes a limited OSS exemption, as @Luis points out, in Recital 13. The second half of the recital defines commercial activity so that liability applies as it would for other commercially marketed products; it doesn't strike me as overly concerning, but I'd welcome reactions to the contrary.
In order not to hamper innovation or research, this Directive should not apply to free and open-source software developed or supplied outside the course of a commercial activity. This is in particular the case for software, including its source code and modified versions, that is openly shared and freely accessible, usable, modifiable and redistributable. However where software is supplied in exchange for a price or personal data is used other than exclusively for improving the security, compatibility or interoperability of the software, and is therefore supplied in the course of a commercial activity, the Directive should apply.
It's worth noting, @Phil, that the AI Liability Directive proposal can apply to systems beyond those that are high-risk. See Recital 28: “The presumption of causality could also apply to AI systems that are not high-risk AI systems because there could be excessive difficulties of proof for the claimant..."
My greatest concern lies in what @Alek identifies with OSS GPAI. The AI Act as scoped may pull in open source development that doesn't rise to the level of AI systems deployed on the market, general purpose pre-trained models being the primary example (Article 4a(2) https://artificialintelligenceact.eu/wp-content/uploads/2022/09/AIA-CZ-3rd-Proposal-23-Sept.pdf#page=51). Developers should be free to build AI-related code and do R&D on AI models without being subject to the Act's (and AI Liability Directive's) obligations, which are suited to consumer product safety. Obligations should fall on providers who intend to build (or integrate) fully fledged AI systems, or on users deploying them in a professional setting.
Hi everyone,
(Coming back to this topic after the weekend)
I am not sure this will apply only to high-risk cases. At least with regard to the AIA, the Council seems to be proposing to define ‘general purpose AI’ (GPAI) (basically, large models with the capacity to perform multiple tasks) and regulate them as such. That’s what’s suggested in this Brookings Institution analysis: https://www.brookings.edu/blog/techtank/2022/08/24/the-eus-attempt-to-regulate-open-source-ai-is-counterproductive/
The piece argues, by the way, against regulating such GPAIs if they are open source. But while it makes a good argument about the value of such open source solutions, it does not explain why they should not be regulated, beyond the general “regulation stifles innovation” argument.
I wonder whether anyone here has more thoughts on this or knows of some analyses? It would be helpful to see an argument that shows which characteristics of open source development/deployment solve some of the issues that would be regulated. But it also seems to me that there are reasons to introduce stronger governance of open source AI solutions as well (though whether to do that through regulation is a different question).
It would also be good to understand whether in principle a policy ask for a carveout on this issue would be similar to previous carveouts (for example for openly licensed content in copyright regulation). I think that there are differences in the two scenarios.
Best, Alek -- Director of Strategy, Open Future | openfuture.eu | +48 889 660 444 At Open Future, we tackle the Paradox of Open: paradox.openfuture.eu/
Hi Alek,
I was saying that the newly proposed AI *Liability Directive* is intended to apply only to high-risk AI. I wasn't saying that the AIA (a Regulation, not a Directive) itself - i.e. the focus of the Brookings analysis - has that limitation.
Regards, Phil