Europe’s proposed AI regulation falls short on protecting rights

by Joseph K. Clark

The European Commission’s (EC) proposal to regulate artificial intelligence (AI) is a step in the right direction. Still, experts have warned that it fails to address the fundamental power imbalances between those who develop and deploy the technology and those who are subject to it. In the Artificial Intelligence Act (AIA) proposal, published on 21 April 2021, the EC adopts a decidedly risk-based approach to regulating the technology, focusing on establishing rules around the use of “high-risk” and “prohibited” AI practices. On its release, European commissioner Margrethe Vestager emphasized the importance of trusting AI systems and their outcomes and highlighted the adopted risk-based approach.

“On AI, trust is a must, not a nice-to-have. With these landmark rules, the EU [European Union] is spearheading the development of new global norms to ensure AI can be trusted,” she said. “By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive. Future-proof and innovation-friendly, our rules will intervene where strictly needed – when EU citizens’ safety and fundamental rights are at stake.”

Speaking to Computer Weekly, however, digital civil rights experts and organizations claim the EC’s regulatory proposal is stacked in favor of the public and private organizations developing and deploying AI technologies, which are essentially being tasked with box-ticking exercises, while ordinary people are offered little in the way of protection or redress. This is despite people being subject to AI systems in several contexts from which they cannot opt out, such as when the technology is used by law enforcement or immigration authorities. Ultimately, they claim the proposal will do little to mitigate the worst abuses of AI technology and will essentially act as a green light for several high-risk use cases because of its emphasis on technical standards and risk mitigation over human rights.


Technical standards over human rights

Within the EC’s proposal, an AI system is categorized as “high risk” if it threatens a person’s health and safety or fundamental rights. This includes use cases such as remote biometric identification, the management or operation of critical infrastructure, systems used for educational purposes, and systems used in the context of employment, immigration, or law enforcement decisions. “In line with a risk-based approach, those high-risk AI systems are permitted on the European market subject to compliance with certain mandatory requirements and an ex-ante conformity assessment,” says the proposal.

“The classification of an AI system as high risk is based on the intended purpose of the AI system, in line with existing product safety legislation,” says the proposal. “Therefore, the classification as high risk depends not only on the function performed by the AI system, but also on the specific purpose and modalities for which that system is used.” According to Alexandra Geese, a German member of the European Parliament (MEP), however, while the mere existence of the regulation has helped open up public debate about the role of AI technologies in society, it does not fulfill the rhetorical promises made by Vestager and others at the highest levels of the bloc.

Referring to a leaked version of the proposal from January 2021, which diverges significantly from the document that has been officially published, Geese says it “really acknowledged AI as a danger for democracy, AI as a danger for the environment, while this proposal sort of pretends that it’s just about technical standards, and I don’t think that’s good enough”. Geese adds that while the language may have been vague at points in the leaked draft – a problem present in the final proposal too – the sentiments behind it were “perfect”, as it more fully acknowledged the destructive potential of AI technologies.

Daniel Laufer, a Europe policy analyst at digital and human rights group Access Now, adds that the AI whitepaper published in February 2020 – which significantly shaped the direction of the proposal – “raised alarm bells for us” because of its dual focus on promoting the uptake of AI while mitigating its risks, “which doesn’t take account [of whether] there are applications of AI (which we believe there are) where you can’t mitigate the risks and that you don’t want to promote”. He distinguishes, for example, between competing globally on machine learning for medical image scanning and competing on AI for mass surveillance, adding that “there needs to be an acknowledgment that not all applications will be possible in a democratic society that’s committed to human rights”.

Databases, high-quality datasets, and conformity assessments

While the proposal’s risk-based approach contains several measures focused on how high-risk AI systems can still be used, critics argue that the thresholds placed on their use are too low to prevent the worst abuses. Although it includes provisions for the creation of an EU-wide database of high-risk systems – which will be publicly viewable and based on “conformity assessments” that seek to assess each system’s compliance with the legal criteria – multiple experts argue this is the “bare minimum” that should be done to increase transparency around, and therefore trust in, artificial intelligence technology.

Sarah Chander, a senior policy advisor at European Digital Rights (EDRi), says that while the database can assist journalists, activists, and civil society figures in obtaining more information about AI systems than is currently available, it will not necessarily increase accountability. “That database is of the high-risk applications of AI on the market, not necessarily those in use,” she says. “For example, if a police service is using a predictive policing system that technically is categorized as high risk under the regulatory proposal as it exists now, we wouldn’t know if Amsterdam police were using it; we would just know that it’s on the market for them to buy potentially.”

Giving the example of Article 10 in the proposal, which dictates that AI systems need to be trained on high-quality datasets, Chander says the requirement is too focused on how AI operates at a technical level to help fix what is, fundamentally, a social problem. “Who defines what high quality is? The police force, for example, uses police operational data, which will be high-quality datasets to them because they have trust in the system, the political construction of those datasets [and] in the institutional processes that led to those datasets – the whole proposal overlooks the highly political nature of what it means to develop AI,” she says.

“A few technical tweaks won’t make police use of data less discriminatory because the issue is much broader than the AI system or the dataset – it’s about institutional policing [in that case].” On this point, Geese agrees that the need for high-quality datasets is not an adequate safeguard, as it again leaves the door open to too much interpretation by those developing and deploying the AI systems. This is exacerbated, she says, by the lack of measures for combating bias in the datasets.

“It says the data has to be representative, but representative of what? Police will say, ‘This is representative of crime’, and there’s no provision that says, ‘You not only need to identify the bias, but you also have to propose corrective measures’,” she says. “There is no obligation to remove the original bias [from the system]. I talked to Vestager’s cabinet about it, and they said, ‘We stopped the feedback loops from worsening it, but the bias in the data is there, and it needs to be representative.’ Still, nobody can answer the question ‘representative of what?’,” says Geese.

Chander also points out that in most high-risk use cases, the proposal allows the developers of the systems to conduct the conformity assessments themselves, meaning they are in charge of determining the extent to which their systems align with the regulation’s rules. “They don’t categorize these uses as high risk. Otherwise, you would have some external verification or checks on these processes – that’s a huge red flag, and as a system check, it won’t overcome many of the potential harms,” she says.

Laufer adds that while the proposal establishes “notified bodies” to check the validity of conformity assessments if a complaint about an AI system arises, the measure risks creating a “privatized compliance industry” if commercial firms, rather than data protection authorities and other similar public bodies, take on the role. “Ideally, [notified bodies] should be focused on protecting human rights, whereas if it’s Deloitte, that’s a paid service, they’re focused on compliance, they’re focused on getting through the process,” he says. “The incentives, I think, are quite off, and the notified body doesn’t seem to be involved in most cases. Even if they were, it doesn’t seem like a sufficient measure to catch the worst harms.”

He says the processes around databases and conformity assessments are “an improvement on a really bad current state of affairs… it’s just a basic level of transparency [that] doesn’t solve any issues”. Referring to a “slow process of privatization”, Chander adds that the proposal also sets in motion a governance model whereby the user of an AI system must follow the “instructions of use” provided by the developer. “This could be viewed in a very neutral way to say, ‘Well, the system developer knows how it works, so that makes sense’,” she says. “But more politically, this means that actually what we’re doing is tying in a relationship between a service provider – a private company – and a public institution… [and] embedding the reliance of the public sector on the private sector.”

Technology and ethics researcher Stephanie Hare says the EU’s current thinking around conformity assessments and databases only provides a “veneer of transparency”, as there is an inherent tension within the proposal between creating transparency and protecting private companies’ “proprietary information” and interests. “The increased transparency obligations will also not disproportionately affect the right to protection of intellectual property since they will be limited only to the minimum necessary information for individuals to exercise their right to an effective remedy,” says the proposal.

“Any disclosure of information will be carried out in compliance with relevant legislation in the field, including Directive 2016/943 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use, and disclosure.”

Prohibited in name only

While the bulk of the proposal focuses on managing high-risk use cases of AI, it lists four practices considered to pose “an unacceptable risk”, which are therefore prohibited. These include systems that distort human behavior; systems that exploit the vulnerabilities of specific social groups; systems that provide “scoring” of individuals; and the remote, real-time biometric identification of people in public places. Critics say, however, that the proposal contains several loopholes that significantly weaken any claims that practices considered an unacceptable risk have been banned.

Chander says although the proposal provides a “broad horizontal prohibition” on these AI practices, such uses are still allowed within a law enforcement context and are “only prohibited insofar as they create physical or psychological harm”. “That’s a narrowing down of the prohibition already because only such uses that create these tangible – and quite a high threshold – types of harm are prohibited,” she says.

“Considering that one of the prohibitions is on uses that could take advantage of people based on their mental ability, physical disability or age, you would imagine a legitimate prohibition would cover any such uses regardless of whether harm was produced.” Laufer says this measure is “totally ridiculous”, giving the example that, if the text were read literally, an AI system that deployed “subliminal techniques” beyond a person’s consciousness to distort their behavior for their benefit would technically be allowed. “You can just drop the harm bit from each one of them because those practices… can’t be done for someone’s benefit – that is in itself completely at odds with human rights standards,” he says.

Biometric identification

According to Geese, the prohibition of biometric identification has several “huge loopholes”. The first, she says, is that only real-time biometric identification is banned, meaning police authorities using facial recognition, for example, could wait for a short period and do it retroactively. She adds that the second major loophole is tied to the proposal’s “threat exemption”, which means real-time biometric identification can be used in a law enforcement context to conduct “targeted searches” for victims of crime, including missing children, as well as in response to threats to life or physical safety.

“You have to have the infrastructure in place all the time. You can’t just set them up overnight when a child’s missing – you have to have them in place and, usually, when you have that security infrastructure in place, there’s a strong incentive to use it,” says Geese. “It’s counter-intuitive to say, ‘We have all these cameras, and we have all this processing capacity with law enforcement agencies, and we just turn it off all the time’.” Hare shares a similar sentiment, saying that while she is not necessarily opposed to facial recognition technology being used for specific and limited tasks, this needs to be weighed against the evidence.

“They’re saying, ‘We’ll build that entire network, we’ll just turn it off most of the time’… You must weigh it and ask, ‘Do we have any examples anywhere? Have we piloted it even in just one city that used it in that specific, limited way described?'” she says. Hare further adds that while she is encouraged that European police would be banned from conducting generalized facial recognition surveillance, as they would need sign-off from a judge or some national authority, in practice, this could still run into “rubber-stamping” issues.

“Home secretaries love to be on the side of ‘law and order’, that’s their job, that’s how they get headlines… and they’re never going to want to piss off the cops,” she says. “So, if the cops wanted it, they’re going to rubber-stamp it. I’ve yet to see a home secretary who [prioritizes] civil liberties and privacy – it’s always about security and [stopping] the terrorists.”

Even if judges were solely in charge of the process, she adds, the post-9/11 experience in the US has seen the secret courts set up under the Foreign Intelligence Surveillance Act (FISA) rubber-stamp applications to tap tech companies’ data, contributing to the country’s intrusive surveillance practices since 2001.

Both Laufer and Hare point out that the European Data Protection Supervisor (EDPS) has been very critical of biometric identification technology, previously calling for a moratorium on its use and now advocating for it to be banned from public spaces. “The commission keeps saying that this is, essentially, banning remote biometric identification, but we don’t see that at all,” says Laufer.

Everyone who spoke to Computer Weekly also highlighted the lack of a ban on biometric AI tools that can detect race, gender, and disability. “The biometric categorization of people into certain races, gender identities, disability categories – all of these things need to be banned for people’s rights to be protected, because they cannot be used in a rights-compliant way,” says Chander. “Insofar as the legislation doesn’t do that, it will fall short.”

Asymmetries of power

In tandem with the relaxed nature of the prohibitions and the low thresholds placed on the use of high-risk systems, critics say the proposal fundamentally does little to address the power imbalances inherent in how AI is developed and deployed today, as it also says very little about people’s rights to redress when they are negatively affected by the technology.

Describing the proposal’s provisions around redress (or the lack of them) as “about as useful as an umbrella in a hurricane”, Hare adds that if AI technology has been used on you in some way – in anything from a hiring decision to a retailer using facial recognition – most people do not have the resources to mount a challenge. “What are you going to do? Do you think the average person has the time, money, and knowledge to go and file a complaint?” she says.

Laufer adds that while EU citizens can use the General Data Protection Regulation (GDPR) as an avenue to challenge abuses, this puts a “heavy burden” on individuals to know their rights as data subjects, meaning the proposal should contain additional mechanisms for redress. “There definitely needs to be some form of redress or complaint mechanism for individuals or groups, or civil society organizations on behalf of people, to point out when a system violates this regulation… because of how market-focused this is,” he says. “It’s a problem of assigning a massive burden on individuals to know when their rights have been [violated]. We’ve said from the beginning that the commission has a responsibility to guarantee the protection and enjoyment of fundamental rights proactively – not [a model where a system] deploys, redeploys, lets someone be harmed, and then the complaint starts. It should be taking an active role.”

For Chander, the point of prohibiting certain use cases and limiting others considered high risk should be to reverse the burden of proof onto the parties seeking to develop and deploy the tech. “Particularly considering the vast power imbalance when we’re talking about AI, that argues for prohibitions in themselves, because most such legislation creates such a burden of proof on people to prove that a certain type of harm has occurred,” she says. “So this takes the language of prohibition, but then also doesn’t remove those institutional barriers to seeking redress.”

Ultimately, Chander believes that while the “creation of a bureaucracy” around AI compliance might, “optimistically speaking”, engender more consideration of the outcomes and impacts the systems may have, if the proposal stays as it is, “it won’t change much”. “The lack of any procedures for human rights impact assessments as part of this legislative process, the fact that most of the high-risk systems are self-assessed for conformity, and the fact that many of the requirements themselves don’t structurally challenge the harms [show] they’d rather look for more technical tweaks here or there,” she says. “There is some value judgment being made that these [AI use cases] will be useful to the European Commission’s broader political goal, and that speaks to why the limitations are so soft.”

Geese further contends that while AI systems may nominally be designated high-risk, the regulation will broadly legalize harmful AI practices.

The proposal must go to the European Parliament and Council for further consideration and debate before being voted on.
