How should AI be regulated in Europe?

May 25, 2021

In April, the EU laid out plans for a regulatory framework for artificial intelligence in the bloc. If all goes to plan, the framework will regulate how AI can be used, and build competitiveness and trust in the technology. With Europe now playing catch-up to the US and China, which have become global powerhouses of AI innovation, this marks an attempt by the EU to ensure European standards and values are embedded at an early stage.

Of course, this regulation would form only one part of the solution. We asked three technology leaders from across the region to put forward their perspectives on how AI should be regulated in Europe.

Addressing risks and leading innovation could be conflicting goals


Dr Terence Tse, Professor in Entrepreneurship, and leader of the Master in Digital Transformation Management & Leadership programme at ESCP Business School

This past April, the European Commission issued a proposal for an “Artificial Intelligence Act”, seeking to lay down harmonised rules on AI for all member states. The aim, according to the EU, is to address the risks of AI and position Europe to play a leading role globally.

Yet these are seemingly conflicting goals for European governments to achieve. There is no question that shielding the public and our societies from the downside risks of AI is essential. The EU’s General Data Protection Regulation (GDPR), introduced in 2018, was widely praised as a great leap forward in protecting our private data, partially inhibiting tech companies from harvesting (free) raw material from us.

However, this limited collection of data has also slowed the development and use of data-driven technologies such as AI by European companies. In contrast, in the absence of policy as stringent as GDPR, China and the US have both been able to vie for AI dominance, leaving Europe trailing far behind. It is therefore questionable how imposing tighter regulation to reduce the risks of AI, no doubt a noble and necessary cause, can at the same time propel Europe to become a global player in AI.

This is a dilemma that is difficult to reconcile. The EU will have to make a choice, or otherwise risk failing to achieve either goal. But whatever shape and form the harmonised rules take in the end, it is vital that they apply to all member states. They must be concrete and the same for all countries, not just recommendations or guidelines. If not, they will create not just a heavily regulated environment for businesses but also one that is very complex to manoeuvre in. In time, such regulations will only discourage investment, making it even harder for Europe to be a leading player in AI. No matter how you see it, Europe is looking at a very fine balancing act.

This could be the new GDPR


Michael Boguslavsky, Head of AI at Tradeteq

Any effective regulation needs to be standardised across the EU and secure buy-in from a broad range of stakeholders. It should allow companies and consumers adequate time for the new laws to bed in.

The EU has proposed new rules governing the use of AI. The rules are highly significant, and many people will view this step as the “new GDPR”. Overall, the proposal has been drafted with the best intentions, and many of the requirements for high-risk AI systems are common sense.

In terms of assessing and complying with the rules, we need to wait and see how the legislation evolves. I think these initial proposals will go through many iterations before they become law. As written now, some of them are very broad and cover almost anything that might come to mind when someone talks about “AI”.

In addition, some of the definitions might drift, for example through an expansion of the definition of “high-risk” AI systems. The final definition of a “high-risk” AI system is crucial, as these are the systems on which the proposed legislation imposes many new requirements.

It is a welcome sign that the leaked legislation mostly limits “high-risk” systems to those used to control already regulated or dangerous machinery, or those used in the provision of state functions or state assistance.

However, even if a firm’s AI systems are not classified as high risk, it may be a good idea to voluntarily adopt some of the principles and safeguards prescribed in the law for high-risk systems.

This will level the playing field and reward companies that use AI to the benefit of citizens


Peter van der Putten, assistant professor of AI, Leiden University and Director of Decisioning at Pegasystems

The EU is taking a very sensible, outcome- and risk-based approach. Rather than declaring AI to be either a silver bullet or an evil technology in general, it focuses on specific applications of AI and distinguishes between those that carry a high risk of causing harm and those that do not.

Certain AI applications will be prohibited, but this will not affect businesses much, because the majority of the AI systems the rules disallow are those targeting vulnerable people, such as children and individuals with disabilities, with the deliberate intent of influencing their behaviour in ways that may cause psychological harm.

The AI systems that will be most affected are the so-called ‘high-risk’ systems: those whose decisions may affect fundamental rights if they run amok. Companies will need to register these applications and self-assess how the systems are kept under control in terms of AI ethics principles such as fairness, robustness and transparency. In addition, any adverse incidents will need to be reported to government bodies.

This will require companies to implement measures such as bias detection, explainable AI and the use of good old business rules to keep the machine learning under control and ensure that automated decisions are ethical, fair and balanced.
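As an illustration, a bias-detection step of this kind often starts with a simple fairness metric. The sketch below computes a demographic parity gap, i.e. the difference in positive-decision rates between groups; the toy data, group labels and 0.1 threshold are illustrative assumptions only, not anything prescribed by the draft Act.

```python
# Minimal sketch of one possible bias-detection check: demographic parity.
# The data, group labels and the 0.1 threshold are illustrative assumptions,
# not requirements taken from the draft AI Act.

def demographic_parity_gap(decisions, groups):
    """Return the largest difference in positive-decision rates between groups.

    decisions: list of 0/1 outcomes from an automated decision system.
    groups: list of group labels (e.g. a protected attribute), one per decision.
    """
    counts = {}
    for decision, group in zip(decisions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + decision)
    rates = {g: pos / total for g, (total, pos) in counts.items()}
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    # Toy loan-approval decisions for two groups: A is approved 60% of
    # the time, B only 40%, giving a gap of 0.2.
    decisions = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    gap = demographic_parity_gap(decisions, groups)
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > 0.1:  # illustrative threshold only
        print("Gap exceeds threshold: flag the model for review.")
```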

The focus of the regulations is on so-called ‘natural persons’, so by definition business-to-consumer enterprises will be affected more by the rules than business-to-business enterprises. 

Typically, this will include industries that engage directly with their customers across many channels, track customer behaviour as part of their business function (such as financial transactions or mobile phone usage), and use intelligence to optimise their business processes and customer service. Typical examples are banks, insurance companies, healthcare providers, telcos and utility companies. The public sector itself will also need to comply with these regulations.

The regulations will also help level the playing field and reward those organisations that use data and AI to the benefit of customers and citizens, as opposed to exploiting it for selfish purposes. 

And we need to remind ourselves that these regulations, whilst they may sound restrictive, only set broad boundary constraints. Ultimately, customers will become more AI and data-savvy, and vote with their feet and choose to work with organisations that are using AI to create wins for both the enterprise and the customer – and abandon the others. We are not products, we are clients.
