National Insights: Thinking holistically about AI

Dec 13, 2022

EFDPO member association APDPO interviewed Eduardo Magrani, currently Senior Consultant at CCA Law Firm, on the current trends, risks and benefits of Artificial Intelligence and on how the topic relates to data protection issues.

What are the regulatory trends for Artificial Intelligence (AI)?

AI is a cross-cutting and complex topic. Given its breadth, several existing standards can already be applied to the field of AI. Increasingly, however, we observe a regulatory tendency to standardize specific fields and specific uses of AI. In Europe, the GDPR already regulates the matter with regard to personal data: it is a very robust data protection regulation and contains a provision on automated decisions that touches directly on AI. In addition to the GDPR, Europe is now discussing the AI Act, a regulation specific to AI that should be read alongside the rules that already exist, such as the GDPR. As a specific norm, the AI Act is better placed to help reduce risks, to better guarantee the rights at stake in this field, and to give more legal certainty to those who want to introduce AI into their technological solutions, services and products. So this is a complex issue, and there is currently a strong regulatory trend, mainly in Europe, as illustrated by the proposed AI Act, which is expected to be approved soon and which will impact a wide range of entities.

What are the main points of attention for a legislator in the treatment of artificial intelligence?

The first point of attention is the very concept of artificial intelligence. During the drafting of the AI Act proposal, legislators received several contributions from different entities, and there was a great debate about concepts. There is currently no perfect formula for defining artificial intelligence, but it is extremely important that there be a minimum consensus among legislators, civil society, companies and public bodies on this matter, because a regulation without a well-designed conceptual foundation can become ineffective, or it can become disproportionate by also regulating what it should not. So, when legislating on artificial intelligence, I would say that the first point of attention is the concept of artificial intelligence itself.

Other points of attention concern risk analysis: for example, whether certain uses of artificial intelligence should be banned or not, and what the repercussions and consequences of that would be.

Another point is how new regulations can complement existing ones. I mentioned the GDPR a moment ago, but the GDPR is only about protecting personal data, so there is a gap with regard to the automated processing of data that is not personal data.

So a further point of attention is: what are the main gaps and regulatory vacuums today, and how can they be addressed by a future regulation of artificial intelligence?

Finally, another point of attention, which is usually quite complex, concerns not only the risk assessment I mentioned, but also the responsibility of the actors who develop artificial intelligence. Artificial intelligence can generate damage that stems from different inputs and different actions by the agents involved, which makes attributing responsibility among them complex, and this must also be addressed with great caution.

How will artificial intelligence impact personal data protection issues?

It has an impact because the specific regulation of artificial intelligence goes far beyond the protection of personal data: an artificial intelligence system necessarily needs data to be trained, but this data is not always personal data.

Personal data is already covered by the GDPR, so it is now up to the new regulation to complement everything that is not specifically a matter of personal data and to fill these gaps, these regulatory vacuums, thinking about artificial intelligence more holistically, in its various uses, as a transversal technology that impacts different areas and that is trained on information that goes beyond personal data.

How to analyze the risks of artificial intelligence to the fundamental rights of individual subjects?

Artificial intelligence can bring a series of benefits: automating processes and services, increasing efficiency not only in the private sector but also in the public sector, and generating greater profitability. Its benefits are evident in the professional sphere and in our private lives, also bringing comfort to everyday life. All these transversal benefits are already very well perceived today by society as a whole, by the companies that develop this technology and by the public bodies that use it and take advantage of its great potential.

However, with all this enormous potential, artificial intelligence can also bring risks.

And what risks can artificial intelligence bring?

They are of different orders. On the one hand, it can bring risks related to the violation of personal data: due to its opacity and lack of transparency, there is the possibility of generating unreasonable discrimination against a data subject, or of processing personal information that the data subject has not authorized. And that is only the relationship with personal data. In addition, artificial intelligence can also harm individuals using other kinds of information, generating problems of lack of transparency and discrimination even without using personal data.

Today there is an international debate on the ethical principles that should guide artificial intelligence, such as the principles of justice, beneficence, non-maleficence, non-discrimination, transparency, privacy and responsibility. These are some of the most frequently mentioned principles in the field of artificial intelligence, and they should be implemented by the private companies and public bodies that use artificial intelligence precisely to avoid the risks and damage that can arise in this scenario. Many of these risks are still not regulated, but through the implementation of ethical principles they can be reduced.

But in addition to ethical guidance, the development of artificial intelligence requires moving forward with specific regulation. That is what Europe is now discussing with the AI Act, one facet of which is risk-based regulation, including the banning of certain uses of artificial intelligence. And these prohibitions are linked precisely to the level of impact and risk generated.

For a specific solution in a given context, the idea is increasingly to extract the benefits while reducing the risks and damages that can emerge from the use and development of artificial intelligence, and it seems to me that Europe is going in a good direction in its efforts to mitigate the risks that may materialize in the coming years.

Author: APDPO PORTUGAL – ASSOCIATION OF DATA PROTECTION AND SECURITY PROFESSIONALS

Eduardo Magrani: Doctor of Laws. Post-Doctorate at the Munich Center for Technology & Society of the Technical University of Munich (TUM) on Data Protection and Ethics of Artificial Intelligence. Senior Consultant at CCA Law Firm in Portugal. Affiliate at the Berkman Klein Center for Internet & Society (BKC) at Harvard University.

 
