Tudorache is Vice-President of the Renew Europe Group, the EP’s Civil Liberties, Justice and Home Affairs Committee rapporteur on the AI Act, and also sits on the EP’s foreign affairs committee. He was the Chair of the EP’s Special Committee on Artificial Intelligence in the Digital Age.
A fifth round of trilogue negotiations kicks off tomorrow (24 October) in the EU Council. What do you see as the remaining points of contention?
I see three major blocks in the ongoing negotiations and one connected but less contentious issue.
First, there is the issue of the various exemptions sought by the EU Council for law enforcement in the context of the Article 5 prohibitions in the draft Act. These are all connected: the Council wants exemptions for national security; the EP is advocating a stricter approach on facial recognition technology, where we want a hard ban on use in public places but the Council is seeking exemptions for law enforcement; and on the use of some of the high-risk applications, where the Council wants more leeway for law enforcement, we in Parliament are seeking a more stringent approach. In my opinion, this blockage in the discussions can be taken and negotiated as a whole, in one go.
The second contentious issue relates to foundation models. The original European Commission (EC) proposal and the Council’s opening stance on the AI Act did not deal with this, but in the EP we introduced a new regime into the text in order to impose stringent obligations on apex models of AI [such as ChatGPT and Bard]. This is creating another blockage in the discussions and will be addressed in this week’s trilogue.
The third blockage relates to governance and enforcement, where there has been quite an evolution in the text: we in the Parliament went further than the EC envisaged and than the Council anticipated, for example on the level of fines. This will also be discussed in this week’s trilogue.
A fourth and last issue, relating to the approach in Article 6 to how we deal with high-risk applications, is less contentious, since progress has been made so far in the trilogues and the institutions are not so far apart. The Council and Parliament both accept that there needs to be some form of qualification for the obligations imposed by Article 6. We have given a mandate to the technical teams to consider several options, and I believe there has been progress at a technical level on this issue, so possibly it could be agreed this week.
There are other issues at play, but these are the main ones as far as I see it.
How much unity is there behind the EP negotiating stance among the different parties?
It’s no secret that when you look at the voting pattern on the original mandate in the EP, there were divisions on the issue of exemptions for law enforcement within Article 5. Eventually the EP passed the mandate with a comfortable majority, but it was evident in the vote that the European People’s Party and other groups thought a prohibition on the use of AI for law enforcement was going too far. That is why, in the negotiations, we have to take into account all the views and achieve a compromise that will ultimately be capable of passing through Parliament.
Do you think there is room for finding compromise on all of the issues you have flagged?
There has to be. We have to find compromise if we want to close the file, and we do want to close the file. None of the institutions can afford to negotiate from red-line positions; there has to be give and take.
The idea of imposing stronger obligations on so-called foundation models has proved controversial. Could this disappear from the text following negotiations?
I don’t see any scenario in which it disappears. When the Council adopted its mandate late last year, ChatGPT had not really surfaced as a phenomenon, and so the focus was on the value chain and general-purpose AI. The Council recognises that there is no option but to deal with foundation models, which means we need to settle how. I don’t think the solution is decreasing the levels of responsibility imposed on foundation models so much as refining the scope of the definition. The Parliament wants to ensure that we are hitting the right note in what is caught by the scope of the drafting; we realise there needs to be better targeting of these rules. So if I venture a guess as to the direction of travel on foundation models, it is likely that there will be a further refinement of what actually falls under this provision.
Which models now included in the scope might no longer be?
An app like ChatGPT will certainly remain, because it is at the top of the food chain in terms of how powerful it is. Our intention is precisely that: to focus accountability on those powerful models, because of their capacity to be used for good and bad. However, the nomenclature surrounding foundation models is fluid and has evolved to encompass smaller foundation models. We need to find technical criteria that accurately classify which foundation models should come under the scope, and we are hard at work trying to determine these.
How many more trilogues after this week’s do you anticipate will be needed to find agreement?
I think that if we achieve all we want to this week, then we can reach final political agreement in one more trilogue, which could take place in mid-November, either just before or during the EP’s Strasbourg week (20–23 November).
Do you see the AI Act receiving a final vote in the EP this year?
I’m not sure about this year; much depends on how much work the lawyer-linguists will have to do once a final political agreement is achieved. It’s a very long and complicated piece of legislation, which might need two months of work. If we achieve political agreement by the end of November, I think the prospect of catching the December plenary for a final vote might be unrealistic, and we might need to wait for the January plenary next year. But that is merely a procedural issue; what matters is achieving the political agreement.