What does the EU AI Act mean for Affective Computing and Emotion AI?

Stormy days ahead in the EU for Affective Computing innovators. Image generated by Bing.

The EU AI Act will heavily affect innovation and commercialisation of affective computing in the EU. Despite talk of measures to support innovation, the Act will seriously stifle innovation in this area, hitting SMEs and startups particularly hard. The Act has been adopted by the EU and is now being implemented by its member states, which means that very soon providers of Affective Computing and Emotion AI systems will have to comply with its stipulations in the EU. The Act defines prohibited, high-risk, and low-risk AI systems, with pretty onerous obligations for providers of high-risk systems and relatively few obligations for low-risk systems.

The AI Act is very comprehensive. The definition of AI is very broad, covering not only machine learning systems but also expert systems and indeed any system that uses statistics to make a prediction. Interestingly, and of particular relevance to you, reader, it singles out two areas that matter greatly to practitioners of Affective Computing. The first is biometric identification, i.e. the recognition of a natural person based on data from their body, including the face and the voice. This is unsurprising given the EU’s long history of protecting individual rights and shielding individuals from the state. The second, a bit more surprisingly, is emotion recognition: emotion recognition systems are singled out and mentioned frequently throughout the document (10 times in the AI Act and twice in the annexes; ‘gender’, by comparison, is mentioned only once).

It is significant that emotion recognition is singled out by the AI Act. In particular, the Act prohibits (in Title II) the use of AI systems to infer the emotions of a natural person in the workplace and in education institutions, except in cases where the AI system is intended to be used for medical or safety reasons. Moreover, ANY emotion recognition system will be high risk.

An emotion recognition system is defined as an “AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data”. Biometric data, in turn, is defined as “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data”. Note that an earlier draft of the Act added that biometric data had to “allow or confirm the unique identification of that natural person”, which is the usual definition of biometric data. In the latest (final?) version of the Act, biometric data is no longer ‘biometric’ in the usual sense, in that it no longer has to be data that can be used to identify a natural person. I see why this was done, but it will be very confusing for practitioners.

Now, that being said, if you’re an affective computing practitioner, you may take a slightly different view of what an emotion recognition system is. The definition in the Act is:

The notion of emotion recognition system for the purposes of this Regulation should be defined as an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data. This refers to emotions or intentions such as happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction and amusement. It does not include physical states, such as pain or fatigue, including, for example, systems used in detecting the state of fatigue of professional pilots or drivers for the purpose of preventing accidents. It does also not include the mere detection of readily apparent expressions, gestures or movements, unless they are used for identifying or inferring emotions. These expressions can be basic facial expressions such as a frown or a smile, or gestures such as the movement of hands, arms or head, or characteristics of a person’s voice, for example a raised voice or whispering.

Whenever you need this much text to define a legal concept, and you feel the need to include examples of things that are included or excluded because otherwise you worry your definition isn’t clear, you know you’re in trouble.

Excluding physical states appears unproblematic, until you realise that emotion is itself a physical state. And excluding what we call Behaviour Primitives, such as frowns or smiles, unless they are used to infer emotions, just creates more loopholes. At any rate, not every affective computing system will be an emotion recognition system, and you will have to apply some judgment to determine whether yours is one or not.
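For illustration, here is one way to turn that judgment into an explicit checklist. This is a sketch of my reading of the definition, not legal advice, and all the names are my own invention rather than terms from the Act.

```python
# Illustrative checklist only: not legal advice. All names are my own and do
# not appear in the Act.
from dataclasses import dataclass


@dataclass
class SystemProfile:
    uses_biometric_data: bool            # face images, voice, gait, physiology, ...
    infers_emotions_or_intentions: bool  # happiness, anger, contempt, intent, ...
    only_physical_states: bool           # e.g. detects only pain or fatigue
    only_raw_expressions: bool           # e.g. reports smiles/frowns/AUs without mapping to emotions


def is_emotion_recognition_system(p: SystemProfile) -> bool:
    """Rough reading of the definition: biometric input plus emotion/intention
    inference, excluding purely physical states and the mere detection of
    readily apparent expressions."""
    if not p.uses_biometric_data:
        return False
    if p.only_physical_states or p.only_raw_expressions:
        return False
    return p.infers_emotions_or_intentions


# Example: an SDK that only outputs facial action units stays out of scope,
# until a downstream module maps those action units to emotions.
au_only = SystemProfile(True, False, False, True)
affect_from_aus = SystemProfile(True, True, False, False)
print(is_emotion_recognition_system(au_only))          # False
print(is_emotion_recognition_system(affect_from_aus))  # True
```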

Ultimately, I think what this update to the EU AI Act is trying to do is to protect people from discriminatory outcomes of emotion recognition systems in the workplace and in education. In my reading, emotion recognition system applications that are allowed include:

  • Recognising when a driver is emotionally distracted and taking a safety measure based on that - regardless of whether this is a private driver or an employee

  • Recognising a medical condition in a driver or vehicle occupant using an emotion recognition system

  • Detecting pain, fatigue, or depression in a remote operator of automated vehicles

What I think would not be allowed is to:

  • Train a doctor to be more empathetic or have better bedside manners using emotion recognition systems

  • Assess the performance of an employee using an emotion recognition system

  • Assess the performance of a job candidate using an emotion recognition system

Again, the examples above still need to comply with GDPR and other relevant regulations and laws. From my reading of the objections to emotion recognition systems (definition 26c), I have an inkling that prohibiting the use of emotion recognition systems to train your workforce is an unintended consequence - time will tell.

So, ALL affective computing systems now come with specific transparency obligations (set out in Title IV), which basically means that any system that uses affective computing must inform the user that it does so. You cannot put an emotion recognition system in a product and not tell the user that it’s there. And THE VAST MAJORITY of affective computing systems will be high risk.
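To make that concrete, here is a minimal sketch of how a product team might enforce the disclosure requirement in code. It is illustrative only, not a compliance recipe, and the class and function names are my own.

```python
# Sketch of the transparency idea: make sure the user has been told that
# emotion recognition is running before any inference happens. Illustrative
# only; class and function names are my own, not from the Act.

class EmotionInferenceGate:
    DISCLOSURE = (
        "This product analyses your face and/or voice to infer emotional "
        "states. Those inferences may be used to adapt your experience."
    )

    def __init__(self, infer_fn):
        self._infer_fn = infer_fn
        self._acknowledged = False

    def show_disclosure(self) -> str:
        # In a real product this would be a UI dialogue, on-device notice, etc.
        return self.DISCLOSURE

    def acknowledge(self) -> None:
        # Record that the user has been informed (and, where needed, consented).
        self._acknowledged = True

    def infer(self, frame):
        if not self._acknowledged:
            raise PermissionError("Emotion processing has not been disclosed to the user")
        return self._infer_fn(frame)


# Example with a stand-in inference function:
gate = EmotionInferenceGate(lambda frame: {"valence": 0.2, "arousal": -0.1})
print(gate.show_disclosure())
gate.acknowledge()
print(gate.infer(frame=None))
```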

I’m an academic researcher. Surely this doesn’t affect me?

The bad news for academic researchers is that the EU AI Act does appear to cover open-source software that they make available for other researchers to use, even if that is done for free. The crucial aspect here is that they have to comply if they are a provider, which is defined as "a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge". Slightly confusingly, "placing on the market" is a concept that comes from the doctrine of exhaustion of rights and is particularly relevant to IP rights.

The idea is that once goods have been "placed on the market" (in this case in the EEA) so that the purchaser is then free to deal with them how they like, the IP rights in the product are exhausted and cannot then be enforced against a later purchaser.  This is particularly important for parallel imports around the EU and is fundamental to the concept of freedom of movement of goods.

In the 1990s there were a number of cases, involving Levi's jeans sold in Tesco, perfume, and sunglasses, which confirmed these principles for trade marks (decided while the UK was still in the EU). In fact, most of the case law in the area relates to parallel imports of pharmaceutical products. It is the same principle that means you can resell your old iPhone without Apple suing you for patent infringement.

The position is slightly more complicated for software (and by extension AI systems) because there is no tangible object at all to which the IP rights belong. In UsedSoft v Oracle (Case C-128/11) the Court of Justice of the European Union held that granting an indefinite non-exclusive licence to software for a fee also amounted to placing on the market and therefore exhausted Oracle's rights in that copy of its software.

A number of US cases, including LifeScan Scotland v Shasta Technologies, hold that giving something away rather than selling it is not an argument for saying your IP rights should not be exhausted. The US also recognises that open source software can confer non-monetary benefits on the distributor.

This is just an opinion, and there does not appear to be settled law on this point in the EU. It may be clarified as the text for the AI Act is finalised in the next few months.

However, "placing on the market" would not include sharing between two academic research groups for research where the system is kept confidential. So collaborations between academic (groups) would remain possible, yet you would be liable to uphold the obligations of the AI Act if you make your source code/AI systems publicly available.

I might be wrong here, and if so, I would really welcome it if you wrote to me to explain why, so I can update this article.

To an extent, it makes sense that open source systems are included. You may not intend to cause harm, but if you are a researcher who makes, say, an emotion recognition system to help people practice for a job interview, and the general public starts to actually use it for that purpose, then you expose the general public to much the same risk as a commercial provider would. This may be a problem if you are seeking to create impact with your research - doing so will make you a provider, and providers must comply with a lot of onerous obligations.

Fine, I’ll tell the user I’m processing their emotions. Anything else?

How an AI system is dealt with and what it must comply with depends on its categorisation into prohibited, high risk, and permitted low-risk applications.

There is actually not that much that is prohibited, but crucially emotion recognition in the workplace or education is not allowed unless it’s done for medical or safety purposes. You can read it in full in Title II of the act, but basically you are not allowed to:

  • Infer emotions of a natural person in the areas of workplace and education institutions except in cases where the use of the AI system is intended to be put in place or into the market for medical or safety reasons

  • Use subliminal techniques beyond a person’s consciousness to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;

  • Exploit any of the vulnerabilities of a specific group of persons due to their age, physical or mental health, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;

  • Evaluate or classify/regress the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following:

    1. detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;

    2. detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;

  • Do ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement - unless the authorities have a warrant or a very good reason.

Note that you can still do remote real-time biometric identification for reasons other than law enforcement, e.g. to maintain security in a private facility, although there are many other laws that put restrictions on that too. Also note that there are carve-outs in the Act for military use covering just about every use of AI.

So, most affective computing systems are not prohibited, unless they are to be used in the workplace or an educational setting. I have seen posts on the Internet claiming that certain things are prohibited in ways that do not coincide with my understanding of the Act. For example, I read that monitoring workers in cars for signs of emotional distraction would be prohibited, but this would only be the case if it were done to exploit specific groups of workers based on vulnerabilities of that group, and it would be allowed as long as it is done for safety or medical purposes. It might also be restricted by other laws, of course - GDPR and Encap 2030 spring to mind.

That said, all emotion recognition systems are considered high risk regardless of the application area. One area relevant to affective computing that you might consider to be low risk is entertainment; however, it is still classed as high risk.

Even if you have determined that your Affective Computing system is not an Emotion Recognition System as defined by the Act, the AI Act cleverly defines as high risk any system that is a safety component, AND any system whose application is required to “undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II” - and that is a long list! It includes no fewer than 19 regulations and directives, ranging from medical devices to food processing, transportation, labour protection, animal welfare, and more. It’s safest to assume your application is high risk; if it’s not, you’re lucky.
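To summarise my reading of the risk tiers so far, here is a rough decision sketch. It is illustrative rather than authoritative, the enum and parameter names are my own, and edge cases will need a lawyer rather than a function.

```python
# A rough decision sketch of the risk tiers as I read them above. Not legal
# advice; the enum and parameter names are my own.
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high risk"
    LOWER_RISK = "lower risk (transparency obligations may still apply)"


def classify(
    is_emotion_recognition: bool,
    used_in_workplace_or_education: bool,
    medical_or_safety_purpose: bool,
    is_safety_component: bool,
    needs_annex_ii_conformity_assessment: bool,
) -> RiskTier:
    # Prohibited: inferring emotions at work or in education, unless the
    # system is put on the market for medical or safety reasons.
    if (is_emotion_recognition and used_in_workplace_or_education
            and not medical_or_safety_purpose):
        return RiskTier.PROHIBITED
    # High risk: any emotion recognition system, any safety component, and
    # anything already subject to Annex II third-party conformity assessment.
    if (is_emotion_recognition or is_safety_component
            or needs_annex_ii_conformity_assessment):
        return RiskTier.HIGH_RISK
    return RiskTier.LOWER_RISK


# Example: in-cab driver monitoring for safety purposes is allowed, but high risk.
print(classify(True, True, True, True, False))  # RiskTier.HIGH_RISK
```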

Ok, so my system is probably high risk. What do I have to do?

A lot, assuming you’re a ‘provider’. Title III, Chapters 3, 4, and 5 detail it all, running to more than 22 pages (the whole Act is about 90 pages, many of which are just definitions). The list is far too long to discuss comprehensively here, but some of the most important obligations include:

  • You must define an intended use

  • You must maintain a formal risk management system that comprises all stages of the system’s lifecycle and implement relevant testing at each stage

  • You must do post-market surveillance of risk and effectiveness

  • The system must be judged safe for its intended use

  • Formal data governance must be in place, including documentation of assumptions made, data collection processes, potential bias analysis, etc.

  • Training, validation and testing data sets shall be relevant, representative, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used.

  • Technical documentation must be in place before a system is put on the market

  • Automatic logging of e.g. faults, dangerous situations, and adverse events must be implemented (a minimal sketch follows this list)

  • The system must be sufficiently transparent to enable users to interpret the system’s output and use it appropriately

  • Systems shall be designed and developed in such a way that they can be effectively overseen by natural persons during the period in which the AI system is in use, and for some critical systems two humans must provide their approval before the system can take an action

  • Systems must be demonstrably safe against interference, be it from cyberattacks or adversarial data attacks

  • Systems must be built under a Quality Management System (QMS)

  • Inform national competent authorities and/or notified bodies about the product on the market

  • You must obtain certificates of conformity where possible (these do not yet exist)
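As an example of what one of these obligations might look like in practice, here is a minimal sketch of the automatic-logging requirement mentioned in the list above, using only the Python standard library. The structure and field names are my own, and a real system would need far more: retention policies, tamper resistance, and traceability to specific model versions and data.

```python
# Minimal sketch of the automatic-logging obligation using only the standard
# library. The structure and field names are my own; a real system would need
# retention policies, tamper resistance, and traceability to model versions.
import json
import logging
import time

logger = logging.getLogger("emotion_ai.audit")
handler = logging.FileHandler("audit_log.jsonl")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)


def log_event(event_type: str, **context) -> None:
    """Append a structured, timestamped record for post-market surveillance."""
    record = {"ts": time.time(), "event": event_type, **context}
    logger.info(json.dumps(record))


# Example usage inside an inference loop:
try:
    prediction = {"valence": 0.3, "arousal": -0.1}  # stand-in for a model output
    log_event("inference", output=prediction, model_version="1.2.0")
except Exception as exc:  # capture faults and adverse events as well
    log_event("fault", error=str(exc))
    raise
```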

Let’s just say that you should have at least one person in your company whose sole, full-time responsibility is to look after these regulatory affairs, plus a few more people’s time to implement all the changes and do all the reporting, plus a full culture shift to ensure all your workers are on board with all of this.

If you’re a small company working in Emotion AI, this means you’re probably screwed (pardon the language). You probably won’t have the capability to build products according to the onerous obligations, so you will now need to raise serious capital to become compliant, and doing so will take a lot of time.

For my own company, Blueskeye AI, this is not a problem, as we have been going down the path of creating products for heavily regulated industries for a long time now, and in terms of culture, processes, data, and systems we have everything in place to be compliant. I predict that many companies who were interested in doing Emotion AI will now instead rely on the services provided by, e.g., Blueskeye to avoid having to go through the onerous regulatory processes themselves.

If you’re an affective computing researcher, I’m not entirely sure how the AI Act will affect you. It is unclear to me what obligations there are for people who make high-risk emotion recognition systems available to others without being a provider, which is what I think is the situation for most affective computing researchers. This is a gap that I expect will be clarified over the next year or so.

There are a small number of discrepancies and vague areas in the AI Act when it comes to academic research. I would welcome clarification of whether academic researchers count as providers, and of which elements of the AI Act non-providers should nevertheless adhere to. If you have any comments or additional clarity you can provide to our community, please comment on this post!

Acknowledgments

With thanks to Andrew Allan-Jones of DAC Beachcroft LLP for his explanation of placing on the market.

PS: Some important definitions

Below I have pasted some of the most relevant definitions to help your understanding.

An AI system is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data;

‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data;

‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data unless ancillary to another commercial service and strictly necessary for objective technical reasons;

‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified.

Written by BlueSkeye Founding CEO, Prof Michel Valstar

This article was updated on 15 February, following the AI Act’s approval by the Council of the EU’s Committee of Permanent Representatives and the EU’s Committee on Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs in early February.

See original post on LinkedIn HERE
