Ethics in Affective Computing

Now out: Special Issue on Ethics in Affective Computing

TL;DR: The IEEE Transactions on Affective Computing’s Special Issue on Ethics in Affective Computing is now out!

The Affective Computing community is very close to my heart, among other things because of its multidisciplinary audience, the quality of the papers, and the diversity of the keynotes. Crucially, it is a very open community where tough but respectful discussions are held freely by attendees without fear of being ridiculed. I clearly remember how, in Memphis, Arvid Kappas warned about the impact of the social setting on research results, and on multiple occasions the limited value of the six basic emotions was openly discussed.

More than once the entire conference got involved in discussions about the difference between measuring ‘felt’ emotions and replicating apparent emotions, as interpreted by an observer. Both are valuable, but they are very, very different. Rosalind Picard (Roz) regularly and rightly warned about the claims some papers make and the need to be precise about this: sensational paper titles get picked up by an uninformed audience in the media and general population, who will take titles like ‘fully automatic facial action analysis’ quite literally (yes - that was one of my paper titles back in the day - I’ve learned since then not to do that).

In 2019 the conversation moved seriously to the issue of ethics in affective computing. Data privacy was definitely a part of that, but Roz was rightly keen to broaden the discussion well beyond it. Over dinner, Jonathan Gratch, Gretchen Greene, and I discussed topics including GDPR, the DPA, HIPAA, and the need to cite ethics board approval in our papers, but also our responsibilities as experts in the field to be clear about our claims and the limitations of our work, and to inform regulators and policymakers about AI when it is applied to such a fundamental human concept as emotion.

The day after, Roz gave a presentation on the topic and we had our first plenary discussion on ethics in affective computing. As a great example of both what not to do and what to do, Roz showed a clip of David Hanson showing off Sophia, his social robot. The presenter says, “She’s basically alive”, to which Hanson responds: “Eh, yeah, yeah, she is basically alive”. This is of course not true, and an example of misinforming the general public. I strongly believe that it is my duty, and that of other leaders in the field, to properly inform the general public, key opinion leaders, and policymakers, using research output, data, and our knowledge and experience.

The great thing at ACII 2019 in Cambridge was that the vast majority of attendees were already thinking about these issues, and we had a really fruitful discussion. This included a follow-up meeting with the editors of the IEEE Transactions on Affective Computing (TAC) about whether we should require authors to declare that their data had been obtained ethically and with the approval of an ethics board. This was ultimately not adopted as policy because of concerns that it would rule out a whole section of researchers who, because of their geographic location, do not operate within a strict ethical framework. I think this was a missed opportunity.

Another decision that a small group made at the time was to organise a special issue of TAC on Ethics in Affective Computing, and I’m really pleased that it has now finally been published! It has taken far too long, but we have seven beautiful papers. The articles cover a range of ethical concerns. Some authors address fundamental issues that would impact any application, while others deeply consider the issues unique to a specific domain, like education. Finally, authors consider a variety of perspectives on how to avoid abuse and promote well-being.

Privacy is a central concern in affective computing and the focus of several of the articles. People freely express their emotions to facilitate communication and coordination with others (even machines), yet they often regulate these expressions to hide their true feelings (e.g., to avoid hurting others' feelings or to protect against potential exploitation). To the extent that machines could “see” through these regulatory attempts, they might have a superhuman ability to predict an individual's goals, values, or action tendencies. Four articles grapple with these privacy concerns.

Whereas the first two articles emphasise the privacy concerns themselves, another proposes a concrete solution in which the primary raw input data (e.g., video) is separated from the secondary outputs of basic machine learning models (e.g., facial points or predicted facial muscle actions). This approach was taken a step further by BLUESKEYE AI, which splits the secondary output into a behaviour primitive stage and a behaviour insight stage, making it easier to avoid bias and lowering data requirements. There are obvious ethical benefits to using secondary data: it does not contain readily identifiable face and voice data and can thus be used for de-identified affective computing. The authors use a spatio-temporal graph representation of the secondary outputs and show how aspects of network science can be used to interpret the affective data, for example by considering the centrality of a graph node that represents a particular facial muscle.
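To make the idea concrete, here is a minimal, hypothetical sketch of that style of analysis, not the pipeline from the paper or from BLUESKEYE AI: it builds a graph over secondary outputs (predicted facial action unit activations) and asks which node is most central. The AU names, the co-activation threshold, and the toy data are placeholders, and for brevity only the spatial co-activation slice of a spatio-temporal graph is shown.

```python
# Hypothetical sketch: interpret de-identified secondary outputs (AU activations)
# via a graph and node centrality, without ever touching the raw face video.
import itertools
import networkx as nx
import numpy as np

# Toy secondary outputs: per-frame AU activation probabilities (T frames x N AUs).
rng = np.random.default_rng(0)
au_names = ["AU01", "AU04", "AU06", "AU12", "AU15"]   # illustrative action units
activations = rng.random((100, len(au_names)))        # stand-in for model outputs

G = nx.Graph()
G.add_nodes_from(au_names)

# Connect two AUs when they co-activate, weighted by how often both exceed a
# (hypothetical) activation threshold across the recording.
threshold = 0.5
for a, b in itertools.combinations(range(len(au_names)), 2):
    co_activation = np.mean(
        (activations[:, a] > threshold) & (activations[:, b] > threshold)
    )
    if co_activation > 0:
        G.add_edge(au_names[a], au_names[b], weight=float(co_activation))

# Which facial muscle action is most "connected" in this recording?
centrality = nx.degree_centrality(G)            # unweighted connectivity
strength = dict(G.degree(weight="weight"))      # weighted co-activation strength
print(sorted(strength.items(), key=lambda kv: -kv[1]))
```

On real data the graph would be sparser and would also carry temporal edges between successive frames or segments; the point of the sketch is only that such derived, non-identifiable representations can still be interrogated with standard network-science tools.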

One paper examines educational applications, one of the areas now banned in the EU under its AI Act. Recent advances in affective computing have made it possible to detect student boredom or frustration and guide targeted instruction, but less consideration has gone into the ethical concerns related to detecting and reporting on learners’ emotions specifically within applied educational contexts. I think it’s a shame that the EU went for an outright ban on Emotion Recognition Systems in educational institutions (other than for health and safety reasons), as research such as this shows that there’s an opportunity to do social good ethically and safely.

Emotion AI, the popular term for Affective Computing, sometimes gets a bad rap. Some people seem to think that there’s no real science being conducted in the field. My point in writing this article is to show, yet again, that great, thoughtful, inclusive research is being done in this field, and more often than not it is done with real consideration of the ethics involved.

This ethical AI approach has continued ever since. A case in point, from ACII 2023 in the other Cambridge, is this little gem of a paper by Hatice Gunes and her team on assuring fairness in dyadic social robots for mental health therapy despite having only small datasets.

Ethical AI is not just something bandied around for profit-seeking purposes by companies in this space. It is lived by the entire Affective Computing ecosystem, and it will continue to be.
