Futures: Up Late 2024 brought together the Centre for Sociodigital Futures (CenSoF), MyWorld and the Bristol Vision Institute (BVI) on the SS Great Britain. At the event we demonstrated a wide range of technologies, from Virtual Reality to real-time emotion detection, to stimulate an open conversation about how they might be used in the future. The emotion detection demo uses Artificial Intelligence (AI) and computer vision not only to identify faces but also, so it claims, to read your emotions. But how reliable is it? And what ethical issues arise when AI is used to detect our feelings?
The Myth – AI can detect our emotions accurately
It is increasingly claimed that AI can accurately understand emotions. At the event we used face-api.js, a free tool developed by Vincent Mühler and available on GitHub, which detects faces and analyses emotions in real time through a webcam. Many visitors were surprised to be labelled ‘happy’, ‘sad’ or ‘neutral’ even when they felt otherwise, which raised key questions about the accuracy of the system. The truth is that AI and computer vision systems like this rely on patterns in data, not actual understanding. Emotions are incredibly complex and personal, so these systems can often get it wrong.
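For readers curious how the demo worked, the sketch below shows the typical face-api.js pattern: load the face detector and expression models, then read expression scores from a webcam video element. This is a minimal sketch based on the library’s documented API; the model path, element id and polling interval are assumptions, not details of our exact setup.

```js
import * as faceapi from 'face-api.js';

// Load a lightweight face detector and the expression classifier
// (model files assumed to be served from /models).
await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
await faceapi.nets.faceExpressionNet.loadFromUri('/models');

const video = document.getElementById('webcam'); // assumed <video> element showing the webcam feed

async function readEmotions() {
  // Detect all faces in the current frame and attach per-emotion scores.
  const detections = await faceapi
    .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
    .withFaceExpressions();

  for (const { expressions } of detections) {
    // `expressions` maps labels to scores, e.g. { happy: 0.91, neutral: 0.06, ... };
    // picking the highest-scoring label gives a single on-screen caption.
    const [label, score] = Object.entries(expressions)
      .sort((a, b) => b[1] - a[1])[0];
    console.log(`Detected "${label}" (${(score * 100).toFixed(0)}%)`);
  }
}

setInterval(readEmotions, 500); // re-run a couple of times per second
```

Note that these scores are simply model confidences over a fixed set of categories; the library does not ‘know’ how anyone feels, which is exactly the gap the rest of this post explores.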
Bias in AI – The Accuracy Problem
The challenges of bias in AI systems are well known: any AI system is only as good as the data it was trained on. If that data does not reflect a wide range of faces and expressions, accuracy suffers. For some visitors the system did not reflect how they actually felt at the time, and these inaccurate readings point to a dataset that does not capture the diversity and complexity of human expression. This bias and inaccuracy can have serious consequences, especially in an enforcement context where a misread expression could lead to unfair treatment. Moreover, faces don’t always fully reflect how people are feeling, adding to the complexity.
Privacy Concerns – A double-edged sword
Emotion detection also raises significant privacy concerns. This technology could be used in public spaces such as airports or shopping centres. Would you feel comfortable knowing AI is analysing your emotions? Many visitors at the event asked exactly this, along with how their personal data was being captured and whether it was being stored. While there are potential benefits, like improved security, many visitors were uneasy about the idea of being monitored without consent. As this technology develops, it is important to consider whether emotion detection belongs in public places and how we can protect people’s privacy.
Where Can AI Help or Harm?
Despite the clear issues raised, there could be real opportunities. A visitor working in the medical profession suggested it could be used in healthcare, where monitoring changes in a patient’s emotional state might support mental health treatment, and a parent suggested it could help children with alexithymia, who have trouble identifying their own emotions. Others we spoke to at the event were more concerned by the potential risks, especially given the mistakes the system was making on the night. These concerns also raised questions about whether such systems can truly account for the complexity of human emotions, including multiple or conflicting feelings.
Towards Ethical Emotion Detection
We asked visitors what would make them feel more comfortable with the use of this technology. A clear recurring message was transparency about what data is collected and how it is used, with clear and informed consent listed as a high priority. The other aspect raised often, particularly by those the system labelled with an “incorrect” emotion, was making sure the system works accurately every time. Technologically this is challenging because human emotional expression is complex and subtle. Not only is it critical that these systems are trained on diverse datasets, but facial images alone may not be enough: a multimodal approach, integrating information from other sensors, could improve accuracy, as sketched below. However, the more personal information is captured to improve accuracy, the more important it becomes to protect all of that biometric data. This feedback also raises broader questions about whether such systems can, or should, be adapted to meet these expectations, or whether there are limits to what they can and should achieve.
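To make the multimodal idea concrete, here is a hypothetical sketch of the simplest approach, late fusion: per-emotion scores from the face (for example, face-api.js expressions) are averaged with scores from a second, assumed sensor such as voice analysis. Everything here, including the second modality and the weighting, is illustrative rather than a description of any deployed system.

```js
// Illustrative only: naive late fusion of two per-emotion score sources.
// `faceScores` could come from face-api.js expressions; `voiceScores` is a
// hypothetical second modality reporting the same labels.
function fuseScores(faceScores, voiceScores, faceWeight = 0.6) {
  const fused = {};
  for (const label of Object.keys(faceScores)) {
    const face = faceScores[label] ?? 0;
    const voice = voiceScores[label] ?? 0;
    fused[label] = faceWeight * face + (1 - faceWeight) * voice;
  }
  return fused;
}

// When the two modalities disagree, the fused result is less certain,
// which is arguably more honest than a confident single-sensor label.
const fused = fuseScores(
  { happy: 0.80, neutral: 0.15, sad: 0.05 },
  { happy: 0.20, neutral: 0.30, sad: 0.50 }
);
console.log(fused); // { happy: 0.56, neutral: 0.21, sad: 0.23 }
```

Of course, every extra sensor means more biometric data to capture and protect, which is exactly the trade-off visitors flagged.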
Challenging the claim
A computer might claim to know how you feel, but given the complexity of human emotions, the technology currently can’t fully understand them or get it right every time. So we must challenge this claim made for AI and computer vision emotion detection and ask questions about its future use. At Futures: Up Late we started some important conversations about shaping the use of these technologies responsibly, including considering whether, in some cases, they should be used at all.
But what do you think: should AI and computer vision be used to detect emotions in public spaces? If so, what safeguards would you want implemented? Please comment!
For more information on the work of the Centre for Sociodigital Futures, join our mailing list, follow us on X and LinkedIn or visit our webpage.