Anything we see, hear, read, smell, or physically feel in daily life stimulates our senses. Within research, this sensory input is controlled and represented through specifically created stimuli. These so-called emotional stimuli (for instance images, video clips, words, or audio clips) are mostly used for emotion induction in experiments, for training artificial intelligence, or in therapeutic interventions. 

As my research project focuses on emotional stimuli, I had to obtain an overview of all existing stimulus sets before conducting any experiment myself, in order to compare the various types and to understand how they are developed and used. This meant reading hundreds of publications and looking at many thousands of individual emotional stimuli.  

It is fascinating to trace the development of this field over time and to understand the specific research questions that constantly lead to newer or ‘better’ stimuli in the research sense. Stimuli used to train artificial intelligence, in particular, have gained extensive attention and have developed rapidly over the past two decades. Training artificial intelligence in this context means ‘feeding’ a machine with stimuli along with their emotional labels; for example, images of facial emotion expressions or video clips of body movement. Based on the learned central characteristics of the image or video clip (for example, the position of the eyebrows forming a frown, or the speed of movement), an algorithm is created – the rule by which the machine will later recognize specific emotions in images or video clips that it has not ‘seen’ before. 
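To make the idea concrete, here is a minimal sketch of label-based learning. The feature names, numbers, and labels are invented for illustration; a real system would extract far richer features from images or video, but the principle is the same: learn from labeled examples, then classify unseen ones (here with a simple nearest-neighbour rule rather than any specific published algorithm).

```python
import math

# Toy labeled "stimuli": each sample is a hypothetical feature vector
# extracted from an image or clip (eyebrow angle, mouth curvature,
# movement speed), paired with its emotion label.
training_set = [
    ((-0.8, -0.6, 0.2), "anger"),    # lowered brows, downturned mouth
    ((-0.7, -0.5, 0.3), "anger"),
    (( 0.1,  0.9, 0.5), "joy"),      # raised brows, upturned mouth
    (( 0.2,  0.8, 0.6), "joy"),
    (( 0.0,  0.0, 0.1), "neutral"),
    (( 0.1, -0.1, 0.0), "neutral"),
]

def predict(sample, k=3):
    """Label an unseen sample by majority vote of its k nearest neighbours."""
    nearest = sorted(training_set,
                     key=lambda pair: math.dist(pair[0], sample))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# An unseen "frowning" feature vector is classified from the learned examples.
print(predict((-0.75, -0.55, 0.25)))  # → anger
```

The "rule" the paragraph describes corresponds here to the distance-and-vote logic in `predict`: the machine never stores an explicit definition of anger, only regularities in the labeled examples.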

In this way, security services can aim to detect potential violence or aggression before it even happens, based solely on body movement or facial expression; companies can keep customers in a good mood to enhance sales or increase the chances of securing a contract; and the system in a car can suggest that the driver take a break.  

What may sound futuristic to technophobes may sound extraordinary and exciting to technophiles. As a matter of fact, automatic emotion detection through artificial intelligence is being used on and around us, mostly without our being aware of it. Well-known examples are analyses of the music we listen to, the video clips we watch from streaming sources, and the comments we write on social media. Less well-known examples are combined emotion analyses of temperature, pulse, and movement data collected by wearables such as smartwatches, or the automatic indication of a caller’s mood to call-centre employees based on voice parameters (within healthcare, for instance, to indicate the caller’s suicidality). 

The more I read about this field of research, the more I started thinking about the role of research and the potential danger certain research results imply. Irrespective of my own research field, which had set the ball rolling, question after question arose. While the benefit of accessing this data, and hence the gain for companies, is obvious, I wonder how great the benefit is for the individual. Are emotions not our own, and something very personal? Many external factors can influence the assumed emotion in the machine-training example given above. To name a few: ethnicity, age, culture, language, social background, and religion, as well as aesthetic and historical changes that come with time.   

So, what if machines get trained with “wrong” information?  

What if an error rate can lead to fatal consequences? 

What if this scientifically gained knowledge is used to manipulate or harm people? (Emotional stimuli are for instance also used for the creation of advertisements). 

What if? 

One main aspect that kept returning to my mind was the researcher’s responsibility. Deciding to follow the path of research, for example by starting a PhD, automatically means carrying responsibility, no matter how insignificant we think our findings may be. Research is the foundation of knowledge development; however, a knife can be a tool as well as a weapon. Hence, we researchers must be aware of our responsibility. It means that we are free to act within the scientific ethos. We must critically scrutinise existing findings and publicise justified concerns, but we must also take research decisions within the boundaries and limits of responsible research. If we do not respect these limits, we may be opening Pandora’s box.  

Kathrin was funded through the Manchester Metropolitan Graduate School’s Research Support Award to complete a portion of her research. The next deadline for Research Support Award applications is Friday 4 June 2021 at 5pm. Find out more by visiting the PGR Development Moodle area. 
