2 November, 2016

Stanford Collaboration

In early 2016, we contributed to a research paper from Stanford University titled “Conversational Agents and Mental Health: Theory-Informed Assessment of Language and Affect”. The paper is available for download from Stanford’s archive. It was accepted for presentation at HAI ’16, the Fourth International Conference on Human Agent Interaction, held in Singapore in early October 2016.

The study looked at how humans respond emotionally when they chat with artificial bots. When humans chat with other humans, there is a strong tendency for them to mirror each other’s positive sentiments. For example, if one person says something positive (“you look nice today”), they are more likely to get a positive reply back (“thanks, you too”). The study found this happened 84% of the time. However, humans mirrored negative statements only 22% of the time.

The study then analysed whether the same effect occurred when a human chatted with an artificial conversational agent like Cleverbot. It found that sentiment mirroring still takes place, with 75% of positive sentiments mirrored, but a whopping 41% of negative ones. In other words, humans are far more likely to swear back at bots than at other humans!

Mirroring results:

                  Positive sentiment   Negative sentiment
    Human-human   84%                  22%
    Human-bot     75%                  41%
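To make the mirroring figures concrete, here is a minimal Python sketch (not from the paper) of how such rates could be computed once every turn in a conversation has been labelled with a sentiment. The mirroring_rates function and the toy labels are hypothetical, for illustration only.

    def mirroring_rates(labels):
        """Return (positive, negative) mirroring rates for one conversation.

        labels: sentiment labels for consecutive turns, each one of
        "positive", "negative", or "neutral".
        """
        pos_total = pos_mirrored = 0
        neg_total = neg_mirrored = 0
        # Pair each turn with the reply that follows it.
        for current, reply in zip(labels, labels[1:]):
            if current == "positive":
                pos_total += 1
                pos_mirrored += reply == "positive"
            elif current == "negative":
                neg_total += 1
                neg_mirrored += reply == "negative"
        return (
            pos_mirrored / pos_total if pos_total else 0.0,
            neg_mirrored / neg_total if neg_total else 0.0,
        )

    # Toy conversation: one positive turn mirrored, one ignored,
    # one negative turn mirrored.
    labels = ["positive", "positive", "neutral", "negative", "negative"]
    print(mirroring_rates(labels))  # -> (0.5, 1.0)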

Our involvement in this project was to analyse the conversations and extract the emotions expressed in them. This was one of the first uses of a very early version of our Emowatch technology, which detects emotions in text. In this case the emotion labels were essentially used for sentiment analysis. We were very happy to be part of this collaboration.
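As a rough illustration of using emotion labels for sentiment analysis, detected emotions can be collapsed into sentiment polarity. The emotion set and mapping below are illustrative assumptions, not Emowatch’s actual output.

    # Hypothetical mapping from emotion labels to sentiment polarity.
    EMOTION_TO_SENTIMENT = {
        "joy": "positive",
        "affection": "positive",
        "anger": "negative",
        "sadness": "negative",
        "fear": "negative",
    }

    def to_sentiment(emotion):
        """Collapse an emotion label to positive/negative/neutral."""
        return EMOTION_TO_SENTIMENT.get(emotion, "neutral")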

This work is important as it takes a step towards enabling the use of conversational agents in mental health settings.