

My research

My research focuses on understanding human interaction and communication: I study human-human interaction and implement the observed human behaviour in a virtual character.

As a postdoctoral researcher I work on virtual characters as a whole. I aim to combine software made or used at HMI into a full multimodal virtual character that can easily be used and modified to suit the needs of the user. This means I focus on the interaction between all components in the system, such as how to deal with incremental speech recognition, how detected human behaviour affects the mental state of the virtual character, and how its intentions can be translated into actual behaviour to perform.
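To give a flavour of what this component interaction looks like, here is a minimal sketch in Python. All class and method names are invented for illustration and are not the actual HMI software; the point is that incremental speech recognition revises earlier hypotheses rather than appending to them, detected behaviour nudges the mental state, and abstract intentions are mapped to concrete behaviour.

```python
from dataclasses import dataclass


@dataclass
class MentalState:
    # Hypothetical state variables; a real agent tracks far more.
    arousal: float = 0.0
    partial_utterance: str = ""


class VirtualCharacter:
    def __init__(self) -> None:
        self.state = MentalState()

    def on_partial_speech(self, hypothesis: str) -> None:
        # Incremental speech recognition: each partial hypothesis may
        # revise the previous one, so we overwrite rather than append.
        self.state.partial_utterance = hypothesis

    def on_user_behaviour(self, behaviour: str) -> None:
        # Detected non-verbal behaviour affects the mental state.
        if behaviour == "smile":
            self.state.arousal += 0.1

    def intention_to_behaviour(self, intention: str) -> str:
        # Translate an abstract intention into behaviour to perform.
        mapping = {"greet": "wave_and_say_hello", "listen": "nod"}
        return mapping.get(intention, "idle")
```

This is of course a toy: the real challenge is that these updates arrive asynchronously from several recognisers at once, and the character must stay coherent while its input keeps being revised.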


During my PhD I worked on the SEMAINE project. This project aimed to build a virtual character (called an agent) that is capable of active listening behaviour. The agent should be able to interact with a human, try to sustain the conversation, and react appropriately to the non-verbal behaviour of the user. For example, it should request the turn (the 'right' to talk) before it starts speaking, and it should give a backchannel signal when asked for one.

Being a good listener isn't just being silent and letting the other person do the talking. First of all, the speaker has to know that he has your attention, and simply gazing at the speaker is often enough to show this. But besides attention, understanding is needed as well. Without understanding, the speaker might as well talk to his dog: the dog might pay attention to his voice but doesn't understand a word he is saying. To show understanding, a listener can nod in agreement, show fitting facial expressions and sometimes even complete the speaker's sentences or suggest suitable words.

Now the listener is actively listening to the speaker. To keep the speaker talking, the listener should also encourage the speaker to continue. For example, by giving small signals such as 'uhuh' (called backchannel signals), the listener shows the speaker that he is still with him and following his story, which motivates the speaker to continue. Another option is to speak encouraging words, for example 'Wow, that's nice' or 'Tell me more about ...'.

I was responsible for the dialogue management of the agent. This piece of software receives the behaviour of the user as input. Then, based on what the user does, it has to decide on some rational behaviour to perform and sends this behaviour to the virtual character to be executed.
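The input→decide→output cycle above can be sketched as a single decision function. This is a deliberately simplified illustration, not the actual SEMAINE dialogue manager: the behaviour labels and the dictionary-based input are invented, and the real system reasoned over much richer state.

```python
def dialogue_manager(user_behaviour: dict) -> str:
    """Map observed user behaviour to behaviour for the agent to perform."""
    if user_behaviour.get("is_speaking"):
        # While the user speaks, stay in the listener role.
        if user_behaviour.get("pause_detected"):
            # A pause often invites a backchannel signal ('uhuh', a nod).
            return "backchannel"
        # Otherwise keep showing attention, e.g. by gazing at the user.
        return "attend"
    # The user stopped speaking: request the turn and respond.
    return "take_turn_and_speak"
```

Even this toy version shows the core difficulty: the manager must commit to a reaction in real time, before it can be sure the user has actually finished speaking.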

As part of this work, I focused on two things. First, the project defined four different virtual characters, each with a different emotional state. My goal was to give each character a turn-taking style that matched its emotional state. My second focus was the response selection of the characters. Each character had a limited set of utterances to say to the user, and based on the detected human behaviour an appropriate response had to be selected.
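A hedged sketch of what such per-character response selection might look like: each character owns a small utterance set, and the detected user state picks among them. The character names, states and utterances below are invented for illustration and do not reproduce the actual SEMAINE characters or their scripts.

```python
import random

# Hypothetical utterance sets, keyed by character and detected user state.
UTTERANCES = {
    "cheerful": {"positive": ["Wow, that's nice!"], "negative": ["Cheer up!"]},
    "gloomy":   {"positive": ["If you say so..."], "negative": ["How dreadful."]},
}


def select_response(character: str, detected_user_state: str) -> str:
    # Pick an utterance that fits both the character's emotional style
    # and the user's detected state; fall back to a neutral prompt.
    options = UTTERANCES[character].get(detected_user_state, ["Tell me more."])
    return random.choice(options)
```

The interesting design question, which this sketch glosses over, is what to do when no utterance in the limited set fits the detected behaviour well.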