home

Hi, welcome to my homepage.

Here is my email address: maurizio.mancini [at] unige [dot] it

You can download my academic CV here.

I received my PhD in Computer Science in 2008 from the University of Paris 8, under the supervision of Prof. Catherine Pelachaud. From 2008 to 2015, I was a post-doc researcher at the Department of Informatics, Bioengineering, Robotics, and Systems Engineering at the University of Genoa (Italy). Since 2016, I have been an Assistant Professor in Computer Engineering at the same university.

My research interests include: Human-Computer Interaction, Social/Affective Computing, Embodied Conversational Agents, expressive movement analysis/synthesis, nonverbal behavior modeling, and multimodal systems and interfaces. I have organized many conferences and workshops in these areas, such as the International Conference on Intelligent Virtual Agents (IVA) and the International Conference on Movement and Computing (MOCO).

I have been one of the main developers of the ECA system Greta, and I am one of the main contributors to the EyesWeb XMI research platform.

If you want to take a look at my publications, you can navigate this website or visit Google Scholar. The PDF of my dissertation, titled “Multimodal distinctive behavior for expressive embodied conversational agents,” is freely available for academic use here.

I have worked on the EU projects listed below; for each, I briefly describe the main tasks I was directly involved in:

  • TELMI – (ICT RIA, 2016-2018): I defined and evaluated models and algorithms for the analysis of full-body movements of a violin player (e.g., balance, body-rocking) and indicators (e.g., dynamic/static movement detection) that could be provided as feedback for injury-free music training
  • DANCE – (ICT RIA, 2015-2017): I investigated how affective and relational qualities of body movement can be translated to sound to be represented and communicated through the auditory channel
  • ILHAIRE – (ICT FET-Open, 2011-2013): I developed computational models of laughter detection and synthesis in HCI for future multimodal systems
  • SIEMPRE – (ICT FET-Open, 2010-2012): I conducted research on novel theoretical and methodological frameworks, computational models, and algorithms for the analysis of creative communication within groups of people. I defined techniques for measuring creative social interaction in an ecological framework
  • SAME – (ICT CP, 2008-2010): I designed and implemented mobile systems for social active music listening, allowing users to interact as a group using mobile devices (e.g., smartphones) and gestures to mould a music piece in real-time
  • CALLAS – (IST IP, 2006-2010): I implemented a computational system for defining distinctive Embodied Conversational Agents, that is, virtual agents exhibiting individualized behavior in terms of their communicative and emotional expression abilities
  • HUMAINE – (IST NoE, 2004-2007): I conducted my entire PhD research activity in this project, studying models of expressive movement synthesis for Embodied Conversational Agents and applying them to several concrete scenarios
  • MagiCster – (IST CSC, 2000-2004): for my Master's thesis, I designed and implemented a believable conversational agent, able to use gesture and body posture as well as synchronized speech to communicate with a human user