An Emotional Engine for Behavior Simulators
DOI: https://doi.org/10.17083/ijsg.v2i2.76

Keywords: Behavior Simulators, Fuzzy Logic, Unity3D, Serious Games

Abstract
Interpreting, modeling and representing emotions is a key feature of new-generation games. This paper describes the first version of the Emotional Engine we have developed as a component of more complex behavior simulators. The purpose of this module is to manage the state and behavior of the characters present in a scene while they interact with a human user. We use pre-existing language-recognition libraries, such as the Windows™ Speech API, together with Kinect™ devices, to enable communication between real humans and the artificial characters taking part in a virtual scene. The Emotional Engine operates on numeric variables extracted from these devices and computed through a natural-language interpretation process. It then produces numerical results that drive the characters' behavior, modify both their verbal and body language, and influence the overall evolution of the scene that unfolds inside the simulator. This paper presents the system architecture and discusses some key components, such as the Language Interpretation and Body Language Interpreter modules.
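To make the pipeline described above concrete, the following is a minimal sketch of how numeric inputs from the language and body-language interpreters could be combined by fuzzy rules into a single mood score that drives character behavior. All function names, rule shapes, and thresholds here are illustrative assumptions, not the paper's actual implementation (the published system is built in Unity3D).

```python
# Hypothetical sketch: fuzzy combination of two interpreter outputs
# (verbal tone and gesture intensity, both normalized to [0, 1])
# into a mood score in [-1, 1]. Rule shapes are assumptions.

def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_mood(verbal_tone, gesture_intensity):
    """Map interpreter outputs to a mood score in [-1, 1]."""
    # Degree to which the inputs match a "hostile" pattern:
    # low verbal tone or very intense gestures.
    hostile = max(triangular(verbal_tone, -0.5, 0.0, 0.5),
                  triangular(gesture_intensity, 0.6, 1.0, 1.5))
    # Degree to which the inputs match a "friendly" pattern: high tone.
    friendly = triangular(verbal_tone, 0.5, 1.0, 1.5)
    # Simple defuzzification: friendly pulls toward +1, hostile toward -1.
    total = hostile + friendly
    if total == 0.0:
        return 0.0  # neutral when no rule fires
    return (friendly - hostile) / total

# A character hearing a warm utterance with calm gestures trends positive.
mood = fuzzy_mood(verbal_tone=0.8, gesture_intensity=0.2)
```

The resulting score could then be mapped onto animation and dialogue parameters of a character, closing the loop from sensor input to on-screen behavior.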