Xiaofeng Liu
Adv. Artif. Intell. Mach. Learn., 1(2):155-170
Xiaofeng Liu: College of IoT Engineering, Hohai University, Changzhou, 213100, China
DOI: 10.54364/AAIML.2021.1110
Article History: Received on: 03-Sep-21, Accepted on: 26-Sep-21, Published on: 30-Sep-21
Corresponding Author: Xiaofeng Liu.
Email: xfliu@hhu.edu.cn
Citation: Yizhou Chen, Xiaofeng Liu, Jie Li, Tingting Zhang, Angelo Cangelosi (2021). Generation of Head Mirror Behavior and Facial Expression for Humanoid Robots. Adv. Artif. Intell. Mach. Learn., 1(2):155-170
Understanding human expressions and action intentions is crucial to improving the intelligence of social robots. Social robots can also perform mirror behaviors to learn from and understand others. Here we present an intelligent interactive control framework in which a humanoid social robot simulates human facial expressions and emulates human head motions. We developed a physical animatronic robotic head platform with soft skin, whose expressions can be adjusted through a customized mechanical dynamic structure. A vision-based facial expression and head motion recognition method, built on a deep learning framework, is proposed to establish the robot's mechanism for understanding human expressions and action intentions. Finally, the recognition results are used to determine the optimal motor displacements for transferring human head states to the robot. Experimental results demonstrate that our method maintains high accuracy across different human subjects. A comprehensive evaluation shows that our method performs accurate, real-time facial expression generation and head mirror behavior, making it promising for home assistant robot applications.
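The final step of the pipeline, transferring recognized human head states to motor displacements, can be sketched as a simple linear mapping with clamping. This is a minimal illustrative sketch, not the paper's actual method: the angle ranges, servo limits, and function names below are assumptions for demonstration only.

```python
# Hypothetical sketch (not the paper's implementation): map a recognized human
# head pose to robot neck-servo displacements by linear scaling and clamping.
# All angle ranges (degrees) and servo travel limits (mm) are illustrative.

def angle_to_servo(angle_deg, angle_min, angle_max, servo_min, servo_max):
    """Linearly map an angle in degrees to a servo displacement, clamped to limits."""
    angle = max(angle_min, min(angle_max, angle_deg))  # clamp out-of-range poses
    t = (angle - angle_min) / (angle_max - angle_min)  # normalize to [0, 1]
    return servo_min + t * (servo_max - servo_min)

def head_pose_to_displacements(yaw, pitch, roll):
    """Map a (yaw, pitch, roll) head pose to three servo displacements."""
    return (
        angle_to_servo(yaw,   -45.0, 45.0, 0.0, 20.0),  # assumed pan servo range
        angle_to_servo(pitch, -30.0, 30.0, 0.0, 15.0),  # assumed tilt servo range
        angle_to_servo(roll,  -20.0, 20.0, 0.0, 10.0),  # assumed roll servo range
    )

print(head_pose_to_displacements(0.0, 0.0, 0.0))  # centered pose -> mid-travel displacements
```

In a real system this mapping would be calibrated against the robot's mechanical structure; the linear form simply illustrates how recognition output can drive motor commands.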