Research article

ANALYSIS OF CO-LOCATED MIXED REALITY HUMAN-ROBOT INTERACTION USING MULTIMODAL HEADING AND POINTING GESTURES

Anurag Tiwari1, Nileshkumar Patel2, Shishir Kumar3

Online First: December 07, 2022


This study discusses the theory and application of human-robot collaboration using Mixed Reality (MR) based methods. With this technique, a human operator can control an industrial robotic arm for grasping tasks in a natural and intuitive manner. Mixed reality makes possible human-robot interaction (HRI) in which an operator collaborates with and monitors nearby robots. Using a head-mounted display (HMD) such as the Microsoft HoloLens, the operator can see the actual robots in HRI situations. To increase safety, acceptance, and reliability, additional virtual information can be overlaid on the real-world view. Being able to anticipate future robot actions in situ, before they occur, minimizes the risk of injury to the operator or damage to the system. Two multimodal HRI approaches were investigated that combine speech with either (i) heading (head orientation) or (ii) pointing to select the pick position on a target object. The findings demonstrate that for MR-based pick-and-place scenarios, heading-based selection is more accurate, takes less time, and is rated as less mentally and physically demanding.
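To illustrate the heading-based selection described above, the following is a minimal sketch (not the authors' implementation) of selecting a pick target by casting a ray from the operator's head pose and intersecting it with candidate objects; the object names, positions, and bounding radii are hypothetical placeholders.

```python
# Illustrative sketch: heading-based target selection via a head-pose ray.
# In a real MR system the head pose would come from the HMD's tracking, and
# a confirming speech command (e.g. "pick this") would trigger the robot action.
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Return the ray distance to the nearest sphere intersection, or None."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)
    return t if t >= 0 else None

def select_target(head_position, head_forward, objects):
    """Pick the object whose bounding sphere the heading ray hits first."""
    direction = head_forward / np.linalg.norm(head_forward)
    best, best_t = None, np.inf
    for obj in objects:
        t = ray_sphere_hit(head_position, direction, obj["center"], obj["radius"])
        if t is not None and t < best_t:
            best, best_t = obj, t
    return best

# Hypothetical scene: the operator looks roughly at "part_b".
objects = [
    {"name": "part_a", "center": np.array([0.5, 0.0, 1.2]), "radius": 0.05},
    {"name": "part_b", "center": np.array([0.0, 0.1, 1.0]), "radius": 0.05},
]
target = select_target(np.array([0.0, 0.1, 0.0]), np.array([0.0, 0.0, 1.0]), objects)
print(target["name"] if target else "no target")  # -> part_b
```

A pointing-based variant would work the same way, except that the ray origin and direction would be derived from the tracked hand or index finger rather than the head pose.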

Keywords

human-robot interaction, gestures, pointing, mixed reality