Activities
- Group1: Development of a CA with humanlike presence and lifelikeness
- Group2: Research and development on unconstrained spoken dialogue
- Group3: Human-level knowledge and concept acquisition
- Group4: Cooperative control of multiple CAs
- Group5: Development of CA platform
- Group6: Multidisciplinary investigation on how avatars and devices affect users
- Group7: Field experiments in the real world
- Group8: Avatar social ethics design
Group1: Development of a CA with humanlike presence and lifelikeness
Latest Activity
Development of a CA with humanlike presence and a semi-autonomous intelligent interface (Hiroshi ISHIGURO)
We have been engaged in research and development of teleoperated androids that embody a humanlike sense of presence as a new communication medium, based on the understanding that a human is the most human-friendly medium for communication. Building on research in cognitive science and psychology as well as robotics, we have so far developed various communication robots, including the “Geminoid”, a teleoperated robot that closely resembles its operator. This Moonshot project extends our expertise on androids by creating a Cybernetic Avatar (CA) that the operator can control as if it were an extension of their own body. The CA will convey the operator’s sense of presence and enable smooth interaction with interactants: it will be equipped with sensors to appropriately grasp the situation around it, an interface through which the operator can easily control it, and functions that generate gestures and facial expressions comparable to those of humans. Our goal is a CA that the operator can control seamlessly and without stress, as part of their extended body.
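As a rough illustration of this division of labor, the sketch below shows how a semi-autonomous control loop might let explicit operator commands take priority while an autonomous layer fills in gestures and facial expressions. It is a minimal sketch, not the project's actual architecture; all names in it are hypothetical.

```python
"""Minimal sketch of a semi-autonomous CA control loop: explicit operator
commands take priority, and an autonomous layer fills in gestures and
facial expressions. All names here are hypothetical."""
import random
import time

class Command:
    def __init__(self, gesture=None, expression=None, utterance=None):
        self.gesture = gesture
        self.expression = expression
        self.utterance = utterance

def autonomous_behavior(situation):
    """Generate backchannel or idle behavior from the sensed situation."""
    if situation.get("interactant_speaking"):
        return Command(gesture="nod", expression="attentive")
    return Command(gesture=random.choice(["idle_sway", "blink"]),
                   expression="neutral")

def blend(operator_cmd, auto_cmd):
    """Operator input overrides the autonomous layer, field by field."""
    return Command(
        gesture=operator_cmd.gesture or auto_cmd.gesture,
        expression=operator_cmd.expression or auto_cmd.expression,
        utterance=operator_cmd.utterance,  # speech always comes from the operator
    )

def control_loop(read_sensors, read_operator, actuate, period=0.1):
    """Run the CA at a fixed rate, blending operator and autonomous commands."""
    while True:
        situation = read_sensors()      # e.g. speech activity, gaze direction
        operator_cmd = read_operator()  # mostly empty between explicit inputs
        actuate(blend(operator_cmd, autonomous_behavior(situation)))
        time.sleep(period)
```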
Lab website: https://eng.irl.sys.es.osaka-u.ac.jp/
Research on the cognitive aspects of a high-presence teleoperation interface (Kohei OGAWA)
We are engaged in research on display technology for interfaces that allow operators to operate multiple CAs. In conventional teleoperation systems, it is commonly assumed that providing the operator with a high-definition video stream and as much sensor information as possible improves the quality of operation. However, it has been pointed out that too much information can cause the operator to overlook important cues, thereby reducing the quality of operation. To tackle this issue, we are developing a novel interface that improves the quality of operation and reduces the operator's burden by emphasizing what the operator should focus on in a given situation. Furthermore, we are developing dialogue summarization and display technology so that, when switching from autonomous CA operation to teleoperation, the operator can quickly grasp what dialogue has taken place. In an experiment, participants using our proposed summary system understood the content of a 3-minute chat in just 3 seconds. As a next step, we will integrate these component technologies into a real system to realize a multi-CA teleoperation system that is practical and imposes little burden on operators.
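To make the handoff concrete, the sketch below shows how a rolling dialogue buffer might produce a glanceable summary when control switches from the autonomous CA to a human operator. The trivial extractive summarizer and all names here are illustrative assumptions, not the summarization method used in the experiment.

```python
"""Sketch of the handoff idea: when control switches from the autonomous CA
to a human operator, show a glanceable summary of the recent dialogue rather
than the full transcript. The extractive summarizer below is a trivial
stand-in, not the actual method; all names are hypothetical."""
from collections import deque

class DialogueBuffer:
    def __init__(self, max_turns=50):
        self.turns = deque(maxlen=max_turns)  # rolling window of (speaker, text)

    def add(self, speaker, text):
        self.turns.append((speaker, text))

    def handoff_summary(self, max_items=3):
        """Return the user's last few turns as a one-line summary."""
        user_turns = [text for speaker, text in self.turns if speaker == "user"]
        return " / ".join(user_turns[-max_items:])

buf = DialogueBuffer()
buf.add("user", "I'm looking for a picture book for my daughter.")
buf.add("ca", "How old is she?")
buf.add("user", "She is four.")
print("OPERATOR VIEW:", buf.handoff_summary())
```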
Research and development of a mobile humanoid CA (Yoshihiro NAKATA)
To provide people with more ways to participate in society, we are engaged in research and development of a cybernetic avatar (CA) platform that can move around in everyday environments under remote control, together with an interface that lets operators control the CA with a strong sense of presence. The CA is designed with a childlike size and appearance so that anyone can interact with it with a sense of affinity. At present, we are developing: 1) a head with childlike features that can express various facial expressions; 2) a compact, quiet, and highly safe electric actuator unit to drive the CA's joints; and 3) a wheeled locomotion mechanism that produces humanlike body movement as the CA moves. We will integrate these elements into a complete mobile CA. Through social demonstration experiments, we aim to realize natural interaction between operators and users, with dialogue and behavior that, even under remote control, does not differ from face-to-face interaction.
Development of a Huggable CA (Masahiro SHIOMI)
Physical interaction is essential for people, yet the COVID-19 pandemic has made it difficult for us to interact with others physically. We believe that tele-physical interaction via cybernetic avatars can support people both physically and mentally. We are therefore working on the research and development of huggable CAs: social robots that physically interact with people in everyday environments. To this end, we have been developing three types of huggable CAs (a baby-sized CA for older adults, an adult-sized CA for children, and a self-huggable CA for adults), along with fabric touch sensors for them. Currently, we are conducting a preliminary field trial in which the adult-sized CA hugs and talks with children (including autistic children) to provide mental support. We are also developing sensing technology that uses our touch sensors to accurately recognize various hugging behaviors, and a user interface that visualizes the state of haptic interaction between the user and the CA to support operation. We aim to clarify the physical and mental support that interaction with huggable CAs provides to users, and to verify how the relationship between operator and user changes through such interactions.
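As a toy illustration of recognizing hugging behaviors from a fabric touch sensor, the sketch below classifies contact from a grid of taxel pressures. The features and thresholds are invented for illustration and do not describe the actual sensing pipeline.

```python
"""Toy classifier for hug types from a fabric touch sensor grid. The
features and thresholds are invented for illustration; they do not
describe the actual recognition pipeline."""

def classify_hug(pressure_grid):
    """pressure_grid: 2D list of taxel pressures, normalized to [0, 1]."""
    cells = [p for row in pressure_grid for p in row]
    active = [p for p in cells if p > 0.1]   # taxels above the noise floor
    coverage = len(active) / len(cells)      # fraction of the body in contact
    mean_p = sum(active) / len(active) if active else 0.0
    if coverage > 0.6 and mean_p > 0.5:
        return "tight_full_hug"
    if coverage > 0.3:
        return "gentle_hug"
    if coverage > 0.05:
        return "pat_or_touch"
    return "no_contact"

# A partial embrace: moderate coverage, moderate pressure
print(classify_hug([[0.0, 0.8, 0.9],
                    [0.7, 0.6, 0.0],
                    [0.0, 0.2, 0.0]]))   # -> gentle_hug
```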
Development of life-like CA and mechanisms for collaborative conversation (Yuichiro YOSHIKAWA)
We are developing lifelike CAs with animal-like animacy, to which users are spontaneously attracted, as well as functions for collaborative dialogue among multiple CAs that sustain long-lasting conversations with users. So far, we have developed a lifelike CA whose motions are completely silent thanks to direct-drive motors, a lower-cost lifelike CA with an organic electroluminescent display head that expresses its emotions and intentions, and a mobile CA based on the lower-cost model. For collaborative dialogue, we have developed a teleoperation system that allows a small number of operators to control multiple robots placed at different locations to provide dialogue services, as sketched below. We have conducted field experiments at an amusement facility (Nifrel, ExpoCity) and a children's bookstore (TSUTAYA, ExpoCity), where the CAs recommended goods sold in the stores, and we have shown that the system reduces the operators' mental burden. Meanwhile, we have been engaged in research and development of a semi-autonomous social avatar room called CommU-Talk, where users talk to each other through lifelike CG-CAs as their avatars. In collaboration with Prof. Kumazaki (Group 7), we conducted a field experiment using CommU-Talk for interview practice by groups of adolescents with ASD.
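The sketch below illustrates the scheduling idea behind letting a few operators serve many CAs: each CA runs autonomously and requests a human only when it needs one, and a dispatcher assigns the next free operator. All names are hypothetical; this is not the deployed teleoperation system.

```python
"""Sketch of serving many CAs with few operators: each CA runs
autonomously and requests a human only when needed, and a dispatcher
hands it the next free operator. All names are hypothetical."""
import queue

class Dispatcher:
    def __init__(self, operator_ids):
        self.free_operators = queue.Queue()
        for op in operator_ids:
            self.free_operators.put(op)
        self.assignments = {}                 # ca_id -> operator_id

    def request_takeover(self, ca_id):
        """A CA asks for a human; blocks until an operator is free."""
        op = self.free_operators.get()
        self.assignments[ca_id] = op
        return op

    def release(self, ca_id):
        """The operator hands the CA back to autonomous mode."""
        self.free_operators.put(self.assignments.pop(ca_id))

d = Dispatcher(["op1"])
print(d.request_takeover("ca_bookstore"))    # op1 takes over the bookstore CA
d.release("ca_bookstore")
print(d.request_takeover("ca_amusement"))    # op1 is reused for another CA
```

A blocking queue is of course a simplification; in practice a takeover request would carry urgency and dialogue context so that operators can triage among the waiting CAs.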
A study on generating natural CA motion that does not make people aware of teleoperation (Takashi MINATO)
To realize robots that coexist with people in everyday life and assist them mainly through dialogue, we have explored the principles of natural interaction between people and robots and developed robot systems that serve as research platforms for human-robot interaction (HRI) studies. In particular, in research on ERICA, an android capable of everyday dialogue, we developed a system that lets the android generate natural movements and interact in a natural context based on its intentions and desires, and we have been conducting field experiments in the common space of our research institute for more than two years. Furthermore, we organize a dialogue robot competition as an initiative for companies and research institutions to explore dialogue generation technology and how dialogue robots can be used in the real world. For the competition, we provide middleware, derived from ERICA's system, that makes it easy to implement android control for interaction with humans. Based on these technologies, we are developing a system that allows people to easily teleoperate a CA with humanlike presence, and a system that allows the CA to autonomously express behavior appropriate to various situations. So far, we have realized an interface that enhances the operator's sense of presence at the remote location, and a CA that behaves hospitably when providing interpersonal services. The developed teleoperation system has also been introduced into a long-term field experiment in which a CA serves as a corporate receptionist, and verification of the system is ongoing.
Development of an interaction behavior learning method for CAs (Yutaka NAKAMURA)
Recent advances in information technologies and artificial intelligence have led to the development of various communication systems, such as communication robots and video conferencing systems. However, compared with on-site human-human interaction, such systems still feel awkward and unnatural. In human-human communication, not only verbal but also nonverbal channels such as gestures are used, and they are used bidirectionally and simultaneously, as when one nods while the other person is speaking.
Under the hypothesis that interaction is cyclical, in that changing one's own behavior in response to the behavior of others in turn changes their behavior, our team is modeling human-human interactions using machine learning techniques. In on-site interaction, nonverbal channels are expressed without special awareness, but in teleoperation they are difficult to use because the operator cannot obtain vivid information about the remote site. We are therefore developing a teleoperation system for avatars that avoids awkwardness and unnaturalness by controlling these channels semi-autonomously.
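As a minimal illustration of controlling one nonverbal channel semi-autonomously, the sketch below triggers a nod whenever the remote interlocutor's speech is followed by a short pause, without any explicit operator input. The energy threshold and timing values are illustrative assumptions, not our learned model.

```python
"""Minimal sketch of semi-autonomous backchannel control: nod when the
interlocutor's speech is followed by a short pause, with no explicit
operator input. Threshold and timing values are illustrative."""

def backchannel_events(frame_energies, threshold=0.02, pause_frames=5):
    """Yield frame indices at which the avatar should nod.

    frame_energies: per-frame audio energy of the interlocutor's speech.
    A nod fires when speech (energy above threshold) is followed by
    `pause_frames` consecutive low-energy frames.
    """
    silent, was_speaking = 0, False
    for i, energy in enumerate(frame_energies):
        if energy > threshold:
            was_speaking, silent = True, 0
        else:
            silent += 1
            if was_speaking and silent == pause_frames:
                yield i                      # command the avatar to nod here
                was_speaking = False

# Two utterances, each followed by a pause -> two nods
energies = [0.1] * 10 + [0.001] * 6 + [0.1] * 8 + [0.001] * 6
print(list(backchannel_events(energies)))    # -> [14, 28]
```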