WO2018000268A1 - Method and system for generating robot interaction content, and robot - Google Patents
Method and system for generating robot interaction content, and robot
- Publication number
- WO2018000268A1 WO2018000268A1 PCT/CN2016/087753 CN2016087753W WO2018000268A1 WO 2018000268 A1 WO2018000268 A1 WO 2018000268A1 CN 2016087753 W CN2016087753 W CN 2016087753W WO 2018000268 A1 WO2018000268 A1 WO 2018000268A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- robot
- time axis
- signal
- life
- generating
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/0005—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/0005—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
- B25J11/001—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means with emotions simulating means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1628—Programme controls characterised by the control loop
- B25J9/163—Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/02—Knowledge representation; Symbolic representation
- G06N5/022—Knowledge engineering; Knowledge acquisition
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
Definitions
- the invention relates to the field of robot interaction technology, and in particular to a method, a system and a robot for generating robot interactive content.
- in the process of human interaction, people make expressions and give reasonable expression feedback, and people also move through life scenes along a certain time axis, such as eating, sleeping, and exercising;
- changes in the values of these various scenes can affect the feedback of human expression.
- at present, robots are made to produce expression feedback mainly through pre-designed programs and corpora used for deep-learning training.
- This kind of feedback through pre-designed programs and corpus training has the following disadvantages:
- the output of the expression depends on the user's textual input, that is, similar to a question-and-answer machine, different words from the user trigger different expressions;
- the robot actually outputs expressions according to an interaction mode pre-designed by humans, which makes the robot's behavior appear mechanical rather than anthropomorphic.
- the object of the present invention is to provide a method, a system and a robot for generating robot interaction content, so that the robot itself follows a human lifestyle within its variable interaction parameters, which enhances the anthropomorphism of the generated robot interaction content, improves the human-computer interaction experience, and improves intelligence.
- a method for generating robot interaction content, comprising: acquiring a multi-modal signal; determining a user intention according to the multi-modal signal; and generating the robot interaction content according to the multi-modal signal and the user intention, in combination with the current robot life time axis.
- the method for generating the parameters of the robot life time axis includes: expanding the self-cognition of the robot; and fitting the self-cognitive parameters of the robot to the parameters in the life time axis to generate the robot life time axis.
- the step of expanding the self-cognition of the robot specifically comprises: combining the life scene with the self-cognition of the robot to form a self-cognitive curve based on the life time axis.
- the step of fitting the self-cognitive parameters of the robot to the parameters in the life time axis comprises: using a probability algorithm to calculate the probability of each robot parameter changing on the life time axis after a time-axis scene parameter changes, thereby forming a fitted curve.
- the life time axis refers to a time axis spanning the 24 hours of a day
- the parameters in the life time axis include at least a daily life behavior performed by the user on the life time axis and parameter values representing the behavior.
- the multi-modal signal includes at least an image signal
- the step of generating the robot interaction content according to the multi-modal signal and the user intention, in combination with the current robot life time axis, specifically includes: generating the robot interaction content in conjunction with the current robot life time axis based on the image signal and the user intention.
- the multi-modal signal includes at least a voice signal
- the step of generating the robot interaction content according to the multi-modal signal and the user intention, in combination with the current robot life time axis, specifically includes: generating the robot interaction content in conjunction with the current robot life time axis based on the voice signal and the user intention.
- the multi-modal signal includes at least a gesture signal
- the step of generating the robot interaction content according to the multi-modal signal and the user intention, in combination with the current robot life time axis, specifically includes: generating the robot interaction content in conjunction with the current robot life time axis based on the gesture signal and the user intention.
- the invention discloses a system for generating robot interactive content, comprising:
- an acquisition module configured to acquire a multi-modal signal;
- an intent identification module configured to determine a user intention according to the multi-modal signal;
- a content generating module configured to generate the robot interaction content according to the multi-modal signal and the user intention, in combination with the current robot life time axis.
- the system comprises a time-axis-based artificial intelligence cloud processing module configured to: fit the self-cognitive parameters of the robot to the parameters in the life time axis to generate the robot life time axis.
- the time-axis-based artificial intelligence cloud processing module is further configured to combine a life scene with the self-cognition of the robot to form a self-cognitive curve based on the life time axis.
- the time-axis-based artificial intelligence cloud processing module is further configured to: use a probability algorithm to calculate the probability of each robot parameter changing on the life time axis after a time-axis scene parameter changes, thereby forming a fitted curve.
- the life time axis refers to a time axis spanning the 24 hours of a day
- the parameters in the life time axis include at least a daily life behavior performed by the user on the life time axis and parameter values representing the behavior.
- the multi-modal signal includes at least an image signal
- the content generating module is specifically configured to: generate the robot interaction content based on the image signal and the user intention, in combination with the current robot life time axis.
- the multi-modal signal includes at least a voice signal
- the content generating module is specifically configured to: generate the robot interaction content based on the voice signal and the user intention, in combination with the current life time axis of the robot.
- the multi-modal signal includes at least a gesture signal
- the content generating module is specifically configured to: generate the robot interaction content based on the gesture signal and the user intention, in combination with the current robot life time axis.
- the invention discloses a robot comprising a system for generating interactive content of a robot as described above.
- a method for generating interactive content of a robot includes: acquiring a multi-modal signal; determining a user intention according to the multi-modal signal; and generating robot interaction content according to the multi-modal signal and the user intention, in combination with the current life time axis of the robot.
- in this way, multi-modal signals such as image signals and speech signals can be combined with the robot's variable parameters to generate robot interaction content more accurately, so that the robot interacts and communicates with people in a more accurate and anthropomorphic way. For people, everyday life has a certain regularity.
- the present invention adds the life time axis on which the robot is located to the generation of the robot's interaction content, making the robot more humanized when interacting with people, so that the robot follows a human lifestyle along the life time axis. The method enhances the anthropomorphism of the generated robot interaction content, improves the human-computer interaction experience, and improves intelligence.
- FIG. 1 is a flowchart of a method for generating interactive content of a robot according to Embodiment 1 of the present invention
- FIG. 2 is a schematic diagram of a system for generating interactive content of a robot according to a second embodiment of the present invention.
- Computer devices include user devices and network devices.
- the user equipment or the client includes but is not limited to a computer, a smart phone, a PDA, etc.;
- the network device includes but is not limited to a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of computers or network servers based on cloud computing.
- the computer device can operate alone to carry out the invention, and can also access the network and implement the invention through interoperation with other computer devices in the network.
- the network in which the computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and the like.
- the terms "first," "second," and the like may be used herein to describe various elements, but the elements should not be limited by these terms; the terms are used only to distinguish one element from another.
- the term “and/or” used herein includes any and all combinations of one or more of the associated listed items. When a unit is referred to as being “connected” or “coupled” to another unit, it can be directly connected or coupled to the other unit, or an intermediate unit can be present.
- a method for generating interactive content of a robot includes: acquiring a multi-modal signal; determining a user intention according to the multi-modal signal; and generating robot interaction content according to the multi-modal signal and the user intention, in combination with the current life time axis of the robot.
- in this way, multi-modal signals such as image signals and speech signals can be combined with the robot's variable parameters to generate robot interaction content more accurately, so that the robot interacts and communicates with people in a more accurate and anthropomorphic way. For people, everyday life has a certain regularity.
- the present invention adds the life time axis on which the robot is located to the generation of the robot's interaction content, making the robot more humanized when interacting with people, so that the robot follows a human lifestyle along the life time axis. The method enhances the anthropomorphism of the generated robot interaction content, improves the human-computer interaction experience, and improves intelligence.
- the interaction content can be an expression, text, or voice.
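The three steps above can be read as a small processing pipeline. The sketch below (Python) only illustrates that flow under stated assumptions: the function names, the example signal, and the `life_timeline` table are invented for illustration and are not the patented implementation.

```python
# Sketch of the overall flow: acquire multi-modal signal -> determine user
# intention -> generate interaction content combined with the life timeline.
from datetime import datetime


def acquire_multimodal_signal() -> dict:
    # Stand-in for camera/microphone acquisition; returns a fixed example signal.
    return {"voice": "I'm so sleepy", "image": "user_yawning"}


def determine_user_intention(signal: dict) -> str:
    # Stand-in for the intent identification step.
    return "user_sleepy" if "sleepy" in signal["voice"].lower() else "chat"


def generate_interaction_content(intention: str, life_timeline: dict,
                                 now: datetime) -> str:
    # Combine the intention with the robot's current phase on its life timeline.
    phase = life_timeline.get(now.hour, "rest")
    if intention == "user_sleepy" and phase == "get_up":
        return "Good morning!"
    if intention == "user_sleepy" and phase == "go_to_sleep":
        return "Good night, sleep well."
    return "I see."


life_timeline = {9: "get_up", 21: "go_to_sleep"}  # assumed pre-set parameters
signal = acquire_multimodal_signal()
intention = determine_user_intention(signal)
print(generate_interaction_content(intention, life_timeline,
                                    datetime(2016, 6, 29, 9, 0)))  # Good morning!
```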
- the robot life timeline 300 is established and set in advance. Specifically, the robot life timeline 300 is a collection of parameters, and these parameters are passed to the system to generate the interaction content.
- the multimodal information in this embodiment may be one of user expression, voice information, gesture information, scene information, image information, video information, face information, pupil iris information, light sense information, and fingerprint information.
- the life time axis is specifically as follows: the robot is fitted to the time axis of human daily life, and the robot's behavior follows this fitted curve; that is, the robot's own behavior over a day is obtained, so that the robot can carry out its own behavior on the basis of the life time axis, for example generating interaction content and communicating with humans. If the robot stays awake all day, it acts according to the behaviors on this timeline, and the robot's self-cognition is also changed according to this timeline.
- the life time axis and the variable parameters can be used to change the attributes of the robot's self-cognition, such as mood values and fatigue values, and can also automatically add new self-cognition information. For example, if there was previously no anger value, a scene on the life time axis involving the variable factors can automatically add such a value to the robot's self-cognition, modeled on human self-cognition.
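As a rough illustration of the self-cognition attributes described above, the sketch below stores mood and fatigue values and adds a previously missing anger value when a scene calls for it; the attribute names, scene names, and update amounts are all assumptions.

```python
# Illustrative sketch: self-cognition attributes that the life timeline and
# variable parameters can change, plus adding a new attribute on demand.
from typing import Dict


class SelfCognition:
    def __init__(self) -> None:
        # Initial attributes named in the description; the values are assumed.
        self.values: Dict[str, float] = {"mood": 0.6, "fatigue": 0.2}

    def apply_scene(self, scene: str) -> None:
        """Update existing attributes, or add new ones, from a timeline scene."""
        if scene == "exercise":
            self.values["fatigue"] = min(1.0, self.values["fatigue"] + 0.3)
            self.values["mood"] = min(1.0, self.values["mood"] + 0.1)
        elif scene == "sleep":
            self.values["fatigue"] = 0.0
        elif scene == "argument":
            # No anger value existed before: add it when the scene calls for it.
            self.values.setdefault("anger", 0.0)
            self.values["anger"] = min(1.0, self.values["anger"] + 0.5)


cognition = SelfCognition()
cognition.apply_scene("argument")
print(cognition.values)  # {'mood': 0.6, 'fatigue': 0.2, 'anger': 0.5}
```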
- for example, the user speaks to the robot by voice: "I'm so sleepy". The multi-modal signal can also include a picture signal, and the robot makes a comprehensive judgment based on the multi-modal signal, such as the above voice signal plus the picture signal, and recognizes that the user's intention is that the user is very sleepy. Combined with the robot life timeline, for example, if the current time is 9 a.m., the robot knows that the owner has just gotten up, so it should greet the owner, for example answering "Good morning" as a reply, possibly accompanied by an expression, a picture, and so on. The interaction content in the present invention can be understood as the robot's reply.
- if instead the current time on the robot life timeline is 9 p.m., the robot knows that the owner needs to sleep, so it will reply with words such as "Good night, master, sleep well", which can also be accompanied by expressions, pictures, and so on. This approach is more anthropomorphic than generating replies and expressions from scene recognition alone, and fits more closely with people's daily lives.
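The two replies above differ only in where the current time falls on the robot life timeline. Assuming the mapping can be expressed as a table keyed by (intention, timeline phase), a compact sketch looks like this; the phase boundaries and reply strings are illustrative, not taken from the patent.

```python
# Same user intention ("very sleepy"), different reply depending on the phase
# of the robot life timeline at the current time. Table contents are assumed.
REPLIES = {
    ("user_sleepy", "just_got_up"): "Good morning!",
    ("user_sleepy", "bedtime"):     "Good night, master, sleep well.",
}


def timeline_phase(hour: int) -> str:
    """Map the current hour to a coarse phase of the life timeline (assumed)."""
    if 7 <= hour < 10:
        return "just_got_up"
    if hour >= 21:
        return "bedtime"
    return "daytime"


def reply(intention: str, hour: int) -> str:
    # Fall back to a neutral reply when no timeline-specific entry exists.
    return REPLIES.get((intention, timeline_phase(hour)), "I'm here with you.")


print(reply("user_sleepy", 9))   # Good morning!
print(reply("user_sleepy", 21))  # Good night, master, sleep well.
```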
- the multi-modal signal is generally a combination of a plurality of signals, such as a picture signal plus a voice signal, or a picture signal plus a voice signal plus a gesture signal.
- the method for generating parameters of the robot life time axis includes:
- the self-cognitive parameters of the robot are fitted to the parameters in the life time axis to generate a robot life time axis.
- the life time axis is added to the robot's own self-cognition, so that the robot leads an anthropomorphic life; for example, the cognition of eating lunch is added to the robot.
- the step of expanding the self-cognition of the robot specifically includes: combining the life scene with the self-awareness of the robot to form a self-cognitive curve based on the life time axis.
- the life time axis can be specifically added to the parameters of the robot itself.
- the step of fitting the self-cognitive parameters of the robot to the parameters in the life time axis specifically includes: using a probability algorithm to calculate the probability of each robot parameter changing on the life time axis after a time-axis scene parameter changes, thereby forming a fitted curve.
- the probability algorithm may be a Bayesian probability algorithm.
- on the life time axis, the robot will sleep, exercise, eat, dance, read books, put on makeup, and perform other actions. Each action affects the robot's own self-cognition, and the parameters on the life time axis are combined with the robot's own self-cognition.
- the robot's self-cognition includes mood, fatigue value, intimacy, goodwill, number of interactions, the robot's three-dimensional cognition, age, height, weight, intimacy, game scene value, game object value, location scene value, location object value, and so on; the location scene value allows the robot to identify the scene in which it is located, such as a cafe or a bedroom.
- the robot performs different actions over the time axis of a day, such as sleeping at night, eating at noon, and exercising during the day. Every scene on the life time axis affects the robot's self-cognition. These numerical changes are modeled by a dynamic fit of the probability model, which fits the probability that each of these actions occurs at each point on the time axis.
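One way to read this dynamic probability fit is as estimating, for each hour of the 24-hour axis, the probability that each action occurs, and updating the estimate as new observations arrive. The sketch below uses simple Laplace-smoothed (Bayesian-style) counting; the action set, observation format, and smoothing constant are assumptions, since the description only names a probability (for example Bayesian) algorithm.

```python
# Sketch: fit, for every hour on the 24-hour life time axis, the probability
# of each action occurring, using Laplace-smoothed (Bayesian-style) counts.
from collections import Counter
from typing import Dict, List, Tuple

ACTIONS = ["sleep", "eat", "exercise", "read", "dance"]  # assumed action set


def fit_timeline_curve(observations: List[Tuple[int, str]],
                       alpha: float = 1.0) -> Dict[int, Dict[str, float]]:
    """observations: (hour, action) pairs logged over many days.
    Returns P(action | hour) for every hour 0..23 as the fitted curve."""
    counts = {h: Counter() for h in range(24)}
    for hour, action in observations:
        counts[hour][action] += 1
    curve = {}
    for hour in range(24):
        total = sum(counts[hour].values()) + alpha * len(ACTIONS)
        curve[hour] = {a: (counts[hour][a] + alpha) / total for a in ACTIONS}
    return curve


# Example: a few logged (hour, action) pairs; real data would span many days.
logs = [(23, "sleep"), (23, "sleep"), (12, "eat"), (12, "eat"),
        (12, "read"), (15, "exercise")]
curve = fit_timeline_curve(logs)
print(round(curve[12]["eat"], 2))   # eating has the highest probability at noon
print(round(curve[3]["sleep"], 2))  # uniform prior where nothing was observed
```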
- the multi-modal signal includes at least an image signal
- the step of generating the robot interaction content according to the multi-modal signal and the user intention, in combination with the current robot life time axis, specifically includes: generating the robot interaction content in conjunction with the current robot life time axis based on the image signal and the user intention.
- the multi-modal signal includes at least an image signal so that the robot can grasp the user's intention; to understand the user's intention better, other signals such as a voice signal or a gesture signal are generally added, so that the robot can judge more accurately whether the user's expression is genuine or just a joke.
- the multi-modal signal includes at least a voice signal
- the step of generating the robot interaction content according to the multi-modal signal and the user intention, in combination with the current robot life time axis, specifically includes: generating the robot interaction content in conjunction with the current robot life time axis based on the voice signal and the user intention.
- the multi-modal signal includes at least a gesture signal
- the step of generating the robot interaction content according to the multi-modal signal and the user intention, in combination with the current robot life time axis, specifically includes: generating the robot interaction content in conjunction with the current robot life time axis based on the gesture signal and the user intention.
- for example, the user speaks to the robot by voice: "I'm hungry". The multi-modal signal can also include a picture signal, and the robot makes a comprehensive judgment based on the multi-modal signal, such as the above voice signal plus the picture signal, and recognizes that the user's intention is that the user is very hungry. Combined with the robot life timeline, for example, if the current time is 9 a.m., the robot will reply that the user should go and have breakfast, accompanied by a cute expression.
- similarly, if the user says "I'm hungry" and the robot, judging comprehensively from the voice signal plus the picture signal, recognizes that the user is very hungry, but the current time on the robot life timeline is 9 p.m., the robot will reply that it is too late to eat, accompanied by a cute expression.
- the voice signal and the picture signal are generally used together to understand the user's meaning accurately, so that the reply to the user is more accurate.
- adding other signals, such as gesture signals or video signals, makes the understanding even more accurate.
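As a minimal illustration of why adding signals sharpens intent recognition, per-modality confidence scores can be combined, here by a weighted average; the weights and example scores are assumed and purely illustrative.

```python
# Sketch: combine per-modality confidence scores for a candidate intention.
# Weights and example scores are assumed, purely for illustration.
from typing import Dict

MODALITY_WEIGHTS = {"voice": 0.5, "image": 0.3, "gesture": 0.2}


def fused_confidence(scores: Dict[str, float]) -> float:
    """Weighted average over the modalities that are actually present."""
    present = {m: w for m, w in MODALITY_WEIGHTS.items() if m in scores}
    total_weight = sum(present.values())
    return sum(scores[m] * w for m, w in present.items()) / total_weight


# Voice alone suggests "user is hungry" with moderate confidence...
print(round(fused_confidence({"voice": 0.6}), 2))                    # 0.6
# ...adding image and gesture evidence makes the intention much clearer.
print(round(fused_confidence({"voice": 0.6, "image": 0.9,
                              "gesture": 0.8}), 2))                  # 0.73
```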
- a system for generating interactive content of a robot includes:
- the obtaining module 201 is configured to acquire a multi-modal signal
- the intent identification module 202 is configured to determine a user intent according to the multimodal signal
- the content generation module 203 is configured to generate the robot interaction content according to the multi-modal signal and the user intention, in conjunction with the current robot life time axis sent by the robot life timeline module 301.
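Read as components, the system of FIG. 2 can be sketched as the composition below. The class names mirror the module numbers in the description (201, 202, 203 and the life timeline module 301), but the method bodies are placeholder assumptions rather than the actual implementation.

```python
# Structural sketch of the system of FIG. 2: obtaining module 201, intent
# identification module 202, content generation module 203, and the robot
# life timeline module 301 that supplies the current timeline parameters.
from datetime import datetime


class ObtainingModule:                       # module 201
    def acquire(self) -> dict:
        return {"voice": "I'm so sleepy"}    # placeholder acquisition


class IntentIdentificationModule:            # module 202
    def identify(self, signal: dict) -> str:
        return "user_sleepy" if "sleepy" in signal.get("voice", "") else "chat"


class LifeTimelineModule:                    # module 301
    def current_phase(self, now: datetime) -> str:
        return "just_got_up" if 7 <= now.hour < 10 else "daytime"


class ContentGenerationModule:               # module 203
    def __init__(self, timeline: LifeTimelineModule) -> None:
        self.timeline = timeline

    def generate(self, signal: dict, intention: str, now: datetime) -> str:
        phase = self.timeline.current_phase(now)
        if intention == "user_sleepy" and phase == "just_got_up":
            return "Good morning!"
        return "I'm here."


timeline = LifeTimelineModule()
system = ContentGenerationModule(timeline)
signal = ObtainingModule().acquire()
intent = IntentIdentificationModule().identify(signal)
print(system.generate(signal, intent, datetime(2016, 6, 29, 9, 0)))  # Good morning!
```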
- in this way, multi-modal signals such as image signals and speech signals can be combined with the robot's variable parameters to generate robot interaction content more accurately, so that the robot interacts and communicates with people in a more accurate and anthropomorphic way.
- for people, everyday life has a certain regularity.
- the present invention adds the life time axis on which the robot is located to the generation of the robot's interaction content, making the robot more humanized when interacting with people, so that the robot follows a human lifestyle along the life time axis. The method enhances the anthropomorphism of the generated robot interaction content, improves the human-computer interaction experience, and improves intelligence.
- the interaction content can be an expression, text, or voice.
- for example, the user speaks to the robot by voice: "I'm so sleepy". The multi-modal signal can also include a picture signal, and the robot makes a comprehensive judgment based on the multi-modal signal, such as the above voice signal plus the picture signal, and recognizes that the user's intention is that the user is very sleepy. Combined with the robot life timeline, for example, if the current time is 9 a.m., the robot knows that the owner has just gotten up, so it should greet the owner, for example answering "Good morning" as a reply, possibly accompanied by an expression, a picture, and so on. The interaction content in the present invention can be understood as the robot's reply.
- if instead the current time on the robot life timeline is 9 p.m., the robot knows that the owner needs to sleep, so it will reply with words such as "Good night, master, sleep well", which can also be accompanied by expressions, pictures, and so on. This approach is more anthropomorphic than generating replies and expressions from scene recognition alone, and fits more closely with people's daily lives.
- the multi-modal signal is generally a combination of a plurality of signals, such as a picture signal plus a voice signal, or a picture signal plus a voice signal plus a gesture signal.
- the system includes a time-axis-based artificial intelligence cloud processing module configured to: fit the self-cognitive parameters of the robot to the parameters in the life time axis to generate the robot life time axis.
- the life time axis is added to the robot's own self-cognition, so that the robot leads an anthropomorphic life; for example, the cognition of eating lunch is added to the robot.
- the time-axis-based artificial intelligence cloud processing module is further configured to combine a life scene with the self-cognition of the robot to form a self-cognitive curve based on the life time axis.
- the life time axis can be specifically added to the parameters of the robot itself.
- the time-axis-based artificial intelligence cloud processing module is further configured to: use a probability algorithm to calculate the probability of each robot parameter changing on the life time axis after a time-axis scene parameter changes, thereby forming a fitted curve.
- the probability algorithm may be a Bayesian probability algorithm.
- on the life time axis, the robot will sleep, exercise, eat, dance, read books, put on makeup, and perform other actions. Each action affects the robot's own self-cognition, and the parameters on the life time axis are combined with the robot's own self-cognition.
- the robot's self-cognition includes mood, fatigue value, intimacy, goodwill, number of interactions, the robot's three-dimensional cognition, age, height, weight, intimacy, game scene value, game object value, location scene value, location object value, and so on.
- the location scene value allows the robot to identify the scene in which it is located, such as a cafe or a bedroom.
- the robot performs different actions over the time axis of a day, such as sleeping at night, eating at noon, and exercising during the day. Every scene on the life time axis affects the robot's self-cognition. These numerical changes are modeled by a dynamic fit of the probability model, which fits the probability that each of these actions occurs at each point on the time axis.
- the multi-modality signal includes at least an image signal
- the content generation module is specifically configured to generate the robot interaction content based on the image signal and the user intention, in combination with the current robot life time axis.
- the multi-modal signal includes at least an image signal so that the robot can grasp the user's intention; to understand the user's intention better, other signals such as a voice signal or a gesture signal are generally added, so that the robot can judge more accurately whether the user's expression is genuine or just a joke.
- the multi-modal signal includes at least a voice signal
- the content generating module is specifically configured to: generate the robot interaction content based on the voice signal and the user intention, in combination with the current robot life time axis.
- the multi-modality signal includes at least a gesture signal
- the content generation module is specifically configured to generate the robot interaction content based on the gesture signal and the user intention, in combination with the current robot life time axis.
- for example, the user speaks to the robot by voice: "I'm hungry". The multi-modal signal can also include a picture signal, and the robot makes a comprehensive judgment based on the multi-modal signal, such as the above voice signal plus the picture signal, and recognizes that the user's intention is that the user is very hungry. Combined with the robot life timeline, for example, if the current time is 9 a.m., the robot will reply that the user should go and have breakfast, accompanied by a cute expression.
- similarly, if the user says "I'm hungry" and the robot, judging comprehensively from the voice signal plus the picture signal, recognizes that the user is very hungry, but the current time on the robot life timeline is 9 p.m., the robot will reply that it is too late to eat, accompanied by a cute expression.
- the voice signal and the picture signal are generally used together to understand the user's meaning accurately, so that the reply to the user is more accurate.
- adding other signals, such as gesture signals or video signals, makes the understanding even more accurate.
- the invention discloses a robot comprising a system for generating interactive content of a robot as described above.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Mechanical Engineering (AREA)
- Robotics (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Algebra (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Mathematical Analysis (AREA)
- Computational Mathematics (AREA)
- Probability & Statistics with Applications (AREA)
- Computational Linguistics (AREA)
- Manipulator (AREA)
- Toys (AREA)
Abstract
The invention relates to a method for generating robot interaction content, comprising: acquiring a multi-modal signal (S101); determining a user intention according to the multi-modal signal (S102); and generating robot interaction content according to the multi-modal signal and the user intention, in combination with the current life timeline of a robot (S103). By means of the method, the life timeline on which the robot is located is added to the generation of the robot's interaction content, so that the robot is more humanized when interacting with humans and follows a human lifestyle along the life timeline, and the anthropomorphism of robot interaction content generation, the human-robot interaction experience, and intelligence can be improved.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201680001744.2A CN106462254A (zh) | 2016-06-29 | 2016-06-29 | 一种机器人交互内容的生成方法、系统及机器人 |
| PCT/CN2016/087753 WO2018000268A1 (fr) | 2016-06-29 | 2016-06-29 | Procédé et système pour générer un contenu d'interaction de robot, et robot |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2016/087753 WO2018000268A1 (fr) | 2016-06-29 | 2016-06-29 | Procédé et système pour générer un contenu d'interaction de robot, et robot |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018000268A1 true WO2018000268A1 (fr) | 2018-01-04 |
Family
ID=58215746
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2016/087753 Ceased WO2018000268A1 (fr) | 2016-06-29 | 2016-06-29 | Procédé et système pour générer un contenu d'interaction de robot, et robot |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN106462254A (fr) |
| WO (1) | WO2018000268A1 (fr) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111970536A (zh) * | 2020-07-24 | 2020-11-20 | 北京航空航天大学 | 一种基于音频生成视频的方法和装置 |
Families Citing this family (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10864633B2 (en) * | 2017-04-28 | 2020-12-15 | Southe Autonomy Works, Llc | Automated personalized feedback for interactive learning applications |
| CN109202921B (zh) * | 2017-07-03 | 2020-10-20 | 北京光年无限科技有限公司 | 用于机器人的基于遗忘机制的人机交互方法及装置 |
| CN107491511A (zh) * | 2017-08-03 | 2017-12-19 | 深圳狗尾草智能科技有限公司 | 机器人的自我认知方法及装置 |
| CN107563517A (zh) * | 2017-08-25 | 2018-01-09 | 深圳狗尾草智能科技有限公司 | 机器人自我认知实时更新方法及系统 |
| CN107992935A (zh) * | 2017-12-14 | 2018-05-04 | 深圳狗尾草智能科技有限公司 | 为机器人设置生活周期的方法、设备及介质 |
| CN108297098A (zh) * | 2018-01-23 | 2018-07-20 | 上海大学 | 人工智能驱动的机器人控制系统及方法 |
| CN108363492B (zh) * | 2018-03-09 | 2021-06-25 | 南京阿凡达机器人科技有限公司 | 一种人机交互方法及交互机器人 |
| CN109376282A (zh) * | 2018-09-26 | 2019-02-22 | 北京子歌人工智能科技有限公司 | 一种基于人工智能的人机智能聊天的方法和装置 |
| CN109976338A (zh) * | 2019-03-14 | 2019-07-05 | 山东大学 | 一种多模态四足机器人人机交互系统及方法 |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1392826A (zh) * | 2000-10-05 | 2003-01-22 | 索尼公司 | 机器人设备及其控制方法 |
| US7685518B2 (en) * | 1998-01-23 | 2010-03-23 | Sony Corporation | Information processing apparatus, method and medium using a virtual reality space |
| CN104951077A (zh) * | 2015-06-24 | 2015-09-30 | 百度在线网络技术(北京)有限公司 | 基于人工智能的人机交互方法、装置和终端设备 |
| CN105058389A (zh) * | 2015-07-15 | 2015-11-18 | 深圳乐行天下科技有限公司 | 一种机器人系统、机器人控制方法及机器人 |
| CN105082150A (zh) * | 2015-08-25 | 2015-11-25 | 国家康复辅具研究中心 | 一种基于用户情绪及意图识别的机器人人机交互方法 |
| CN105490918A (zh) * | 2015-11-20 | 2016-04-13 | 深圳狗尾草智能科技有限公司 | 一种机器人主动与主人交互的系统及方法 |
| CN105701211A (zh) * | 2016-01-13 | 2016-06-22 | 北京光年无限科技有限公司 | 面向问答系统的主动交互数据处理方法及系统 |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105093986A (zh) * | 2015-07-23 | 2015-11-25 | 百度在线网络技术(北京)有限公司 | 基于人工智能的拟人机器人控制方法、系统及拟人机器人 |
-
2016
- 2016-06-29 WO PCT/CN2016/087753 patent/WO2018000268A1/fr not_active Ceased
- 2016-06-29 CN CN201680001744.2A patent/CN106462254A/zh active Pending
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7685518B2 (en) * | 1998-01-23 | 2010-03-23 | Sony Corporation | Information processing apparatus, method and medium using a virtual reality space |
| CN1392826A (zh) * | 2000-10-05 | 2003-01-22 | 索尼公司 | 机器人设备及其控制方法 |
| CN104951077A (zh) * | 2015-06-24 | 2015-09-30 | 百度在线网络技术(北京)有限公司 | 基于人工智能的人机交互方法、装置和终端设备 |
| CN105058389A (zh) * | 2015-07-15 | 2015-11-18 | 深圳乐行天下科技有限公司 | 一种机器人系统、机器人控制方法及机器人 |
| CN105082150A (zh) * | 2015-08-25 | 2015-11-25 | 国家康复辅具研究中心 | 一种基于用户情绪及意图识别的机器人人机交互方法 |
| CN105490918A (zh) * | 2015-11-20 | 2016-04-13 | 深圳狗尾草智能科技有限公司 | 一种机器人主动与主人交互的系统及方法 |
| CN105701211A (zh) * | 2016-01-13 | 2016-06-22 | 北京光年无限科技有限公司 | 面向问答系统的主动交互数据处理方法及系统 |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111970536A (zh) * | 2020-07-24 | 2020-11-20 | 北京航空航天大学 | 一种基于音频生成视频的方法和装置 |
| CN111970536B (zh) * | 2020-07-24 | 2021-07-23 | 北京航空航天大学 | 一种基于音频生成视频的方法和装置 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN106462254A (zh) | 2017-02-22 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2018000268A1 (fr) | Procédé et système pour générer un contenu d'interaction de robot, et robot | |
| CN106956271B (zh) | 预测情感状态的方法和机器人 | |
| WO2018000259A1 (fr) | Procédé et système pour générer un contenu d'interaction de robot et robot | |
| CN107030691B (zh) | 一种看护机器人的数据处理方法及装置 | |
| US10628714B2 (en) | Entity-tracking computing system | |
| US20210191506A1 (en) | Affective interaction systems, devices, and methods based on affective computing user interface | |
| JP2022517457A (ja) | 感情認識機械を定義するための方法及びシステム | |
| CN107870994A (zh) | 用于智能机器人的人机交互方法及系统 | |
| WO2018000267A1 (fr) | Procédé de génération de contenu d'interaction de robot, système et robot | |
| CN108847226A (zh) | 管理人机对话中的代理参与 | |
| CN109765991A (zh) | 社交互动系统、用于帮助用户进行社交互动的系统及非暂时性计算机可读存储介质 | |
| CN109789550A (zh) | 基于小说或表演中的先前角色描绘的社交机器人的控制 | |
| WO2018006374A1 (fr) | Procédé, système et robot de recommandation de fonction basés sur un réveil automatique | |
| WO2021217282A1 (fr) | Procédé de mise en œuvre d'intelligence artificielle universelle | |
| WO2018006371A1 (fr) | Procédé et système de synchronisation de paroles et d'actions virtuelles, et robot | |
| WO2018006372A1 (fr) | Procédé et système de commande d'appareil ménager sur la base de la reconnaissance d'intention, et robot | |
| CN106471444A (zh) | 一种虚拟3d机器人的交互方法、系统及机器人 | |
| US20250133038A1 (en) | Context-aware dialogue system | |
| Khalid et al. | Determinants of trust in human-robot interaction: Modeling, measuring, and predicting | |
| Paterson | Inviting robot touch (by design) | |
| WO2018000258A1 (fr) | Procédé et système permettant de générer un contenu d'interaction de robot et robot | |
| WO2018000261A1 (fr) | Procédé et système permettant de générer un contenu d'interaction de robot, et robot | |
| WO2018000260A1 (fr) | Procédé servant à générer un contenu d'interaction de robot, système et robot | |
| De Simone et al. | Empowering human interaction: A socially assistive robot for support in trade shows | |
| WO2018000266A1 (fr) | Procédé et système permettant de générer un contenu d'interaction de robot, et robot |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16906669 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 16906669 Country of ref document: EP Kind code of ref document: A1 |