
WO2018174290A1 - Conversation control system and robot control system - Google Patents

Conversation control system and robot control system

Info

Publication number
WO2018174290A1
WO2018174290A1 (PCT/JP2018/011919, JP2018011919W)
Authority
WO
WIPO (PCT)
Prior art keywords
scenario
robot
dialogue
internal state
control unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2018/011919
Other languages
English (en)
Japanese (ja)
Inventor
雄介 柴田
智彦 大内
麻莉子 矢作
浩平 小川
雄一郎 吉川
石黒 浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zensho Holdings Co Ltd
University of Osaka NUC
Original Assignee
Osaka University NUC
Zensho Holdings Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Osaka University NUC, Zensho Holdings Co Ltd filed Critical Osaka University NUC
Publication of WO2018174290A1
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Definitions

  • the present invention relates to a dialogue control device and a robot control system.
  • The present invention has been made in view of such points, and an object thereof is to provide a dialogue control device and a robot control system that, when a dialogue is carried out between a user and a robot or the like, create a more realistic sense of dialogue through an exchange that reflects the internal state of the robot or the like.
  • A dialogue control device includes: an information acquisition unit that acquires, from a plurality of pieces of dialogue scenario information, dialogue scenario information for interacting with a customer who has visited a restaurant; an internal state value acquisition unit that acquires a value obtained by quantifying an internal state that changes as the dialogue scenario corresponding to the acquired dialogue scenario information progresses; and a scenario control unit that corrects the dialogue scenario based on the cumulative value of the values acquired by the internal state value acquisition unit as the dialogue scenario progresses.
  • The scenario control unit may accumulate, as the dialogue scenario progresses, values obtained by quantifying the first internal state and the second internal state, which are mutually opposed.
  • the first internal state may be a positive emotion obtained for a customer preference or evaluation regarding a recommended menu
  • the second internal state may be a negative emotion obtained for customer preference or evaluation regarding the recommended menu.
  • the first internal state may be a positive feeling obtained for at least one of the customer's evaluation regarding the ordered menu and the customer's evaluation regarding the customer service
  • the second internal state may be a negative emotion obtained with respect to at least one of a customer evaluation related to the ordered menu and a customer evaluation related to the customer service.
  • The scenario control unit may control whether to switch from the ongoing dialogue scenario to a separate dialogue scenario based on the customer's selection information, the cumulative value of the first internal state, and the cumulative value of the second internal state.
  • the internal state may include at least one of the emotion and intention of the robot that interacts with the customer.
  • A robot motion control unit may be provided that causes the robot to perform an action representing the internal state according to the cumulative value of the values obtained by quantifying the internal state in the scenario control unit.
  • A robot control system includes a robot that operates according to an instruction signal, and an operation terminal that generates the instruction signal and transmits it to the robot.
  • The operation terminal includes an information acquisition unit that acquires, from a plurality of pieces of dialogue scenario information, dialogue scenario information for interacting with a customer who has visited a restaurant, an internal state value acquisition unit that acquires a value obtained by quantifying an internal state that changes as the dialogue scenario corresponding to the acquired dialogue scenario information progresses, and a scenario control unit that corrects the dialogue scenario based on the cumulative value of the acquired values.
  • The robot motion control unit may change at least one of the voice tone, motion, eye color, and way the eyes glow of the robot according to the cumulative value of the values obtained by quantifying the internal state in the scenario control unit.
  • FIG. 1 is a block diagram showing a robot control system 1 according to an embodiment.
  • A flowchart showing an example of the flow of processing in the scenario control unit.
  • A sequence diagram showing part of an example of the progress of a dialogue scenario in a basic scenario.
  • A diagram schematically showing a scene corresponding to dialogue scenario 1-1.
  • A diagram schematically showing a scene corresponding to dialogue scenario 1-5.
  • A block diagram showing a robot control system according to a first modification.
  • FIG. 1 is a block diagram illustrating a robot control system 1 according to an embodiment.
  • the robot control system 1 includes a robot 2, an operation terminal 3 that is an example of a dialog control device, a handy terminal 4, and a store system 5 such as a POS (point of sales) system.
  • the robot control system 1 is, for example, a system for a restaurant customer (hereinafter referred to as a user) to interact with the robot 2 via an operation terminal 3 that operates the robot 2.
  • The robot 2 in the example of FIG. 1 is a machine having a human-like appearance and an interactive function, that is, a humanoid. Note that the robot 2 may instead have a non-human appearance, such as that of an animal or a character. Further, the robot 2 may be a virtual robot based on an image displayed on the display unit 38.
  • the robot 2 may be a general-purpose robot that can be programmed in posture, motion, or the like, or may be a robot developed for the robot control system 1.
  • the robot 2 includes a robot drive unit 21 and a robot control unit 22 that is an example of a drive control unit.
  • the robot 2 is driven by electric power supplied from a commercial power source.
  • Note that the robot 2 may instead be battery-driven, powered by a rechargeable battery, for example.
  • The robot drive unit 21 includes, for example, actuators that drive the parts of the robot 2 having degrees of freedom, such as the torso, arms, neck, eyeballs, hips, and mouth, a voice output device that outputs the speech of the robot 2, and a lighting device that lights the eyeball portions of the robot 2.
  • A robot control command, which is an example of an instruction signal, is input from the operation terminal 3 to the robot control unit 22.
  • the robot control unit 22 generates the drive control signal described above based on the input robot control command, and outputs the generated drive control signal to the robot drive unit 21. That is, the robot 2 can operate according to the robot control command.
  • the robot control command includes a command for controlling the posture, position, and lighting state of the robot 2 and a command for causing the robot 2 to speak.
  • This robot control command includes robot side scenario data described later.
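  • The flow just described — a robot control command carrying posture, lighting, and speech instructions that the robot control unit 22 turns into drive control signals — can be pictured with a short sketch. The patent contains no code; the classes, fields, and string values below are purely hypothetical illustrations.

```python
# Illustrative sketch only; the patent contains no code, and all class, field, and string names
# here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RobotControlCommand:
    """Instruction signal sent from the operation terminal 3 to the robot control unit 22."""
    posture: Optional[str] = None         # e.g. "face_user", "raise_arms"
    eye_lighting: Optional[str] = None    # e.g. "warm", "cool", "fast_blink"
    utterance: Optional[str] = None       # text the robot 2 should speak
    scenario_data: Optional[dict] = None  # robot-side scenario data, if any

class RobotController:
    """Plays the role of the robot control unit 22: turns commands into drive control signals."""
    def handle(self, cmd: RobotControlCommand) -> list[str]:
        drive_signals = []
        if cmd.posture:
            drive_signals.append(f"actuator:{cmd.posture}")
        if cmd.eye_lighting:
            drive_signals.append(f"lighting:{cmd.eye_lighting}")
        if cmd.utterance:
            drive_signals.append(f"speech:{cmd.utterance}")
        return drive_signals  # would be passed on to the robot drive unit 21

# Example: a command that makes the robot face the user and greet them.
print(RobotController().handle(RobotControlCommand(posture="face_user", utterance="Welcome!")))
```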
  • the operation terminal 3 is carried by the user and is, for example, a tablet terminal having a touch function.
  • the operation terminal 3 may be a smartphone, a desktop display type terminal, or the like.
  • The operation terminal 3 is driven by power supplied from a built-in battery and includes an orientation sensor 31, a motion generation unit 32, a scenario DB (database) 33, an internal state value accumulation unit 34, an information acquisition unit 35, an internal state value acquisition unit 36, a scenario control unit 37, a display unit 38, an input unit 39 such as a touch panel, an audio output unit 40, and a robot motion control unit 41.
  • the operation terminal 3 according to the present embodiment corresponds to a dialogue control device.
  • The orientation sensor 31 detects the orientation of the operation terminal 3 and outputs an orientation detection signal indicating the detected orientation to the motion generation unit 32.
  • Based on the orientation detection signal from the orientation sensor 31, the motion generation unit 32 generates a robot control command for controlling the posture of the robot 2 so that the robot 2 faces the direction in which the operation terminal 3 is located.
  • In other words, the motion generation unit 32 generates a robot control command for controlling the motion of the robot 2 based on the output of the orientation sensor 31.
  • the motion generation unit 32 transmits the generated robot control command to the robot control unit 22 via wireless communication such as Wi-Fi, for example.
  • The robot control unit 22 receives the robot control command from the motion generation unit 32 and outputs a drive control signal corresponding to the received robot control command to the robot drive unit 21, thereby controlling the operation of the robot 2.
  • the scenario DB 33 stores a plurality of pieces of dialogue scenario information for carrying out a dialogue between the user and the robot 2.
  • the dialogue scenario information includes a plurality of scenario groups (each scenario group has a plurality of dialogue scenarios) and a plurality of dialogue scenarios.
  • One dialogue scenario is a single storyline of dialogue exchanged between the robot 2 and the user.
  • One dialogue scenario includes robot-side scenario data used for the robot 2's side of the dialogue, that is, its speech and movement, and user-side scenario data used for the user's side of the dialogue, that is, selections made on the operation terminal 3.
  • The user-side scenario data includes several pieces of dialogue scenario data that can serve as responses to the robot-side scenario data, and the user can select any of them through the input unit 39.
  • the dialogue scenario data has a tree structure in which the robot side scenario data and the user side scenario data are alternately coupled as nodes.
  • a predetermined series of nodes ranging from the highest level to the lowest level is managed as a basic scenario used for a typical dialogue, for example.
  • Other node groups may be managed as a correction scenario for correcting the basic scenario.
  • dialogue scenario data for a restaurant may be divided into a plurality of scenario groups according to status data indicating a staying state of the user at the restaurant.
  • The status data in the example of FIG. 1 is information indicating that the user has performed the following actions: being seated, ordering, being served food and drink, finishing the meal, and paying.
  • the dialogue scenario data is provided from the scenario group classified by these status data.
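  • To make the structure described above concrete, the following sketch models the scenario DB 33 as a tree of alternating robot-side and user-side nodes grouped by status data. This is only an illustrative assumption about the data layout; the patent prescribes no format, and all identifiers and example utterances below are invented.

```python
# Illustrative sketch only; the patent prescribes no data format, so all identifiers and example
# utterances below are invented.
from dataclasses import dataclass, field

@dataclass
class ScenarioNode:
    """One node of the dialogue-scenario tree; robot-side and user-side data alternate by level."""
    side: str                                   # "robot" or "user"
    content: str                                # robot utterance/movement, or a user-selectable option
    children: list["ScenarioNode"] = field(default_factory=list)

# Scenario groups keyed by status data (seated, ordered, served, finished, paying).
scenario_db: dict[str, list[ScenarioNode]] = {
    "seated": [
        ScenarioNode("robot", "What to order?", children=[
            ScenarioNode("user", "I haven't decided yet", children=[
                ScenarioNode("robot", "How about today's recommended menu?"),
            ]),
            ScenarioNode("user", "I'll have the curry", children=[
                ScenarioNode("robot", "Good choice!"),
            ]),
        ]),
    ],
    # "ordered", "served", "finished", and "paying" groups would follow the same pattern.
}

def basic_scenario_path(root: ScenarioNode) -> list[str]:
    """Walk the first child at each level: one example of a 'basic scenario' series of nodes."""
    path, node = [root.content], root
    while node.children:
        node = node.children[0]
        path.append(node.content)
    return path

print(basic_scenario_path(scenario_db["seated"][0]))
```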
  • the internal state value accumulation unit 34 accumulates values obtained by quantifying the internal state that changes with the progress of the dialogue scenario corresponding to the dialogue scenario information.
  • the changing internal state includes a first internal state indicating positive emotion and a second internal state indicating negative emotion, and each is quantified.
  • the degree of positive emotion is indicated by a numerical value. That is, the higher the numerical value, the higher the positive feeling.
  • the degree of negative emotion is indicated by a numerical value. In other words, the higher the numerical value, the higher the negative emotion.
  • A neutral emotion is quantified as 0, for example.
  • the internal state includes not only emotions but also all elements that can normally exist in a person such as senses, intentions, and interests. Therefore, hereinafter, the emotion is mainly described as the internal state, but the applicable internal state is not limited to this.
  • The first internal state is, for example, a positive emotion obtained for the user's preference or evaluation regarding a recommended menu item in a store such as a restaurant.
  • The second internal state is, for example, a negative emotion obtained for the user's preference or evaluation regarding the recommended menu item.
  • Alternatively, the first internal state is a positive emotion obtained for at least one of the user's evaluation of a menu item ordered in a store such as a restaurant and the user's evaluation of the customer service.
  • the second internal state is a negative emotion obtained for at least one of the user's evaluation regarding the ordered menu and the user's evaluation regarding the customer service.
  • the internal state value accumulation unit 34 stores numerical values indicating positive emotions and negative emotions in association with options of the user-side dialogue scenario. Thereby, the internal state value storage unit 34 can output a numerical value corresponding to the option selected by the user.
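  • A minimal way to picture the internal state value accumulation unit 34 is a table that maps each user-side option to a pair of quantified values (positive increment, negative increment). The option IDs and numbers below are hypothetical, loosely echoing the example of FIG. 4 described later.

```python
# Illustrative sketch only; option IDs and values are hypothetical, loosely echoing FIG. 4.
# The internal state value accumulation unit 34 associates each user-side option with a pair of
# quantified values: (increment to positive emotion, increment to negative emotion).
EMOTION_VALUES: dict[str, tuple[int, int]] = {
    "option_1": (0, 1),   # not a preferable selection for the robot: negative emotion +1
    "option_2": (1, 0),   # preferable selection: positive emotion +1
    "option_3": (1, 1),   # raises both internal states
    "option_8": (3, 0),   # very preferable choice
    "option_9": (0, 2),   # clearly not preferable
}

def values_for(option_id: str) -> tuple[int, int]:
    """Return the quantified (positive, negative) internal-state values for a selected option."""
    return EMOTION_VALUES.get(option_id, (0, 0))  # a neutral option contributes 0 to both

print(values_for("option_8"))  # -> (3, 0)
```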
  • The information acquisition unit 35 acquires, from the plurality of pieces of dialogue scenario information in the scenario DB 33, dialogue scenario information for interacting with a user who has visited the restaurant. That is, under the control of the scenario control unit 37, the information acquisition unit 35 acquires dialogue scenario information for interacting with the user from the plurality of pieces of dialogue scenario information stored in the scenario DB 33 and outputs it to the scenario control unit 37.
  • The internal state value acquisition unit 36 acquires the value obtained by quantifying the internal state that changes as the dialogue scenario corresponding to the dialogue scenario information progresses. That is, under the control of the scenario control unit 37, the internal state value acquisition unit 36 acquires the quantified internal-state value stored in the internal state value accumulation unit 34 and outputs it to the scenario control unit 37.
  • The scenario control unit 37 advances the dialogue scenario based on the dialogue scenario information acquired by the information acquisition unit 35. As the dialogue scenario progresses, the scenario control unit 37 updates the cumulative value of the quantified internal-state values acquired via the internal state value acquisition unit 36. That is, as described above, the scenario control unit 37 accumulates, as the dialogue scenario progresses, at least one of the value obtained by quantifying the first internal state, which indicates positive emotion, and the value obtained by quantifying the second internal state, which indicates negative emotion. The scenario control unit 37 then corrects the dialogue scenario based on the cumulative value of the values acquired by the internal state value acquisition unit 36. In addition, the scenario control unit 37 supplies the cumulative value obtained by quantifying the first internal state indicating positive emotion and the cumulative value obtained by quantifying the second internal state indicating negative emotion to the robot motion control unit 41 via the motion generation unit 32.
  • the scenario control unit 37 causes the display unit 38 to display user-side dialog scenario options based on the user-side scenario data included in the dialog scenario information.
  • the motion generation unit 32 generates a robot control command for causing the robot 2 to speak and operate based on the robot side scenario data input from the scenario control unit 37. At this time, the motion generation unit 32 includes commands related to motions such as the tone of the utterance voice, the lighting state of the eyeball, and the posture of the robot 2 in the command content of the robot control command corresponding to the robot side scenario data.
  • the user can select a desired dialogue scenario using the input unit 39 from the choices of the dialogue scenario displayed on the display unit 38. That is, the input unit 39 outputs a dialogue scenario selection signal indicating a dialogue scenario selected by the user to the scenario control unit 37 and the voice output unit 40.
  • the voice output unit 40 outputs the selected dialogue scenario as a voice based on the dialogue scenario selection signal input from the input unit 39.
  • The robot motion control unit 41 causes the robot 2 to perform actions representing the internal state according to the cumulative value of the quantified internal-state values that are received sequentially from the operation terminal 3 as the dialogue progresses.
  • the internal state is a state including at least one of emotion and intention.
  • the robot motion control unit 41 can cause the robot 2 to perform an action reflecting at least one of the emotion and intention of the robot 2.
  • The robot motion control unit 41 generates a robot control command that changes at least one of the voice tone, motion, eye color, and way the eyes glow of the robot 2 according to the cumulative value of the internal state.
  • In other words, the internal state of the robot 2 is quantified, and the robot motion control unit 41 is provided so as to cause the robot 2 to perform some action according to the cumulative value of the quantified values.
  • As actions expressing positive emotion, the robot motion control unit 41 may, for example, raise the pitch of the robot 2's voice, increase its volume, speed up the blinking of the eye lights, change the tone of speech to a cheerful one, change the eye color to a warm color, change the facial expression to a happier one by raising the corners of the mouth, raise the arms, or turn the face upward.
  • As actions expressing negative emotion, the robot motion control unit 41 may, for example, lower the pitch of the robot 2's voice, decrease its volume, slow the blinking of the eye lights, change the tone of speech to a sad one, change the eye color to a cool color, change the facial expression to a sad one by lowering the corners of the mouth, let the face or body droop, lower the arms, or drop the shoulders.
  • By operating the robot 2 so that its behavior reflects at least one of emotion and intention through the exchange of dialogue, a sense of realism can be created in the dialogue, and its naturalness can be brought out by suppressing any sense of incongruity. The user's satisfaction with the dialogue with the robot 2 can thereby be improved.
  • Note that the robot motion control unit 41 may generate a robot control command based on both the robot control command input from the motion generation unit 32 and the cumulative value obtained by quantifying the internal state. For example, the robot motion control unit 41 may correct the robot control command input from the motion generation unit 32 to generate another robot control command.
  • In the present embodiment, the cumulative value obtained by quantifying the internal state is input to the robot motion control unit 41 from the scenario control unit 37; however, the configuration is not limited to this, and the robot motion control unit 41 may itself generate the cumulative value obtained by quantifying the internal state.
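  • As an illustration of the kind of mapping the robot motion control unit 41 performs, the sketch below converts the two cumulative values into expressive parameters. The threshold and parameter names are assumptions, not values taken from the patent.

```python
# Illustrative sketch only; the threshold and parameter names are assumptions, not taken from the patent.
def motion_parameters(pos_total: int, neg_total: int, threshold: int = 10) -> dict:
    """Map the accumulated internal-state values to expressive parameters of the robot 2,
    in the spirit of the robot motion control unit 41."""
    if pos_total >= threshold and pos_total >= neg_total:
        # actions expressing positive emotion
        return {"pitch": "high", "volume": "loud", "eye_color": "warm",
                "blink_speed": "fast", "mouth_corners": "raised", "arms": "raised"}
    if neg_total >= threshold:
        # actions expressing negative emotion
        return {"pitch": "low", "volume": "soft", "eye_color": "cool",
                "blink_speed": "slow", "mouth_corners": "lowered", "shoulders": "dropped"}
    return {"pitch": "neutral", "volume": "normal", "eye_color": "default"}

print(motion_parameters(pos_total=12, neg_total=3))
```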
  • FIG. 2 is a diagram showing a table of example control operations of the scenario control unit 37. Options A, B, and C in the table are the options selected by the user, as the dialogue scenario progresses, from those displayed on the display unit 38.
  • Positive emotion (α) indicates the cumulative value of the quantified first internal state, which indicates positive emotion, when the corresponding option is selected, and negative emotion (β) indicates the cumulative value of the quantified second internal state, which indicates negative emotion, when the corresponding option is selected.
  • The scenario control unit 37 corrects the dialogue scenario based on the cumulative value acquired by the internal state value acquisition unit 36. For example, as shown in the first row of the table in FIG. 2, when the cumulative value of the quantified first internal state indicating positive emotion is equal to or greater than a predetermined threshold of 10, the scenario control unit 37 corrects the dialogue scenario information in consideration of the positive emotion of the robot 2. Conversely, as shown in the second row of the table in FIG. 2, when the cumulative value of the quantified second internal state indicating negative emotion is equal to or greater than the predetermined threshold of 10, the scenario control unit 37 corrects the dialogue scenario information in consideration of the negative emotion of the robot 2.
  • In this way, the scenario control unit 37 accumulates, as the dialogue scenario progresses, the values obtained by quantifying the mutually opposed first and second internal states, and corrects the dialogue scenario based on these cumulative values. That is, the scenario control unit 37 updates the cumulative value of the values acquired by the internal state value acquisition unit 36 as the dialogue scenario corresponding to the dialogue scenario information acquired via the information acquisition unit 35 progresses, and corrects the dialogue scenario when the cumulative value reaches the threshold. The scenario control unit 37 can thereby correct the dialogue scenario into one that matches the internal state of the robot 2.
  • Here, scenario correction means changing the dialogue scenario, for example by changing the order of dialogue scenarios in the ongoing dialogue scenario information, changing the dialogue scenario information to other dialogue scenario information, or changing how the operation of the robot 2 is controlled.
  • When the dialogue scenario information is corrected in consideration of the positive emotion of the robot 2, it is changed, for example, to dialogue scenario information with a more familiar, lighter feel, like a conversation between friends.
  • When it is corrected in consideration of the negative emotion of the robot 2, it is changed, for example, to dialogue scenario information with a heavier, more formal honorific tone.
  • Because the scenario correction performed by the scenario control unit 37 can change the dialogue scenario information, the content of the dialogue can also be changed.
  • When positive emotion is taken into account, the robot-side scenario data associated with the robot control command is corrected so that the movement of the robot 2 becomes larger; when negative emotion is taken into account, it is corrected so that the movement of the robot 2 becomes slower or smaller.
  • As described above, since the scenario control unit 37 corrects the dialogue scenario according to the cumulative value of the values indicating the internal state of the robot 2, which changes as the dialogue scenario progresses, it is possible to give a sense of reality to the exchange of dialogue between the robot 2 and the user.
  • Because the scenario control unit 37 corrects the scenario according to the accumulated quantified values, the internal-state values generated up to that point can be reflected in the scenario correction.
  • In other words, the internal-state values indicating the emotions and intentions accumulated for the robot 2 can be reflected in the dialogue and actions of the robot 2.
  • For example, it is possible to make the robot 2 speak or act in the kind of tone a human adopts when a dialogue that heightens negative emotion continues.
  • In this way, an emotional element like that which arises in dialogue between humans can also be produced in the dialogue between the robot 2 and the user.
  • When both the cumulative value of positive emotion and the cumulative value of negative emotion are less than 10, the dialogue scenario simply proceeds. Although omitted from FIG. 2, the dialogue scenario may also be switched when the cumulative value of at least one of positive emotion and negative emotion reaches a predetermined value greater than 10 (for example, 20).
  • FIG. 3 is a flowchart showing an example of the process flow in the scenario control unit 37.
  • An example of the process flow in the scenario control unit 37 will be described with reference to FIG.
  • an example will be described in which the scenario control unit 37 reads out and advances a correction scenario corresponding to the status data input from the handy terminal 4 via the store system 5 from the scenario DB 33.
  • the scenario control unit 37 causes the display unit 38 to display the user-side dialogue scenario options based on the user-side scenario data included in the read dialogue scenario information. Subsequently, the user selects a specific option displayed on the display unit 38 (step S300). In this case, the robot-side scenario data included in the dialogue scenario information is output to the motion generation unit 32.
  • the scenario control unit 37 acquires the internal state value of the robot 2 from the internal state value accumulation unit 34 via the internal state value acquisition unit 36 based on the option selected by the user (step S302). That is, the scenario control unit 37 acquires a value obtained by quantifying the first internal state and a value obtained by quantifying the second internal state.
  • Next, the scenario control unit 37 accumulates the value obtained by quantifying the first internal state and the value obtained by quantifying the second internal state, and updates the cumulative values (step S304). That is, the scenario control unit 37 updates the cumulative value of positive emotion (α) and the cumulative value of negative emotion (β), respectively.
  • The scenario control unit 37 then determines whether either the cumulative value of positive emotion (α) or the cumulative value of negative emotion (β) is equal to or greater than a threshold, for example 10 (step S306). If either cumulative value is equal to or greater than the threshold (step S306: YES), the scenario control unit 37 corrects the dialogue scenario (step S308). On the other hand, if both the cumulative value of positive emotion (α) and the cumulative value of negative emotion (β) are less than the threshold (step S306: NO), the scenario control unit 37 continues with the currently executing dialogue scenario (step S310).
  • Next, the scenario control unit 37 determines whether or not the dialogue scenario has ended (step S312). If it determines that the dialogue scenario has ended, the processing ends; if it determines that the dialogue scenario has not ended, the processing is repeated from step S300.
  • As described above, the scenario control unit 37 accumulates the mutually opposed cumulative value of positive emotion (α) and cumulative value of negative emotion (β) as the dialogue scenario progresses, and corrects the dialogue scenario if either of them is equal to or greater than the threshold. On the other hand, the scenario control unit 37 simply advances the dialogue scenario while both cumulative values are less than the threshold. In this way, once a cumulative value reaches the threshold, the scenario control unit 37 can correct the dialogue scenario into, and select, a dialogue scenario that matches the internal state of the robot 2.
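  • The loop of FIG. 3 (steps S300 to S312) can be sketched as follows. The scenario interface (finished, options, advance, correct), the helper callbacks, and the toy example at the end are hypothetical stand-ins for the display unit 38, the input unit 39, and the internal state value accumulation unit 34.

```python
# Illustrative sketch of the loop in FIG. 3 (steps S300-S312); not code from the patent.
# The scenario interface and helper callbacks are hypothetical stand-ins for the
# display unit 38 / input unit 39 and the internal state value accumulation unit 34.
def run_dialogue(scenario, present_options, emotion_values, threshold: int = 10):
    pos_total, neg_total = 0, 0                        # cumulative α and β (starting from 0 here)
    while not scenario.finished():                     # S312: has the dialogue scenario ended?
        option = present_options(scenario.options())   # S300: user selects a displayed option
        pos, neg = emotion_values(option)              # S302: acquire quantified internal-state values
        pos_total, neg_total = pos_total + pos, neg_total + neg   # S304: update cumulative values
        if pos_total >= threshold or neg_total >= threshold:      # S306
            scenario.correct(positive=pos_total >= threshold)     # S308: correct the dialogue scenario
        else:
            scenario.advance(option)                   # S310: continue the current scenario
    return pos_total, neg_total

class ToyScenario:
    """Two-step stand-in scenario used only to show the loop running end to end."""
    def __init__(self):
        self.step, self.corrected = 0, False
    def finished(self):
        return self.step >= 2
    def options(self):
        return ["option_2", "option_8"]
    def advance(self, option):
        self.step += 1
    def correct(self, positive):
        self.corrected, self.step = positive, self.step + 1

print(run_dialogue(ToyScenario(),
                   present_options=lambda opts: opts[0],
                   emotion_values=lambda o: {"option_2": (1, 0), "option_8": (3, 0)}[o]))  # -> (2, 0)
```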
  • FIG. 4 is a sequence diagram showing a part of the progress example of the dialogue scenario.
  • the progress example of the scenario will be described with reference to FIG.
  • Dialog scenarios 1-1 to 1-4 in FIG. 4 are partial scenarios included in the dialog scenario 1.
  • In the following, the initial value of the cumulative value of positive emotion is taken to be α, and the initial value of the cumulative value of negative emotion is taken to be β.
  • the dialogue scenario 1 starts from the dialogue scenario 1-1.
  • the user's options in the dialogue scenario 1-1 are option 1 and option 2.
  • When the user selects option 1, the first internal state value indicating positive emotion is 0 and the second internal state value indicating negative emotion is 1. That is, for the robot 2, option 1 is not a preferable selection. The cumulative value of the robot 2's negative emotion therefore increases to β + 1.
  • When the user selects option 2, the first internal state value indicating positive emotion is 1 and the second internal state value indicating negative emotion is 0. That is, for the robot 2, option 2 is a preferable selection. The cumulative value of the robot 2's positive emotion therefore increases to α + 1.
  • The dialogue then proceeds to dialogue scenario 1-3.
  • The user's options in dialogue scenario 1-3 are options 3 to 7.
  • Suppose option 3 is selected.
  • In that case, the first internal state value indicating positive emotion is 1 and the second internal state value indicating negative emotion is 1. That is, for the robot 2, option 3 increases both the positive emotion and the negative emotion. The cumulative value of the robot 2's positive emotion therefore increases to α + 2, and the cumulative value of its negative emotion increases to β + 1.
  • the dialogue scenario 1-4 proceeds according to the selection of the option 3.
  • the user's options in the dialogue scenario 1-4 are option 8 and option 9.
  • When option 8 is selected, the first internal state value indicating positive emotion is 3 and the second internal state value indicating negative emotion is 0. That is, for the robot 2, option 8 is a very preferable choice.
  • In this case, the cumulative value of the robot 2's positive emotion increases to α + 5.
  • Here, the cumulative value of positive emotion becomes equal to or greater than the threshold value.
  • The scenario control unit 37 therefore performs a scenario correction in which dialogue scenario 1 is corrected in consideration of the positive emotion of the robot 2.
  • The corrected scenario then proceeds in accordance with the selection of option 8.
  • On the other hand, when option 9 is selected, the first internal state value indicating positive emotion is 0 and the second internal state value indicating negative emotion is 2. That is, for the robot 2, option 9 is not a preferable choice. The cumulative value of the robot 2's negative emotion therefore increases to β + 3.
  • In this way, a first internal state value and a second internal state value are associated with each option, and the cumulative value of the quantified first internal state indicating positive emotion and the cumulative value of the quantified second internal state indicating negative emotion are updated according to the option selected.
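  • The bookkeeping in the FIG. 4 trace can be checked with the hypothetical per-option values from the earlier sketch; α and β are simply treated as starting offsets of 0 here.

```python
# Checking the FIG. 4 bookkeeping with the hypothetical per-option values from the earlier sketch
# (option 2 -> (1, 0), option 3 -> (1, 1), option 8 -> (3, 0)); α and β are taken as 0 here.
alpha = beta = 0
for pos, neg in [(1, 0), (1, 1), (3, 0)]:   # selections: option 2, option 3, option 8
    alpha += pos
    beta += neg
print(alpha, beta)   # -> 5 1, i.e. α + 5 and β + 1, matching the progression described for FIG. 4
```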
  • FIG. 5 is a diagram schematically showing a scene corresponding to the dialogue scenario 1-1.
  • the operation terminal 3 is, for example, a tablet-type portable terminal that includes a display unit 38 and a transparent touch panel type input unit 39 installed together with the display unit 38.
  • Based on the robot-side scenario data of dialogue scenario 1-1 acquired by the information acquisition unit 35, the scenario control unit 37 causes the robot 2 to utter "What to order?".
  • the scenario control unit 37 causes the display unit 38 to display user-side dialog scenario options based on the user-side scenario data included in the dialog scenario 1-1.
  • the options here are option 500 and option 502.
  • When option 500 is selected, the cumulative value of the robot 2's negative emotion increases by 1; when option 502 is selected, the cumulative value of the robot 2's positive emotion increases by 1.
  • the body 2a of the robot 2 is supported on the pedestal 2b so as to be rotatable about a vertical axis, and the body 2a of the robot 2 may rotate according to a dialogue scenario.
  • FIG. 6 is a diagram schematically showing a scene corresponding to the dialogue scenario 1-3.
  • The scenario control unit 37 displays the user-side dialogue scenario options on the display unit 38 based on the user-side scenario data included in dialogue scenario 1-3 acquired by the information acquisition unit 35.
  • the options here are options 600, 602, 604, 606, and 608.
  • the options 600, 602, and 608 are items related to user preferences regarding the recommended menu
  • the options 604 and 606 are items related to user evaluation regarding the recommended menu.
  • That is, the first internal state according to the present embodiment is a positive emotion obtained for at least one of the user's evaluation of the ordered menu item and the user's evaluation of the customer service, and the second internal state is a negative emotion obtained for at least one of the user's evaluation of the ordered menu item and the user's evaluation of the customer service.
  • When option 600 is selected, the cumulative value of the robot 2's positive emotion increases by 1 and the cumulative value of its negative emotion increases by 1.
  • When option 602 is selected, the cumulative value of the robot 2's positive emotion increases by 2 and the cumulative value of its negative emotion does not change.
  • When option 604 is selected, the cumulative value of the robot 2's positive emotion does not change and the cumulative value of its negative emotion increases by 1.
  • When option 606 is selected, neither the cumulative value of the robot 2's positive emotion nor the cumulative value of its negative emotion changes.
  • When option 608 is selected, the cumulative value of the robot 2's positive emotion increases by 3 and the cumulative value of its negative emotion does not change.
  • FIG. 7 is a diagram schematically showing a scene corresponding to a dialogue scenario in which scenario correction considering positive emotion is performed.
  • the robot 2 speaks according to the robot side scenario data included in the dialogue scenario.
  • Since the scenario correction here is performed in consideration of positive emotion, the operation of the robot 2 is corrected to an operation expressing positive emotion.
  • For example, the scenario is corrected so that the robot 2 has a smiling expression and speaks in a higher voice.
  • the scenario control unit 37 causes the display unit 38 to display user-side dialog scenario options based on the user-side scenario data included in the correction scenario.
  • the options here are option 700 and option 702.
  • When option 700 is selected, the cumulative value of the robot 2's negative emotion increases by 1.
  • When option 702 is selected, the cumulative value of the robot 2's positive emotion increases by 1.
  • FIG. 8 is a diagram schematically showing a scene corresponding to the dialogue scenario 1-5.
  • The robot 2 says "Did you like it?" according to the robot-side scenario data included in dialogue scenario 1-5.
  • Here, since the second internal state value indicating negative emotion was increased by 2 by the selection of option 9 (FIG. 4), the robot 2 performs an action expressing negative emotion.
  • For example, the operation of the robot 2 is controlled so that its expression becomes gloomier and its voice lower than in the preceding dialogue scenario 1-4.
  • FIG. 9 is a diagram schematically illustrating a scene relating to user evaluation of a menu.
  • FIG. 10 is a diagram schematically showing a scene related to user evaluation for customer service.
  • the robot 2 makes a statement “Is the meal delicious?”
  • the options 900 and 902 are items relating to user evaluation regarding the ordered menu. For example, when the option 900 is selected, the cumulative value of the positive emotion of the robot 2 is increased by 3, and the cumulative value of the negative emotion of the robot 2 is not changed. On the other hand, when the option 902 is selected, the cumulative value of the negative emotion of the robot 2 increases by 1, and the cumulative value of the positive emotion of the robot 2 does not change.
  • The robot 2 remarks "Did you enjoy it?" according to the robot-side scenario data included in the post-meal dialogue scenario.
  • The options 1000 and 1002 are items relating to the user's evaluation of the customer service. That is, the first internal state according to the present embodiment includes a positive emotion obtained for at least one of the user's evaluation of the ordered menu item and the user's evaluation of the customer service, and the second internal state includes a negative emotion obtained for at least one of the user's evaluation of the ordered menu item and the user's evaluation of the customer service.
  • When option 1000 is selected, the cumulative value of the robot 2's positive emotion increases by 3 and the cumulative value of its negative emotion does not change.
  • On the other hand, when option 1002 is selected, the cumulative value of the robot 2's negative emotion increases by 1 and the cumulative value of its positive emotion does not change.
  • As described above, according to the present embodiment, a user who visits a restaurant can have an appropriate dialogue with the robot 2 installed in the restaurant via the operation terminal 3. That is, in the present embodiment, the scenario control unit 37 corrects the dialogue scenario based on the cumulative value of the values indicating the internal state, which change as the dialogue scenario progresses. The scenario control unit 37 can thereby correct the dialogue scenario into one that matches the internal state of the robot 2 and give the exchange of dialogue with the robot 2 a conversational, realistic feel. The unnaturalness of the dialogue is thus eliminated, and the user's satisfaction can be increased.
  • FIG. 11 is a block diagram showing the robot control system 1 according to the first modification.
  • the scenario DB 33 and the internal state value storage unit 34 are installed in the operation terminal 3, but are not limited thereto.
  • the scenario DB 33 and the internal state value storage unit 34 may be outside the operation terminal 3.
  • the scenario control unit 37 is connected to the scenario DB 33 and the internal state value storage unit 34 via a network.
  • the network and the operation terminal 3 or the scenario DB 33 are connected via wired or wireless communication.
  • the network and the operation terminal 3 or the internal state value storage unit 34 are connected via wired or wireless communication.
  • the robot motion control unit 41 is installed in the operation terminal 3, but is not limited thereto.
  • the robot motion control unit 41 may be in the robot 2.
  • By placing the scenario DB 33 outside the operation terminal 3, the scenario DB need not be managed on the operation terminal 3, and the scenario data can be managed centrally. Furthermore, by placing the scenario DB 33 in the cloud so that a certain number of editors can edit or add scenario data, more varied dialogue content becomes available for selection.
  • FIG. 12 is a diagram illustrating a configuration of the robot control system 1 according to the second modification.
  • the motion generation unit 32, the information acquisition unit 35, the internal state value acquisition unit 36, and the scenario control unit 37 may be provided in the robot 2 instead of the operation terminal 3. That is, the main control function may be provided in the robot 2 instead of the operation terminal 3.
  • The operations of the motion generation unit 32, the information acquisition unit 35, the internal state value acquisition unit 36, and the scenario control unit 37 do not differ significantly from those in the embodiment described above, and the dialogue processing is executed according to the flowchart shown in FIG. 3.
  • With this configuration, the operation terminal 3 can be simplified and the amount of data transmitted from the operation terminal 3 to the robot 2 can be reduced, so that dialogue with the robot 2 can be carried out without problems even when the performance of the communication line between the operation terminal 3 and the robot 2 is low.
  • Note that the motion generation unit 32 may be provided outside both the operation terminal 3 and the robot 2.
  • For example, the motion generation unit 32 may be built into the store system 5 or provided in a communication device separate from the store system 5.
  • In the above description, dialogue is described as a state in which the user or the robot 2 outputs some information, but it is not limited to this.
  • For example, a silent scenario may be prepared in advance and output.
  • this silent scenario is also included as scenario data.
  • the robot control system 1 according to all the embodiments and the modifications described above can be suitably applied to the eating and drinking service as described above, but may be applied to scenes other than the eating and drinking service.
  • the operation terminal 3 may have a function of ordering food and drink through communication with the store system 5.
  • the robot control system 1 can be suitably applied to a food and drink service, but may be applied to scenes other than the food and drink service.

Landscapes

  • Manipulator (AREA)

Abstract

The objective of the invention is to further increase the realism of a conversation through the back-and-forth of a conversation that reflects an internal state of a robot or the like in a situation where a conversation is conducted between a user and the robot or the like. According to one embodiment, a conversation control device comprises an information acquisition unit, an internal state value acquisition unit, and a scenario control unit. The information acquisition unit acquires, from multiple pieces of conversation scenario information, conversation scenario information for conversing with a customer who visits a restaurant. The internal state value acquisition unit acquires a value obtained by quantifying an internal state that changes with the progress of a conversation scenario corresponding to the acquired conversation scenario information. Based on a cumulative value of the values acquired by the internal state value acquisition unit, the scenario control unit corrects the conversation scenario as the conversation scenario corresponding to the acquired conversation scenario information progresses.
PCT/JP2018/011919 2017-03-24 2018-03-23 Système de commande de conversation et système de commande de robot Ceased WO2018174290A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017059953A JP2018161703A (ja) 2017-03-24 2017-03-24 対話制御装置およびロボット制御システム
JP2017-059953 2017-03-24

Publications (1)

Publication Number Publication Date
WO2018174290A1 true WO2018174290A1 (fr) 2018-09-27

Family

ID=63585534

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/011919 Ceased WO2018174290A1 (fr) 2017-03-24 2018-03-23 Système de commande de conversation et système de commande de robot

Country Status (2)

Country Link
JP (1) JP2018161703A (fr)
WO (1) WO2018174290A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10748644B2 (en) 2018-06-19 2020-08-18 Ellipsis Health, Inc. Systems and methods for mental health assessment
US11120895B2 (en) 2018-06-19 2021-09-14 Ellipsis Health, Inc. Systems and methods for mental health assessment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001300876A (ja) * 2000-04-20 2001-10-30 Yamatake Corp サービスロボット及びこれを使用する給仕システム
JP2002283261A (ja) * 2001-03-27 2002-10-03 Sony Corp ロボット装置及びその制御方法、並びに記憶媒体
JP2015090563A (ja) * 2013-11-05 2015-05-11 Meet株式会社 オーダー管理システム、オーダー管理方法及びプログラム

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001300876A (ja) * 2000-04-20 2001-10-30 Yamatake Corp サービスロボット及びこれを使用する給仕システム
JP2002283261A (ja) * 2001-03-27 2002-10-03 Sony Corp ロボット装置及びその制御方法、並びに記憶媒体
JP2015090563A (ja) * 2013-11-05 2015-05-11 Meet株式会社 オーダー管理システム、オーダー管理方法及びプログラム

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KATO SHOHEI: "Affective computing and characterization for kensei communication robot", JOURNAL OF THE JAPANESE SOCIETY FOR ARTIFICIAL INTELLIGENCE, vol. 31, no. 5, 1 September 2016 (2016-09-01), pages 671 - 678 *
TAKEUCHI SHOUGO: "An emotion generation model based on the dialogist likability for sensitivity communication robot", JOURNAL OF THE ROBOTICS SOCIETY OF JAPAN, vol. 25, no. 7, 15 October 2007 (2007-10-15), pages 1125 - 1133 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10748644B2 (en) 2018-06-19 2020-08-18 Ellipsis Health, Inc. Systems and methods for mental health assessment
US11120895B2 (en) 2018-06-19 2021-09-14 Ellipsis Health, Inc. Systems and methods for mental health assessment
US11942194B2 (en) 2018-06-19 2024-03-26 Ellipsis Health, Inc. Systems and methods for mental health assessment
US12230369B2 (en) 2018-06-19 2025-02-18 Ellipsis Health, Inc. Systems and methods for mental health assessment

Also Published As

Publication number Publication date
JP2018161703A (ja) 2018-10-18

Similar Documents

Publication Publication Date Title
JP7695295B2 (ja) 対話型アニメキャラクターヘッドステム及び方法
US10664741B2 (en) Selecting a behavior of a virtual agent
US20190206393A1 (en) System and method for dialogue management
KR102400398B1 (ko) 애니메이션 캐릭터 헤드 시스템 및 방법
US11003860B2 (en) System and method for learning preferences in dialogue personalization
EP3732677A1 (fr) Système et procédé destinés à un compagnon automatisé commandé par intelligence artificielle
WO2018174289A1 (fr) Système de commande de conversation, et système de commande de robot
WO2018174290A1 (fr) Système de commande de conversation et système de commande de robot
JP2018161712A (ja) 対話制御装置およびロボット制御システム
WO2024004609A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et support d'enregistrement
WO2023089537A1 (fr) Alternance de réalités perçues dans un monde virtuel sur la base de premières préférences de personne et d'un système de coordonnées relatives
JP2018161713A (ja) 対話制御装置およびロボット制御システム
JP2018161702A (ja) ロボット制御システムおよびロボット制御装置
JP2018161709A (ja) 対話制御システムおよび対話制御装置
WO2018174285A1 (fr) Dispositif de commande de conversation et système de conversation
JP2018161710A (ja) 対話制御装置及びロボット制御システム
JP2018161707A (ja) ロボット制御システムおよびロボット制御装置
JP2018161706A (ja) ロボット制御システムおよびロボット制御装置
JP2025071018A (ja) システム
JP2025001601A (ja) 制御システム
JP2025000490A (ja) 行動制御システム
JP2024179696A (ja) 行動制御システム
CN118829966A (zh) 车辆的交互式控制
JP2018161708A (ja) ロボット制御システムおよびロボット制御装置
JP2018161715A (ja) 対話制御装置および対話システム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18770393

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18770393

Country of ref document: EP

Kind code of ref document: A1