
WO2025181507A1 - Information processing method, information processing device, and program


Info

Publication number
WO2025181507A1
Authority
WO
WIPO (PCT)
Prior art keywords
agent image
specific content
movement
content
image
Legal status
Pending
Application number
PCT/IB2024/000088
Other languages
French (fr)
Japanese (ja)
Inventor
美友紀 茂田
敦 高松
拓良 柳
Current Assignee
Renault SAS
Nissan Motor Co Ltd
Original Assignee
Renault SAS
Nissan Motor Co Ltd
Application filed by Renault SAS and Nissan Motor Co Ltd
Priority to PCT/IB2024/000088
Publication of WO2025181507A1


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/02Arrangements for holding or mounting articles, not otherwise provided for for radio sets, television sets, telephones, or the like; Arrangement of controls thereof

Description

  • the present invention relates to an information processing method, information processing device, and program for controlling an agent capable of communicating with a user.
  • JP2020-055348A proposes a technology in which, when an image to be provided to a passenger is displayed on a display unit, the display position of an agent image is moved to the display position of the provided image in order to guide the passenger's gaze to the display position of the provided image.
  • the present invention aims to appropriately move an agent image on a display unit so as not to obstruct the visibility of specific content.
  • One aspect of the present invention is an information processing method for displaying an agent image on a display unit that communicates with a vehicle occupant, and controlling the display state of the agent image based on the state of the occupant.
  • This information processing method includes: a determination process that, when the agent image displayed at a first position on the display unit is to be moved to a second position based on the state of the occupant, determines whether movement of the agent image along a linear movement path from the first position to the second position (first path) will obstruct the visibility of specific content, based on the first path and the content displayed on the display unit; a first movement process that, if the visibility of the specific content will not be obstructed, moves the agent image along the first path in a first display mode; and, if the visibility of the specific content will be obstructed, at least one of the following: a movement process that moves the agent image along a movement path different from the first path that does not obstruct the visibility of the specific content (second path);
  • a second movement process that moves the agent image from the first position to the second position in a second display mode, a display mode different from the first display mode that does not impair the visibility of the specific content; and a third movement process that moves the agent image from the first position to the second position at a second movement timing different from the first movement timing in the first display mode, or at a second movement speed different from the first movement speed in the first display mode, in each case without impairing the visibility of the specific content.
  • FIG. 1 is a simplified diagram showing an example of the configuration of the interior of a vehicle.
  • FIG. 2 is a simplified diagram showing an example of the configuration of the interior of a vehicle.
  • FIG. 3 is a simplified diagram showing an example of the configuration of the interior of a vehicle.
  • FIG. 4 is a diagram showing an example of a transition when an agent image is moved.
  • FIG. 5 is a block diagram showing an example of the system configuration of the information processing system.
  • FIG. 6 is a diagram showing an example of the transition of the agent image.
  • FIG. 7 is a diagram showing an example of the transition of the agent image.
  • FIG. 8 is a diagram showing an example of a transition of an agent image.
  • FIG. 9 is a diagram showing an example of a transition of an agent image.
  • FIG. 10 is a flowchart showing an example of an agent movement process.
  • FIG. 11 is a flowchart showing an example of an agent movement process.
  • FIG. 1 is a simplified diagram showing an example of the configuration of the interior of a vehicle C1.
  • Fig. 1 shows the interior of the vehicle C1 in front of the driver's seat and passenger seat (not shown) as viewed from the rear in the longitudinal direction of the vehicle C1.
  • Fig. 1 omits illustrations of components other than a dashboard 2, a steering wheel 3, a windshield 4, a rearview mirror 5, and an output device 200.
  • the output device 200 is an output device installed inside the vehicle C1, and performs various output operations based on instructions from the information processing device 110 (see Figure 5).
  • the output device 200 is an apparatus installed on the dashboard 2, and includes a display unit 201, a sound output unit 202, and a reception unit 203 (see Figure 5).
  • an output device that is long in the left-right direction of the vehicle C1 is shown as an example of the output device 200.
  • the output device 200 is an apparatus whose display area extends, for example, from near the center of the dashboard 2 to the driver's seat.
  • the output device 200 is an in-vehicle system consisting of one or more devices capable of providing various types of information.
  • the output device 200 can be, for example, at least one of a navigation device, an audio device, a DVD device, a TV tuner, an IVI (In-Vehicle Infotainment), etc.
  • the images displayed on each output device 200 can also be displayed using a HUD (Head Up Display) implemented on the windshield 4, or other display device.
  • a portable device that can be carried by the driver D1 (or a device that can be installed in the vehicle C1), such as a smartphone, tablet, or portable personal computer, can be used.
  • the output device 200 executes output operations that enable communication with users (including the driver D1) aboard the vehicle C1.
  • an agent image AG1 (see Figures 2 and 3) that enables communication with the occupants is displayed on the display unit 201 of the output device 200, and various effects are executed using the agent image AG1.
  • an agent image that enables communication with the occupants may also be displayed on the windshield 4 (HUD display area) or another display device, and various effects may be executed using the agent image.
  • Example of agent image display: Figures 2 and 3 are simplified diagrams showing examples of the configuration of the interior of the vehicle C1. Since the configuration examples shown in Figures 2 and 3 are similar to the configuration example shown in Figure 1, detailed description thereof is omitted here.
  • Figures 2 and 3 show an example in which content CT1 and agent image AG1 are displayed on the display unit 201 of the output device 200.
  • Content CT1 is content that displays map information including a current location indicator PL1 indicating the current location of vehicle C1 and route information RI1 indicating the route traveled by vehicle C1.
  • route information RI1 is displayed in a display format different from that of a map (for example, arrows colored blue, red, etc.).
  • content CT1 is displayed based on a navigation function.
  • Agent image AG1 is an image representing an object capable of communicating with the user.
  • an example is shown in which an image that resembles a human face is used as agent image AG1.
  • However, agent image AG1 is not limited to this; for example, an image that resembles the whole or part of a human body (for example, the upper body), an image that resembles an animal such as a rabbit or pig (or that animal's face), or an image that resembles a virtual creature (for example, an anime character or its face) may also be used. In this way, any such mimicked image can be used as agent image AG1.
  • The output device 200 performs various operations related to driving assistance (e.g., operations related to content CT1 and agent image AG1) based on instructions from the information processing device 110 (see Figure 5). For example, based on the control of the information processing device 110, the output device 200 outputs various information related to driving assistance, such as information about the driving operations of driver D1 and about surrounding facilities.
  • driving assistance using the agent image AG1 is expected to include, for example, alerting the driver of moving objects ahead or behind.
  • For example, a voice output such as "Be careful, there is a railroad crossing ahead" or "There is a person ahead" can be produced.
  • In this way, various actions, including driving assistance, can be performed using the agent image AG1.
  • the output device 200 performs various processes using the agent image AG1 based on the control of the information processing device 110.
  • the output device 200 performs communication processing to exchange various types of information with the driver D1 using the agent image AG1.
  • the output device 200 uses the agent image AG1 to exchange various types of conversations with the driver D1 or to provide information preferred by the driver D1.
  • agent image AG1 can be moved near the occupant who made the user input, making it easier to provide assistance to that occupant.
  • For example, suppose driver D1 utters voice S2, "Is that a highway up ahead?"
  • In this case, the agent image AG1 can be moved near driver D1 (for example, in the direction of driver D1's line of sight), as shown in Figure 3.
  • In this way, it is possible to move agent image AG1 to the position where the user is operating, or to the position the user wants to look at.
  • the method of moving agent image AG1 in this case will be explained with reference to Figure 4.
  • FIG. 4 is a diagram showing an example of a transition when the agent image AG1 is moved on the display unit 201 of the output device 200.
  • In Figure 4, dotted lines MT1 and MT2 indicate the range of the movement trajectory of agent image AG1 when moving linearly from agent image AG1a to agent image AG1e on the display unit 201. In Figure 4(B), part of the movement trajectory of agent image AG1 is schematically indicated by dotted circles AG1a to AG1e. In this way, Figure 4(B) shows the relationship between the first movement trajectory AG1a to AG1e of agent image AG1, from the first position AG1a to the second position AG1e, and the currently displayed content CT1.
  • a linear movement path is set from the initial position (first position AG1a) of the agent image AG1 to a response position (second position AG1e), which is the position to which the agent image AG1 is moved in response to user input (e.g., manual input, voice input).
  • each position on the display surface of the display unit 201 can be managed, for example, as coordinate information in the agent management DB 132 (see FIG. 5).
  • the output content determination unit 126 (see FIG. 5) can obtain the current position of the agent image AG1 by sequentially storing the positions of each image (content image, agent image AG1) displayed on the display surface of the display unit 201 in the agent management DB 132.
  • The movement mode determination unit 127 can calculate the range of the movement trajectory of the agent image AG1 as it moves on the display surface of the display unit 201 based on the display area of the agent image AG1, the current position of the agent image AG1, and the destination position of the agent image AG1. A known calculation method can be used to calculate this range.
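  • As an illustration of one such calculation, the following Python sketch (not taken from this publication; the names, the circular agent shape, and the sampling approximation are all illustrative assumptions) treats the agent image as a circle of radius agent_radius that sweeps a straight corridor from the first position to the second position, and tests whether that corridor intersects an axis-aligned content rectangle:

        from dataclasses import dataclass
        import math

        @dataclass
        class Rect:
            """Axis-aligned rectangle in display-surface coordinates (pixels)."""
            x: float
            y: float
            w: float
            h: float

        def segment_rect_distance(p1, p2, rect, samples=64):
            """Approximate the minimum distance between the segment p1-p2 and
            rect by sampling points along the segment (adequate at UI scale)."""
            def point_dist(px, py):
                # Standard point-to-rectangle distance.
                dx = max(rect.x - px, 0.0, px - (rect.x + rect.w))
                dy = max(rect.y - py, 0.0, py - (rect.y + rect.h))
                return math.hypot(dx, dy)
            (x1, y1), (x2, y2) = p1, p2
            return min(point_dist(x1 + (x2 - x1) * i / samples,
                                  y1 + (y2 - y1) * i / samples)
                       for i in range(samples + 1))

        def path_obstructs(p1, p2, agent_radius, content_rect):
            """True if the swept corridor (width 2 * agent_radius) of the
            straight first path intersects the content rectangle."""
            return segment_rect_distance(p1, p2, content_rect) < agent_radius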
  • As the destination of the agent image AG1, it is conceivable to move the agent image AG1 based on the line of sight of the occupant of vehicle C1, the position of the occupant's hands, etc. For example, when the occupant performs a desired operation on the display unit 201, it is possible to move the agent image AG1 to a position that is easy for the occupant to operate based on the occupant's actions. For example, when the occupant operates with their hands, it is conceivable to move the agent image AG1 to a position close to their hands. In this case, it is conceivable to move in a straight line from the source to the destination, as shown by arrow AW1.
  • By making the agent image AG1 follow the user's operations and moving it continuously, it is possible to show that the agent image AG1 is responding to the user's operations. Furthermore, by looking at the agent image AG1, the user can easily visually grasp the information that the agent image AG1 is trying to convey. For this reason, the user may feel uneasy if the agent image AG1 disappears even temporarily. However, there is a possibility that the agent image AG1 may overlap with information that should not be hidden while moving in response to user input.
  • the agent image AG1 will overlap the front side of the content CT1 (i.e., the display surface side of the display unit 201) as it moves. In this case, the occupant will not be able to see part of the content CT1 while the agent image AG1 is moving. In other words, the movement of the agent image AG1 will impede the visibility of the content CT1.
  • the movement path of the agent image AG1 is changed, the display mode of the agent image AG1 is changed, or the movement timing or movement speed of the agent image AG1 is changed so as not to impair the visibility of the specific content.
  • FIG. 5 is a block diagram showing an example of the system configuration of the information processing system 100 installed in the vehicle C1.
  • the information processing system 100 includes a sound acquisition unit 101, a driver image acquisition unit 102, a vehicle interior image acquisition unit 103, a vehicle exterior image acquisition unit 104, an information processing device 110, and an output device 200.
  • the information processing device 110 is an example of a device that controls the output device 200, which is capable of communicating with the occupants of the vehicle C1 (including the driver D1).
  • the information processing device 110 and the output device 200 are connected via a communication method using wired communication or wireless communication.
  • the information processing device 110 is also connected to the network 20 via a communication method using wireless communication.
  • the network 20 is a network such as a public line network or the Internet.
  • the output device 200 may also be connected to the network 20 via a communication method using wireless communication. While Figure 5 shows an example in which the information processing device 110 and the output device 200 are configured as separate entities, the information processing device 110 and the output device 200 may also be configured as an integrated device.
  • the sound acquisition unit 101 is provided inside the vehicle C1, acquires sounds inside the vehicle C1, and outputs sound information related to the acquired sounds to the information processing device 110.
  • the sound acquisition unit 101 can be, for example, one or more microphones or sound acquisition sensors.
  • the driver image acquisition unit 102 captures an image of the driver D1 in the vehicle C1 and generates an image (image data), and outputs image information related to the generated image to the information processing device 110.
  • the driver image acquisition unit 102 is provided at least inside the vehicle C1 and captures an image of the driver D1 in the vehicle C1 and generates an image (image data).
  • the driver image acquisition unit 102 is configured, for example, with one or more camera devices or image sensors capable of capturing an image of the driver D1.
  • one driver image acquisition unit 102 can be provided in the front of the interior of the vehicle C1 (e.g., on the ceiling), and can capture an image of the driver D1 from in front of the vehicle C1 and generate an image (image data).
  • the driver image acquisition unit 102 can be provided above the windshield 4, i.e., above the rearview mirror 5.
  • the driver image acquisition unit 102 and the vehicle interior image acquisition unit 103 may be the same device or different devices.
  • the vehicle interior image acquisition unit 103 captures images of subjects inside the vehicle C1 to generate images (image data), and outputs image information related to the generated images to the information processing device 110.
  • the vehicle interior image acquisition unit 103 is provided at least inside the vehicle C1 (e.g., the ceiling) and captures images of subjects inside the vehicle C1 to generate images (image data).
  • the vehicle interior image acquisition unit 103 is composed of, for example, one or more camera devices or image sensors capable of capturing images of subjects.
  • For example, one vehicle interior image acquisition unit 103 may be provided at the front of the vehicle C1 to capture images of subjects from the front and generate images (image data), and another vehicle interior image acquisition unit 103 may be provided at the rear of the vehicle C1 to capture images of subjects from the rear and generate images (image data).
  • the outside-vehicle image acquisition unit 104 captures images of subjects outside the vehicle C1 to generate images (image data), and outputs image information related to the generated images to the information processing device 110.
  • Two or more outside-vehicle image acquisition units 104 may be provided, and all or some of the images from these outside-vehicle image acquisition units 104 may be used.
  • one outside-vehicle image acquisition unit 104 may be provided in front of the vehicle C1 to capture images of subjects from in front of the vehicle C1 to generate images (image data), and another outside-vehicle image acquisition unit 104 may be provided behind the vehicle C1 to capture images of subjects behind the vehicle C1 to generate images (image data).
  • Alternatively, one or more devices capable of capturing subjects in all directions from the vehicle C1 and subjects inside the vehicle C1, such as a 360-degree camera, may be used.
  • the driver image acquisition unit 102, the vehicle interior image acquisition unit 103, and the vehicle exterior image acquisition unit 104 are each composed of, for example, an image sensor that receives light from a subject collected by a lens, and an image processing unit that performs predetermined image processing on the image data generated by the image sensor.
  • a CCD (Charge Coupled Device) type or CMOS (Complementary Metal Oxide Semiconductor) type image sensor can be used as the image sensor.
  • the information processing device 110 includes a control unit 120, a storage unit 130, and a communication unit 140.
  • the communication unit 140 exchanges various types of information with other devices using wired or wireless communication under the control of the control unit 120. For example, when the communication unit 140 receives driving assistance information, operation information of the agent image AG1, etc. from an external device (e.g., a server), it outputs each piece of information to the control unit 120.
  • the control unit 120 controls each unit based on various programs stored in the memory unit 130.
  • the control unit 120 is realized by a processing device such as a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit).
  • the vehicle ECU (Electronic Control Unit) of the vehicle C1 may also be used as the control unit 120, or a processing device different from the vehicle ECU may be provided as the control unit 120.
  • the control unit 120 executes various controls based on the information output from the sound acquisition unit 101, driver image acquisition unit 102, vehicle interior image acquisition unit 103, vehicle exterior image acquisition unit 104, communication unit 140, etc., and the information acquired by the vehicle information acquisition unit 125.
  • the control unit 120 executes control processing to control the operating state of the output device 200.
  • the control unit 120 includes an utterance acquisition unit 121, a driver status acquisition unit 122, a vehicle interior status acquisition unit 123, a vehicle exterior status acquisition unit 124, a vehicle information acquisition unit 125, an output content determination unit 126, a movement mode determination unit 127, and an output control unit 128.
  • the speech acquisition unit 121 performs a predetermined sound analysis process on the sound information output from the sound acquisition unit 101 to acquire speech information related to the speech of each user (including driver D1) contained in the sound information, and outputs the speech information to the output content determination unit 126.
  • This sound analysis process can be performed using known sound analysis processes.
  • the driver status acquisition unit 122 performs a predetermined image analysis process on the image information output from the driver image acquisition unit 102 to acquire various pieces of information about driver D1 contained in the image information, and outputs each piece of information to the output content determination unit 126.
  • This image analysis process can be performed using known image analysis processes.
  • As the various pieces of information about driver D1, it is possible to acquire, for example, driver D1's facial expression, line of sight, and hand, face, and body movements. From these, it is possible to detect, for example, whether driver D1 has gotten into or out of vehicle C1, whether each seat in vehicle C1 is occupied, and driver D1's hand movements.
  • driver status information related to the state of driver D1 is acquired by the driver status acquisition unit 122.
  • the cabin status acquisition unit 123 acquires various pieces of information about the interior of the vehicle C1 contained in the image information by performing a predetermined image analysis process on the image information output from the cabin image acquisition unit 103, and outputs the information to the output content determination unit 126.
  • This image analysis process can be performed using known image analysis processes.
  • As the various pieces of information about the interior of the vehicle C1, it is possible to acquire, for example, each user's facial expression, each user's gaze, and each user's actions related to the movements of their hands, face, body, etc.
  • the cabin status acquisition unit 123 acquires user status information about the status of each user aboard the vehicle C1, and vehicle status information about the status of the vehicle C1.
  • the vehicle exterior situation acquisition unit 124 acquires various pieces of information about the outside of the vehicle C1 contained in the image information by performing a predetermined image analysis process on the image information output from the vehicle exterior image acquisition unit 104, and outputs each piece of information to the output content determination unit 126.
  • This image analysis process can use known image analysis processes.
  • As the various pieces of information about the outside of the vehicle C1, it is possible to detect, for example, whether the vehicle C1 is traveling on a road, whether the vehicle C1 is stopped on a road, and traffic lights, signs, etc. that are present in front of the vehicle C1. In other words, the vehicle exterior situation acquisition unit 124 acquires vehicle state information about the state of the vehicle C1.
  • the vehicle information acquisition unit 125 acquires information (vehicle state information) relating to various vehicle states of the vehicle C1 and outputs the acquired vehicle state information to the output content determination unit 126.
  • the vehicle state information can be acquired, for example, from a CAN (Controller Area Network) signal.
  • the vehicle state information includes, for example, vehicle speed, acceleration, shift lever position (e.g., P range, D range), accelerator pedal depression amount, brake pedal depression amount, position information, and abnormality occurrence information. For example, it is possible to determine whether the vehicle C1 is stopped or moving based on the vehicle speed, acceleration, etc. Furthermore, various warning information can be displayed based on the abnormality occurrence information relating to the vehicle C1.
  • the vehicle information acquisition unit 125 may also acquire sensor detection information output from various sensors installed in the vehicle C1.
  • sensors include LiDAR (Light Detection and Ranging), RADAR (Radio Detection and Ranging), sonar, vehicle speed sensor, acceleration sensor, steering sensor, accelerator position sensor, position information acquisition sensor (position information acquisition unit), seat occupancy sensor, and seat belt sensor.
  • Publicly known sensors can be used for each of these sensors.
  • LiDAR, RADAR, sonar, and the like are examples of sensors that detect the conditions around the vehicle C1.
  • Vehicle speed sensors, acceleration sensors, steering sensors, accelerator position sensors, and the like are examples of sensors that detect the driving operation conditions of the driver D1. These are just examples, and other sensors may also be used. Furthermore, only some of these sensors may be used.
  • The location information acquisition unit acquires location information regarding the location where vehicle C1 is located. For example, it can be realized by a GNSS receiver that acquires location information using a GNSS (Global Navigation Satellite System). This location information includes position-related data such as latitude, longitude, and altitude at the time the GNSS signal is received. Location information may also be acquired using other location information acquisition methods. For example, location information may be derived using information from nearby access points or base stations. Location information may also be acquired using beacons.
  • the seating sensor (or seat sensor) is a sensor that detects whether or not an occupant is seated in each seat of the vehicle C1.
  • the seat belt sensor is a sensor that detects whether or not an occupant is wearing a seat belt in each seat of the vehicle C1. For example, whether or not the driver D1 sitting in the driver's seat is wearing a seat belt can be detected using a seating sensor, seat belt sensor, etc.
  • the output content determination unit 126 determines the content of each piece of output information to be output from the output device 200 based on the information output from the speech acquisition unit 121, driver status acquisition unit 122, cabin status acquisition unit 123, exterior status acquisition unit 124, vehicle information acquisition unit 125, reception unit 203, etc.
  • the output content determination unit 126 then outputs the determined content of each piece of output information and the information used in determining this content to the movement mode determination unit 127 and output control unit 128.
  • the output content determination unit 126 acquires at least one of user status information regarding the status of each occupant (including driver D1) in vehicle C1 and vehicle status information regarding the status of vehicle C1.
  • The output content determination unit 126 determines the content of each piece of information to be output from the output device 200 based on at least one of the acquired user status information and vehicle status information. For example, if the output content determination unit 126 detects, based on the user status information or vehicle status information, that there is assistance information to be communicated to driver D1, it determines to output that assistance information. Furthermore, if the output content determination unit 126 detects, based on the user status information or vehicle status information, that a predetermined communication (e.g., conversation) is to be performed with any of the occupants of vehicle C1, it determines the output information for performing that communication. Furthermore, the output content determination unit 126 determines, for example, the display mode, display position, etc. of agent image AG1 based on the user status information or the operation information received by the reception unit 203.
  • the movement mode determination unit 127 determines the movement mode when moving the agent image AG1 based on the content of each output information determined by the output content determination unit 126, the state of the driver D1 (e.g., driving load, line of sight direction), the state of the vehicle C1 (e.g., whether the surrounding environment is high driving load), etc.
  • the movement mode determination unit 127 then outputs the determined movement mode of the agent image AG1 to the output control unit 128. For example, the movement mode determination unit 127 checks the relationship between the movement trajectory of the agent image AG1 moving from the source to the destination and one or more pieces of content displayed on the display unit 201.
  • the movement mode determination unit 127 determines whether there is content currently being displayed between the source and the destination, and if there is content currently being displayed between the source and the destination, determines whether the content overlaps with the movement trajectory of the agent image AG1. For example, if there is content currently being displayed between the source and destination and that content overlaps with the movement trajectory of the agent image AG1, the movement mode determination unit 127 determines to execute a movement effect that moves the agent image AG1 so as not to impair the visibility of that content. This movement effect will be described in detail with reference to Figures 6 to 9. Furthermore, of the content currently being displayed, only specific content may be subject to the determination of whether or not there is an overlap.
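  • The determination just described can be expressed as follows (a hypothetical Python sketch reusing Rect and path_obstructs from the earlier sketch; the effect labels are illustrative, not terms from the publication): if the straight first path would cover specific content, a visibility-preserving movement effect is chosen instead.

        from dataclasses import dataclass

        @dataclass
        class Content:
            rect: "Rect"           # display area (Rect from the earlier sketch)
            is_specific: bool      # e.g. legally required or high-urgency info

        def choose_movement_effect(first_pos, second_pos, agent_radius, contents):
            """Try the straight first path; if it would impair the visibility
            of specific content, fall back to an effect that preserves it."""
            blocking = [c for c in contents
                        if c.is_specific
                        and path_obstructs(first_pos, second_pos,
                                           agent_radius, c.rect)]
            if not blocking:
                return ("first_path", [])   # straight move, first display mode
            # Corresponds to the alternative movement processes: a detour path,
            # a changed display mode (behind the content, semi-transparent), or
            # a changed movement timing/speed.
            return ("avoid_specific_content", blocking)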
  • content containing particularly important information can be designated as specific content.
  • This important information is, for example, information that is legally required to be displayed continuously (in other words, information that must not be hidden under the law).
  • This information is, for example, various indicator lights and warning lights. Examples include battery level warning lights, maximum speed signs, battery level gauges, cruising range indicators, speedometers, position indicators (shift lever positions), odometers (total distance meters), trip meters (segment distance meters), warning lights indicating vehicle malfunctions, etc. Note that these are only examples of representative information, and other information (information that is required by law to be continuously displayed) may also be designated as important information.
  • Important information also includes information of high urgency.
  • This high urgency information is, for example, information related to driving assistance that is presented to driver D1 while driving. Examples of such information include information notifying the driver D1 of the direction of travel of vehicle C1, and information notifying the driver of objects (e.g., railroad crossings, accident vehicles) that exist in the direction of travel of vehicle C1.
  • important information may include information related to driving operations, driving control information related to driving control, and abnormality occurrence information regarding the occurrence of an abnormality in vehicle C1.
  • Information related to driving operations is, for example, map information (map content) displayed to guide the vehicle C1 in the direction of travel, and in particular map information including a current location indicator indicating the current location of the vehicle C1 and a route indicator indicating the route the vehicle C1 will take (e.g., the route from the current location to the destination).
  • The route the vehicle C1 will take in the future is more important to the driver D1 than the route the vehicle C1 has already taken, so it is preferable to treat the route the vehicle C1 will take as important information.
  • Note that during autonomous driving, the autonomous driving route will be displayed in the map content.
  • driving control information related to driving control is, for example, information regarding the driving of vehicle C1, turning of vehicle C1, stopping of vehicle C1, etc.
  • the abnormality occurrence information content regarding the occurrence of an abnormality in vehicle C1 is, for example, content that includes information regarding malfunctions in various parts of vehicle C1.
  • Specific content may also be set based on the state of an occupant looking at the display unit 201. For example, if there is an occupant looking at the display unit 201, content that lies on the occupant's line of sight (i.e., the gaze destination) may be set as specific content. In this case, content may be set as specific content on the condition that the same content remains on the occupant's line of sight for a predetermined period of time or more.
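  • A gaze-based designation of this kind could be sketched as follows (a minimal Python sketch; the 2-second dwell threshold and all names are illustrative assumptions, not values from the publication):

        import time

        class GazeDwellTracker:
            """Designates content as specific once the same content stays on
            the occupant's line of sight for at least dwell_s seconds."""
            def __init__(self, dwell_s=2.0):
                self.dwell_s = dwell_s
                self.current_id = None
                self.since = None

            def update(self, gazed_content_id, now=None):
                now = time.monotonic() if now is None else now
                if gazed_content_id != self.current_id:
                    # Gaze moved to different content: restart the dwell timer.
                    self.current_id, self.since = gazed_content_id, now
                    return None
                if gazed_content_id is not None and now - self.since >= self.dwell_s:
                    return gazed_content_id   # designate as specific content
                return None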
  • On the other hand, entertainment information such as network information from SNS (social networking services), music information, and entertainment video information can be set so that it is not designated as specific content.
  • the output control unit 128 controls the operating state of the output device 200 based on the content of the output information determined by the output content determination unit 126 and the movement mode of the agent image AG1 determined by the movement mode determination unit 127. Each of these operations will be described in detail with reference to Figs. 6 to 11 etc.
  • the memory unit 130 is a storage medium that stores various types of information.
  • the memory unit 130 stores various types of information (e.g., control programs, agent information DB 131, agent management DB 132, content management DB 133, map information DB) required by the control unit 120 to perform various processes.
  • the memory unit 130 also stores various types of information acquired via the communication unit 140.
  • the memory unit 130 can be, for example, ROM (Read Only Memory), RAM (Random Access Memory), SRAM (Static Random Access Memory), HDD (Hard Disk Drive), SSD (Solid State Drive), or a combination of these.
  • Agent information DB 131 stores various types of information required to realize the various operations of agent image AG1 displayed on output device 200.
  • For example, image information related to agent image AG1 to be displayed on display unit 201 of output device 200 and audio information related to the audio of agent image AG1 to be output from sound output unit 202 are stored in agent information DB 131.
  • Also, operation information for operating agent image AG1 on output device 200 when various communications are performed is stored in agent information DB 131.
  • the agent management DB 132 stores agent management information (coordinate information) for managing the display position of the agent image AG1 displayed on the display unit 201. For example, information regarding the position of the agent image AG1 on the display surface of the display unit 201 and the display area of the agent image AG1 is managed as agent management information.
  • Content management DB 133 stores content management information (coordinate information) for managing the display position of each piece of content (e.g., map content, music content) displayed on display unit 201. For example, information regarding the position of each piece of content on the display surface of display unit 201, the display area of the content, and the type of content is managed as content management information.
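  • The kind of coordinate information these management DBs hold could look like the following in-memory sketch (illustrative Python stand-ins, not the publication's schema):

        from dataclasses import dataclass, field
        from typing import Dict, Optional, Tuple

        @dataclass
        class AgentRecord:
            """Agent management info (cf. agent management DB 132)."""
            position: Tuple[float, float]   # position on the display surface
            size: Tuple[float, float]       # display area of the agent image

        @dataclass
        class ContentRecord:
            """Content management info (cf. content management DB 133)."""
            position: Tuple[float, float]
            size: Tuple[float, float]
            content_type: str               # e.g. "map", "music"

        @dataclass
        class DisplayRegistry:
            """In-memory stand-in for the two management DBs."""
            agent: Optional[AgentRecord] = None
            contents: Dict[str, ContentRecord] = field(default_factory=dict)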
  • the output device 200 is a device that can display the agent image AG1 based on instructions from the information processing device 110 and convey various information to the driver D1, etc. using the agent image AG1.
  • the output device 200 includes a display unit 201, a sound output unit 202, and a reception unit 203.
  • The display unit 201, the sound output unit 202, and the reception unit 203 are controlled by a control unit (not shown) included in the output device 200.
  • the display unit 201 is a display unit that displays various images based on instructions from the information processing device 110.
  • the sound output unit 202 outputs various sounds based on instructions from the information processing device 110.
  • one or more speakers can be used as the sound output unit 202.
  • the reception unit 203 receives user input from the occupants of the vehicle C1 and outputs the received input to the control unit 120.
  • the reception unit 203 may be, for example, a touch panel or various operating members.
  • the display unit 201 and reception unit 203 may be configured as a touch panel that allows the user to perform operation input by touching or bringing their finger close to the display surface, or may be configured as a separate user interface.
  • the display unit 201, sound output unit 202, and reception unit 203 are examples of user interfaces, and some of them may be omitted, or other user interfaces may be used.
  • the degree of driving load can be determined based on whether the vehicle C1 is stopped or moving. For example, when the vehicle C1 is stopped, it can be determined that the driving load is low (e.g., the driving load is less than a threshold value). On the other hand, when the vehicle C1 is moving, it can be determined that the driving load is high (e.g., the driving load is equal to or greater than a threshold value). Whether the vehicle C1 is stopped or moving can be determined based on vehicle information (e.g., vehicle speed, acceleration, position of the shift lever (e.g., P range, D range), accelerator pedal depression amount, brake pedal depression amount) acquired by the vehicle information acquisition unit 125.
  • The driving load can also be determined based on the traffic light ahead of vehicle C1. For example, if the traffic light ahead of vehicle C1 is red and vehicle C1 is stopped, it can be determined that the driving load is low (for example, the driving load is below a threshold). On the other hand, if vehicle C1 is stopped but the traffic light ahead of vehicle C1 turns green, it can be determined that the driving load is high (for example, the driving load is above a threshold). In other cases, the determination can be made in the same way as the determination based on whether vehicle C1 is stopped or moving. Note that when vehicle C1 is in autonomous driving mode, it can be determined that the driving load is low (for example, the driving load is below a threshold) even if vehicle C1 is moving.
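  • These rules can be sketched as follows (a hypothetical Python sketch; the field names, the 1 km/h stop threshold, and the rule set are illustrative assumptions for the determinations described above):

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class VehicleState:
            speed_kmh: float
            shift_range: str              # e.g. "P", "D"
            autonomous: bool
            light_ahead: Optional[str]    # "red", "green", or None if no signal

        def driving_load_is_low(v):
            """Rule-based determination of whether the driving load is low."""
            if v.autonomous:
                return True                   # autonomous driving: low load
            stopped = v.speed_kmh < 1.0 or v.shift_range == "P"
            if stopped and v.light_ahead == "green":
                return False                  # about to start moving: high load
            return stopped                    # stopped: low; moving: high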
  • the driving load of driver D1 may be determined based on the driving operation of driver D1, the surrounding conditions of vehicle C1, etc.
  • the operating status of driver D1, the status around vehicle C1, etc. can be acquired by the driver status acquisition unit 122, the vehicle exterior status acquisition unit 124, the vehicle information acquisition unit 125, etc.
  • For example, when driver D1 is performing frequent or busy driving operations, it can be assumed that driver D1's driving load is high.
  • the driving load of driver D1 can be estimated based on the status around vehicle C1, etc. For example, if vehicle C1 is traveling on a winding road, a narrow road, a highly congested road, a road with many people, etc., it can be assumed that driver D1's driving load is high.
  • Conversely, if vehicle C1 is traveling under conditions where such factors are absent (for example, a straight, uncongested road), it can be assumed that driver D1's driving load is low.
  • the driving load of driver D1 can be estimated based on the user's facial expression, the voice of driver D1, etc.
  • steering entropy can be used to measure the driving operation of driver D1.
  • The steering entropy method is a measurement method that estimates the driver's load based on the smoothness of the driver's steering angle, and known calculation methods can be used. This steering entropy method provides quantified measurement results (measured values) as information entropy values calculated from time-series steering angle data.
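  • One known formulation (after Nakayama et al.; this is a simplified illustrative sketch, and the baseline value alpha is assumed to be the 90th percentile of prediction errors measured during low-load driving) predicts each steering angle by second-order extrapolation, bins the prediction errors at multiples of alpha, and takes the entropy of the bin distribution:

        import math

        def steering_entropy(angles, alpha):
            """Steering entropy from a time series of steering angles.
            Higher values indicate less smooth (higher-load) steering."""
            errors = []
            for n in range(3, len(angles)):
                d1 = angles[n - 1] - angles[n - 2]
                d2 = angles[n - 2] - angles[n - 3]
                predicted = angles[n - 1] + d1 + 0.5 * (d1 - d2)
                errors.append(angles[n] - predicted)
            edges = [-5, -2.5, -1, -0.5, 0.5, 1, 2.5, 5]   # in units of alpha
            counts = [0] * 9
            for e in errors:
                counts[sum(e > b * alpha for b in edges)] += 1
            total = len(errors)
            return -sum((c / total) * math.log(c / total, 9)
                        for c in counts if c)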
  • the number of traffic participants around vehicle C1, the weather around vehicle C1, the darkness around vehicle C1, the shape of the road around vehicle C1, etc. can be used as the surrounding conditions of vehicle C1.
  • the driving load of driver D1 can be determined using the values described above. For example, a predetermined calculation (e.g., addition) can be performed on the values described above, and the results of this calculation can be used to determine the driving load of driver D1. Note that the driving load of driver D1 may also be determined using at least one of the values described above. Other well-known methods for determining driving load can also be used.
  • FIG. 6 and 7 are diagrams showing an example of the transition of the agent image AG1 when the agent image AG1 is moved from the first position PP1 to the second position SP1 on the display unit 201 of the output device 200.
  • Figure 6(A) shows an example of a transition when agent image AG1 is moved from first position PP1 to second position SP1 when content CT2, which is not specific content, is displayed on display unit 201.
  • Content CT2 is music content that is displayed to allow various operations when listening to music, for example.
  • the movement trajectory of agent image AG1 moving from first position PP1 to second position SP1 is schematically shown by agent images AG1a to AG1e.
  • the moving agent image AG1 and the displayed content CT2 may overlap.
  • the moving agent images AG1b to AG1d may overlap with the content CT2.
  • the moving agent images AG1b to AG1d are displayed so as to overlap the front side of the content CT2 (i.e., the display surface side of the display unit 201), preventing the occupant from seeing all or part of the content CT2.
  • the output control unit 128 executes display control to move the agent image AG1 along a linear movement path from the first position PP1 to the second position SP1.
  • Alternatively, the display of the moving agent image AG1 may be stopped, and the agent image AG1 may simply be displayed at the second position SP1. Note that if it is detected that any occupant is looking at content CT2, the agent image AG1 may be moved without overlapping content CT2. An example of this is shown in Figure 11.
  • Figures 6(B), 6(C), and 7(A) to 7(C) show examples of transitions when the agent image AG1 is moved from the first position PP1 to the second position SP1 while specific content CT1 is displayed on the display unit 201.
  • Figure 6(B) shows an example of moving the agent image AG1 behind the content CT1 (i.e., the opposite side of the display surface of the display unit 201).
  • When the agent image AG1 moves along a linear path from the first position PP1 to the second position SP1, the moving agent image AG1 overlaps with the displayed content CT1.
  • Since the displayed content CT1 is specific content, it is important not to impede the visibility of the content CT1. Therefore, when the displayed content CT1 is specific content, the output control unit 128 executes display control such that the agent image AG1 moves behind the content CT1 (i.e., on the opposite side of the display surface of the display unit 201) along the linear path from the first position PP1 to the second position SP1. This makes it possible to maintain the identity of the moving agent image AG1 while not impeding the visibility of the content CT1.
  • Figure 6(C) shows an example in which a transparent or semi-transparent agent image AG1 is moved across the front side of content CT1 (i.e., the display surface side of the display unit 201).
  • the output control unit 128 executes display control to display a transparent or semi-transparent agent image AG1 moving on the front side of the content CT1 (i.e., the display surface side of the display unit 201) along a linear movement path from the first position PP1 to the second position SP1.
  • The transparency of the agent image AG1 can be set according to the importance of the overlapping content. For example, if the content is specific content that must not be hidden under the law, the importance is determined to be high, and the transparency of the agent image AG1 is set high; in other words, the agent image AG1 is made completely transparent. On the other hand, if the content is specific content for which it is acceptable for part of it to be hidden, the importance is determined to be low, and the transparency of the agent image AG1 is set low.
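  • As a sketch, the rule above could map an importance label to the opacity used when drawing the agent image (illustrative labels and values; lower opacity means higher transparency):

        def agent_opacity(importance):
            """Opacity of the moving agent image over the overlapped content."""
            return {
                "must_not_hide": 0.0,       # legally protected: fully transparent
                "partially_hideable": 0.5,  # semi-transparent is acceptable
            }.get(importance, 1.0)          # non-specific content: opaque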
  • Figures 7(A) and (B) show an example of moving agent image AG1 so as to bypass content CT1.
  • the output control unit 128 executes display control to display the agent image AG1 moving along a detour route that bypasses the content CT1 as a movement route from the first position PP1 to the second position SP1.
  • the moving agent image AG1 is shown as agent images AG1a to AG1f.
  • In the example shown in Figure 7(A), the size of the moving agent image AG1 is reduced, and a portion of the moving agent image AG1 is displayed outside the display surface of the display unit 201.
  • In the example shown in Figure 7(B), the size of the moving agent image AG1 is reduced, and a display mode is shown in which part of the moving agent image AG1 is displayed on the back side of the content CT1 (i.e., the opposite side of the display surface of the display unit 201) and outside the display surface of the display unit 201.
  • Figure 7(C) shows an example of moving the agent image AG1 so as to bypass specific content CT3.
  • the output control unit 128 executes display control to display the agent image AG1 moving along a detour route that bypasses the content CT3 as a movement route from the first position PP1 to the second position SP1.
  • the moving agent image AG1 is shown by agent images AG1a to AG1d.
  • In this case, the agent image AG1 and the specific content do not overlap, but come close to each other.
  • For example, the lower parts of the moving agent images AG1b to AG1e come close to the upper part of the specific content CT1.
  • There is a possibility that such proximity of the agent image AG1 to the specific content will reduce the visibility of that content. Therefore, if there is a close area on the detour route where the agent image AG1 and the specific content are close to each other, the agent image AG1 may be displayed transparent or semi-transparent in that close area.
  • the distance for proximity determination can be set appropriately based on experiments, simulations, etc. For example, a distance of several millimeters to several centimeters can be used as the criterion for determining proximity.
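  • Building on the obstruction test sketched earlier, this proximity determination can be expressed as a widened corridor test (margin_px stands in for the millimeter-to-centimeter criterion converted to pixels for the particular display; an illustrative sketch):

        def path_too_close(p1, p2, agent_radius, content_rect, margin_px):
            """True if the straight path's swept corridor comes within
            margin_px of the specific content, even without overlapping it."""
            return (segment_rect_distance(p1, p2, content_rect)
                    < agent_radius + margin_px)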
  • the movement trajectory of the agent image AG1 may be expressed using animation processing, etc.
  • The detour route and display mode of the agent image AG1 for specific content can be set appropriately based on experiments, simulations, etc. Known route calculation methods can be used to compute detour routes that bypass specific content.
  • Figure 8 shows an example of the transition of the agent image AG1 when the agent image AG1 is moved from the first position PP1 to the second position SP1 when multiple specific contents CT3 and CT4 are displayed on the display unit 201 of the output device 200.
  • Figure 8 shows a modified example of the example shown in Figure 7(A), in which the size of the moving agent image AG1 is maintained, and a portion of the moving agent image AG1 is displayed outside the display surface of the display unit 201.
  • Figure 9 shows an example of the transition of the agent image AG1 when the agent image AG1 is moved from the first position PP1 to the second position SP1 when a specific content CT5 is displayed on the display unit 201 of the output device 200 and a specific content CT6 is displayed in the HUD display area of the windshield 4.
  • a HUD display device for realizing the HUD is provided at the top of the dashboard 2, near the boundary with the windshield 4.
  • The HUD display device is a display device, such as a projector or optical system, that projects light onto the display area 4a of the windshield 4 and uses the reflected light to create a HUD display that shows a virtual image to the driver D1. That is, the light projected from the HUD display device onto the display area 4a of the windshield 4 is reflected by the windshield 4, and the reflected light travels toward the eyes of the driver D1. The virtual image formed by this reflected light is seen superimposed on the actual objects visible through the windshield 4. In this way, the HUD display device creates a HUD display by using the windshield 4 to show a virtual image.
  • the HUD display device displays the agent image AG1 in the display area 4a based on the control of the information processing device 110 (see Figure 5).
  • the windshield 4 also functions as a display medium for the HUD of the vehicle C1.
  • FIG. 9 shows an example of a display mode in which the size of the moving agent image AG1 is maintained and a portion of the moving agent image AG1 is displayed outside the display surface of the display unit 201.
  • When moving the agent image AG1 between multiple physically separated display units, it is preferable to set a detour route or the like within a range that the occupant viewing the agent image AG1 can follow. For example, if a detour route would make the movement difficult to see, the agent image AG1 can be displayed transparently or semi-transparently.
  • Although Figure 9 shows an example in which specific content is displayed on the display unit 201 and in the HUD display area on the windshield 4, this is not limiting.
  • For example, if a display unit is provided on the steering wheel 3, specific content may also be displayed on that display unit.
  • this embodiment can be understood as a display system comprising at least one or more display units that display the agent image AG1, and a control unit that controls the position at which the agent image AG1 is displayed on any of the display units.
  • the above example shows how the movement processing of the agent image AG1 is changed when the agent image AG1 moving along a linear path overlaps with specific content.
  • the agent image AG1 moving along a linear path does not overlap with the specific content but is close to it, it is possible that a passenger looking at the specific content may find the visibility of the specific content reduced due to the influence of the nearby agent image AG1. Therefore, the movement processing of the agent image AG1 may also be changed in a similar manner when the agent image AG1 moving along a linear path is close to the specific content. For example, a determination is made as to whether the linear path passes through an area (proximity area) within a predetermined distance of the specific content.
  • If it does, the agent image AG1 is moved along a path (detour route) that passes through positions farther away than the predetermined distance.
  • Alternatively, the agent image AG1 may be moved with its display mode changed within the proximity area.
  • the movement processing in these cases is the same as the movement processing shown in Figures 6 to 9, etc.
  • FIGS. 10 and 11 are flowcharts showing an example of agent movement processing in the information processing system 100.
  • This agent movement processing is executed by the control unit 120 (see FIG. 5) based on a program stored in the storage unit 130 (see FIG. 5). This agent movement processing is executed repeatedly at each control cycle. This agent movement processing will be explained with appropriate reference to FIGS. 1 to 9.
  • In step S501, the output content determination unit 126 determines whether or not user input has been received. For example, if a manual operation by any of the occupants is received by the reception unit 203, if a voice operation by any of the occupants is received by the speech acquisition unit 121, or if a gesture operation by any of the occupants is detected by the driver status acquisition unit 122 or the vehicle interior status acquisition unit 123, it is determined that user input has been received. If user input has been received, the process proceeds to step S502. On the other hand, if user input has not been received, monitoring continues.
  • In step S502, the output content determination unit 126 determines whether the user input received in step S501 is a voice input. That is, if the speech acquisition unit 121 received a voice operation from any of the occupants, the user input is determined to be a voice input. If the user input is a voice input, the process proceeds to step S503. On the other hand, if the user input is not a voice input, the process proceeds to step S510 (see FIG. 11).
  • In step S503, the output content determination unit 126 acquires the current display position (first position) of the agent image AG1 and determines the destination display position (second position) of the agent image AG1.
  • the first position can be acquired based on the display position of the agent image AG1 stored in the agent management DB 132.
  • For example, the second position can be determined based on the position of the occupant who made the user input (e.g., hand position, eye position); specifically, the position on the display surface of the display unit 201 closest to the occupant's position can be determined as the second position.
  • The occupant who made the user input can be identified based on the relationship between the timing at which the utterance was acquired by the speech acquisition unit 121 and the occupants' mouth movements acquired by the driver status acquisition unit 122 or the vehicle interior status acquisition unit 123. For example, if an occupant was moving their mouth at the time the utterance was made, that occupant can be assumed to have produced the sound as user input.
  • The presence or absence of an occupant in each seat of the vehicle C1 can be determined based on detection values from seat occupancy sensors, seat belt sensors, and the like, or based on images acquired by the driver status acquisition unit 122 or the vehicle interior status acquisition unit 123.
  • If multiple microphones are installed in the vehicle C1, the location where a sound is generated can be identified based on the sounds acquired by those microphones, so the location of the occupant who made the user input may be identified using such sound-source localization technology.
  • Alternatively, the second position can be determined based on the eye position, rather than the hand position, of the occupant who made the user input; in this case, the position on the display surface of the display unit 201 closest to that occupant's eye position can be determined as the second position.
  • Similarly, the third position shown in step S511 can be determined based on the hand position of the occupant who made the user input; in this case, the position on the display surface of the display unit 201 closest to that hand position can be determined as the third position (see step S511). A sketch of this nearest-point determination follows below.
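
The following sketch illustrates one way the nearest point on the display surface could be computed; modeling the display surface as an axis-aligned rectangle in a shared 2D coordinate frame, and all names, are assumptions for illustration.

```python
# Hypothetical sketch: the second (or third) position is the point on the
# display surface closest to the occupant's eye (or hand) position. The display
# surface is modeled here as an axis-aligned rectangle (x, y, w, h) in a shared
# 2D coordinate frame; this representation is an assumption, not the patent's.

def nearest_point_on_display(ref_pos, display_rect):
    """Clamp the occupant's reference position onto the display rectangle."""
    x, y, w, h = display_rect
    rx, ry = ref_pos
    return (max(x, min(rx, x + w)), max(y, min(ry, y + h)))

# e.g., a hand hovering to the right of a 1400x400 display area:
print(nearest_point_on_display((1550.0, 310.0), (0.0, 0.0, 1400.0, 400.0)))
# -> (1400.0, 310.0): the closest point on the display surface
```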
  • In step S504, the movement mode determination unit 127 determines whether the user input received in step S501 was made by driver D1. For example, when driver D1 is driving, manual operation is often not possible, so user input is assumed to be made mostly by voice.
  • The method for identifying the occupant who made the user input is the same as that shown for step S503. If the user input was made by driver D1, the process proceeds to step S505; if it was made by an occupant other than the driver, the process proceeds to step S506.
  • In step S505, the movement mode determination unit 127 determines whether the driving load of driver D1, who provided the user input, is less than a threshold. The method for detecting the driving load is the same as the detection method described above. The threshold here is determination information for judging whether the driving load of driver D1 is high or low, and can be set appropriately based on experiments, simulations, and the like. Note that although an example is shown here in which it is determined whether the driving load of driver D1 is less than a threshold, if autonomous driving is not being performed, it may instead be determined whether the vehicle C1 is moving or stopped. If the driving load of driver D1 is less than the threshold, the process proceeds to step S506; if it is equal to or greater than the threshold, the process proceeds to step S510.
  • In step S506, the movement mode determination unit 127 checks the relationship between the first movement trajectory of the agent image AG1, which moves from the first position to the second position determined in step S503, and the content displayed on the display unit 201. For example, the movement mode determination unit 127 determines whether any content is currently displayed between the first position and the second position, and if so, whether that content overlaps with the first movement trajectory of the agent image AG1. The method for determining whether the first movement trajectory overlaps with currently displayed content is the same as the determination method shown in FIG. 4(B); a geometric sketch of this overlap test follows below.
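
A minimal geometric sketch of this overlap determination, assuming the agent image is approximated by a circle swept along the straight path and the content by an axis-aligned rectangle (the patent does not specify the geometry, so this is an assumption):

```python
# Hypothetical sketch of the overlap determination in step S506. The agent
# image is approximated by a circle of radius r swept along the segment from
# the first position to the second position; it overlaps an axis-aligned
# content rectangle (x, y, w, h) when the segment enters the rectangle or
# passes within r of it.

def _orient(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _segments_cross(a, b, c, d):
    return (_orient(a, b, c) * _orient(a, b, d) < 0
            and _orient(c, d, a) * _orient(c, d, b) < 0)

def _point_seg_dist(q, p0, p1):
    vx, vy = p1[0] - p0[0], p1[1] - p0[1]
    denom = vx * vx + vy * vy or 1e-12
    t = max(0.0, min(1.0, ((q[0] - p0[0]) * vx + (q[1] - p0[1]) * vy) / denom))
    cx, cy = p0[0] + t * vx, p0[1] + t * vy
    return ((q[0] - cx) ** 2 + (q[1] - cy) ** 2) ** 0.5

def trajectory_overlaps(p0, p1, rect, agent_radius):
    x, y, w, h = rect
    corners = [(x, y), (x + w, y), (x + w, y + h), (x, y + h)]
    edges = list(zip(corners, corners[1:] + corners[:1]))
    inside = lambda p: x <= p[0] <= x + w and y <= p[1] <= y + h
    if inside(p0) or inside(p1):
        return True                      # an endpoint lies on the content
    if any(_segments_cross(p0, p1, c0, c1) for c0, c1 in edges):
        return True                      # the path crosses the content
    # Disjoint case: the minimum distance is attained at a segment endpoint,
    # so check corners against the path and path endpoints against the edges.
    dist = min(
        min(_point_seg_dist(c, p0, p1) for c in corners),
        min(_point_seg_dist(p, c0, c1) for p in (p0, p1) for c0, c1 in edges),
    )
    return dist < agent_radius

print(trajectory_overlaps((0, 0), (100, 40), (30, 10, 40, 20), agent_radius=8.0))
```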
  • In step S507, the movement mode determination unit 127 determines whether specific content exists among the displayed content overlapping the first movement trajectory of the agent image AG1. If specific content overlapping the first movement trajectory exists, the process proceeds to step S508; otherwise, the process proceeds to step S509.
  • In step S508, the movement mode determination unit 127 determines a movement path and display mode for moving the agent image AG1 from the first position to the second position without impairing the visibility of the specific content determined in step S507 to overlap with the first movement trajectory. The output control unit 128 then moves the agent image AG1 from the first position to the second position in accordance with the determined movement path and display mode. For example, as shown in FIGS. 6(B), 6(C), and 7 to 9, the agent image AG1 can be moved without impairing the visibility of the specific content CT1.
  • In step S509, the movement mode determination unit 127 decides to move the agent image AG1 along the first movement trajectory, and the output control unit 128 moves the agent image AG1 from the first position to the second position along that trajectory, i.e., over the shortest distance. For example, as shown in FIG. 6(A), the agent image AG1 can be moved so that it overlaps the front side of the currently displayed content CT2 (i.e., the display surface side of the display unit 201).
  • In step S510, the movement mode determination unit 127 decides to change only the display position of the agent image AG1 without executing a movement effect; that is, it decides to erase the agent image AG1 displayed at the first position and then display it at the second position, and the output control unit 128 does so.
  • When driver D1 is driving, manual operation is often not possible, so user input is assumed to be made mostly by voice. Furthermore, when driver D1's driving load is high (at or above the threshold), driver D1 is considered to have no time to watch the movement of the agent image AG1, so temporarily erasing it is considered not to affect its sense of presence. In addition, under a high driving load it is considered easier for driver D1 to recognize the agent image AG1 if it moves instantaneously from the first position to the second position rather than with a movement effect. For these reasons, in step S510, only the display position of the agent image AG1 is changed, without executing a movement effect; the branching is sketched below.
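
The branching across steps S502, S504, S505, and S508 to S510 could be summarized as in the sketch below; the Agent class and helper functions are hypothetical stand-ins for the processing units described above, not APIs from the patent.

```python
# Hypothetical sketch of the branching across steps S502/S504/S505/S508-S510.
from dataclasses import dataclass

@dataclass
class Agent:
    first_pos: tuple
    second_pos: tuple

def overlaps_specific_content(p0, p1):
    # Placeholder for the geometric test of step S506 (see the earlier sketch).
    return False

def move_agent(agent, input_is_voice, input_by_driver, driving_load, threshold):
    """Pick a movement behavior mirroring steps S502/S504/S505 and S508-S510."""
    if input_is_voice and input_by_driver and driving_load >= threshold:
        return "jump"       # step S510: erase at first_pos, redraw at second_pos
    if overlaps_specific_content(agent.first_pos, agent.second_pos):
        return "avoid"      # step S508: detour route or changed display mode
    return "straight"       # step S509: animate along the first path

print(move_agent(Agent((0, 0), (10, 5)), True, True, driving_load=0.8, threshold=0.6))
```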
  • In step S511, the output content determination unit 126 acquires the current display position (first position) of the agent image AG1 and determines the destination display position (third position) of the agent image AG1.
  • Since the user input here is manual input rather than voice input, the hand position of the occupant who made the user input is used as the reference position, and the position on the display surface of the display unit 201 closest to that hand position is determined as the third position.
  • The occupant who made the user input can be identified based on the relationship between the timing at which the operation was accepted by the reception unit 203 and the occupants' hand movements acquired by the driver status acquisition unit 122 or the vehicle interior status acquisition unit 123.
  • For example, if the user input is a touch operation that selects an operation button related to content displayed on the display unit 201, the hand position can be taken to be the position at which the touch operation was performed.
  • Steps S512 to S514 and S516 correspond to steps S506 to S509, so their explanations are omitted here.
  • In step S515, the movement mode determination unit 127 determines whether the occupant is looking at displayed content (other than the specific content) that was determined in step S513 to overlap with the first movement trajectory of the agent image AG1.
  • When driver D1 is driving, manual operation is often not possible. Therefore, the occupant targeted by each process in steps S511 to S516 is assumed to be an occupant other than driver D1 (or driver D1 under a driving load below the threshold), and such an occupant is assumed to be enjoying the content displayed on the display unit 201.
  • When the occupant is looking at such content, in step S514 that content is treated in the same way as the specific content when determining the movement effect.
  • Alternatively, the movement timing or movement speed of the agent image AG1 may be changed so as not to impede the visibility of the specific content. For example, by moving the agent image AG1 faster than the normal movement speed, the visibility of the specific content can be preserved: even if the agent image AG1 and the specific content overlap, moving the agent image AG1 quickly along the movement path that includes the overlapping portion shortens the time during which the specific content is hidden.
  • It is also possible to determine whether any occupant of the vehicle C1 is looking at the specific content and to move the agent image AG1 (at the normal speed or faster) while no occupant is looking at it. For example, when moving the agent image AG1 along a path that includes a portion overlapping the specific content, moving it at a time when the occupant is not looking at the specific content prevents the occupant from feeling that the specific content is hidden. A sketch of the speed selection follows below.
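
One way to pick such a faster second movement speed is to bound the occlusion time, as in the hedged sketch below; the occlusion budget and all names are assumptions for illustration.

```python
# Hedged sketch: choose a second movement speed that keeps the time during
# which the agent image occludes the specific content below a budget.

def second_movement_speed(overlap_len_px, normal_speed_px_s, max_occlusion_s):
    """Return a speed (px/s) so that traversing the overlapping portion of the
    path takes no longer than max_occlusion_s."""
    if overlap_len_px <= 0:
        return normal_speed_px_s          # no overlap: keep the normal speed
    required = overlap_len_px / max_occlusion_s
    return max(normal_speed_px_s, required)

# A 240 px overlap with a 0.2 s occlusion budget needs at least 1200 px/s:
print(second_movement_speed(240.0, 600.0, 0.2))   # -> 1200.0
```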
  • FIGS. 10 and 11 show an example in which the control content is changed based on whether the user input is a voice input, but this embodiment is not limited to this and may be realized using other control examples. For example, the agent movement process (steps S503 to S510 or steps S511 to S516) may be executed regardless of the type of user input.
  • FIG. 10 shows a control process in which the movement mode of the agent image AG1 is changed based on the determination of whether the driver's driving load is high or low (step S505). However, this embodiment is not limited to this and may be realized using other control examples; for instance, the determination of whether driver D1's driving load is high or low may be omitted.
  • Likewise, FIG. 11 shows a control process in which the movement mode of the agent image AG1 is changed based on the determination of whether the occupant is viewing content (step S515). However, this embodiment is not limited to this and may be realized using other control examples; for instance, the determination of whether the occupant is viewing content (step S515) may be omitted.
  • In the above examples, a control process is performed to change the movement pattern of the agent image AG1 based on whether or not the agent image AG1 overlaps with specific content, but this control process may also be performed using other methods. For example, the control process can be performed using artificial intelligence (AI).
  • For example, various situations related to driver D1, the vehicle C1, and so on, the agent movement pattern to be executed in response to those situations, and the overlapping state between the agent image AG1 and specific content can be learned in advance, and the learned data can be used in the control process. That is, based on the learned data, a movement pattern can be determined for the situations that arise and for the overlapping state between the agent image AG1 and the specific content, and the agent image AG1 can then be moved according to the determined pattern (an illustrative sketch follows below).
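
Purely as an illustration of how learned data might drive the choice, the sketch below uses a simple lookup table from situation features to a movement pattern; the patent does not specify a model, so this is an assumption, not its method.

```python
# Purely illustrative: the patent only says that situations, overlap states,
# and the corresponding movement patterns "can be learned in advance". The
# lookup table below stands in for such learned data.

LEARNED_PATTERNS = {
    # (overlaps_content, driver_load_high, occupant_watching) -> pattern
    (False, False, False): "straight",
    (True,  False, False): "fast_straight",
    (True,  False, True):  "detour",
    (True,  True,  False): "jump",
}

def decide_pattern(overlaps_content, driver_load_high, occupant_watching):
    """Map the current situation to a learned movement pattern."""
    key = (overlaps_content, driver_load_high, occupant_watching)
    return LEARNED_PATTERNS.get(key, "detour")   # conservative default

print(decide_pattern(True, False, True))   # -> "detour"
```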
  • In this way, the occupant who made a user input can visually confirm (or feel the presence of) the agent image AG1 moving in response to that input, while the visibility of specific content is not obstructed.
  • This allows the agent image AG1 to provide appropriate operational assistance to the user. It is also possible to create an impression of intelligence in the agent image AG1 that moves without obscuring specific content. This makes it possible to increase trust and attachment to the agent image AG1, and to improve the ability of the agent image AG1 to convey information to each occupant. In this way, in this embodiment, the agent image AG1 can be moved appropriately on the display unit 201 so as not to obstruct the visibility of specific content.
  • Note that the processes of this embodiment may also be executed by an information processing system configured with multiple devices that each execute part of these processes. For example, at least part of each process can be executed using various information processing devices and electronic devices, such as in-vehicle devices, devices that can be used by the user (e.g., smartphones, tablet terminals, personal computers, car navigation devices, IVIs), and servers that can be connected via a predetermined network such as the Internet.
  • Furthermore, part (or all) of the functions of the information processing device 110 may be provided by an application delivered via a predetermined network such as the Internet, for example as SaaS (Software as a Service).
  • The information processing method of this embodiment is an information processing method in which an agent image AG1 that communicates with an occupant riding in the vehicle C1 is displayed on the display unit 201 (including display units such as the HUD display area of the windshield 4), and the display state of the agent image AG1 is controlled based on the state of the occupant.
  • This information processing method includes, when an agent image AG1 displayed at a first position (initial position) on the display unit 201 moves to a second position (a position to which the agent image AG1 moves in response to a user input) based on the state of the occupant, a determination process for determining whether or not the visibility of a specific content is obstructed when the agent image AG1 moves along the first path based on a linear movement path (first path) from the first position to the second position and the content displayed on the display unit 201 (steps S506, S507, S512, S513), and if the visibility of the specific content is not obstructed, moving the agent image AG1 along the first path in a first display mode (steps S509, S516).
  • If the visibility of the specific content is obstructed, the method includes a control process (steps S508, S509, S514, S516) for executing one of the following: a first movement process that moves the agent image AG1 along a movement path (second path) that does not obstruct the visibility of the specific content; a second movement process that moves the agent image AG1 from the first position to the second position in a display mode (second display mode) different from the first display mode that does not obstruct the visibility of the specific content; or a third movement process that moves the agent image AG1 from the first position to the second position at a second movement timing different from the first movement timing in the first display mode, or at a second movement speed different from the first movement speed in the first display mode, neither of which obstructs the visibility of the specific content (steps S508, S514).
  • For example, the first movement process corresponds to the movement processes shown in FIGS. 7 to 9, and the second movement process corresponds to those shown in FIGS. 6(B) and 6(C).
  • The program according to this embodiment is a program that causes a computer to execute each of these processes, i.e., a program that causes a computer to realize each of the functions that can be executed by the information processing device 110.
  • With this configuration, the agent image AG1, which moves based on the occupant's state, can be visually confirmed (or its presence felt, if transparent), and the visibility of specific content is not obstructed. That is, the agent image AG1 can be moved appropriately on the display unit 201 so as not to obstruct the visibility of specific content.
  • In the first movement process, the agent image AG1 is moved along the second path, which is a detour route that bypasses the specific content. For example, the agent image AG1 can be moved using the movement processes shown in FIGS. 7 to 9. This allows the agent image AG1 bypassing the specific content to be visually confirmed near that content.
  • Furthermore, in the first movement process, if there is a portion of the detour route where the agent image AG1 overlaps with or comes close to the specific content, the display mode may be such that the agent image AG1 in that portion is rendered transparent or semi-transparent, or the agent image AG1 in the overlapping portion is displayed as if it were positioned behind the specific content (on the far side in the depth direction of the display unit 201). For example, as shown in FIG. 7(B), if there is a portion of the detour route where the agent image AG1 overlaps with the specific content CT1, the agent image AG1 in the overlapping portion can be displayed as if it were positioned behind the specific content CT1.
  • Also, in the first movement process, if a portion of the agent image AG1 extends beyond the display surface of the display unit 201 on the detour route, that portion is not displayed. For example, as shown in FIG. 7(A), if portions of the agent image AG1 extend beyond the display surface on the detour route (the upper portions of agent images AG1b to AG1e), those portions can be hidden.
  • The determination process may determine that the moving agent image AG1 obstructs the visibility of the specific content when the agent image AG1 moving along the first path is adjacent to or overlaps with the specific content.
  • In that case, the second movement process may move the agent image AG1 along the first path and, as the second display mode, display the adjacent or overlapping portion of the agent image AG1 as if it were positioned behind the specific content, or render it transparent or semi-transparent. For example, as shown in FIG. 6(B), the agent image AG1 may move along the first path with the overlapping portion displayed as if positioned behind the specific content. Furthermore, as shown in FIG. 6(C), the overlapping portion of the agent image AG1 may be displayed transparently or semi-transparently, and the agent image AG1 in the vicinity of the specific content may likewise be displayed transparently or semi-transparently.
  • In the third movement process, the agent image AG1 may be moved at a second movement speed faster than the first movement speed. However, if the specific content is information that is legally required to be displayed continuously, this movement control is not executed.
  • The information processing method may further include a detection process for detecting the gaze of an occupant based on an image including the occupant's eyes. For example, the movement mode determination unit 127 may detect the gaze of an occupant (including driver D1) based on images acquired by the driver status acquisition unit 122 and the vehicle interior status acquisition unit 123; known gaze detection technology may be used for this.
  • In this case, whether the occupant is looking at the specific content may be determined based on the occupant's gaze, and a time when the occupant is not looking at the specific content may be set as the second movement timing (a sketch follows below).
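
A minimal sketch of such gaze-gated timing, assuming a polling loop and a stubbed gaze detector; a real implementation would use the gaze detection described above instead of the random stub, and all names here are illustrative.

```python
# Minimal sketch of gaze-gated movement timing (second movement timing).
import random
import time

def gaze_on_content():
    """Stub: returns True while some occupant is looking at the content."""
    return random.random() < 0.5

def move_when_unwatched(start_move, timeout_s=3.0, poll_s=0.05):
    """Start the move at a moment when the content is not being watched,
    falling back to moving anyway once the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if not gaze_on_content():
            start_move()                 # second movement timing reached
            return True
        time.sleep(poll_s)
    start_move()                         # timed out: move at normal timing
    return False

move_when_unwatched(lambda: print("agent image AG1 starts moving"))
```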
  • The specific content can be at least one of: map content displayed to guide the direction of travel of the vehicle C1; route information content showing, within the map content, the route from the current location of the vehicle C1 to the destination; driving control information content related to driving control of the vehicle C1; abnormality occurrence information content related to the occurrence of an abnormality in the vehicle C1; and information that is legally required to be displayed continuously.
  • This configuration makes it possible to set important information as specific content.
  • The information processing method may further include a driving load determination process (step S505) that determines the driving load of driver D1 based on at least one of environmental information related to the environment outside the vehicle C1 and driving behavior information related to driver D1's driving behavior.
  • In this case, the determination process determines whether the visibility of the specific content is impaired only if the driving load is below the threshold. In the control process, if the driving load is equal to or greater than the threshold, the agent image AG1 displayed at the first position is erased and then displayed at the second position (step S510); if the driving load is below the threshold, a movement process of the agent image AG1 is executed (steps S508, S509) based on the result of the determination process (steps S506, S507).
  • If driver D1 is under a high driving load, driver D1 is considered to have no time to watch the movement of the agent image AG1, so temporarily erasing the agent image AG1 will not affect its sense of presence. Furthermore, under a high driving load it is considered easier for driver D1 to recognize the agent image AG1 if it moves instantaneously from the first position to the second position rather than with a movement effect. In this way, the movement effect of the agent image AG1 can be performed appropriately according to driver D1's driving load.
  • The information processing device 110 is an information processing device that displays an agent image AG1, which communicates with a passenger in the vehicle C1, on the display unit 201 (including display units such as the HUD display area of the windshield 4), and controls the display state of the agent image AG1 based on the state of the passenger.
  • When the agent image AG1 displayed at a first position (initial position) on the display unit 201 moves to a second position (a position to which the image moves in response to user input) based on the state of the passenger, the information processing device 110 uses a movement mode determination unit 127 (an example of a determination unit) to determine, based on the linear movement path (first path) from the first position to the second position and the content displayed on the display unit 201, whether the visibility of specific content will be obstructed if the agent image AG1 moves along the first path. If the visibility of the specific content is not obstructed, the information processing device 110 moves the agent image AG1 along the first path in a first display mode; if it is obstructed, the information processing device 110 moves the agent image AG1 along a movement path different from the first path that does not obstruct the visibility of the specific content.
  • Specifically, the information processing device 110 includes an output control unit 128 (an example of a control unit) that executes one of the following: a first movement process that moves the agent image AG1 along a movement path (second path) that does not impair the visibility of the specific content; a second movement process that moves the agent image AG1 from the first position to the second position in a display mode (second display mode) different from the first display mode that does not impair the visibility of the specific content; or a third movement process that moves the agent image AG1 from the first position to the second position at a second movement timing different from the first movement timing in the first display mode, or at a second movement speed different from the first movement speed in the first display mode, neither of which impairs the visibility of the specific content.
  • The information processing device 110 may be built into the output device 200 or may be a device separate from it. Furthermore, instead of the information processing device 110, an information processing system configured with multiple devices capable of executing the processes realized by the information processing device 110 may be used.
  • With this configuration as well, the agent image AG1, which moves based on the occupant's state, can be visually confirmed (or its presence felt, if transparent), and the visibility of specific content is not obstructed; that is, the agent image AG1 can be moved appropriately on the display unit 201 so as not to obstruct the visibility of specific content.
  • The processing steps shown in this embodiment are merely examples for realizing this embodiment; the order of some steps may be changed, some steps may be omitted, or other steps may be added, to the extent that this embodiment can still be realized.
  • Each process in this embodiment is executed based on a program that causes a computer to execute the various processing procedures. This embodiment can therefore also be understood as an embodiment of a program that realizes the function of executing each process, and of a recording medium that stores that program. For example, the program may be stored in the storage device of an information processing device through an update process that adds a new function, enabling the updated device to perform each process shown in this embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Navigation (AREA)

Abstract

The present invention comprises: a determination process for determining, when an agent image displayed at a first position on a display unit moves to a second position based on the state of an occupant, whether the visibility of specific content will be impaired if the agent image moves along a linear movement path (first path) from the first position to the second position, based on the first path and the content displayed on the display unit; and a control process for executing, if the visibility of the specific content will be impaired, a first movement process or the like that moves the agent image along a movement path (second path) which is different from the first path and which does not impair the visibility of the specific content.

Description

Information processing method, information processing device, and program

The present invention relates to an information processing method, information processing device, and program for controlling an agent capable of communicating with a user.

Conventionally, technologies for conveying various types of information to a user in a vehicle have been proposed. For example, JP2020-055348A proposes a technology in which, when an image to be provided to a passenger is displayed on a display unit, the display position of an agent image is moved to the display position of the provided image in order to guide the passenger's gaze to the display position of the provided image.
Summary of the Invention

In the conventional technology described above, when the display position of an agent image is moved to the display position of the image provided to the occupant, there is a risk that the content displayed on the display unit may be obscured by the moving agent image. For example, if content related to the vehicle's driving route or vehicle driving control is displayed on the display unit, there is a possibility that the moving agent image may obscure that content. Therefore, it is important to ensure that the visibility of specific content is not obstructed by the moving agent image.

The present invention aims to appropriately move an agent image on a display unit so as not to obstruct the visibility of specific content.

One aspect of the present invention is an information processing method for displaying, on a display unit, an agent image that communicates with an occupant riding in a vehicle, and controlling the display state of the agent image based on the state of the occupant. This information processing method includes: a determination process for determining, when the agent image displayed at a first position on the display unit moves to a second position based on the state of the occupant, whether movement of the agent image along a linear movement path (first path) from the first position to the second position will obstruct the visibility of specific content, based on the first path and the content displayed on the display unit; and a control process for moving the agent image along the first path in a first display mode if the visibility of the specific content will not be obstructed, and, if the visibility of the specific content will be obstructed, executing one of: a first movement process of moving the agent image along a movement path (second path) that is different from the first path and does not obstruct the visibility of the specific content; a second movement process of moving the agent image from the first position to the second position in a second display mode that is different from the first display mode and does not impair the visibility of the specific content; and a third movement process of moving the agent image from the first position to the second position at a second movement timing that is different from the first movement timing in the first display mode and does not impair the visibility of the specific content, or at a second movement speed that is different from the first movement speed in the first display mode and does not impair the visibility of the specific content.

FIG. 1 is a simplified diagram showing an example of the configuration of the interior of a vehicle.
FIG. 2 is a simplified diagram showing an example of the configuration of the interior of a vehicle.
FIG. 3 is a simplified diagram showing an example of the configuration of the interior of a vehicle.
FIG. 4 is a diagram showing an example of a transition when an agent image is moved.
FIG. 5 is a block diagram showing an example of the system configuration of the information processing system.
FIG. 6 is a diagram showing an example of the transition of the agent image.
FIG. 7 is a diagram showing an example of the transition of the agent image.
FIG. 8 is a diagram showing an example of the transition of the agent image.
FIG. 9 is a diagram showing an example of the transition of the agent image.
FIG. 10 is a flowchart showing an example of agent movement processing.
FIG. 11 is a flowchart showing an example of agent movement processing.

Embodiments of the present invention will be described below with reference to the accompanying drawings.

[Display unit installation example]
FIG. 1 is a simplified diagram showing an example of the configuration of the interior of a vehicle C1. FIG. 1 shows the interior of the vehicle C1 forward of the driver's seat and passenger seat (not shown), as viewed from the rear in the longitudinal direction of the vehicle C1. To facilitate explanation, FIG. 1 omits illustration of everything other than the dashboard 2, the steering wheel 3, the windshield 4, the rearview mirror 5, and the output device 200.

The output device 200 is an output device installed inside the vehicle C1, and performs various output operations based on instructions from the information processing device 110 (see FIG. 5). The output device 200 is installed on the dashboard 2 and includes a display unit 201, a sound output unit 202, and a reception unit 203 (see FIG. 5). FIG. 1 shows, as an example of the output device 200, a device that is long in the left-right direction of the vehicle C1. The output device 200 is, for example, a device whose display area extends from near the center of the dashboard 2 to the driver's seat.

For example, the output device 200 is an in-vehicle system consisting of one or more devices capable of providing various types of information. The output device 200 can be, for example, at least one of a navigation device, an audio device, a DVD device, a TV tuner device, an IVI (In-Vehicle Infotainment) system, and the like. The images displayed on the output device 200 can also be displayed using a HUD (Head Up Display) implemented on the windshield 4, or using another display device. Alternatively, a portable device that the driver D1 can carry (or a device that can be installed in the vehicle C1), such as a smartphone, tablet terminal, or portable personal computer, may be used.

The output device 200 also executes output operations for communicating with users (including the driver D1) riding in the vehicle C1. For example, an agent image AG1 (see FIGS. 2 and 3) capable of communicating with the occupants is displayed on the display unit 201 of the output device 200, and various effects are executed using the agent image AG1. Note that an agent image capable of communicating with the occupants may also be displayed on the windshield 4 (HUD display area) or another display device, and various effects may be executed using that agent image.

[Example of agent image display]
FIGS. 2 and 3 are simplified diagrams showing examples of the configuration of the interior of the vehicle C1. Since these configuration examples are similar to the one shown in FIG. 1, detailed description thereof is omitted here.

FIGS. 2 and 3 show an example in which content CT1 and the agent image AG1 are displayed on the display unit 201 of the output device 200. Content CT1 displays map information including a current location indicator PL1 indicating the current location of the vehicle C1 and route information RI1 indicating the route traveled by the vehicle C1. For example, the route information RI1 is displayed in a display format different from that of the map (for example, arrows colored blue, red, etc.). Although not shown, it is also possible to display map information from a different viewpoint as appropriate. The content CT1 is displayed, for example, based on a navigation function.

The agent image AG1 is an image representing an object capable of communicating with the user. In this embodiment, an example is shown in which an image imitating a human face is used as the agent image AG1. However, the agent image AG1 is not limited to this; for example, an image imitating the entire body or part of a human (e.g., the upper body), an image imitating an animal such as a rabbit or pig (or that animal's face), or an image imitating a virtual creature (e.g., the face of an anime character) may be used. In this way, an anthropomorphized image can be used as the agent image AG1.

The output device 200 performs various operations related to driving assistance (e.g., operations related to the content CT1 and the agent image AG1) based on instructions from the information processing device 110 (see FIG. 5). For example, under the control of the information processing device 110, the output device 200 outputs driving assistance for the driver D1's driving operations and various information about surrounding facilities and the like.

Driving assistance using the agent image AG1 is expected to include, for example, notification of moving objects ahead of or behind the vehicle. For example, as notification of a moving object ahead, a voice output such as "Be careful, there is a railroad crossing ahead" or "There is a person ahead" can be produced, and various actions can be performed using the agent image AG1 at the time of each of these outputs. In other words, driving assistance can be performed using the agent image AG1. Furthermore, for example, the output device 200 executes various processes using the agent image AG1 under the control of the information processing device 110. For example, the output device 200 uses the agent image AG1 to execute communication processing that exchanges various conversations with the driver D1 and to provide information that the driver D1 prefers.

For example, as shown in FIG. 2, in response to the agent image AG1 uttering voice information S1, "Turn right at the next intersection," the driver D1 may utter voice S2, "Is there a highway beyond that?" Then, in response to the voice S2 uttered by the driver D1, the agent image AG1 may utter voice information S3, "We'll get on the highway next" (see FIG. 3). Note that these exchanges are simplified and are not limited to these.

In this way, when an occupant of the vehicle C1 makes some kind of user input, it is conceivable to move the agent image AG1 near the occupant who made the input, making it easier to provide assistance to that occupant. For example, suppose that while the agent image AG1 is displayed at the position shown in FIG. 2, the driver D1 utters voice S2, "Is there a highway beyond that?" In this case, as shown in FIG. 3, the agent image AG1 can be moved near the driver D1 (for example, in the direction of the driver D1's line of sight). In other words, the agent image AG1 can be moved to the position the user is operating or the position the user is expected to look at. The method of moving the agent image AG1 in this case will be explained with reference to FIG. 4.

[Example of agent image movement]
FIG. 4 is a diagram showing an example of a transition when the agent image AG1 is moved on the display unit 201 of the output device 200.

FIG. 4(A) shows, for the case where the agent image AG1 is moved on the display unit 201, the agent image at the movement source as agent image AG1a and the agent image at the movement destination as agent image AG1e.

In FIG. 4(B), dotted lines MT1 and MT2 indicate the range of the movement trajectory of the agent image AG1 when it moves linearly from agent image AG1a to agent image AG1e on the display unit 201, and part of the movement trajectory is schematically indicated by dotted circles AG1a to AG1e. In this way, FIG. 4(B) shows the relationship between the first movement trajectory AG1a to AG1e of the agent image AG1, from the first position AG1a to the second position AG1e, and the currently displayed content CT1.

Here, an example is shown in which a linear movement path is set from the initial position of the agent image AG1 (first position AG1a) to a response position (second position AG1e), i.e., the position to which the agent image AG1 is moved in response to user input (e.g., manual input, voice input).

Each position on the display surface of the display unit 201 can be managed, for example, as coordinate information in the agent management DB 132 (see FIG. 5). For example, the output content determination unit 126 (see FIG. 5) can obtain the current position of the agent image AG1 by sequentially storing in the agent management DB 132 the position of each image (content images and the agent image AG1) displayed on the display surface of the display unit 201. The movement mode determination unit 127 can calculate the range of the movement trajectory of the moving agent image AG1 based on the display area of the agent image AG1, its current position, and its destination position; a known calculation method can be used for this calculation.

For example, it is conceivable to move the agent image AG1 based on the line of sight of an occupant of the vehicle C1, the position of the occupant's hands, and the like. For example, when an occupant performs a desired operation on the display unit 201, the agent image AG1 can be moved, based on the occupant's actions, to a position that is easy for the occupant to operate. For example, when the occupant operates by hand, the agent image AG1 can be moved to a position close to that hand; as shown by arrow AW1, it can be moved in a straight line from the source to the destination.

In this way, by making the agent image AG1 follow the user's operations and moving it continuously, it is possible to show that the agent image AG1 is responding to the user's operations. In addition, by looking at the agent image AG1, the user can easily grasp visually the information that the agent image AG1 is trying to convey. For this reason, the user may feel uneasy if the agent image AG1 disappears even temporarily. However, while moving in response to user input, the agent image AG1 may overlap information that should not be hidden.

For example, if the agent image AG1 is moved along the movement path indicated by arrow AW1 while displayed in the foreground, the moving agent image AG1 will overlap the front side of the content CT1 (i.e., the display surface side of the display unit 201). In this case, the occupant cannot see part of the content CT1 while the agent image AG1 is moving; that is, the movement of the agent image AG1 impairs the visibility of the content CT1.

Therefore, in this embodiment, when the movement of the agent image AG1 would impair the visibility of specific content, the movement path, display mode, or movement timing or speed of the agent image AG1 is changed so as not to impair the visibility of the specific content.

[Configuration example of information processing system]
FIG. 5 is a block diagram showing an example of the system configuration of the information processing system 100 installed in the vehicle C1.

The information processing system 100 includes a sound acquisition unit 101, a driver image acquisition unit 102, a vehicle interior image acquisition unit 103, a vehicle exterior image acquisition unit 104, an information processing device 110, and an output device 200. The information processing device 110 is an example of a device that controls the output device 200, which is capable of communicating with the occupants of the vehicle C1 (including the driver D1).

The information processing device 110 and the output device 200 are connected via wired or wireless communication. The information processing device 110 is also connected to the network 20 via wireless communication; the network 20 is a network such as a public line network or the Internet. The output device 200 may also be connected to the network 20 via wireless communication. Although FIG. 5 shows an example in which the information processing device 110 and the output device 200 are configured as separate devices, they may also be configured as an integrated device.

The sound acquisition unit 101 is provided inside the vehicle C1, acquires sounds inside the vehicle C1, and outputs sound information about the acquired sounds to the information processing device 110. For example, one or more microphones or sound acquisition sensors can be used as the sound acquisition unit 101.

The driver image acquisition unit 102 captures images of the driver D1 riding in the vehicle C1 to generate images (image data), and outputs image information about the generated images to the information processing device 110. The driver image acquisition unit 102 is provided at least inside the vehicle C1 and is configured, for example, with one or more camera devices or image sensors capable of capturing images of the driver D1. For example, one driver image acquisition unit 102 can be provided at the front of the interior of the vehicle C1 (e.g., on the ceiling) to capture images of the driver D1 from the front of the vehicle C1; for instance, it can be provided at the top of the windshield 4, above the rearview mirror 5. The driver image acquisition unit 102 and the vehicle interior image acquisition unit 103 may be the same device or different devices.

The vehicle interior image acquisition unit 103 captures images of subjects inside the vehicle C1 to generate images (image data), and outputs image information about the generated images to the information processing device 110. It is provided at least inside the vehicle C1 (e.g., on the ceiling) and is composed, for example, of one or more camera devices or image sensors capable of capturing images of subjects. For example, one vehicle interior image acquisition unit 103 may be provided at the front of the vehicle C1 to capture subjects from the front, and another may be provided at the rear of the vehicle C1 to capture subjects from the rear.

The vehicle exterior image acquisition unit 104 captures images of subjects outside the vehicle C1 to generate images (image data), and outputs image information about the generated images to the information processing device 110. Two or more vehicle exterior image acquisition units 104 may be provided, and all or some of their images may be used. For example, one unit may be provided at the front of the vehicle C1 to capture subjects ahead of it, and another at the rear to capture subjects behind it. Alternatively, one or more devices capable of capturing subjects in all directions around the vehicle C1 as well as subjects inside it, such as a 360-degree camera, may be used.

The driver image acquisition unit 102, the vehicle interior image acquisition unit 103, and the vehicle exterior image acquisition unit 104 are each composed of, for example, an image sensor that receives light from a subject collected by a lens, and an image processing unit that performs predetermined image processing on the image data generated by the image sensor. For example, a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) image sensor can be used.

The information processing device 110 includes a control unit 120, a storage unit 130, and a communication unit 140. Under the control of the control unit 120, the communication unit 140 exchanges various types of information with other devices using wired or wireless communication. For example, when the communication unit 140 receives driving assistance information, operation information for the agent image AG1, and the like from an external device (e.g., a server), it outputs this information to the control unit 120.

The control unit 120 controls each unit based on various programs stored in the storage unit 130. The control unit 120 is realized by a processing device such as a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit). Note that the vehicle ECU (Electronic Control Unit) of the vehicle C1 may also be used as the control unit 120, or a processing device separate from the vehicle ECU may be provided as the control unit 120.

The control unit 120 executes various controls based on the information output from the sound acquisition unit 101, the driver image acquisition unit 102, the vehicle interior image acquisition unit 103, the vehicle exterior image acquisition unit 104, the communication unit 140, and the like, and on the information acquired by the vehicle information acquisition unit 125. For example, the control unit 120 executes control processing that controls the operating state of the output device 200. Specifically, the control unit 120 includes an utterance acquisition unit 121, a driver status acquisition unit 122, a vehicle interior status acquisition unit 123, a vehicle exterior status acquisition unit 124, a vehicle information acquisition unit 125, an output content determination unit 126, a movement mode determination unit 127, and an output control unit 128.

The utterance acquisition unit 121 performs predetermined sound analysis processing on the sound information output from the sound acquisition unit 101 to acquire utterance information related to the speech of each user (including driver D1) contained in that sound information, and outputs the utterance information to the output content determination unit 126. A known sound analysis technique can be used for this processing.

The driver status acquisition unit 122 performs predetermined image analysis processing on the image information output from the driver image acquisition unit 102 to acquire various pieces of information about driver D1 contained in that image information, and outputs each piece of information to the output content determination unit 126. A known image analysis technique can be used for this processing. As the various pieces of information about driver D1, it is possible to acquire, for example, driver D1's facial expression, line of sight, and actions such as movements of the hands, face, and body. From these, it is possible to detect, for example, driver D1 entering or exiting the vehicle C1, whether each seat of the vehicle C1 is occupied, hand movements of driver D1, and the like. In other words, the driver status acquisition unit 122 acquires driver state information regarding the state of driver D1.

The vehicle interior status acquisition unit 123 performs predetermined image analysis processing on the image information output from the vehicle interior image acquisition unit 103 to acquire various pieces of information about the interior of the vehicle C1 contained in that image information, and outputs each piece of information to the output content determination unit 126. A known image analysis technique can be used for this processing. As the various pieces of information about the interior of the vehicle C1, it is possible to acquire, for example, each user's facial expression, line of sight, and actions such as movements of the hands, face, and body. In other words, the vehicle interior status acquisition unit 123 acquires user state information regarding the state of each user aboard the vehicle C1 and vehicle state information regarding the state of the vehicle C1.

The vehicle exterior status acquisition unit 124 performs predetermined image analysis processing on the image information output from the vehicle exterior image acquisition unit 104 to acquire various pieces of information about the outside of the vehicle C1 contained in that image information, and outputs each piece of information to the output content determination unit 126. A known image analysis technique can be used for this processing. As the various pieces of information about the outside of the vehicle C1, it is possible to detect, for example, whether the vehicle C1 is traveling on a road, whether the vehicle C1 is stopped on a road, and traffic lights, signs, and the like present ahead of the vehicle C1. In other words, the vehicle exterior status acquisition unit 124 acquires vehicle state information regarding the state of the vehicle C1.

The vehicle information acquisition unit 125 acquires information regarding various vehicle states of the vehicle C1 (vehicle state information) and outputs the acquired vehicle state information to the output content determination unit 126. The vehicle state information can be acquired, for example, from a CAN (Controller Area Network) signal. The vehicle state information includes, for example, vehicle speed, acceleration, shift lever position (e.g., P range, D range), accelerator pedal depression amount, brake pedal depression amount, position information, and abnormality occurrence information. For example, whether the vehicle C1 is stopped or moving can be determined based on the vehicle speed, acceleration, and the like. Furthermore, various warning information can be displayed based on the abnormality occurrence information regarding the vehicle C1.

The vehicle information acquisition unit 125 may also acquire sensor detection information output from various sensors installed in the vehicle C1. Examples of such sensors include LiDAR (Light Detection And Ranging), RADAR (Radio Detection And Ranging), sonar, a vehicle speed sensor, an acceleration sensor, a steering sensor, an accelerator position sensor, a position information acquisition sensor (position information acquisition unit), a seating sensor, and a seat belt sensor. Publicly known sensors can be used for each of these. LiDAR, RADAR, sonar, and the like are examples of sensors that detect the conditions around the vehicle C1, while the vehicle speed sensor, acceleration sensor, steering sensor, accelerator position sensor, and the like are examples of sensors that detect the state of driver D1's driving operations. These are only examples; other sensors may be used, or only some of these sensors may be used.

The position information acquisition unit acquires position information regarding the location of the vehicle C1. It can be realized, for example, by a GNSS receiver that acquires position information using a GNSS (Global Navigation Satellite System). This position information includes data related to the position, such as latitude, longitude, and altitude, at the time the GNSS signal is received. Position information may also be acquired by other acquisition methods; for example, it may be derived from information provided by nearby access points or base stations, or acquired using beacons.

The seating sensor (or seat sensor) is a sensor that detects whether an occupant is seated in each seat of the vehicle C1. The seat belt sensor is a sensor that detects whether the occupant seated in each seat of the vehicle C1 is wearing a seat belt. For example, whether driver D1 seated in the driver's seat is wearing a seat belt can be detected using the seating sensor, the seat belt sensor, and the like.

The output content determination unit 126 determines the content of each piece of output information to be output from the output device 200 based on the information output from the utterance acquisition unit 121, the driver status acquisition unit 122, the vehicle interior status acquisition unit 123, the vehicle exterior status acquisition unit 124, the vehicle information acquisition unit 125, the reception unit 203, and the like. The output content determination unit 126 then outputs the determined content of each piece of output information, together with the information used in determining that content, to the movement mode determination unit 127 and the output control unit 128. For example, the output content determination unit 126 acquires at least one of user state information regarding the state of each occupant (including driver D1) of the vehicle C1 and vehicle state information regarding the state of the vehicle C1, and determines the content of each piece of information to be output from the output device 200 based on at least one of the acquired user state information and vehicle state information. For example, when assistance information that should be conveyed to driver D1 is detected based on the user state information or the vehicle state information, the output content determination unit 126 decides to output that assistance information in order to convey it to driver D1. Furthermore, for example, when it is detected based on the user state information or the vehicle state information that predetermined communication (e.g., a conversation) is to be performed with one of the occupants of the vehicle C1, the output content determination unit 126 determines the output information for performing that communication. The output content determination unit 126 also determines, for example, the display mode, display position, and the like of the agent image AG1 based on the user state information or the operation information received by the reception unit 203.

The movement mode determination unit 127 determines the movement mode to be used when moving the agent image AG1, based on the content of each piece of output information determined by the output content determination unit 126, the state of driver D1 (e.g., driving load, gaze direction), the state of the vehicle C1 (e.g., whether the surrounding environment imposes a high driving load), and the like, and outputs the determined movement mode of the agent image AG1 to the output control unit 128. For example, the movement mode determination unit 127 checks the relationship between the movement trajectory of the agent image AG1 from the movement source to the movement destination and one or more pieces of content displayed on the display unit 201. The movement mode determination unit 127 determines whether displayed content exists between the movement source and the movement destination, and if so, determines whether that content overlaps with the movement trajectory of the agent image AG1. For example, if displayed content exists between the movement source and the movement destination and that content overlaps with the movement trajectory of the agent image AG1, the movement mode determination unit 127 decides to execute a movement presentation that moves the agent image AG1 so as not to impair the visibility of that content. This movement presentation will be described in detail with reference to Figs. 6 to 9. Furthermore, only specific content among the displayed content may be subject to this overlap determination.
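As one way to picture this overlap determination, the linear trajectory can be sampled and tested against the bounding box of each displayed content item, inflated by the agent image's size. The following Python sketch is purely illustrative; the names Rect and path_overlaps_content, the sampling approach, and the coordinate values are assumptions, not part of the embodiment.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    """Axis-aligned bounding box of a displayed content item (display pixels)."""
    left: float
    top: float
    right: float
    bottom: float

    def inflate(self, margin: float) -> "Rect":
        """Grow the box on all sides, e.g., to account for the agent's size."""
        return Rect(self.left - margin, self.top - margin,
                    self.right + margin, self.bottom + margin)

    def contains(self, x: float, y: float) -> bool:
        return self.left <= x <= self.right and self.top <= y <= self.bottom

def path_overlaps_content(p1: tuple[float, float], p2: tuple[float, float],
                          content: Rect, agent_radius: float,
                          samples: int = 64) -> bool:
    """Return True if an agent image of radius agent_radius, moved along the
    straight line from p1 to p2, would overlap the content's bounding box."""
    area = content.inflate(agent_radius)
    for i in range(samples + 1):
        t = i / samples
        x = p1[0] + t * (p2[0] - p1[0])
        y = p1[1] + t * (p2[1] - p1[1])
        if area.contains(x, y):
            return True
    return False

# Example: a horizontal move passing over a hypothetical speedometer widget.
speedometer = Rect(400, 300, 700, 500)
print(path_overlaps_content((100, 450), (1000, 450), speedometer, agent_radius=40))  # True
```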

[Regarding specific content]
In this embodiment, when there is a possibility that the movement of the agent image AG1 will impair the visibility of content (particularly specific content), the movement path of the agent image AG1, the display mode of the agent image AG1, the movement timing or movement speed of the agent image AG1, and the like are changed, as described above. The specific content is therefore described here.

For example, of the content displayed on the display unit 201, content containing particularly important information can be designated as specific content. This important information is, for example, information that is required by law to be displayed continuously (i.e., information that must be kept displayed under the law and must not be hidden). Such information includes, for example, various indicator lights and warning lights: a battery level warning light, a maximum speed limit sign display, a battery level gauge, a cruising range display, a speedometer, a position indicator (shift lever position), an odometer (total distance meter), a trip meter (segment distance meter), warning lights indicating vehicle malfunctions, and so on. Note that these are examples of representative information, and other information required by law to be displayed continuously may also be treated as important information.
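Purely as an illustration, such legally mandated items could be registered in a lookup table that the determination process consults; the category labels and the function name below are hypothetical.

```python
# Hypothetical category labels for display items that must remain visible.
LEGALLY_MANDATED_TYPES = {
    "battery_warning_light", "speed_limit_sign", "battery_gauge",
    "range_display", "speedometer", "shift_position_indicator",
    "odometer", "trip_meter", "malfunction_warning_light",
}

def is_specific_content(content_type: str,
                        extra_types: frozenset[str] = frozenset()) -> bool:
    """True if content of this type must not be obscured by the agent image.
    extra_types allows urgent or situation-dependent items to be added."""
    return content_type in LEGALLY_MANDATED_TYPES or content_type in extra_types

print(is_specific_content("speedometer"))   # True
print(is_specific_content("music_player"))  # False
```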

Furthermore, for example, important information includes information of high urgency. Such highly urgent information is, for example, information related to driving assistance that is presented to driver D1 while driving. Examples include information for notifying the direction of travel of the vehicle C1 and information for notifying objects present in the direction of travel of the vehicle C1 (e.g., a railroad crossing or a crashed vehicle).

Furthermore, for example, important information includes information related to driving operations, driving control information related to driving control, abnormality occurrence information regarding the occurrence of an abnormality in the vehicle C1, and the like.

 運転操作に関連する情報は、例えば、車両C1の進行方向を誘導するために表示される地図情報(地図コンテンツ)であり、特に地図に含まれる車両C1の現在地を示す現在地標識と、車両C1が進行する経路(例えば、現在地から目的地までの経路)を示す経路標識とを含む地図情報である。例えば、ドライバD1にとっては、車両C1が通過した経路よりも、車両C1が今後通過する経路が重要となるため、車両C1が今後通過する経路を重要な情報とすることが好ましい。また、例えば、車両C1において自動運転が実行されている場合には、地図コンテンツにおいて自動運転の経路が表示されていることも想定される。また、車両C1において自動運転が実行されている場合には、ドライバD1が走行経路を確認したり、運転以外の操作をしたりすることが多くなることが想定される。このような場合には、ドライバD1の手動操作に応じてエージェント画像AG1が移動する機会が増えることも想定される。また、運転操作に関連する情報として、車両C1の進行方向に関する案内情報等も特定のコンテンツとすることが可能である。 Information related to driving operations is, for example, map information (map content) displayed to guide the vehicle C1 in the direction of travel, and in particular map information including a current location indicator indicating the current location of the vehicle C1 and a route indicator indicating the route the vehicle C1 will take (e.g., the route from the current location to the destination). For example, the route the vehicle C1 will take in the future is more important to the driver D1 than the route the vehicle C1 has taken so it is preferable to treat the route the vehicle C1 will take in the future as important information. Furthermore, for example, when autonomous driving is being performed in the vehicle C1, it is expected that the autonomous driving route will be displayed in the map content. Furthermore, when autonomous driving is being performed in the vehicle C1, it is expected that the driver D1 will often check the driving route or perform operations other than driving. In such cases, it is expected that the agent image AG1 will move more frequently in response to manual operations by the driver D1. Furthermore, guidance information regarding the direction the vehicle C1 will take can also be specified as specific content as information related to driving operations.

Furthermore, for example, when highly urgent information is displayed while the vehicle is not in autonomous driving, it is assumed that the occupant in the front passenger seat will often perform operations. In such cases, it is also expected that there will be more occasions on which the agent image AG1 moves in response to a manual operation by the front passenger. Thus, when the front passenger performs a manual operation while driver D1 is driving, the agent image AG1 that was displayed on the driver D1 side may move to follow the operation of the front passenger. In this case, it is assumed that the movement of the agent image AG1 may prevent driver D1 from seeing important information (specific content).

Driving control information related to driving control is, for example, information regarding the traveling of the vehicle C1, the turning of the vehicle C1, the stopping of the vehicle C1, and the like.

The abnormality occurrence information content regarding the occurrence of an abnormality in the vehicle C1 is, for example, content including information on malfunctions in various parts of the vehicle C1, for example, content including information corresponding to the various warning lights generally displayed on the meters installed in front of the driver's seat.

The above shows an example in which specific content is set in advance, but specific content may also be set based on the state of an occupant looking at the display unit 201. For example, when there is an occupant looking at the display unit 201, the content at the destination of that occupant's line of sight (i.e., in the gaze direction) may be set as specific content. In this case, the content may be set as specific content on the condition that the occupant's line of sight stays on the same content continuously for a predetermined time or longer.
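A minimal sketch of this dwell-time condition follows; the class name, the 2-second threshold, and the per-frame update interface are assumptions for illustration.

```python
import time

class GazeDwellTracker:
    """Marks a content item as specific content once an occupant's gaze has
    stayed on the same item continuously for dwell_s seconds (assumed value)."""

    def __init__(self, dwell_s: float = 2.0):
        self.dwell_s = dwell_s
        self._current = None   # content id currently under the gaze
        self._since = 0.0      # when the gaze settled on it

    def update(self, gazed_content_id, now: float | None = None):
        """Feed the content id under the occupant's line of sight each frame.
        Returns the id to treat as specific content, or None."""
        now = time.monotonic() if now is None else now
        if gazed_content_id != self._current:
            self._current, self._since = gazed_content_id, now
            return None
        if self._current is not None and now - self._since >= self.dwell_s:
            return self._current
        return None

# Example: the map becomes specific content after 2 s of sustained gaze.
tracker = GazeDwellTracker()
tracker.update("map", now=0.0)
print(tracker.update("map", now=2.5))  # "map"
```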

Note that entertainment information (entertainment content) that displays network information such as SNS (social networking service) feeds, music information, entertainment video information, and the like can be configured not to be treated as specific content.

[Configuration example of information processing system]
The output control unit 128 controls the operating state of the output device 200 based on the content of the output information determined by the output content determination unit 126 and the movement mode of the agent image AG1 determined by the movement mode determination unit 127. Each of these operations will be described in detail with reference to Figs. 6 to 11 and elsewhere.

The storage unit 130 is a storage medium that stores various types of information. For example, the storage unit 130 stores the various types of information required for the control unit 120 to perform its various processes (e.g., a control program, an agent information DB 131, an agent management DB 132, a content management DB 133, and a map information DB). The storage unit 130 also stores various types of information acquired via the communication unit 140. As the storage unit 130, for example, a ROM (Read Only Memory), RAM (Random Access Memory), SRAM (Static Random Access Memory), HDD (Hard Disk Drive), SSD (Solid State Drive), or a combination of these can be used.

The agent information DB 131 stores the various types of information required to realize the various operations of the agent image AG1 displayed on the output device 200. For example, image information related to the agent image AG1 to be displayed on the display unit 201 of the output device 200 and audio information related to the voice of the agent image AG1 to be output from the sound output unit 202 are stored in the agent information DB 131. In addition, for example, operation information for operating the agent image AG1 on the output device 200 when various kinds of communication are performed is stored in the agent information DB 131.

The agent management DB 132 stores agent management information (coordinate information) for managing the display position of the agent image AG1 displayed on the display unit 201. For example, information regarding the position of the agent image AG1 on the display surface of the display unit 201 and the display area of the agent image AG1 is managed as agent management information.

The content management DB 133 stores content management information (coordinate information) for managing the display position of each piece of content (e.g., map content, music content) displayed on the display unit 201. For example, information regarding the position of each piece of content on the display surface of the display unit 201, the display area of the content, and the type of the content is managed as content management information.

The output device 200 is a device capable of displaying the agent image AG1 based on instructions from the information processing device 110 and conveying various information to driver D1 and others using the agent image AG1.

The output device 200 includes a display unit 201, a sound output unit 202, and a reception unit 203. The display unit 201, the sound output unit 202, and the reception unit 203 are controlled by a control unit (not shown) included in the output device 200.

The display unit 201 displays various images based on instructions from the information processing device 110.

The sound output unit 202 outputs various sounds based on instructions from the information processing device 110. As the sound output unit 202, for example, one or more speakers can be used.

The reception unit 203 receives user input from the occupants of the vehicle C1 and outputs the received input to the control unit 120. As the reception unit 203, for example, a touch panel or various operating members can be used. The display unit 201 and the reception unit 203 may be configured as a touch panel that allows the user to perform operation input by touching or bringing a finger close to the display surface, or may be configured as separate user interfaces. The display unit 201, the sound output unit 202, and the reception unit 203 are examples of user interfaces; some of them may be omitted, and other user interfaces may be used.

[Example of driving load detection]
The degree of driving load can be determined based on whether the vehicle C1 is stopped or moving. For example, when the vehicle C1 is stopped, it can be determined that the driving load is low (e.g., the driving load is below a threshold). On the other hand, when the vehicle C1 is moving, it can be determined that the driving load is high (e.g., the driving load is at or above the threshold). Whether the vehicle C1 is stopped or moving can be determined based on the vehicle information acquired by the vehicle information acquisition unit 125 (e.g., vehicle speed, acceleration, shift lever position (e.g., P range, D range), accelerator pedal depression amount, and brake pedal depression amount).

The driving load can also be determined based on the traffic signal ahead of the vehicle C1. For example, if the signal ahead of the vehicle C1 is red and the vehicle C1 is stopped, it can be determined that the driving load is low (e.g., below the threshold). On the other hand, if the vehicle C1 is stopped but the signal ahead of it has turned green, it can be determined that the driving load is high (e.g., at or above the threshold). Apart from these cases, the determination can be made in the same way as the determination based on whether the vehicle C1 is stopped or moving. Note that during autonomous driving, it can be determined that the driving load is low (e.g., below the threshold) even while the vehicle C1 is moving.
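The stopped/moving and traffic-signal determinations described above can be condensed into a simple predicate, sketched below; the 1 km/h speed threshold and the string labels are assumptions.

```python
def driving_load_is_high(speed_kmh: float, shift_position: str,
                         signal_ahead: str | None = None,
                         autonomous: bool = False) -> bool:
    """Coarse driving-load determination from CAN-derived vehicle state."""
    if autonomous:
        return False                    # autonomous driving: load treated as low
    stopped = speed_kmh < 1.0 or shift_position == "P"
    if stopped and signal_ahead == "green":
        return True                     # stopped but about to move off
    return not stopped                  # moving: high; stopped: low

print(driving_load_is_high(0.0, "P", signal_ahead="red"))    # False
print(driving_load_is_high(0.0, "D", signal_ahead="green"))  # True
```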

The driving load of driver D1 may also be determined based on the driving operations of driver D1, the conditions around the vehicle C1, and the like.

For example, the operation status of driver D1, the conditions around the vehicle C1, and the like can be acquired by the driver status acquisition unit 122, the vehicle exterior status acquisition unit 124, the vehicle information acquisition unit 125, and the like. For example, when the number of accelerator operations within a predetermined time is large and the number of steering operations is also large, it can be estimated that the driving load of driver D1 is high. The driving load of driver D1 can also be estimated based on the conditions around the vehicle C1. For example, when the vehicle C1 is traveling on a winding road, a narrow road, a heavily congested road, a road with many pedestrians, or the like, it can be estimated that the driving load of driver D1 is high. On the other hand, when the vehicle is traveling along a long straight road, for example, it can be estimated that the driving load of driver D1 is low. The driving load of driver D1 can also be estimated based on the user's facial expression, the voice of driver D1, and the like.

For example, steering entropy can be used as a measure of the driving operations of driver D1. The steering entropy method is a measurement method that measures and estimates the driver's load based on the smoothness of the driver's steering angle, and a known calculation method can be used. In this steering entropy method, a measurement result (measured value) quantified as an information entropy value calculated from time-series steering angle data can be used.
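For reference, a simplified version of that calculation can be sketched as follows: each steering angle is predicted by second-order extrapolation from the three preceding samples, the prediction errors are binned around a percentile value alpha, and the Shannon entropy of the bin distribution is returned. The bin layout and the percentile estimate are simplifications of the published method, not a definitive implementation.

```python
import math

def steering_entropy(angles: list[float], alpha: float | None = None) -> float:
    """Simplified steering-entropy sketch over time-series steering angles."""
    # Prediction error: actual angle minus second-order Taylor extrapolation.
    errors = []
    for i in range(3, len(angles)):
        d1 = angles[i-1] - angles[i-2]
        d2 = angles[i-2] - angles[i-3]
        pred = angles[i-1] + d1 + 0.5 * (d1 - d2)
        errors.append(angles[i] - pred)
    if not errors:
        return 0.0
    if alpha is None:
        # Roughly the 90th-percentile error magnitude (simplified estimate).
        alpha = sorted(abs(e) for e in errors)[int(0.9 * (len(errors) - 1))] or 1e-9
    # Nine bins bounded at multiples of alpha, as in the usual formulation.
    edges = [-5*alpha, -2.5*alpha, -alpha, -0.5*alpha,
             0.5*alpha, alpha, 2.5*alpha, 5*alpha]
    counts = [0] * 9
    for e in errors:
        counts[sum(e > edge for edge in edges)] += 1
    n = len(errors)
    return -sum(c/n * math.log(c/n, 9) for c in counts if c)

# Smooth steering yields low entropy; jerky corrections raise it.
print(steering_entropy([0.0, 0.1, 0.2, 0.3, 0.4, 0.5]))  # ≈ 0.0
```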

Furthermore, as the conditions around the vehicle C1, for example, the number of traffic participants around the vehicle C1, the weather around the vehicle C1, the darkness around the vehicle C1, the road shape around the vehicle C1, and the like can be used.

The driving load of driver D1 can be obtained using the values described above. For example, a predetermined calculation (e.g., addition) can be applied to the values described above, and the result of this calculation can be used to obtain the driving load of driver D1. The driving load of driver D1 may also be obtained using at least one of the values described above, and other known methods for determining driving load can also be used.

[Example of agent image movement]
Figs. 6 and 7 are diagrams showing transition examples of the agent image AG1 when the agent image AG1 is moved from the first position PP1 to the second position SP1 on the display unit 201 of the output device 200.

Fig. 6(A) shows a transition example in which the agent image AG1 is moved from the first position PP1 to the second position SP1 while content CT2, which is not specific content, is displayed on the display unit 201. Content CT2 is, for example, music content displayed for performing various operations when listening to music. In Fig. 6(A), the movement trajectory of the agent image AG1 moving from the first position PP1 to the second position SP1 is shown schematically by agent images AG1a to AG1e.

As shown in Fig. 6(A), when the agent image AG1 moves along the linear movement path from the first position PP1 to the second position SP1, the moving agent image AG1 may overlap the displayed content CT2. For example, the moving agent images AG1b to AG1d overlap content CT2. In this case, the moving agent images AG1b to AG1d are displayed so as to overlap the front side of content CT2 (i.e., the display surface side of the display unit 201), so the occupant cannot see all or part of content CT2. Here, as described above, in the case of content other than specific content, it is assumed that even if the occupant temporarily cannot see the content, this does not interfere with driving or the like and rarely gives the occupant an unpleasant impression. Moreover, by continuing to display the moving agent image AG1, it becomes easier to recognize the identity of the agent image AG1, which is thought to increase trust in and attachment to the agent image AG1. Therefore, when the displayed content CT2 is not specific content, the output control unit 128 executes display control that moves the agent image AG1 along the linear movement path from the first position PP1 to the second position SP1.

For example, since the line of sight of each occupant of the vehicle C1 can be detected, it is possible to determine whether any occupant is looking at content CT2. For example, when one of the occupants of the vehicle C1 is looking at content CT2, continuing to display the moving agent image AG1 is thought to make it easier to recognize the identity of the agent image AG1. However, when none of the occupants of the vehicle C1 is looking at content CT2, it is assumed that no occupant will see the agent image AG1 overlapping content CT2 during the movement even if the moving agent image AG1 continues to be displayed. Therefore, when none of the occupants of the vehicle C1 is looking at content CT2, the movement presentation of the agent image AG1 may be stopped. That is, the agent image AG1 may be erased from the first position PP1 and then displayed at the second position SP1. Note that when it is detected that one of the occupants is looking at content CT2, the agent image AG1 may be moved without overlapping content CT2. An example of this is shown in Fig. 11.
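One possible reading of these options, summarized as a small decision function, is sketched below; the return labels are illustrative names, not API constants, and the mapping is an assumption drawn from the behaviour described above.

```python
def plan_agent_move(content_is_specific: bool,
                    someone_watching_content: bool) -> str:
    """Choose a movement presentation for the agent image (illustrative labels)."""
    if content_is_specific:
        # Keep the specific content visible: detour, pass behind it, or fade.
        return "avoid_specific_content"
    if someone_watching_content:
        # Keep the agent's identity recognizable without covering the content.
        return "animate_without_overlap"
    # Nobody is watching the content: the animation may be skipped entirely.
    return "erase_then_reappear"

print(plan_agent_move(False, False))  # "erase_then_reappear"
```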

Figs. 6(B), 6(C), and 7(A) to 7(C) show transition examples in which the agent image AG1 is moved from the first position PP1 to the second position SP1 while content CT1, which is specific content, is displayed on the display unit 201. Fig. 6(B) shows an example in which the agent image AG1 is moved behind content CT1 (i.e., on the side opposite the display surface of the display unit 201).

As shown in Fig. 6(B), when the agent image AG1 moves along the linear movement path from the first position PP1 to the second position SP1, the moving agent image AG1 overlaps the displayed content CT1. In this case, when the displayed content CT1 is specific content, it is important not to impair the visibility of content CT1. Therefore, when the displayed content CT1 is specific content, the output control unit 128 executes display control that adopts a display mode in which the agent image AG1 moves behind content CT1 (i.e., on the side opposite the display surface of the display unit 201) along the linear movement path from the first position PP1 to the second position SP1. This makes it possible to maintain the identity of the moving agent image AG1 while not impairing the visibility of content CT1.

Fig. 6(C) shows an example in which a transparent or semi-transparent agent image AG1 is moved across the front side of content CT1 (i.e., the display surface side of the display unit 201).

As shown in Fig. 6(C), when the displayed content CT1 is specific content, the output control unit 128 executes display control that adopts a display mode in which a transparent or semi-transparent agent image AG1 moves across the front side of content CT1 (i.e., the display surface side of the display unit 201) along the linear movement path from the first position PP1 to the second position SP1. This makes it possible to maintain the identity of the moving agent image AG1 while not impairing the visibility of content CT1. The degree of transparency of the agent image AG1 can be set according to the importance of the overlapped content. For example, in the case of specific content that may not legally be hidden, the importance is determined to be high and the transparency of the agent image AG1 is set high; that is, the agent image AG1 is set to be completely transparent. On the other hand, in the case of specific content for which partial occlusion is considered acceptable, for example, the importance is determined to be low and the transparency of the agent image AG1 is set low.
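One possible mapping from the importance of the overlapped specific content to the agent image's opacity is sketched below; the numeric alpha values are assumptions.

```python
def agent_opacity(importance: str) -> float:
    """Opacity (0.0 = fully transparent) of the agent image while it crosses
    the front side of specific content; the values are illustrative."""
    return {
        "legally_mandated": 0.0,  # must never be hidden: fully transparent
        "high": 0.25,             # highly transparent
        "low": 0.6,               # partial occlusion considered acceptable
    }.get(importance, 1.0)        # non-specific content: fully opaque

print(agent_opacity("legally_mandated"))  # 0.0
```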

Figs. 7(A) and 7(B) show examples in which the agent image AG1 is moved so as to bypass content CT1.

As shown in Figs. 7(A) and 7(B), when the displayed content CT1 is specific content, the output control unit 128 executes display control that adopts a display mode in which the agent image AG1 moves along a detour route that bypasses content CT1 as the movement path from the first position PP1 to the second position SP1. In Figs. 7(A) and 7(B), the moving agent image AG1 is shown as agent images AG1a to AG1f.

Here, as shown in Figs. 7(A) and 7(B), when the agent image AG1 is moved along a detour route, the extent of the detour route may be narrower than the display region of the agent image AG1. In this case, it is possible to reduce the size of the moving agent image AG1, to adopt a display mode in which part of the moving agent image AG1 is displayed behind content CT1 (i.e., on the side opposite the display surface of the display unit 201), or to adopt a display mode in which part of it is displayed outside the display surface of the display unit 201.

The example shown in Fig. 7(A) adopts a display mode in which the size of the moving agent image AG1 is reduced and part of the moving agent image AG1 is displayed outside the display surface of the display unit 201.

The example shown in Fig. 7(B) adopts a display mode in which the size of the moving agent image AG1 is reduced and parts of the moving agent image AG1 are displayed behind content CT1 (i.e., on the side opposite the display surface of the display unit 201) and outside the display surface of the display unit 201.

Fig. 7(C) shows an example in which the agent image AG1 is moved so as to bypass content CT3, which is specific content.

As shown in Fig. 7(C), when the displayed content CT3 is specific content, the output control unit 128 executes display control that adopts a display mode in which the agent image AG1 moves along a detour route that bypasses content CT3 as the movement path from the first position PP1 to the second position SP1. In Fig. 7(C), the moving agent image AG1 is shown as agent images AG1a to AG1d.

Here, as shown in Fig. 7(C), when the agent image AG1 is moved along a detour route, the extent of the detour route may be larger than the display region of the agent image AG1. In this case, the size of the moving agent image AG1 can be maintained.

It is also conceivable that, on the detour route, the agent image AG1 does not overlap the specific content but passes close to it. For example, in the example shown in Fig. 7(A), the lower portions of the moving agent images AG1b to AG1e are close to the upper portion of the specific content CT1. In such a case, the proximity of the agent image AG1 to the specific content may also reduce the visibility of the content. Therefore, when there is a proximity portion on the detour route where the agent image AG1 comes close to the specific content, a display mode may be adopted in which the agent image AG1 is made transparent or semi-transparent in that proximity portion. The distance used for the proximity determination can be set appropriately based on experiments, simulations, or the like; for example, a distance of several millimeters to several centimeters can be used as the criterion for determining proximity.
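This proximity determination can be illustrated as a distance test between the sampled movement path and the content's bounding box; the millimeter-to-pixel conversion factor and threshold below are assumptions.

```python
import math

def point_rect_distance(x: float, y: float, rect: tuple) -> float:
    """Distance from a point to rect = (left, top, right, bottom); 0 if inside."""
    left, top, right, bottom = rect
    dx = max(left - x, 0.0, x - right)
    dy = max(top - y, 0.0, y - bottom)
    return math.hypot(dx, dy)

def path_too_close(path_points, rect: tuple, agent_radius: float,
                   limit_mm: float, px_per_mm: float) -> bool:
    """True if the agent's edge comes within limit_mm of the content anywhere
    along the sampled detour path (a few mm to cm threshold, per the text)."""
    limit_px = limit_mm * px_per_mm
    return any(point_rect_distance(x, y, rect) - agent_radius < limit_px
               for x, y in path_points)

# Example: a detour skimming 5 px above a content box, tested at a 10 mm limit.
box = (400, 300, 700, 500)
detour = [(350, 255), (550, 255), (750, 255)]
print(path_too_close(detour, box, agent_radius=40, limit_mm=10, px_per_mm=4))  # True
```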

As shown in Figs. 6 and 7, when the agent image AG1 is moved, the movement trajectory of the agent image AG1 may be expressed by animation processing or the like.

These measures make it possible to maintain the identity of the moving agent image AG1 while not impairing the visibility of content CT1. The detour route and display mode of the agent image AG1 with respect to specific content can be set appropriately based on experiments, simulations, or the like.

There are also cases in which multiple routes can be set as detour routes that bypass the specific content. In such cases, the detour route can be set based on a preset criterion. For example, the detour route can be set based on the occupants' lines of sight. For example, the occupant who is looking at the specific content is identified based on each occupant's line of sight, and based on the gaze direction of that occupant, the position on the display surface of the display unit 201 that the occupant is assumed to be looking at most (the attention position) is identified. Then, from among the multiple detour routes that bypass the specific content, the detour route closest to the attention position is selected. This allows the occupant to easily perceive the movement presentation of the moving agent image AG1.
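Selecting among candidate detour routes by closeness to the attention position might look like the following sketch; representing each route as a list of points is an assumption.

```python
import math

def choose_detour(detours: list[list[tuple]], attention_point: tuple):
    """From candidate detour routes (each a list of (x, y) points), pick the
    one whose nearest point lies closest to the occupant's attention position."""
    ax, ay = attention_point
    def gap(route):
        return min(math.hypot(x - ax, y - ay) for x, y in route)
    return min(detours, key=gap)

# Example: two detours around a content box; the gaze rests below the box.
above = [(350, 250), (550, 250), (750, 250)]
below = [(350, 550), (550, 550), (750, 550)]
print(choose_detour([above, below], attention_point=(550, 600)) is below)  # True
```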

[Example of movement when multiple pieces of specific content exist on the movement path]
The above has shown examples of the movement of the agent image AG1 when one piece of displayed specific content overlaps the moving agent image AG1. It is also conceivable that, when multiple pieces of specific content are displayed on the display unit 201, two or more of them overlap the moving agent image AG1. In this case as well, the agent image AG1 can be moved in the same manner as in the examples shown in Figs. 6(B), 6(C), and 7(A) to 7(C). An example of this is shown in Fig. 8.

Fig. 8 is a diagram showing a transition example of the agent image AG1 when the agent image AG1 is moved from the first position PP1 to the second position SP1 while multiple pieces of specific content CT3 and CT4 are displayed on the display unit 201 of the output device 200.

As shown in Fig. 8, when the agent image AG1 moves along the linear movement path from the first position PP1 to the second position SP1, the moving agent image AG1 overlaps the displayed content CT3 and CT4. In this case as well, the agent image AG1 can be moved in the same manner as in the examples shown in Figs. 6(B), 6(C), and 7(A) to 7(C). Fig. 8 shows, as a modification of the example shown in Fig. 7(A), a display mode in which the size of the moving agent image AG1 is maintained and part of the moving agent image AG1 is displayed outside the display surface of the display unit 201.

[Example of movement along a detour route when the agent image moves across multiple display units]
The above has shown examples of the movement of the agent image AG1 when specific content displayed on the display unit 201 overlaps the moving agent image AG1. It is also conceivable that, when one or more pieces of specific content are displayed on multiple display units (including the display unit 201), that specific content overlaps the moving agent image AG1. In this case as well, the agent image AG1 can be moved in the same manner as in the examples shown in Figs. 6(B), 6(C), 7(A) to 7(C), and 8. An example of this is shown in Fig. 9.

Fig. 9 is a diagram showing a transition example of the agent image AG1 when the agent image AG1 is moved from the first position PP1 to the second position SP1 while specific content CT5 is displayed on the display unit 201 of the output device 200 and specific content CT6 is displayed in the HUD display area of the windshield 4.

Here, when various images are displayed in the HUD display area of the windshield 4, a HUD display device for realizing the HUD is provided at the top of the dashboard 2, near the boundary with the windshield 4.

The HUD display device is a display device, such as a projector and optical system, for realizing a HUD display that projects light onto the display area 4a of the windshield 4 and uses the reflected light to show a virtual image to driver D1. That is, the light projected from the HUD display device onto the display area 4a of the windshield 4 is reflected by the windshield 4, and the reflected light travels toward the eyes of driver D1. The reflected light projected onto the display area 4a and entering the eyes of driver D1 is displayed superimposed on the actual objects visible through the windshield 4. In this way, the HUD display device realizes a HUD display by displaying a virtual image using the windshield 4.

For example, the HUD display device displays the agent image AG1 in the display area 4a under the control of the information processing device 110 (see Fig. 5). The windshield 4 thus functions as a display medium of the HUD of the vehicle C1.

As shown in Fig. 9, when the agent image AG1 moves along a linear movement path (a straight line in three-dimensional space, or a straight line in two-dimensional space based on the occupant's vision) from the first position PP1 to the second position SP1, the moving agent image AG1 overlaps the displayed content CT5 and CT6. In this case as well, the agent image AG1 can be moved in the same manner as in the examples shown in Figs. 6(B), 6(C), 7(A) to 7(C), and 8. Similarly to the example shown in Fig. 8, Fig. 9 shows a display mode in which the size of the moving agent image AG1 is maintained and part of the moving agent image AG1 is displayed outside the display surface of the display unit 201. Here, when the agent image AG1 is moved between multiple physically separated display units, it is preferable to move it along a detour route or the like set within a range that the occupant viewing the agent image AG1 can readily imagine. For example, if the detour route would result in a movement presentation that is difficult to follow, the agent image AG1 can also be displayed in a transparent or semi-transparent mode.

Fig. 9 shows an example in which specific content is displayed on the display unit 201 and in the HUD display area of the windshield 4, but the configuration is not limited to this. For example, when a display unit is provided on the steering wheel 3, it is also conceivable that specific content is displayed on that display unit. In this case, the movement processing of the agent image AG1 can be executed based on the overlap state between the moving agent image AG1 and the specific content displayed on at least one of the display unit 201, the HUD display area of the windshield 4, and the display unit of the steering wheel 3.

In this way, the present embodiment can be understood as a display system including at least one or more display units that display the agent image AG1 and a control unit that controls the position at which the agent image AG1 is displayed on any of the display units.

The above has shown examples in which the movement processing of the agent image AG1 is changed when the agent image AG1 moving along a linear movement path overlaps specific content. Here, for example, even when the agent image AG1 moving along a linear movement path does not overlap the specific content but passes close to it, it is conceivable that the visibility of the specific content for an occupant looking at it is reduced by the influence of the nearby agent image AG1. Therefore, the movement processing of the agent image AG1 may be changed in the same way when the agent image AG1 moving along a linear movement path comes close to specific content. For example, it is determined whether the linear movement path passes through a region within a predetermined distance of the specific content (a proximity region); when it is determined that the linear movement path passes through that proximity region, the agent image AG1 is moved along a movement path (detour route) that passes through positions farther away than that predetermined distance. Alternatively, the agent image AG1 may be moved with the display mode changed within the proximity region. The movement processing in these cases is the same as the movement processing shown in Figs. 6 to 9 and elsewhere.
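The proximity-region test on the original straight path can be written by inflating the content's bounding box by the agent radius plus the predetermined distance, then checking whether the sampled line enters it; all names and values below are illustrative.

```python
def needs_detour(p1: tuple, p2: tuple, rect: tuple, agent_radius: float,
                 proximity_px: float, samples: int = 64) -> bool:
    """True if the straight path from p1 to p2 enters the proximity region
    (content box inflated by the agent radius plus the predetermined distance)."""
    left, top, right, bottom = rect
    m = agent_radius + proximity_px
    for i in range(samples + 1):
        t = i / samples
        x = p1[0] + t * (p2[0] - p1[0])
        y = p1[1] + t * (p2[1] - p1[1])
        if left - m <= x <= right + m and top - m <= y <= bottom + m:
            return True
    return False

# A path that misses the box itself but grazes its proximity region.
print(needs_detour((100, 260), (1000, 260), (400, 300, 700, 500),
                   agent_radius=20, proximity_px=30))  # True
```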

 [Example Operation of the Information Processing System]
 FIGS. 10 and 11 are flowcharts showing an example of the agent movement process in the information processing system 100. This agent movement process is executed by the control unit 120 (see FIG. 5) based on a program stored in the storage unit 130 (see FIG. 5), and is executed continuously every control cycle. The description of this agent movement process refers to FIGS. 1 to 9 as appropriate.

 In step S501, the output content determination unit 126 determines whether a user input has been received. For example, if a manual operation by any occupant is received by the reception unit 203, if a voice operation by any occupant is received by the speech acquisition unit 121, or if a gesture operation by any occupant is received by the driver status acquisition unit 122 or the vehicle interior status acquisition unit 123, it is determined that a user input has been received. If a user input has been received, the process proceeds to step S502. Otherwise, monitoring continues.

 In step S502, the output content determination unit 126 determines whether the user input received in step S501 is a voice input. That is, if the speech acquisition unit 121 has received a voice operation from any occupant, the user input is determined to be a voice input. If the user input is a voice input, the process proceeds to step S503. Otherwise, the process proceeds to step S510 (see FIG. 11).

 In step S503, the output content determination unit 126 acquires the current display position (first position) of the agent image AG1 and determines the destination display position (second position) of the agent image AG1. The first position can be acquired from the display position of the agent image AG1 stored in the agent management DB 132. For example, using the position of the occupant who made the user input (e.g., the hand position or the eye position) as a reference, the position on the surface of the display unit 201 closest to that occupant's position can be determined as the second position. The occupant who made the user input can be identified based on the relationship between the timing at which the utterance was acquired by the speech acquisition unit 121 and the occupants' mouth movements acquired by the driver status acquisition unit 122 or the vehicle interior status acquisition unit 123. For example, if an occupant is moving their mouth at the time the utterance was made, it can be assumed that that occupant produced the voice input. The presence or absence of an occupant in each seat of the vehicle C1 can be determined based on detection values from seating sensors, seat belt sensors, and the like, or based on images acquired by the driver status acquisition unit 122 or the vehicle interior status acquisition unit 123. Furthermore, if multiple microphones are installed in the vehicle C1, the position where a sound originated can be identified from the sounds acquired by those microphones, and this sound-localization technique may also be used to identify the position of the occupant who made the user input.

 For example, if the user input is a voice input, it is not a manual operation, so the second position can be determined using the eye position of the occupant who made the user input, rather than the hand position, as the reference. In this case, the position on the display surface of the display unit 201 closest to that occupant's eye position can be determined as the second position. If the user input is something other than a voice input (e.g., a manual operation), the third position (described in step S511) can be determined using the hand position of the occupant who made the user input as the reference. In this case, the position on the display surface of the display unit 201 closest to that occupant's hand position can be determined as the third position (see step S511).
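 The following is an illustrative sketch, under the assumption of simple 2-D display coordinates, of how the destination (second or third position) could be derived by projecting the occupant's reference point (eye position for voice input, hand position for manual input) onto the display surface; the function names are hypothetical.

```python
def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def nearest_point_on_display(ref_point, display_rect):
    """Point on the display surface closest to a 2-D reference point."""
    x = clamp(ref_point[0], display_rect.x, display_rect.x + display_rect.w)
    y = clamp(ref_point[1], display_rect.y, display_rect.y + display_rect.h)
    return (x, y)

# Usage under these assumptions:
#   second_position = nearest_point_on_display(eye_position, display_rect)
#   third_position  = nearest_point_on_display(hand_position, display_rect)
```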

 In step S504, the movement mode determination unit 127 determines whether the user input received in step S501 was made by the driver D1. For example, while the driver D1 is driving, manual operation is often impossible, so user input is expected to be made by voice in many cases. The method for identifying the occupant who made the user input is the same as in step S503. If the user input was made by the driver D1, the process proceeds to step S505. If the user input was made by an occupant other than the driver, the process proceeds to step S506.

 In step S505, the movement mode determination unit 127 determines whether the driving load of the driver D1 who made the user input is less than a threshold. The method for detecting the driving load is the same as described above. The threshold is judgment information for determining whether the driving load of the driver D1 is high or low, and can be set appropriately based on experiments, simulations, or the like. Although an example of determining whether the driving load of the driver D1 is less than the threshold is shown here, when automated driving is not being performed, it may instead be determined whether the vehicle C1 is traveling or stopped. If the driving load of the driver D1 is less than the threshold, the process proceeds to step S506. If the driving load of the driver D1 is equal to or greater than the threshold, the process proceeds to step S510.

 In step S506, the movement mode determination unit 127 checks the relationship between the first movement trajectory of the agent image AG1 moving from the first position determined in step S503 to the second position and the one or more pieces of content displayed on the display unit 201. For example, the movement mode determination unit 127 determines whether displayed content exists between the first position and the second position, and if so, determines whether that content overlaps the first movement trajectory of the agent image AG1. The method for determining whether the first movement trajectory of the agent image AG1 moving from the first position to the second position overlaps displayed content is the same as the determination method shown in FIG. 4(B).

 In step S507, the movement mode determination unit 127 determines whether specific content exists among the displayed content overlapping the first movement trajectory of the agent image AG1 moving from the first position to the second position. If specific content overlapping the first movement trajectory exists, the process proceeds to step S508. If no specific content overlapping the first movement trajectory exists, the process proceeds to step S509.

 In step S508, the movement mode determination unit 127 determines a movement path and a display mode for moving the agent image AG1 from the first position to the second position without impairing the visibility of the specific content determined in step S507 to overlap the first movement trajectory of the agent image AG1. The output control unit 128 then moves the agent image AG1 from the first position to the second position according to the determined movement path and display mode. For example, as shown in FIGS. 6(B) and 6(C) and FIGS. 7 to 9, the agent image AG1 can be moved so as not to impair the visibility of the specific content CT1.

 In step S509, the movement mode determination unit 127 determines that the agent image AG1 is to be moved along the first movement trajectory. The output control unit 128 then moves the agent image AG1 from the first position to the second position along the first movement trajectory, that is, over the shortest distance. For example, as shown in FIG. 6(A), the agent image AG1 can be moved so as to overlap the front side of the displayed content CT2 (i.e., the display surface side of the display unit 201).

 In step S510, the movement mode determination unit 127 determines that only the position of the agent image AG1 is to be changed, without executing a movement rendering that moves the agent image AG1. That is, the movement mode determination unit 127 determines that the agent image AG1 displayed at the first position is to be erased and the agent image AG1 is then to be displayed at the second position. The output control unit 128 accordingly erases the agent image AG1 displayed at the first position and then displays the agent image AG1 at the second position.

 As described above, while the driver D1 is driving, manual operation is often impossible, so user input is expected to be made by voice in many cases. When the driving load of the driver D1 is high (equal to or greater than the threshold), the driver D1 presumably cannot afford to watch the movement of the agent image AG1, so temporarily erasing the agent image AG1 is considered to have no effect on its sense of presence. Moreover, when the driving load of the driver D1 is high, moving the agent image AG1 instantaneously from the first position to the second position is considered easier for the driver D1 to recognize than executing a movement rendering. Therefore, in step S510, only the display position of the agent image AG1 is changed, without executing a movement rendering for the agent image AG1.
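 A condensed sketch of the branching in steps S502 to S511 is shown below; the threshold value, the input and driver attributes, and move_with_avoidance (standing in for the avoidance movement of steps S506 to S509 or S512 to S516) are all assumptions for illustration, using nearest_point_on_display from the earlier sketch.

```python
LOAD_THRESHOLD = 0.7  # assumed judgment value, tuned by experiment/simulation

def handle_user_input(inp, driver, agent, display):
    if inp.is_voice:
        # S503: eye position is the reference for voice input.
        dest = nearest_point_on_display(inp.eye_position, display.rect)
        if inp.from_driver and driver.load >= LOAD_THRESHOLD:
            # S510: no movement rendering; erase at the first position,
            # then show the agent image at the second position.
            agent.erase()
            agent.show_at(dest)
            return
    else:
        # S511: hand position is the reference for manual input.
        dest = nearest_point_on_display(inp.hand_position, display.rect)
    # S506-S509 (or S512-S516): animated move that avoids specific content.
    move_with_avoidance(agent, agent.position, dest, display.contents)
```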

 In step S511, the output content determination unit 126 acquires the current display position (first position) of the agent image AG1 and determines the destination display position (third position) of the agent image AG1. Since the user input here is assumed to be a manual input rather than a voice input, the hand position is used as the reference position of the occupant who made the user input, and the position on the surface of the display unit 201 closest to that occupant's hand position is determined as the third position. The occupant who made the user input (manual input) can be identified based on the relationship between the timing at which the operation was accepted by the reception unit 203 and the occupants' hand movements acquired by the driver status acquisition unit 122 or the vehicle interior status acquisition unit 123. For example, if an occupant is moving their hand near the display unit 201 at the time the operation is accepted by the reception unit 203, it can be assumed that that occupant performed the manual operation as the user input. When a touch operation is performed on the display surface of the display unit 201, the hand position can be taken as the position where the touch operation was performed.

 For example, a selection operation that selects an operation button related to the content displayed on the display unit 201 may be performed as the user input. In this case, in order to assist with that operation, the agent image AG1 can be moved near the operation button that was selected.

 The processes in steps S512 to S514 and S516 correspond to the processes in steps S506 to S509, so their description is omitted here.

 In step S515, the movement mode determination unit 127 determines whether an occupant is looking at the displayed content (other than the specific content) that was determined in step S513 to overlap the first movement trajectory of the agent image AG1. As described above, while the driver D1 is driving, manual operation is often impossible, so the occupants targeted by the processes in steps S511 to S516 are expected to be occupants other than the driver D1 (or the driver D1 with a driving load below the threshold). Such occupants may also be enjoying the content displayed on the display unit 201. When an occupant is watching displayed content, it is therefore also important not to impair the visibility of that content, even if it is not specific content. Accordingly, when an occupant is watching displayed content, a movement rendering that does not impair the visibility of that content is executed. This movement rendering can be the same as the movement rendering used for specific content. That is, in step S514, the content the occupant is watching is targeted by the avoidance movement rendering together with the specific content.

 The above shows examples in which the movement path or display mode of the agent image AG1 is changed so as not to impair the visibility of the specific content, but the present embodiment is not limited to these. For example, the visibility of the specific content may be preserved by changing the movement timing or the movement speed of the agent image AG1. Moving the agent image AG1 at a speed faster than the normal movement speed makes it possible not to impair the visibility of the specific content: even when a portion of the movement path overlaps the specific content, moving the agent image AG1 quickly through that path shortens the time during which the specific content is hidden. Alternatively, it may be determined whether any occupant of the vehicle C1 is looking at the specific content, and the agent image AG1 may be moved (at the normal speed, or faster than normal) at a timing when no occupant is looking at the specific content. For example, when the agent image AG1 is moved along a movement path that includes a portion overlapping the specific content, moving the agent image AG1 at a timing when the occupants are not looking at the specific content keeps the occupants from feeling that the specific content has been hidden.
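 The speed and timing variants can be sketched as follows; the speed constants and the gaze flag are assumptions, and the selection logic is only one plausible reading of the behavior described above.

```python
NORMAL_SPEED = 300.0  # px/s, assumed first movement speed
FAST_SPEED = 900.0    # px/s, assumed faster second movement speed

def plan_move(overlaps_specific: bool, occupant_watching: bool):
    """Return (speed, wait_for_gaze_to_leave) for the agent's move."""
    if not overlaps_specific:
        return NORMAL_SPEED, False
    if occupant_watching:
        # Either defer the move until the gaze leaves the content (second
        # movement timing) or cross the overlap quickly (second speed).
        return FAST_SPEED, True
    # Nobody is watching: move now, quickly, so the occlusion stays short.
    return FAST_SPEED, False
```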

 Although FIGS. 10 and 11 show an example in which the control content is changed depending on whether the user input is a voice input, the present embodiment is not limited to this and may be realized with other control examples. For example, the agent movement process (steps S503 to S510, or steps S511 to S516) may be executed whenever any user input is made. FIG. 10 also shows a control process that changes the movement mode of the agent image AG1 based on the determination of whether the driver's driving load is high or low (step S505); however, the present embodiment is not limited to this, and the determination of the driver D1's driving load (steps S505, S510) may be omitted. Similarly, FIG. 11 shows a control process that changes the movement mode of the agent image AG1 based on the determination of whether an occupant is viewing content (step S515); this determination (step S515) may likewise be omitted.

 An example has been shown in which a control process that changes the movement mode of the agent image AG1 is executed based on, for example, whether the agent image AG1 overlaps specific content. However, this control process may be executed by other methods. For example, the control process can be executed using artificial intelligence (AI). Various situations concerning the driver D1, the vehicle C1, and the like, the agent movement modes to be executed in response to those situations, and the overlap states between the agent image AG1 and specific content can be learned in advance, and this learned data can be used in the control process. That is, the agent's movement mode can be determined from the learned data for the situations that arise concerning the driver D1, the vehicle C1, and the like, and for the overlap state between the agent image AG1 and the specific content, and the agent image AG1 can then be moved in the determined movement mode.

 [Example Effects of the Present Embodiment]
 As described above, in the present embodiment, when the agent image AG1 is moved to a position corresponding to a user input in response to that input, the agent image AG1 can be moved so as not to impair the visibility of specific content (important content). That is, when specific content is displayed along the shortest movement path of the agent image AG1, the agent image AG1 is moved so as to avoid that specific content and not impair its visibility. For example, it is possible to move the agent image AG1 along a detour route, change the shape of the agent image AG1, change its transparency, change its movement speed, or change its movement timing.

 As a result, the occupant who made the user input can confirm, visually (or through its sense of presence), the agent image AG1 moving in response to that input, while the visibility of the specific content is not impaired. This allows the agent image AG1 to provide appropriate operational assistance to the user. It also makes it possible to stage the agent image AG1, which moves without hiding specific content, in a way that conveys intelligence. This can increase trust in and attachment to the agent image AG1 and enhance its ability to convey information to each occupant. In this way, in the present embodiment, the agent image AG1 can be moved appropriately on the display unit 201 so as not to impair the visibility of specific content.

 [Examples of Executing the Processes on Other Devices or Systems]
 Although the above describes an example in which the determination process, detection process, control process, and the like are executed in the information processing device 110 (or the information processing system 100), all or part of each of these processes may be executed on other devices. In that case, an information processing system is configured by the devices each executing part of these processes. For example, at least part of each process can be executed using various information processing devices and electronic devices such as in-vehicle equipment, devices available to the user (e.g., a smartphone, tablet terminal, personal computer, car navigation device, or IVI), and servers connectable via a predetermined network such as the Internet.

 Furthermore, part (or all) of an information processing system capable of executing the functions of the information processing device 110 (or the information processing system 100) may be provided by an application deliverable via a predetermined network such as the Internet, for example as SaaS (Software as a Service).

 [Configuration Examples of the Present Embodiment and Their Effects]
 The information processing method according to the present embodiment is an information processing method that displays, on the display unit 201 (including each display unit such as the HUD display area of the front window 4), the agent image AG1 that communicates with an occupant riding in the vehicle C1, and controls the display state of the agent image AG1 based on the state of the occupant. This information processing method includes: a determination process (steps S506, S507, S512, S513) of determining, when the agent image AG1 displayed at a first position (initial position) on the display unit 201 moves to a second position (a position reached in response to a user input) based on the state of the occupant, whether movement of the agent image AG1 along a first path, which is a linear movement path from the first position to the second position, will impair the visibility of specific content, based on the first path and the content displayed on the display unit 201; and a control process (steps S508, S509, S514, S516) of moving the agent image AG1 along the first path in a first display mode when the visibility of the specific content is not impaired (steps S509, S516), and, when the visibility of the specific content would be impaired, executing (steps S508, S514) one of: a first movement process of moving the agent image AG1 along a second path, which is a movement path different from the first path and which does not impair the visibility of the specific content; a second movement process of moving the agent image AG1 from the first position to the second position in a second display mode, which is a display mode different from the first display mode and which does not impair the visibility of the specific content; and a third movement process of moving the agent image AG1 from the first position to the second position at a second movement timing that differs from the first movement timing in the first display mode and does not impair the visibility of the specific content, or at a second movement speed that differs from the first movement speed in the first display mode and does not impair the visibility of the specific content. For example, the first movement process executes the movement processes shown in FIGS. 7 to 9, and the second movement process executes the movement processes shown in FIGS. 6(B) and 6(C). The program according to the present embodiment is a program that causes a computer to execute each of these processes; in other words, it is a program that causes a computer to realize each function executable by the information processing device 110.

 According to this configuration, the agent image AG1 moving based on the occupant's state can be confirmed visually (or, if transparent, through its sense of presence), while the visibility of the specific content is not impaired. That is, the agent image AG1 can be moved appropriately on the display unit 201 so as not to impair the visibility of specific content.

 In the information processing method according to the present embodiment, the first movement process moves the agent image AG1 using a detour route that bypasses the specific content as the second path. For example, the agent image AG1 can be moved by the movement processes shown in FIGS. 7 to 9.

 According to this configuration, the agent image AG1 detouring around the specific content can be visually confirmed near the specific content.

 In the information processing method according to the present embodiment, in the first movement process, when a portion exists on the detour route where the agent image AG1 overlaps the specific content, or a portion exists where the agent image AG1 comes close to the specific content, the display mode renders the agent image AG1 in the overlapping or adjacent portion transparent or semi-transparent, or displays the agent image AG1 in the overlapping portion as if it were placed behind the specific content (farther away in the depth direction of the display unit 201). For example, as shown in FIG. 7(B), when a portion exists on the detour route where the agent image AG1 overlaps the specific content CT1, the agent image AG1 in that overlapping portion can be displayed as if it were placed behind the specific content CT1.

 According to this configuration, when the agent image AG1 overlaps or comes close to the specific content on the detour route, the specific content can be prevented from becoming difficult to see due to that overlap or proximity.

 In the information processing method according to the present embodiment, in the first movement process, when a portion of the agent image AG1 extends beyond the display surface of the display unit 201 on the detour route, that protruding portion of the agent image AG1 is not displayed. For example, as shown in FIG. 7(A), when portions of the agent image AG1 extend beyond the display surface of the display unit 201 on the detour route (the upper portions of the agent images AG1b to AG1e), those protruding portions of the agent image AG1 can be left undisplayed.

 According to this configuration, even when the path available to the agent image AG1 on the detour route is narrow, the agent image AG1 can be displayed while its size and other attributes are maintained.
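 As a minimal sketch of this clipping behavior (an assumed implementation, reusing the Rect type from the earlier sketch):

```python
def visible_portion(agent_rect, display_rect):
    """Clip the agent image to the display surface; any part protruding on
    the detour route is simply not drawn (cf. FIG. 7(A))."""
    x1 = max(agent_rect.x, display_rect.x)
    y1 = max(agent_rect.y, display_rect.y)
    x2 = min(agent_rect.x + agent_rect.w, display_rect.x + display_rect.w)
    y2 = min(agent_rect.y + agent_rect.h, display_rect.y + display_rect.h)
    if x2 <= x1 or y2 <= y1:
        return None  # agent is entirely off the display surface
    return Rect(x1, y1, x2 - x1, y2 - y1)
```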

 In the information processing method according to the present embodiment, the determination process (steps S506, S507, S512, S513) may determine that the moving agent image AG1 impairs the visibility of the specific content when the agent image AG1 moving along the first path comes close to or overlaps the specific content. In this case, the second movement process moves the agent image AG1 along the first path, and the second display mode can be a display mode in which the agent image AG1 in the portion overlapping the specific content is displayed as if it were placed behind the specific content, or a display mode in which the agent image AG1 in the portion close to or overlapping the specific content is rendered transparent or semi-transparent. For example, as shown in FIG. 6(B), the agent image AG1 can be moved along the first path with the agent image AG1 in the overlapping portion displayed as if placed behind the specific content; or, as shown in FIG. 6(C), the agent image AG1 in the overlapping portion can be rendered transparent or semi-transparent. The agent image AG1 in a portion close to the specific content may likewise be rendered transparent or semi-transparent.

 According to this configuration, the agent image AG1 can be moved along the first path, and the portions close to or overlapping the specific content can be displayed as if placed behind the specific content, or rendered transparent or semi-transparent. Since this allows the agent image AG1 to move along the same movement path as it would for content other than specific content, the movement rendering is easy for the occupant to predict.
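 One way to realize this second display mode per rendered frame is sketched below; the alpha value and the draw callbacks are assumptions. Note that the publication describes the behind-the-content placement and the transparency change as alternatives; the sketch shows them combined in one branch only for brevity.

```python
OVERLAP_ALPHA = 0.3  # assumed semi-transparency for the overlapping portion

def rects_overlap(a, b):
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.h and b.y < a.y + a.h)

def render_frame(agent_rect, specific_rects, draw_agent, draw_content):
    if any(rects_overlap(agent_rect, r) for r in specific_rects):
        # Draw the agent first and the content afterwards, so the agent
        # appears to pass behind the content; its alpha is also lowered.
        draw_agent(agent_rect, alpha=OVERLAP_ALPHA)
        for r in specific_rects:
            draw_content(r)
    else:
        for r in specific_rects:
            draw_content(r)
        draw_agent(agent_rect, alpha=1.0)
```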

 In the information processing method according to the present embodiment, the third movement process may move the agent image AG1 at a second movement speed faster than the first movement speed. However, when the specific content is information that is legally required to remain displayed, this movement control is not executed.

 According to this configuration, even when a portion exists where the agent image AG1 overlaps the specific content, moving the agent image AG1 quickly along the movement path including that overlapping portion shortens the time during which the specific content is hidden. The agent image AG1 can thereby be moved appropriately on the display unit 201 without impairing the visibility of the specific content.

 The information processing method according to the present embodiment may further include a detection process of detecting the occupant's line of sight based on an image including the occupant's eyes. For example, the movement mode determination unit 127 can detect the line of sight of an occupant (including the driver D1) based on images acquired by the driver status acquisition unit 122 and the vehicle interior status acquisition unit 123; known gaze detection techniques can be used for this. In the third movement process, whether the occupant is looking at the specific content is then determined based on the occupant's line of sight, and a time when the occupant is not looking at the specific content may be set as the second movement timing.

 According to this configuration, when the agent image AG1 is moved along a movement path including a portion overlapping the specific content, the agent image AG1 can be moved at a timing when the occupant is not looking at the specific content. The agent image AG1 can thereby be moved appropriately on the display unit 201 without impairing the visibility of the specific content.

 In the information processing method according to the present embodiment, the specific content can be at least one of: map content displayed to guide the traveling direction of the vehicle C1; route information content indicating, on the map content, the movement route from the current location of the vehicle C1 to the destination; traveling control information content relating to traveling control of the vehicle C1; abnormality information content relating to the occurrence of an abnormality in the vehicle C1; and information that is legally required to remain displayed.

 According to this configuration, important information can be designated as specific content.

 The information processing method according to the present embodiment may further include a driving load determination process (step S505) of determining the driving load of the driver D1 based on at least one of environment information relating to the environment outside the vehicle C1 and driving behavior information relating to the driving behavior of the driver D1. In this case, the determination process (steps S506, S507) determines whether the visibility of the specific content would be impaired when the driving load is less than a threshold. In the control process, when the driving load is equal to or greater than the threshold, the agent image AG1 displayed at the first position is erased and the agent image AG1 is then displayed at the second position (step S510); when the driving load is less than the threshold, the movement process of the agent image AG1 is executed based on the determination result of the determination process (steps S508, S509).

 For example, when the driving load of the driver D1 is high, the driver D1 presumably cannot afford to watch the movement of the agent image AG1, so temporarily erasing the agent image AG1 is considered to have no effect on its sense of presence. Moreover, when the driving load of the driver D1 is high, moving the agent image AG1 instantaneously from the first position to the second position is considered easier for the driver D1 to recognize than executing a movement rendering. In this way, the movement rendering of the agent image AG1 can be executed appropriately based on the driving load of the driver D1.

 The information processing device 110 is an information processing device that displays, on the display unit 201 (including each display unit such as the HUD display area of the front window 4), the agent image AG1 that communicates with an occupant riding in the vehicle C1, and controls the display state of the agent image AG1 based on the state of the occupant. The information processing device 110 includes: the movement mode determination unit 127 (an example of a determination unit) that determines, when the agent image AG1 displayed at a first position (initial position) on the display unit 201 moves to a second position (a position reached in response to a user input) based on the state of the occupant, whether movement of the agent image AG1 along a first path, which is a linear movement path from the first position to the second position, will impair the visibility of specific content, based on the first path and the content displayed on the display unit 201; and the output control unit 128 (an example of a control unit) that moves the agent image AG1 along the first path in a first display mode when the visibility of the specific content is not impaired, and, when the visibility of the specific content would be impaired, executes one of: a first movement process of moving the agent image AG1 along a second path, which is a movement path different from the first path and which does not impair the visibility of the specific content; a second movement process of moving the agent image AG1 from the first position to the second position in a second display mode, which is a display mode different from the first display mode and which does not impair the visibility of the specific content; and a third movement process of moving the agent image from the first position to the second position at a second movement timing that differs from the first movement timing in the first display mode and does not impair the visibility of the specific content, or at a second movement speed that differs from the first movement speed in the first display mode and does not impair the visibility of the specific content. The information processing device 110 may be a device built into the output device 200 or a device separate from the output device 200. Instead of the information processing device 110, an information processing system composed of multiple devices capable of executing the processes realized by the information processing device 110 may also be used.

 According to this configuration, the agent image AG1 moving based on the occupant's state can be confirmed visually (or, if transparent, through its sense of presence), while the visibility of the specific content is not impaired. That is, the agent image AG1 can be moved appropriately on the display unit 201 so as not to impair the visibility of specific content.

 The processing procedures described in the present embodiment are examples for realizing the present embodiment; to the extent that the present embodiment can still be realized, the order of some steps may be changed, some steps may be omitted, and other steps may be added.

 Each process of the present embodiment is executed based on a program for causing a computer to execute the corresponding processing procedures. The present embodiment can therefore also be understood as embodiments of a program realizing the functions that execute these processes and of a recording medium storing that program. For example, the program can be stored in the storage device of an information processing device through an update process for adding a new function to the device, enabling the updated information processing device to perform each process described in the present embodiment.

 Although an embodiment of the present invention has been described above, the above embodiment merely illustrates an application example of the present invention and is not intended to limit the technical scope of the present invention to the specific configuration of the above embodiment.

Claims (11)

1. An information processing method for displaying, on a display unit, an agent image that communicates with an occupant riding in a vehicle, and controlling a display state of the agent image based on a state of the occupant, the information processing method comprising:
a determination process of determining, when the agent image displayed at a first position on the display unit moves to a second position based on the state of the occupant, whether movement of the agent image along a first route, which is a linear movement route from the first position to the second position, impairs visibility of specific content, based on the first route and content displayed on the display unit; and
a control process of moving the agent image along the first route in a first display mode when the visibility of the specific content is not impaired, and, when the visibility of the specific content is impaired, executing any one of:
a first movement process of moving the agent image along a second route, which is a movement route different from the first route and which does not impair the visibility of the specific content;
a second movement process of moving the agent image from the first position to the second position in a second display mode, which is a display mode different from the first display mode and which does not impair the visibility of the specific content; and
a third movement process of moving the agent image from the first position to the second position at a second movement timing that is different from a first movement timing in the first display mode and does not impair the visibility of the specific content, or at a second movement speed that is different from a first movement speed in the first display mode and does not impair the visibility of the specific content.
2. The information processing method according to claim 1, wherein, in the first movement process, the agent image is moved using a detour route that bypasses the specific content as the second route.
3. The information processing method according to claim 2, wherein, in the first movement process, when a portion exists on the detour route where the agent image overlaps the specific content, or a portion exists on the detour route where the agent image comes close to the specific content, a display mode is used in which the agent image in the overlapping portion or the adjacent portion is rendered transparent or semi-transparent, or in which the agent image in the overlapping portion is displayed as if it were placed behind the specific content.
4. The information processing method according to claim 2, wherein, in the first movement process, when a portion of the agent image extends beyond a display surface of the display unit on the detour route, the protruding portion of the agent image is not displayed.
5. The information processing method according to claim 1, wherein:
in the determination process, when the agent image moving along the first route comes close to or overlaps the specific content, it is determined that the moving agent image impairs the visibility of the specific content; and
in the second movement process, the agent image is moved along the first route, and the second display mode is a display mode in which the agent image in a portion overlapping the specific content is displayed as if it were placed behind the specific content, or a display mode in which the agent image in a portion close to or overlapping the specific content is rendered transparent or semi-transparent.
6. The information processing method according to claim 1, wherein, in the third movement process, the agent image is moved at a movement speed faster than the first movement speed as the second movement speed.
7. The information processing method according to claim 1, further comprising a detection process of detecting a line of sight of the occupant based on an image including the occupant's eyes, wherein, in the third movement process, whether the occupant is looking at the specific content is determined based on the line of sight of the occupant, and a time when the occupant is not looking at the specific content is set as the second movement timing.
8. The information processing method according to any one of claims 1 to 7, wherein the specific content is at least one of: map content displayed to guide a traveling direction of the vehicle; route information content indicating, on the map content, a movement route from a current location of the vehicle to a destination; traveling control information content relating to traveling control of the vehicle; abnormality information content relating to occurrence of an abnormality in the vehicle; and information content that is legally required to remain displayed.
9. The information processing method according to any one of claims 1 to 7, further comprising a driving load determination process of determining a driving load of the driver based on at least one of environment information relating to an environment outside the vehicle and driving behavior information relating to a driving behavior of the driver, wherein:
in the determination process, when the driving load is less than a threshold, it is determined whether the visibility of the specific content is impaired; and
in the control process, when the driving load is equal to or greater than the threshold, the agent image displayed at the first position is erased and the agent image is then displayed at the second position, and, when the driving load is less than the threshold, a movement process of the agent image is executed based on a determination result of the determination process.
An information processing device that displays, on a display unit, an agent image for communicating with an occupant of a vehicle and controls a display state of the agent image based on a state of the occupant, the device comprising:
a determination unit that, when the agent image displayed at a first position on the display unit is to move to a second position based on the state of the occupant, determines whether movement of the agent image along a first path, which is a straight movement path from the first position to the second position, would obstruct the visibility of specific content, based on the first path and on content displayed on the display unit; and
a control unit that,
when the visibility of the specific content would not be obstructed, moves the agent image along the first path in a first display mode, and,
when the visibility of the specific content would be obstructed, executes any one of:
a first movement process of moving the agent image along a second path, which is a movement path different from the first path and does not obstruct the visibility of the specific content;
a second movement process of moving the agent image from the first position to the second position in a second display mode, which is a display mode different from the first display mode and does not obstruct the visibility of the specific content; and
a third movement process of moving the agent image from the first position to the second position at a second movement timing, which is different from a first movement timing in the first display mode and does not obstruct the visibility of the specific content, or at a second movement speed, which is different from a first movement speed in the first display mode and does not obstruct the visibility of the specific content.
An information processing device.
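The determination and the first movement process lend themselves to a simple geometric reading: test whether the straight first path crosses the bounds of the specific content and, if so, route around it. The sketch below is one such reading under editorial assumptions (the sampling granularity, detour margin, and axis-aligned detour are all illustrative; a fuller implementation would use an exact segment/rectangle clip and also verify the detour's vertical legs):

```python
def segment_hits_rect(p1, p2, rect, steps: int = 64) -> bool:
    """Does the straight segment p1 -> p2 (the first path) pass through
    rect = (x, y, w, h), the bounds of the specific content?

    Point sampling is a deliberately simple stand-in for an exact
    segment/AABB clip such as Liang-Barsky.
    """
    rx, ry, rw, rh = rect
    for i in range(steps + 1):
        t = i / steps
        x = p1[0] + t * (p2[0] - p1[0])
        y = p1[1] + t * (p2[1] - p1[1])
        if rx <= x <= rx + rw and ry <= y <= ry + rh:
            return True
    return False

def plan_path(p1, p2, content):
    """Waypoints for the agent image's animation.

    Returns the straight first path when it stays clear of the specific
    content, otherwise a two-leg detour (one possible second path)
    routed past whichever horizontal edge of the content is nearer.
    """
    if not segment_hits_rect(p1, p2, content):
        return [p1, p2]
    x, y, w, h = content
    margin = 10.0
    above, below = y - margin, y + h + margin
    y_mid = (p1[1] + p2[1]) / 2
    # Detour above or below the content, whichever is closer; the
    # vertical legs are not re-checked here for brevity.
    y_detour = above if abs(y_mid - above) < abs(y_mid - below) else below
    return [p1, (p1[0], y_detour), (p2[0], y_detour), p2]
```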
A program for causing a computer to display, on a display unit, an agent image for communicating with an occupant of a vehicle and to control a display state of the agent image based on a state of the occupant, the program causing the computer to execute:
a determination process of, when the agent image displayed at a first position on the display unit is to move to a second position based on the state of the occupant, determining whether movement of the agent image along a first path, which is a straight movement path from the first position to the second position, would obstruct the visibility of specific content, based on the first path and on content displayed on the display unit; and
a control process of,
when the visibility of the specific content would not be obstructed, moving the agent image along the first path in a first display mode, and,
when the visibility of the specific content would be obstructed, executing any one of:
a first movement process of moving the agent image along a second path, which is a movement path different from the first path and does not obstruct the visibility of the specific content;
a second movement process of moving the agent image from the first position to the second position in a second display mode, which is a display mode different from the first display mode and does not obstruct the visibility of the specific content; and
a third movement process of moving the agent image from the first position to the second position at a second movement timing, which is different from a first movement timing in the first display mode and does not obstruct the visibility of the specific content, or at a second movement speed, which is different from a first movement speed in the first display mode and does not obstruct the visibility of the specific content.
PCT/IB2024/000088 2024-02-29 2024-02-29 Information processing method, information processing device, and program Pending WO2025181507A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2024/000088 WO2025181507A1 (en) 2024-02-29 2024-02-29 Information processing method, information processing device, and program

Publications (1)

Publication Number Publication Date
WO2025181507A1 (en) 2025-09-04

Family

ID=96920045

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2024/000088 Pending WO2025181507A1 (en) 2024-02-29 2024-02-29 Information processing method, information processing device, and program

Country Status (1)

Country Link
WO (1) WO2025181507A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090121076A1 (en) * 2005-07-08 2009-05-14 Donald George Blackburn Helicopter
JP2011070314A (en) * 2009-09-24 2011-04-07 Hi:Kk Information processor, character display method and program
JP2020055348A (en) * 2018-09-28 2020-04-09 本田技研工業株式会社 Agent device, agent control method, and program
JP2022142553A (en) * 2021-03-16 2022-09-30 日産自動車株式会社 Image display control device, image display control method and program

Similar Documents

Publication Publication Date Title
US10663963B2 (en) Method and apparatus for visualizing future events for passengers of autonomous vehicles
EP2857886B1 (en) Display control apparatus, computer-implemented method, storage medium, and projection apparatus
US20180017968A1 (en) Autonomous vehicle human driver takeover mechanism using electrodes
US20150331238A1 (en) System for a vehicle
WO2015136874A1 (en) Display control device, display device, display control program, display control method, and recording medium
JP7119846B2 (en) VEHICLE TRIP CONTROL METHOD AND TRIP CONTROL DEVICE
CN113401071B (en) Display control device, display control method, and computer-readable storage medium
JP2024029051A (en) In-vehicle display device, method and program
WO2023204076A1 (en) Acoustic control method and acoustic control device
CN114207685B (en) Autonomous Vehicle Interaction System
EP4453514A1 (en) Method, apparatus and computer program product for selecting content for display during a journey to alleviate motion sickness
JP2020158006A (en) Driving support method and driving support device
WO2025181507A1 (en) Information processing method, information processing device, and program
JP5424014B2 (en) Collision warning vehicle detection system
JP7737298B2 (en) Information recording support method, information recording support device, information recording support program, and information recording support system
JP7236897B2 (en) Driving support method and driving support device
US11215472B2 (en) Information providing device and in-vehicle device
JP7691828B2 (en) Information notification method and information notification device
JP7616372B2 (en) Vehicle display system, vehicle display method, and vehicle display program
WO2025238390A1 (en) Information processing method, information processing device, and program
WO2025211118A1 (en) Display control device for vehicle, display control system for vehicle, display control method for vehicle, and display control program for vehicle
JP2005308645A (en) Navigation system and its control method
WO2025115924A1 (en) Display control device, head-up display device, program, and on-vehicle agent system
JP2024083068A (en) Information processing method and information processing device
JP2024073109A (en) Information processing method and information processing device

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 24926280

Country of ref document: EP

Kind code of ref document: A1