WO2024162454A1 - Information processing device, information processing method, and program - Google Patents
Information processing device, information processing method, and program
- Publication number
- WO2024162454A1 (PCT/JP2024/003383)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- sound
- notification
- route
- information processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
Definitions
- the present invention relates to an information processing device, an information processing method, and a program.
- Voice navigation systems are widely used in car navigation systems and smartphones.
- the current location is detected using GPS or other devices, and the route to the destination is notified by voice.
- this disclosure proposes an information processing device, an information processing method, and a program that can guide a user accurately by voice.
- an information processing device having a direction of travel acquisition unit that acquires the direction of travel of a user's path, and a sound source localization calculation unit that calculates a position a predetermined distance away from the user in the direction of travel as the sound source localization of a notification sound that guides movement in the direction of travel.
- an information processing method in which the information processing of the information processing device is executed by a computer, and a program for causing a computer to realize the information processing of the information processing device.
- FIGS. 1 to 7 are diagrams illustrating conventional navigation systems.
- FIG. 1 illustrates an example of a configuration of an information processing device.
- 11A and 11B are diagrams illustrating an example of a notification sound generation process.
- 11A and 11B are diagrams illustrating an example of a notification sound generation process.
- FIG. 13 is a diagram showing an example of mixing of notification sounds.
- FIG. 13 is a diagram showing an example of mixing of notification sounds.
- FIG. 13 is a diagram showing an example of mixing of notification sounds.
- FIG. 13 is a diagram showing an example of mixing of notification sounds.
- FIG. 13 is an explanatory diagram of a sound effect.
- FIG. 1 is a diagram illustrating an example of a system configuration of a navigation system.
- FIG. 11 is a diagram showing another example of the system configuration of a navigation system.
- FIG. 13 is a diagram showing an example of a UI related to notification of a traveling direction.
- FIG. 13 is a diagram illustrating an example of a UI related to a danger area approach notification.
- FIG. 2 illustrates an example of a hardware configuration of an information processing device.
- FIG. 11A and 11B are diagrams illustrating an example of control of a notification sound based on a head direction.
- FIG. 13 illustrates an example of a configuration of an information processing device according to a modified example.
- FIG. 13 is a diagram illustrating an example of a notification method.
- FIG. 13 is a diagram illustrating an example of a notification method.
- FIG. 11 is a diagram illustrating an example of a flow of generating a notification sound.
- FIG. 11 is a diagram illustrating another example of the flow of generating a notification sound.
- FIG. 13 is a diagram illustrating an example of settings regarding notifications within a notification target range.
- FIG. 13 is a diagram showing an example of a notification of surrounding facilities.
- FIG. 13 illustrates an example of a configuration of an information processing device according to a modified example.
- FIG. 13 is a diagram illustrating an example of a notification method.
- FIG. 13 is a diagram illustrating an example of a notification method.
- FIG. 13 is a diagram illustrating an example of a configuration of a main part of an information processing device according to a modified example.
- FIG. 11 is a diagram showing an example of a process flow relating to surrounding facility guidance.
- FIG. 13 is a diagram showing an example of a notification of surrounding facilities.
- FIG. 13 is a diagram showing an example of a notification of surrounding facilities.
- FIG. 13 is a diagram showing an example of a notification of surrounding facilities.
- FIG. 35 is a diagram showing an example of a processing flow relating to setting of the sound source localization in FIG. 34 .
- Figures 1 to 7 are diagrams explaining conventional navigation systems.
- Fig. 1 shows an example in which the user is unable to accurately grasp the distance traveled after hearing the voice "Turn right 100 m ahead" and ends up on a different road (incorrect route CS E ) near the road he or she actually wants to turn onto (correct route CS R ).
- This problem occurs frequently, especially for visually impaired people, who have difficulty distinguishing roads from non-roads, for example, mistaking the entrance to a parking lot for the road to turn onto.
- JP 2020-188494 A proposes a method of notifying the direction and distance to the destination using a sound source with a localized position.
- In practice, there are obstacles such as buildings between the current location and the destination, and it is common for the direction to the destination to differ from the direction of travel on the actual route. For this reason, even if the direction to the destination is known, the user must find and follow a route that bypasses the obstacles. Furthermore, if the destination is far away, it is difficult to perceive the distance to the destination from sound alone.
- the notification sound may be heard from a direction different from the actual direction of travel (see Figure 4). It is also difficult to set an appropriate notification sound volume that is not too loud when approaching and not too quiet when far away, as well as the spacing between navigation points (see Figure 5). Furthermore, if there are multiple navigation points within the moving person's hearing range, the notification sounds may interfere with each other, which is an issue (see Figure 6).
- JP 2013-47653 A proposes a method of using stereophonic audio to provide so-called "cross-talk"-style notifications, which alternate between audio from the starting point and audio from the destination point.
- Japanese Patent Application Laid-Open No. 2002-5675 proposes a mechanism in which a notification sound is emitted from the navigation point where the traveler's own position is closest, and when that navigation point is reached, the sound source location of the notification sound is moved to the next navigation point, thereby providing guidance along the travel route RT with a single notification sound.
- this method does not solve the problem in Figure 4.
- the present disclosure has been made in consideration of the above-mentioned problems.
- the present disclosure aims to provide a navigation system that can guide a user with high accuracy by voice.
- one of the features of the information processing device according to the present disclosure is that it solves some or all of the above-mentioned problems by using the following method.
- I. A notification sound using stereophonic sound conveys the direction of travel with higher resolution than voice guidance.
- II. By emitting a notification sound from a fixed position that always maintains a constant distance from the moving person and conveying only the direction of travel, the correct direction of travel can be conveyed even if the accuracy of the self-location and map information is poor.
- III. The next direction, the current direction, and the position of the corner are conveyed by changes in the volume and melody of the sound source conveying the current direction and the sound source conveying the next direction.
- FIG. 8 is a diagram showing an example of the configuration of the information processing device 1.
- the information processing device 1 generates navigation information to a destination based on sensor information acquired from various sensors such as a GPS (Global Positioning System), a GNSS (Global Navigation Satellite System), a beacon, a camera, a gyro sensor, and a geomagnetic sensor. These sensors function as a sensor unit for detecting the position and orientation of the head of the user US.
- the navigation information includes acoustic information such as a notification sound for guiding the user US and a warning sound for alerting the user US.
- the information processing device 1 divides the travel route into a number of routes, and sets a direction of travel, a notification sound, and a sound source for each route.
- the route, direction of travel, notification sound, and sound source are represented by the symbols "CS", "TD", "SD", and "SC", respectively.
- the information processing device 1 has a route planning unit 11, a traveling direction acquisition unit 12, a head direction acquisition unit 13, a sound source localization calculation unit 14, a sound generation unit 15, a volume adjustment unit 16, and a notification sound output unit 17.
- the route planning unit 11 acquires the position and orientation of the head of the user US as user position information.
- the user position information is detected from sensor information using, for example, the above-mentioned GPS, GNSS, or a camera-based VPS (Visual Positioning System) to measure the absolute position, and, if necessary, a relative position estimation method such as PDR (Pedestrian Dead Reckoning).
- the route planning unit 11 plans a travel route RT to the destination based on the user position information.
- the head direction acquisition unit 13 acquires the orientation of the head of the user US as the head direction of the user US.
- the traveling direction acquisition unit 12 acquires the traveling direction TD of the route CS of the user US.
- the traveling direction TD of the route CS means the direction of the movement line that passes through the center of the route CS.
- the traveling direction acquisition unit 12 divides the movement route RT into multiple routes CS based on information such as the corners or curvature of the movement route RT.
- the route CS is a roughly straight route whose curvature satisfies an acceptable standard. The acceptable standard is set arbitrarily by the system developer.
- the traveling direction acquisition unit 12 acquires multiple routes CS divided along the movement route RT. For each divided route CS, the traveling direction acquisition unit 12 acquires the direction along the route CS as the traveling direction TD of the route CS.
- the traveling direction acquisition unit 12 can acquire a traveling direction TD from the travel route RT for both the current course CS C and the course that follows it (the next course CS N ).
- the traveling direction acquisition unit 12 has a first traveling direction acquisition unit 12A and a second traveling direction acquisition unit 12B.
- the first traveling direction acquisition unit 12A acquires a traveling direction TD C of the current course CS C.
- the second traveling direction acquisition unit 12B acquires a traveling direction TD N of the next course CS N.
- the sound source localization calculation unit 14 calculates a position a predetermined distance away from the user US in the traveling direction TD C as the sound source localization of a notification sound SD C (see FIG. 11 ) that guides the user US to move in the traveling direction TD C.
- the sound source localization of the notification sound SD C can be calculated based on the position and orientation of the head of the user US.
- the position is specified based on a relative positional relationship to the position and orientation of the head.
- the user US can move accurately on the course CS C by moving in the direction in which the notification sound SD C is sounded.
- the distance to the sound source localization is set arbitrarily by the system developer.
- the sound source localization of the notification sound SD C can be set to a position on the course CS C through which the user US must pass.
- the sound source localization calculation unit 14 can generate a notification sound SD for each of the current course CS C and the next course CS N.
- the sound source localization calculation unit 14 has a first sound source localization calculation unit 14A and a second sound source localization calculation unit 14B.
- the first sound source localization calculation unit 14A calculates a position a predetermined distance away from the user US in the traveling direction TD C of the current course CS C as the sound source localization of the notification sound SD C that guides the user to move in the traveling direction TD C of the current course CS C.
- the second sound source localization calculation unit 14B calculates a position a predetermined distance away from the user US in the traveling direction TD N of the next course CS N as the sound source localization of the notification sound SD N (see FIG. 11 ) that guides the user to move to the next course CS N.
- the sound generating unit 15 generates various notification sounds SD and warning sounds WS (see FIG. 16) for navigation.
- the notification sounds SD and warning sounds WS can be generated, for example, as sounds (loop sounds) that are output with a certain rhythm or at playback time intervals.
- the notification sounds SD can include pulse sounds and sound effects for notifying the user US of the traveling direction TD.
- the warning sounds WS can include pulse sounds and sound effects for alerting the user US to the presence of a warning area WA (see FIG. 16) and a danger area DA (see FIG. 16), etc.
- the sound generating unit 15 can generate two notification sounds SD related to the current route CS C and the next route CS N.
- the sound generating unit 15 has a first sound generating unit 15A and a second sound generating unit 15B.
- the first sound generating unit 15A generates a notification sound SD C related to the traveling direction TD C of the current route CS C.
- the second sound generating unit 15B generates a notification sound SD N related to the traveling direction TD N of the next route CS N.
- the volume adjustment unit 16 adjusts the volume of the two notification sounds SD to generate a notification sound SD for output.
- the notification sound output unit 17 outputs the notification sound SD generated by the sound generation unit 15 or the volume adjustment unit 16 via a speaker or the like.
- the position that is the boundary between the current course CS C and the next course CS N is set as the midpoint BP (see FIG. 11).
- the volume adjustment unit 16 adjusts the volume of the two notification sounds SD related to the current course CS C and the next course CS N based on the distance from the midpoint BP to the user US.
- the volume adjustment unit 16 generates the notification sound SD obtained by mixing the two notification sounds SD with the adjusted volume as a notification sound that induces switching from the current course CS C to the next course CS N.
- “Generating a notification sound SD obtained by mixing two notification sounds SD” means “generating a notification sound SD that is perceived by the user US as a sound that is a mix of two notification sounds SD.”
- the mixing may be performed at the signal processing stage, or may occur perceptually in the brain of the user US when listening.
- An example of the latter is a method in which the two notification sounds SD are played alternately, like two calls answering each other. For example, one possible method is to set the two notification sounds SD to alternate, each playing for 5 seconds and then stopping for 5 seconds, so that one plays while the other is stopped.
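- As an illustration of the alternating-playback ("mixing in the brain") approach just described, here is a minimal sketch; the start/stop callbacks, the 5-second interval, and the cycle count are assumptions, not taken from the disclosure.

```python
import time

def alternate_notification_sounds(start_current, stop_current, start_next, stop_next,
                                  interval_s=5.0, cycles=3):
    """Alternate the current-course sound SD_C and next-course sound SD_N so the
    listener perceives them as a mix ("mixing in the brain"). The start_*/stop_*
    callbacks are hypothetical hooks into an audio engine; the 5-second interval
    follows the example above, and the cycle count is arbitrary."""
    for _ in range(cycles):
        stop_next(); start_current()      # only the current-course sound plays
        time.sleep(interval_s)
        stop_current(); start_next()      # only the next-course sound plays
        time.sleep(interval_s)
    stop_next()
```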
- the traveling direction acquisition unit 12 acquires the traveling direction TD C of the current course CS C and the traveling direction TD N of the next course CS N (step S1).
- the traveling direction TD can be acquired based on self-location information and map information detected using, for example, GPS, GNSS, a beacon, PDR, etc.
- the traveling direction TD C of the current course CS C may be designated by the user US by pointing the device equipped with the sensor unit in the direction in which the user wants to travel, without using map information.
- the traveling direction acquisition unit 12 divides the travel route RT to the destination into a plurality of routes CS based on the positions of corners.
- the traveling direction acquisition unit 12 sets the position of the corner toward the next route CS N as the midpoint BP.
- the traveling direction acquisition unit 12 acquires the distance on the current route CS C from the current location to the midpoint BP based on the distance information of the travel route RT (step S2).
- In order to notify the user of the traveling direction TD using stereophonic sound, it is necessary to obtain the traveling direction TD in the head coordinate system CO, with the face orientation set to 0° (facing forward).
- a geomagnetic sensor is built into earphones or the like and attached to the head of the user US, and the geomagnetic sensor detects the direction in which the face is facing.
- the head direction obtaining unit 13 obtains the detected face (head) orientation based on the sensor information of the geomagnetic sensor as the head direction (step S3).
- the sound source localization calculation unit 14 sets a head coordinate system CO based on the position and orientation of the head.
- the head coordinate system CO is, for example, a three-dimensional coordinate system in which the position of the head of the user US is the origin, and any one of the x-axis, y-axis, and z-axis coincides with the head direction.
- the sound source localization calculation unit 14 calculates the direction along the course CS from the position of the head as the traveling direction TD of the course CS on the head coordinate system CO.
- the sound source localization calculation unit 14 calculates a position a predetermined distance away from the position of the head in the traveling direction TD C of the current course CS C on the head coordinate system CO as the sound source localization of the notification sound SD C.
- the sound generation unit 15 generates a notification sound SD C that is localized to the calculated sound source localization (step S4).
- the traveling direction acquisition unit 12 determines whether the distance to the next corner (midpoint BP) is equal to or less than a threshold (step S5). If the distance is greater than the threshold (step S5: No), the notification sound output unit 17 outputs the notification sound SD C to the speaker. If the distance is equal to or less than the threshold (step S5: Yes), the sound source localization calculation unit 14 calculates a position that is a predetermined distance away from the head position in the traveling direction TD N of the next course CS N on the head coordinate system CO as the sound source localization of the notification sound SD N. The sound generation unit 15 generates the notification sound SD N that is localized to the calculated sound source localization (step S6). The volume adjustment unit 16 outputs the notification sound SD, the volume of which is adjusted for the notification sound SD C and the notification sound SD N according to the distance, to the speaker (step S7).
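- As a rough sketch of the sound source localization computed in steps S3 and S4 above, the following assumes the head pose is reduced to a compass yaw and uses an arbitrary 3 m distance (the disclosure leaves the actual distance to the system developer).

```python
import math

def sound_source_in_head_frame(head_yaw_deg, travel_dir_deg, distance_m=3.0):
    """Sound source localization of the notification sound SD_C: a point a fixed
    distance ahead of the user along the traveling direction TD_C, expressed in
    a head coordinate system whose forward axis is the facing direction.

    Angles are compass bearings in degrees (0 = north, clockwise). The 3 m
    default is an assumed value."""
    rel = math.radians(travel_dir_deg - head_yaw_deg)   # travel direction relative to the face
    # x = to the right of the face, y = straight ahead of the face (metres)
    return (distance_m * math.sin(rel), distance_m * math.cos(rel))

# Example: facing north (0 deg) while the course runs east (90 deg) places the
# sound 3 m to the user's right: (3.0, ~0.0).
print(sound_source_in_head_frame(0.0, 90.0))
```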
- the user US is walking on the sidewalk in front of him/her toward the crosswalk.
- the sidewalk in front of him/her is the current route CS C
- the crosswalk is the next route CS N. If it is still a long way from the current route CS C to the next route CS N , the sound source SC N notifying the switch to the next route CS N is silent, and only the sound source SC C notifying the traveling direction TD C of the current route CS C outputs the notification sound SD C (see the diagram on the left side of Fig. 10).
- As the user US approaches the midpoint BP, the notification sound SD N that guides the user to the next route CS N gradually becomes louder (see the diagram in the center of FIG. 10).
- After the user US enters the next route CS N , the notification sound SD C that notifies the user of the traveling direction TD C of the previous route CS C gradually becomes quieter and eventually falls silent. Then, only the notification sound SD N that notifies the user of the traveling direction TD N of the next route CS N is output (see the diagram on the right side of FIG. 11).
- Notification sound mixing example: FIGS. 11 and 12 are diagrams showing examples of mixing of notification sounds SD.
- Fig. 12 shows notification sounds SD of five waypoints PT set on the route CS.
- Waypoint PT 3 coincides with the boundary (midpoint BP) between route CS C and next route CS N.
- the notification sound SD N , which guides the user US to move to the next route CS N , starts to be output from the sound source SC N set on the traveling direction TD N side of the next route CS N.
- how close the user US must be to the midpoint BP before the notification sound SD N starts to be output is set arbitrarily by the system developer.
- the distance from the user US to the sound source localization of the notification sound SD C and the distance from the user US to the sound source localization of the notification sound SD N are constant, for example, regardless of the position of the user US.
- the volume adjustment unit 16 increases the volume of the notification sound SD N that guides the user US to move to the next route CS N as the user US approaches the midpoint BP on the current route CS C. Conversely, the volume adjustment unit 16 decreases the volume of the notification sound SD C that guides the user US to move in the traveling direction of the current route CS C as the user US approaches the midpoint BP on the current route CS C. For example, when the user US reaches the midpoint BP, the volume adjustment unit 16 adjusts the mixing amount of the two notification sounds SD so that the volume of the notification sound SD C and the volume of the notification sound SD N become equal.
- the route CS to be guided is switched from the route CS C to the next route CS N.
- the volume adjustment unit 16 increases the volume of the notification sound SD N that guides the user US to move in the traveling direction TD N of the next route CS N as the user US moves away from the midpoint BP on the next route CS N.
- the volume adjustment unit 16 decreases the volume of the notification sound SD C that guides the user US to move in the traveling direction TD C of the previous route CS C as the user US moves away from the midpoint BP on the next route CS N.
- the volume adjustment unit 16 adjusts the volume of the notification sound SD C to zero when the user US reaches the waypoint PT 4 .
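- The crossfade described above (notification sound SD N rising and SD C falling as the user approaches and then passes the midpoint BP) can be sketched as a simple distance-based mix; the 20 m fade radius is an assumed value, since the disclosure leaves the start distance to the system developer.

```python
def crossfade_volumes(signed_dist_to_bp_m, fade_radius_m=20.0):
    """Volumes of the current-course sound SD_C and next-course sound SD_N around
    the midpoint BP.

    signed_dist_to_bp_m is negative while the user is still on the current course
    CS_C and positive after passing BP onto the next course CS_N. The 20 m fade
    radius is an assumed value. Returns (volume_sd_c, volume_sd_n) in 0..1,
    equal at BP."""
    progress = (signed_dist_to_bp_m + fade_radius_m) / (2.0 * fade_radius_m)
    progress = min(max(progress, 0.0), 1.0)      # clamp to the fade window
    return (1.0 - progress, progress)

# 20 m before BP -> (1.0, 0.0); at BP -> (0.5, 0.5); 20 m past BP -> (0.0, 1.0)
```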
- FIGS. 13 and 14 show other mixing examples of notification sounds SD.
- the user US heads from the current route CS C to the next route CS N along the planned movement route, but mistakenly continues straight ahead without turning left at the midpoint BP.
- the route planning unit 11 resets the movement route RT.
- the route planning unit 11 plans a movement route RT that moves in the opposite direction along the wrong route CS E toward the midpoint BP.
- the traveling direction acquisition unit 12 acquires the direction returning to the midpoint BP as the traveling direction TD of the course CS E.
- the sound source localization calculation unit 14 sets the sound source SC of the notification sound SD E , which guides the user US to move to the midpoint BP, behind the user US. For example, when the user US passes the midpoint BP and moves in a direction different from the next course CS N , the sound source localization calculation unit 14 calculates a position a predetermined distance away in the direction of the midpoint BP as the sound source localization of the notification sound SD E.
- the volume of the notification sound SD E becomes louder the farther it is from the midpoint BP and becomes quieter the closer it is to the midpoint BP.
- the volume of the notification sound SD N becomes quieter the farther it is from the midpoint BP and becomes louder the closer it is to the midpoint BP.
- FIG. 15 is an explanatory diagram of sound effects.
- the sound generation unit 15 acquires the angle range within which movement is permitted as the guidance range RG.
- the guidance range RG is set as a predetermined angle range centered on the traveling direction TD.
- the guidance range RG is set arbitrarily by the system developer.
- the sound generation unit 15 can impart a sound effect to the notification sound SD that guides movement in the traveling direction TD.
- the direction of the head of the moving user US is detected as the direction of movement of the user US.
- When the head direction HD is within an angular range of ±n° centered on the traveling direction TD of the course CS, sound effects such as increasing the volume of the notification sound SD, strengthening the reverberation, and widening the sound spread are applied. This makes it possible to convey more clearly that the direction of movement is within ±n° of the traveling direction.
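- A minimal sketch of the guidance-range effect described above, using only a volume boost; the ±15° half-width and 1.5x gain are assumed values, and a real renderer would also control reverberation and sound spread.

```python
def guidance_effect_volume(base_volume, head_dir_deg, travel_dir_deg,
                           n_deg=15.0, boost=1.5):
    """Boost the notification sound SD when the head direction HD lies within the
    guidance range of +/- n degrees around the traveling direction TD.

    The 15-degree half-width and the 1.5x boost are assumed values."""
    diff = (head_dir_deg - travel_dir_deg + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
    return base_volume * boost if abs(diff) <= n_deg else base_volume
```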
- a warning sound WS may be output if the direction of movement deviates from the traveling direction TD.
- the sound source localization calculation unit 14 sets the traffic line at the center of the course CS.
- the sound source localization calculation unit 14 sets an area slightly off the traffic line as a warning area WA, and sets an area outside the warning area into which it is not recommended to enter as a danger area DA.
- the sound source localization calculation unit 14 sets a safe area near the traffic line that is neither a warning area WA nor a danger area DA as a safety area SA.
- the sound source localization calculation unit 14 can identify dangerous areas based on sensor information and set warning areas WA and danger areas DA.
- the sensor information can include image information captured by a camera, distance information measured by a distance sensor, and location information acquired from GPS/GNSS or a Visual Positioning System (VPS).
- the widths of the safety area SA, warning area WA, and danger area DA are set arbitrarily by the system developer.
- the warning area WA is an area that is a certain distance or more away from the traffic flow
- the danger area DA is an area that is a certain distance or more away from the warning area WA toward the outside.
- the sound source localization calculation unit 14 calculates a position on the boundary BD between the warning area WA and the danger area DA as the sound source localization of the warning sound WS.
- the sound generation unit 15 generates a warning sound WS having a sound image at the calculated sound source localization, and outputs it to the notification sound output unit 17 when the user US approaches the boundary BD.
- the system developer can arbitrarily set how close the user US must be to the boundary BD before the warning sound WS is output.
- the warning area WA may be an area that is parallel to the traffic flow and a certain distance away from the traffic flow
- the danger area DA may be an area that is parallel to the boundary between the safety area SA and the warning area WA and is a certain distance or more away from the boundary between the safety area SA and the warning area WA.
- the edge of the platform may be detected based on information obtained from a camera or distance sensor, and an area that is a certain distance or more away from the edge of the platform may be set as the danger area DA or warning area WA.
- the danger area DA and warning area WA may also be set in other ways.
- a warning sound WS is output from a sound source SC W on a boundary BD of the danger area DA.
- the sound source localization calculation unit 14 can calculate a point on the boundary BD or a planar area along the boundary BD as the sound source localization of the warning sound WS.
- the position where the straight line that passes through the head and is normal to the line of movement intersects the boundary BD can be set as the center of the sound source SC W.
- the line of movement and the boundary BD of the danger area DA may each be curved.
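- The placement of the warning sound source SC W described above (the intersection of the boundary BD with the line through the head normal to the traffic line) can be sketched as follows for the simple case of a straight traffic line and a parallel boundary; curved boundaries would need a different intersection step.

```python
def warning_source_center(head_pos, line_point, line_dir, boundary_offset_m):
    """Center of the warning sound source SC_W for a straight traffic line and a
    boundary BD parallel to it at boundary_offset_m: project the head onto the
    traffic line, then step along the line's normal (toward the user's side) to
    the boundary.

    Points are (x, y) tuples in metres; line_dir must be a unit vector."""
    dx, dy = head_pos[0] - line_point[0], head_pos[1] - line_point[1]
    t = dx * line_dir[0] + dy * line_dir[1]                  # projection onto the line
    foot = (line_point[0] + t * line_dir[0], line_point[1] + t * line_dir[1])
    nx, ny = -line_dir[1], line_dir[0]                       # left-hand normal of the line
    side = 1.0 if (dx * nx + dy * ny) >= 0.0 else -1.0       # pick the user's side
    return (foot[0] + side * boundary_offset_m * nx,
            foot[1] + side * boundary_offset_m * ny)
```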
- FIG. 17 is a diagram showing an example of the system configuration of the navigation system NV.
- the navigation system NV has a server SV, a client terminal TM, and headphones HP.
- the client terminal TM is a portable information terminal such as a smartphone.
- the headphones HP have a sensor unit SE for detecting the position and orientation of the head of the user US. Note that earphones, glasses-type devices, head-mounted displays, etc. may be used instead of the headphones HP.
- the functions of the information processing device 1 described above are shared between the server SV and the client terminal TM.
- the information processing device 1 calculates a position a predetermined distance away from the user US in the traveling direction TD of the user US's path CS, based on the position and orientation of the user US's head, as the sound source localization of the notification sound SD that guides movement in the traveling direction TD.
- Which functions are assigned to the client terminal TM and which functions are assigned to the server SV is set arbitrarily by the system developer.
- the headphones HP have speakers that reproduce the sound image of the notification sound SD at the calculated sound source localization.
- the client terminal TM detects the position and orientation of the head of the user US based on sensor information acquired from the sensor unit SE.
- the client terminal TM generates a notification sound SD and a warning sound WS and outputs them to the headphones HP.
- the client terminal TM sends its own position information to the server SV, and acquires map information and information related to the traveling direction TD from the server SV. If the headphones HP have a high-speed calculation device, the notification sound SD and warning sound WS may be generated by the headphones HP.
- FIG. 18 shows another example of the system configuration of the navigation system NV.
- the head orientation is acquired based on sensor information from an external camera MD or a depth sensor.
- the external camera MD or depth sensor functions as a sensor unit SE.
- the result of head pose estimation performed using face tracking technology is transmitted from the server SV to the client terminal TM.
- After receiving the pose estimation result, the client terminal TM generates a notification sound SD and a warning sound WS using a method similar to that of the example of FIG. 17.
- FIG. 19 is a diagram showing an example of a UI related to notification of the traveling direction TD.
- Fig. 20 is a diagram showing an example of a UI related to a danger area approach notification.
- Pressing the "Select notification sound” button allows you to select the type of melody and the type of notification sound (whether to be notified by increasing or decreasing the volume or by changing the tempo). Pressing the "Notification ON/OFF” button switches between enabled and disabled notification.
- the distance to the sound source SC that notifies the traveling direction TD can be specified.
- the guidance range RG shown in FIG. 16 can be specified.
- the distance from the midpoint BP to the output start point of the notification sound SD N can be specified.
- the distance to the boundary BD of the danger zone DA where the output of the warning sound WS starts can be specified.
- the position for localizing the sound image of the notification sound SD and the change in sound pressure may be arbitrarily set on the UI.
- FIG. 21 is a diagram illustrating an example of a hardware configuration of the information processing device 1.
- the information processing of the information processing device 1 is realized, for example, by a computer 1000.
- the computer 1000 has a CPU (Central Processing Unit) 1100, a RAM (Random Access Memory) 1200, a ROM (Read Only Memory) 1300, a HDD (Hard Disk Drive) 1400, a communication interface 1500, and an input/output interface 1600.
- Each part of the computer 1000 is connected by a bus 1050.
- the CPU 1100 operates based on a program (program data 1450) stored in the ROM 1300 or HDD 1400, and controls each component. For example, the CPU 1100 loads a program stored in the ROM 1300 or HDD 1400 into the RAM 1200, and executes processing corresponding to the various programs.
- the ROM 1300 stores boot programs such as the BIOS (Basic Input Output System) that is executed by the CPU 1100 when the computer 1000 starts up, as well as programs that depend on the hardware of the computer 1000.
- HDD 1400 is a non-transitory computer-readable recording medium that non-temporarily records programs executed by CPU 1100 and data used by such programs.
- HDD 1400 is a recording medium that records an information processing program according to an embodiment, which is an example of program data 1450.
- the communication interface 1500 is an interface for connecting the computer 1000 to an external network 1550 (e.g., the Internet).
- the CPU 1100 receives data from other devices and transmits data generated by the CPU 1100 to other devices via the communication interface 1500.
- the input/output interface 1600 is an interface for connecting the input/output device 1650 and the computer 1000.
- the CPU 1100 receives data from an input device such as a keyboard or a mouse via the input/output interface 1600.
- the CPU 1100 also transmits data to an output device such as a display device, a speaker, or a printer via the input/output interface 1600.
- the input/output interface 1600 may also function as a media interface that reads programs and the like recorded on a specific recording medium.
- Examples of media include optical recording media such as DVDs (Digital Versatile Discs) and PDs (Phase change rewritable Discs), magneto-optical recording media such as MOs (Magneto-Optical Disks), tape media, magnetic recording media, and semiconductor memories.
- the CPU 1100 of the computer 1000 executes an information processing program loaded onto the RAM 1200 to realize the functions of each of the above-mentioned units.
- the information processing program, various models, and various data according to the present disclosure are stored in the HDD 1400.
- the CPU 1100 reads and executes the program data 1450 from the HDD 1400, but as another example, the CPU 1100 may obtain these programs from other devices via the external network 1550.
- the information processing device 1 has a traveling direction acquisition unit 12 and a sound source localization calculation unit 14.
- the traveling direction acquisition unit 12 acquires a traveling direction TD of a path CS of the user US.
- the sound source localization calculation unit 14 calculates a position a predetermined distance away from the user US in the traveling direction TD as the sound source localization of a notification sound SD that guides movement in the traveling direction TD.
- the processing of the information processing device 1 is executed by a computer 1000.
- the program disclosed herein causes the computer 1000 to realize the processing of the information processing device 1.
- the user US is guided to move in the traveling direction TD by the notification sound SD heard from ahead in the traveling direction TD. Therefore, even if the accuracy of the user's own position or map information is poor, the user US can be guided in the traveling direction TD with high accuracy.
- the sound source localization calculation unit 14 calculates the sound source localization of the notification sound SD to be a position a predetermined distance away from the user US in the traveling direction TD, based on the position and orientation of the head of the user US.
- the traveling direction TD is calculated based on the direction of the head. Therefore, the traveling direction TD is accurately recognized.
- the information processing device 1 has a route planning unit 11.
- the route planning unit 11 plans a travel route RT to a destination.
- the travel direction acquisition unit 12 acquires a plurality of routes CS partitioned along the travel route RT. For each partitioned route CS, the travel direction acquisition unit 12 acquires the direction along the route CS as the travel direction TD of the route CS.
- the distance from the user US to the sound source location of the notification sound SD is constant regardless of the position of the user US.
- This configuration makes it difficult for the sound source SC of the notification sound SD to be lost. This allows the direction of the sound source SC (travel direction TD) to be reliably recognized.
- the traveling direction acquisition unit 12 acquires a traveling direction TD N of a next route CS N following the route CS C.
- the sound source localization calculation unit 14 calculates a position a predetermined distance away from the user US in the traveling direction TD N of the next route CS N as the sound source localization of a notification sound SD N that guides the user US to move to the next route CS N.
- the distance from the user US to the sound source localization of the notification sound SD N that guides the user US to move to the next route CS N is constant regardless of the position of the user US.
- the information processing device 1 has a volume adjustment unit 16.
- the volume adjustment unit 16 adjusts the volumes of two notification sounds SD related to the current route CS C and the next route CS N based on the distance from the midpoint BP to the user US.
- the midpoint BP is a point that is a boundary between the current route CS C and the next route CS N.
- the volume adjustment unit 16 generates a notification sound SD obtained by mixing the two notification sounds SD whose volumes have been adjusted as a notification sound SD that induces switching from the current route CS C to the next route CS N.
- the volume adjustment unit 16 increases the volume of the notification sound SD N that guides the user US to move to the next route CS N as the user US approaches the midpoint BP on the current route CS C.
- the volume adjustment unit 16 increases the volume of the notification sound SD N , which guides the user US to move in the traveling direction TD N of the next route CS N , as the user US moves away from the midpoint BP on the next route CS N.
- the entrance into the next route CS N can be intuitively grasped from the change in volume.
- the volume adjustment unit 16 reduces the volume of the notification sound SD C that guides the user US to move in the traveling direction of the current route CS C as the user US approaches the midpoint BP on the current route CS C.
- the volume adjustment unit reduces the volume of the notification sound SD C that guides the user US to move in the traveling direction of the previous route CS C as the user US moves away from the midpoint BP on the next route CS N.
- the entrance into the next route CS N can be intuitively grasped from the change in volume.
- the sound source localization calculation unit 14 calculates a position a predetermined distance away in the direction of the midpoint BP as the sound source localization of the notification sound SD E.
- the information processing device 1 has a sound generating unit 15.
- When the moving direction of the user US is included in a predetermined angle range centered on the traveling direction TD of the course CS, the sound generating unit 15 imparts a sound effect to the notification sound SD that guides the user US to move in the traveling direction TD.
- the sound source localization calculation unit 14 calculates the position on the boundary BD between the warning area WA and the danger area DA as the sound source localization of the warning sound WS.
- This configuration allows the approach of the danger zone DA to be intuitively recognized.
- the sound source localization calculation unit 14 calculates the planar area along the boundary BD between the warning area WA and the danger area DA as the sound source localization of the warning sound WS.
- This configuration allows the boundary BD of the danger area DA to be intuitively grasped.
- the warning area WA is an area that is a certain distance or more away from the traffic flow line that is the center of the route CS, and the danger area DA is an area that is a certain distance or more away toward the outside from the warning area WA.
- the sound source localization calculation unit 14 identifies dangerous areas based on sensor information and sets warning areas WA and danger areas DA.
- the warning area WA and danger area DA are set appropriately.
- Fig. 22 is a diagram showing an example of control of the notification sound SD based on the head direction. The following description will focus on the differences from the above embodiment.
- notification of the traveling direction TD was performed using stereophonic sound.
- with stereophonic sound alone, it can be difficult to perceive a sound image localized directly in front.
- the pulse interval of the notification sound SD (pulse sound) is changed according to the direction of the head of the user US. The difference between the traveling direction TD and the actual direction of movement can be intuitively grasped from the pulse interval, making it easier to guide the user in the traveling direction TD.
- FIG. 23 shows an example of the configuration of the information processing device 2 of this modified example.
- the sound generation unit 15 has an angle difference calculation unit 151, a movement detection unit 152, and a notification sound output determination unit 153.
- the angle difference calculation unit 151 calculates the angle difference between the orientation of the head of the user US and the traveling direction TD.
- the movement detection unit 152 detects the movement of the head of the user US.
- the notification sound output determination unit 153 determines whether or not to output the notification sound SD based on the movement of the head of the user US.
- the notification sound output unit 17 outputs the notification sound SD based on the determination result of the notification sound output determination unit 153.
- FIGS. 24 and 25 show examples of notification methods.
- the head direction is acquired by the head direction acquisition unit 13.
- the head direction acquisition unit 13 acquires the head direction of the user US as the head direction HD using a geomagnetic sensor or the like.
- the notification sound output determination unit 153 determines that it is appropriate to perform a notification when the angular difference between the head direction HD and the traveling direction TD is within the notification target range.
- the notification sound output determination unit 153 can also determine whether or not to notify based on the direction of head movement. For example, the notification sound output determination unit 153 can determine to notify when the user US is heading in the traveling direction TD, and not to notify when the user US is heading in a direction unrelated to the traveling direction TD.
- the notification range refers to the angular range that is the subject of notification.
- the notification sound output unit 17 outputs a notification sound SD that guides movement in the traveling direction TD when the angular difference between the head direction HD and the traveling direction TD is within the notification range.
- the maximum angular difference at which notification begins is referred to as the maximum notification angle.
- the notification range is the angular range in which the absolute value of the angular difference between the traveling direction TD and the head direction HD is less than or equal to the maximum notification angle.
- the method for setting the notification range is not limited to this.
- the notification range is set arbitrarily by the system developer.
- the notification sound output unit 17 outputs the notification sound SD as a pulse sound.
- the notification sound output unit 17 sets the notification interval of the pulse sound to a magnitude that is inversely proportional to the angular difference between the head direction HD and the traveling direction TD.
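- A sketch of the inverse-proportional pulse interval described above; the proportionality constant and clamping limits are assumed tuning values.

```python
def pulse_interval_s(head_dir_deg, travel_dir_deg,
                     k=30.0, min_interval_s=0.2, max_interval_s=2.0):
    """Pulse-sound notification interval, inversely proportional to the angular
    difference between the head direction HD and the traveling direction TD.

    k (seconds x degrees) and the clamp limits are assumed tuning values: with
    k = 30, a 15-degree error gives the 2.0 s maximum interval and a 90-degree
    error gives about 0.33 s, so the pulses speed up as the error grows."""
    diff = abs((head_dir_deg - travel_dir_deg + 180.0) % 360.0 - 180.0)
    interval = k / max(diff, 1e-3)                 # inverse proportionality
    return min(max(interval, min_interval_s), max_interval_s)
```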
- Figure 26 is a diagram explaining an example of the generation flow of the notification sound SD.
- the travel direction acquisition unit 12 acquires the travel direction TD C of the current course CS C (step S11).
- the head direction acquisition unit 13 acquires the head direction HD of the user US (step S12).
- the angle difference calculation unit 151 calculates the angle difference between the travel direction TD C and the head direction HD (step S13).
- the movement detection unit 152 detects the movement of the head of the user US.
- the notification sound output determination unit 153 detects the direction in which the head moves and determines whether or not to notify based on whether the head is moving in a direction approaching the travel direction TD (step S14).
- the notification sound output determination unit 153 determines whether the angular difference between the traveling direction TD C and the head direction HD is within the notification target range (step S15). When the angular difference is within the notification target range (step S15: Yes), the notification sound output determination unit 153 determines that it is appropriate to perform notification.
- the notification sound output unit 17 outputs the notification sound SD C based on the results of a series of determinations made by the notification sound output determination unit 153. That is, when the angular difference is within the notification target range and the head of the user US is moving in a direction approaching the traveling direction TD C , the notification sound output unit 17 outputs the notification sound SD C that guides the user US to move in the traveling direction TD C.
- If it is determined in step S14 that the head is moving away from the traveling direction TD C (step S14: No), or if the angular difference is outside the notification target range (step S15: No), the notification sound output determination unit 153 determines that it is not appropriate to provide a notification. In this case, the process returns to step S11, and the above-described process is repeated until it is determined that a notification is appropriate.
- FIG. 27 is a diagram illustrating another example of the generation flow of the notification sound SD.
- Step S26 has been added as a condition for determining whether or not to notify.
- Steps S21 to S25 are generally similar to steps S11 to S15. The only difference is that the order of steps S24 to S25 is different from that of steps S14 to S15.
- the time elapsed since the head started to move in the appropriate direction is added as a judgment condition (step S26).
- the notification sound output unit 17 monitors the time elapsed since the judgment results of steps S24 to S25 are issued.
- the notification sound output unit 17 starts outputting the notification sound SD C after a preset time interval has elapsed since the head of the user US starts to move in a direction approaching the traveling direction TD C.
- the time to start outputting the notification sound is set arbitrarily by the system developer.
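- Putting the conditions of this flow together (angular difference within the notification target range, head moving toward the traveling direction, and a preset delay since that motion began), a minimal gate might look like the following; the 45° maximum notification angle and 0.5 s delay are assumed values.

```python
import time

class NotificationGate:
    """Start the notification sound SD_C only when the angular difference is inside
    the notification target range, the head is moving toward the traveling
    direction, and a preset time has passed since that motion began.

    The 45-degree maximum notification angle and 0.5 s delay are assumed values;
    the disclosure leaves both to the system developer."""

    def __init__(self, max_angle_deg=45.0, delay_s=0.5):
        self.max_angle_deg = max_angle_deg
        self.delay_s = delay_s
        self._approach_started_at = None

    def should_notify(self, angle_diff_deg, prev_angle_diff_deg):
        approaching = abs(angle_diff_deg) < abs(prev_angle_diff_deg)   # head turning toward TD
        in_range = abs(angle_diff_deg) <= self.max_angle_deg           # within notification range
        if not (approaching and in_range):
            self._approach_started_at = None
            return False
        if self._approach_started_at is None:
            self._approach_started_at = time.monotonic()
        return time.monotonic() - self._approach_started_at >= self.delay_s
```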
- Figure 28 shows an example of settings for notifications within the notification range.
- the notification frequency, notification volume, and maximum notification angle within the notification target range can be set using the UI.
- the time interval of the pulse sound output as the notification sound SD can be specified as the notification frequency.
- the volume of the notification sound SD can be specified.
- the maximum angular difference at which notifications begin can be specified as the maximum notification angle.
- the notification frequency and notification volume are specified as numerical values with standardized levels ranging from 0 to 1, for example.
- FIG. 29 is a diagram showing an example of notification of the surrounding facilities SF. The following describes mainly the differences from the above embodiment.
- Patent No. 4315211 describes a method in which the user selects an object to be guided from a list of guide candidates by operating the UI of a mobile information terminal, and is notified of the location of the object or its location along the route via stereophonic audio data through headphones.
- voice notifications have few benefits for able-bodied people, as the amount of information they provide is less than when information is obtained visually from the screen of a mobile information terminal used to operate the device.
- the operating procedures are complex, making it difficult to operate the device without relying on vision.
- This modified example has been devised to solve the above-mentioned problems.
- the location and direction of the surrounding facilities SF are notified using stereophonic sound.
- When the user US performs a specific trigger action, detailed information about the surrounding facility SF and a travel route RT are presented. This makes it possible to execute, with fewer operations, processes that previously required complex operations in an app on a mobile information terminal.
- Since trigger actions can be performed using voice or gesture operations, hands-free operation can be realized without the need to handle a separate mobile information terminal or the like.
- the detailed information may include various information related to the surrounding facility SF other than the facility name. If the surrounding facility SF is a shop, examples of detailed information include business hours, sales information, and the distance to the surrounding facility SF. If the surrounding facility SF is a station, examples of detailed information include line information, operation information, and the distance to the surrounding facility SF.
- outdoor facilities such as banks and train stations are shown as the surrounding facilities SF, but the surrounding facilities SF are not limited to outdoor facilities.
- the surrounding facilities SF that are the subject of notifications may also be indoor facilities such as stores in a shopping mall or partitioned sales corners in a mass retailer.
- the user US can select a surrounding facility SF of interest by a preset trigger operation.
- Examples of the trigger operation include turning the head toward the surrounding facility SF (the sound source localization of the facility notification sound SD F ), or performing a voice operation, a button operation, or a gesture operation using the UI (User Interface) 25 while the head is facing the surrounding facility SF.
- When a trigger operation is performed during notification of the facility notification sound SD F (for example, within a certain period of time from the start of notification), detailed information on the surrounding facility SF and a travel route RT to the surrounding facility SF are presented.
- FIG. 30 is a diagram showing an example of the main configuration of information processing device 3 of this modified example.
- FIG. 30 illustrates only the configuration necessary for providing information about nearby facilities, and does not exclude the configurations of the above-mentioned embodiments (the configuration of information processing device 1 shown in FIG. 8 and the configuration of information processing device 2 shown in FIG. 23). The following description focuses on the differences from the above-mentioned embodiments.
- the information processing device 3 has a surrounding information acquisition unit 21, a self-location acquisition unit 22, a route information/facility information acquisition unit 23, an operation determination unit 24, and a UI 25.
- the self-location acquisition unit 22 acquires the location of the user US, such as longitude and latitude, using a GPS/GNSS or the like installed in headphones or a mobile information terminal (client terminal TM) owned by the user US.
- the surrounding information acquisition unit 21 acquires map information around the self-location from the server and creates a list of surrounding facilities SF (notification target facilities) to be notified.
- the surrounding information acquisition unit 21 acquires the self-location and information on the aisle around the facility.
- the surrounding information acquisition unit 21 can customize the surrounding facilities SF to be notified based on the facility usage history of the user US. Customization means selecting and prioritizing the surrounding facilities SF to be notified. For example, the facility usage history specifies the date and time of use by the user US for each surrounding facility SF. The facility usage history is linked to the user US and stored on the server SV. The surrounding information acquisition unit 21 can also customize the surrounding facilities SF to be notified based on the detection frequency of the trigger action for each surrounding facility SF.
- Customization can be done using machine learning models.
- the machine learning model inputs user attributes, usage history, shopping history, etc., and outputs notification settings optimized for the user's habits. For example, for a user who goes to a specific hospital on Monday morning and then to a pharmacy every week, it can be set to prioritize notifications for the hospital and pharmacy on that day.
- One possible scenario would be to use the information that the user has visited the hospital as a trigger to prioritize notifications for the pharmacy.
- Customization can also be performed by referring to the settings of other users.
- the server SV holds facility usage histories of various users who have used the navigation system NV in the past.
- the machine learning model can use the facility usage histories of users with similar attributes to learn the selection and prioritization of surrounding facilities SF to be notified of.
- information on the surrounding facilities SF and surrounding passageways can also be obtained by image recognition processing using a camera.
- information on the surrounding facilities SF and surrounding passageways can be obtained from pattern matching of facility shapes, character recognition on signs, road surface recognition, etc.
- Distance information on the surrounding facilities SF may also be obtained from a depth sensor, etc.
- In this case, the surrounding information acquisition unit 21 is provided on the client terminal TM side, not on the server SV side. Also, by treating the position of the camera as the self-position, the self-location acquisition unit 22 can be omitted.
- the facility notification sound SD F is output for each of the facilities to be notified.
- the order of the peripheral facilities SF to be notified can be set arbitrarily.
- the notification sound output unit 17 can output the facility notification sounds SDF of multiple peripheral facilities SF in the order of the furthest or closest distance between the peripheral facility SF and the user US.
- the user US selects and determines the desired peripheral facility SF using voice, gestures, etc. as a selection and determination operation.
- the operation determination unit 24 detects a selection and determination operation that specifies the direction of the sound source position of the facility notification sound SDF as a trigger operation.
- the selection/decision operation may include a selection operation and a decision operation.
- the selection operation is an operation for selecting the direction of the sound source localization of the facility notification sound SDF .
- the decision operation is an operation for deciding a peripheral facility SF having a sound source localization in the selected direction as a desired peripheral facility SF.
- the decision operation may include a trigger operation for causing the information processing device 3 to carry out a necessary process (such as reading out detailed information or route guidance) for the decided peripheral facility SF.
- the selection operation may include an action of turning the head of the user US in the direction of the sound source localization of the facility notification sound SDF within a predetermined time from the emission of the facility notification sound SDF.
- the operation determination unit 24 can determine that the selection operation has been performed when the sound source localization comes to be positioned in front of the face or an ear of the user US within a certain time from the start of the notification by the facility notification sound SDF.
- the decision operation can include a predetermined button operation, a voice operation, and a gesture operation.
- the operation determination unit 24 can determine whether or not a decision operation has been performed based on sensor information (such as a touch panel or microphone signal).
- the decision operation can also be performed as an action of maintaining the position of the head facing the direction of the sound source localization for a predetermined period of time during the selection operation. This allows the selection and decision of a surrounding facility SF to be performed with a single action of turning the face in the direction of the surrounding facility SF. In order to distinguish this from the face just happening to be facing in the direction of the surrounding facility SF, a requirement that the up and down posture of the face be close to horizontal can also be added to the requirements for the decision operation.
- the decision operation may include an operation by the user US in response to a notification of detailed information about the surrounding facility SF made in response to a selection operation. For example, in response to a decision operation made during notification of detailed information (e.g., within a certain period of time from the start of notification), the route planning unit 11 may plan a travel route RT with the notified surrounding facility SF as the destination. In this case, the decision operation may include an action of starting to move toward the surrounding facility SF. This allows for intuitive operation.
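One way the operation determination unit 24 might realise the head-direction selection and the dwell-based decision described above is sketched below; the time window, angle tolerance, dwell time, and near-horizontal pitch requirement are illustrative values, not values taken from the disclosure.

```python
def angle_diff_deg(a, b):
    """Smallest absolute difference between two headings, in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def detect_select_and_decide(samples, target_yaw_deg,
                             select_window_s=5.0, select_tol_deg=20.0,
                             dwell_s=2.0, pitch_tol_deg=15.0):
    """samples: time-ordered (t_seconds, yaw_deg, pitch_deg) head poses,
    with t = 0 at the start of the facility notification sound SDF.
    Returns (selected, decided)."""
    selected_at = None
    for t, yaw, pitch in samples:
        facing = angle_diff_deg(yaw, target_yaw_deg) <= select_tol_deg
        if selected_at is None:
            # Selection: turn toward the sound source soon enough.
            if facing and t <= select_window_s:
                selected_at = t
        else:
            # Decision: keep facing it, roughly level, for dwell_s seconds.
            level = abs(pitch) <= pitch_tol_deg
            if not (facing and level):
                selected_at = None          # dwell broken; selection re-armed
            elif t - selected_at >= dwell_s:
                return True, True
    return selected_at is not None, False
```

A button press, voice command, or gesture would simply short-circuit the dwell check in such a scheme.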
- the navigation system NV may also be equipped with a mechanism for earning advertising revenue when the user US selects or visits a surrounding facility SF.
- the route information and facility information acquisition unit 23 reads information related to the facility to be notified from the storage device on the server SV or the client terminal TM.
- the route information and facility information acquisition unit 23 reads route information for route guidance from the storage device on the server SV or the client terminal TM.
- Figure 31 shows an example of a processing flow for providing information about nearby facilities.
- the surrounding information acquisition unit 21 acquires the position of the surrounding facility SF (notification target facility) that is the subject of notification from the map information stored in the server SV (step S31).
- the self-position acquisition unit 22 acquires the position of the head of the user US as the self-position using a GPS or the like (step S32).
- the head direction acquisition unit 13 acquires the direction of the head of the user US using a posture sensor (geomagnetic sensor, etc.) mounted on the head-worn device (step S33).
- the head direction of the user US can also be obtained from the image of a surveillance camera installed around the user US.
- the server SV detects the head direction from the image of the user US captured by the surveillance camera and outputs it to the head direction acquisition unit 13.
- the head direction of the user US can also be obtained from the image of a camera mounted on the neckband.
- the sound source localization calculation unit 14 acquires the position and orientation of the head of the user US as user position information.
- the sound source localization calculation unit 14 calculates the direction and distance to the notification target facility from the user position information and the position information of the notification target facility (step S34).
- the sound source localization calculation unit 14 sets a position away from the user US by the calculated direction and distance as the sound source localization of the facility notification sound SDF .
- the notification sound output unit 17 notifies the user US of the facility names of the surrounding facilities SF, etc., using the facility notification sounds SDF generated as stereophonic sounds (step S35). If music or other notification sounds are being played when making the notification, the volume adjustment unit 16 may mix the facility notification sounds SDF with sounds other than the facility notification sounds SDF so that sounds other than the facility notification sounds SDF are reduced.
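To make steps S31 to S35 concrete, the sketch below computes the direction and distance from the user's head pose to a notification target facility, expressed in a head-relative frame suitable for stereophonic rendering, and lowers other audio while the facility notification sound plays. Coordinates are treated as a flat local x/y frame in metres, and every name and value here is an assumption for illustration only.

```python
import math

def facility_sound_localization(user_xy, head_yaw_deg, facility_xy):
    """Return (relative_bearing_deg, distance_m) of a notification target
    facility as seen from the user's head.
    0 deg = straight ahead, positive = to the user's right."""
    dx = facility_xy[0] - user_xy[0]
    dy = facility_xy[1] - user_xy[1]
    distance = math.hypot(dx, dy)
    world_bearing = math.degrees(math.atan2(dx, dy))   # 0 deg = +y ("north")
    relative = (world_bearing - head_yaw_deg + 180.0) % 360.0 - 180.0
    return relative, distance

def duck_other_audio(music_gain, notifying, ducked_gain=0.3):
    """While a facility notification sound SDF is playing, lower other sounds
    (music, other notification sounds) so the facility name stays audible."""
    return min(music_gain, ducked_gain) if notifying else music_gain
```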
- the operation determination unit 24 determines whether or not a trigger operation related to the selection and determination of the surrounding facility SF has occurred based on the sensor information. For example, the operation determination unit 24 determines whether or not the user US has turned his/her face or the like in the direction of the surrounding facility SF being notified as a selection operation during notification of the facility notification sound SDF (step S36). If no selection operation is detected (step S36: No), the information processing device 3 does not provide surrounding facility guidance.
- If a selection operation is detected (step S36: Yes), the information processing device 3 notifies (feeds back) by sound or vibration that the user US is facing the direction of the notified surrounding facility SF (step S37).
- As the feedback means, the notification sound output unit 17 or a vibrator mounted on the head-mounted device can be used.
- the feedback means provides feedback to the user US when the direction of the head of the user US is within a predetermined angle range centered on the direction of the sound source localization of the facility notification sound SDF .
- the operation determination unit 24 determines whether or not a trigger operation that serves as a decision operation has been performed during the notification (step S38). Decision operations include voice operation, button operation, and gesture operation. If a trigger operation is not detected (step S38: No), the information processing device 3 does not provide guidance to nearby facilities.
- If a trigger operation is detected (step S38: Yes), the information processing device 3 performs processing according to the trigger operation. Examples of the processing include reading out detailed information about the surrounding facility SF and providing route guidance to the surrounding facility SF (step S39).
- the route planning unit 11 responds to the trigger action of the user US in response to the facility notification sound SDF indicating the location of the surrounding facility SF, and plans a travel route RT having the surrounding facility SF as a destination.
- Figures 32 to 35 show examples of notifications for surrounding facilities SF.
- the sound source location of the facility notification sound SDF is set on the passage PA facing the surrounding facility SF.
- the sound source location of the facility notification sound SDF can be set on the user US's own passage PAC.
- the user US's own passage PAC means the passage PA through which the user US is moving.
- the passage PA in the straight direction becomes the user US's own passage PAC.
- the notification sound output unit 17 presents the facility notification sound SDF of the surrounding facility SF facing the user US's own passage PAC to the user US on the own passage PAC.
- the user US can easily grasp the position of the surrounding facility SF.
- the user US can move toward the surrounding facility SF by relying on the facility notification sound SDF .
- the sound source localization of the facility notification sound SDF is set on the passage PA intersecting with the user's own passage PAC.
- the notification sound output unit 17 presents the facility notification sound SDF when the user US reaches the intersection between the passage PA where the sound source localization is set and the current own passage PAC.
- the sound source location of the facility notification sound SDF is set on a search route SP searched as a route from the surrounding facility SF to the user US.
- the sound source location of the facility notification sound SDF is relocated from the position of the surrounding facility SF registered in a map database or the like to a position on the search route SP that is a branch point from the own passage PAC when accessing the surrounding facility SF.
- the facility notification sound SDF is notified as a stereophonic sound having a sound source localization at a position on an overlapping passage OV where the search route SP and the user's own passage PAC of the user US overlap.
- the overlapping portion between the user's own passage PAC and the search route SP to the university is shown as an overlapping passage OV1.
- the overlapping portion between the user's own passage PAC and the search route SP to the bank or station is shown as an overlapping passage OV2.
- the sound source localization of facility notification sounds SDF for multiple peripheral facilities SF is set on the overlapping passage OV2.
- the order of notification of each facility notification sound SDF can be set according to the distance to the peripheral facility SF.
- the notification sound output unit 17 can output the facility notification sounds SDF for multiple peripheral facilities SF whose sound source localization is set on the overlapping passage OV2 in order of the furthest or closest distance between the peripheral facility SF and the user US.
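As a small sketch of this ordering, the facilities whose sounds share the overlapping passage can simply be sorted by their distance from the user before playback; the data shapes below are illustrative.

```python
import math

def notify_in_distance_order(user_xy, facilities, nearest_first=True):
    """facilities: list of dicts such as {"name": "Bank", "xy": (x, y)}.
    Returns facility names in the order in which their facility notification
    sounds SDF would be presented."""
    def dist(f):
        return math.hypot(f["xy"][0] - user_xy[0], f["xy"][1] - user_xy[1])
    return [f["name"] for f in sorted(facilities, key=dist,
                                      reverse=not nearest_first)]
```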
- the sound source localization calculation unit 14 sets the sound source localization of the facility notification sound SDF to the branch portion branching from the own passage PAC along the search route SP.
- the sound source localization of the facility notification sound SDF is set to the periphery of the surrounding facility SF.
- the sound source localization may be set anywhere in the periphery, but if it is set near the entrance of the surrounding facility SF, it is easier to enter the surrounding facility SF. Therefore, it is preferable that the sound source localization calculation unit 14 sets the sound source localization of the facility notification sound SDF on the passage PA facing the entrance of the surrounding facility SF.
- the position of the entrance of the surrounding facility SF can be obtained from the map information stored in the server SV. If the position of the entrance is not registered in the map information, the server SV can estimate the position of the entrance based on information on the flow of visitors to the surrounding facility SF and register it in the map information.
- a visitor means a past user US who has visited the surrounding facility SF using the navigation system NV. The flow of the user US is calculated based on the user position information transmitted to the server SV.
- the sound source localization calculation unit 14 can set the sound source localization of the facility notification sound SDF based on the position of the entrance detected based on the flow of visitors to the surrounding facility SF.
- the own passage PAC is curved.
- the sound source localization calculation unit 14 can set the sound source localization of the facility notification sound SDF to a position where a straight line connecting the user US and the sound source localization of the facility notification sound SDF does not go beyond the own passage PAC . This prevents the user US from going off the path even if he or she moves straight toward the sound source localization.
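A sketch of one way to honour this constraint: approximate the passage by its centre line and a half-width, sample the straight segment from the user to a candidate localization, and fall back to a nearer point on the path if the segment leaves the corridor. The geometry helpers and the corridor model are assumptions, not the method defined in the disclosure.

```python
import math

def point_to_polyline_dist(p, polyline):
    """Shortest distance from point p to a polyline [(x, y), ...]."""
    if len(polyline) < 2:
        return math.hypot(p[0] - polyline[0][0], p[1] - polyline[0][1])
    best = float("inf")
    for (ax, ay), (bx, by) in zip(polyline, polyline[1:]):
        abx, aby = bx - ax, by - ay
        apx, apy = p[0] - ax, p[1] - ay
        denom = (abx * abx + aby * aby) or 1e-9
        t = max(0.0, min(1.0, (apx * abx + apy * aby) / denom))
        cx, cy = ax + t * abx, ay + t * aby
        best = min(best, math.hypot(p[0] - cx, p[1] - cy))
    return best

def segment_stays_in_corridor(user_xy, target_xy, centerline, half_width, steps=20):
    """True if every sample on the straight user->target segment stays within
    half_width of the passage centre line."""
    for i in range(steps + 1):
        t = i / steps
        p = (user_xy[0] + t * (target_xy[0] - user_xy[0]),
             user_xy[1] + t * (target_xy[1] - user_xy[1]))
        if point_to_polyline_dist(p, centerline) > half_width:
            return False
    return True

def pick_localization_on_path(user_xy, path_points, centerline, half_width):
    """Walk candidate points on the path from far to near and return the
    farthest one that can be reached in a straight line without leaving
    the corridor; fall back to the user position if none qualifies."""
    for candidate in reversed(path_points):
        if segment_stays_in_corridor(user_xy, candidate, centerline, half_width):
            return candidate
    return user_xy
```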
- FIG. 36 shows an example of a processing flow for setting the sound source localization in FIG. 34.
- the surrounding information acquisition unit 21 acquires the location of the surrounding facility SF and information on the passage around the surrounding facility SF from a map database of the server SV or the like (step S41).
- the self-location acquisition unit 22 acquires the self-location from sensor information or the like.
- the self-location acquisition unit 22 also acquires information on the self-path PAC from the self-location and the passage information (step S42).
- the head direction acquisition unit 13 acquires the head direction of the user US from the sensor information or the like (step S43).
- the sound source localization calculation unit 14 determines the sound source localization of the facility notification sound SDF on the passage PA facing the surrounding facility SF (step S44). The sound source localization calculation unit 14 determines whether the sound source localization of the facility notification sound SDF exists on the own passage PAC (step S45). If the sound source localization of the facility notification sound SDF exists on the own passage PAC (step S45: Yes), the notification sound output unit 17 outputs the facility notification sound SDF.
- If the sound source location of the facility notification sound SDF is not on the own passage PAC (step S45: No), the sound source location calculation unit 14 relocates the sound source location of the facility notification sound SDF to a position on the search route SP that is close to the own passage PAC and serves as a branch point when accessing the surrounding facility SF (step S47).
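For the relocation in step S47, one simple approximation is to take the last point of the search route SP that still lies close to the user's own passage PAC and use it as the sound source localization; the tolerance and data shapes below are assumptions for illustration.

```python
import math

def branch_point(search_route, own_passage, tol_m=2.0):
    """search_route and own_passage are polylines [(x, y), ...], with
    search_route ordered from the user toward the surrounding facility SF.
    Returns the last route vertex still within tol_m of an own-passage vertex,
    approximating where the route branches off the own passage PAC."""
    def near_passage(p):
        return any(math.hypot(p[0] - q[0], p[1] - q[1]) <= tol_m
                   for q in own_passage)
    last = search_route[0]
    for p in search_route:
        if near_passage(p):
            last = p
        else:
            break
    return last
```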
- the present technology can also be configured as follows.
- (1) An information processing device having: a direction acquisition unit that acquires a direction of travel of a user's route; and a sound source localization calculation unit that calculates a position a predetermined distance away from the user in the traveling direction as a sound source localization of a notification sound that guides the user to move in the traveling direction.
- the sound source localization calculation unit calculates a position away from the user by the predetermined distance in the traveling direction as a sound source localization of the notification sound based on a position and orientation of the user's head.
- a route planning unit that plans a route to a destination
- the traveling direction acquisition unit acquires a plurality of routes partitioned along the travel route, and acquires, for each of the partitioned routes, a direction along the route as the traveling direction of the route.
- the information processing device according to (1) or (2) above.
- the distance from the user to the sound source location of the notification sound is constant regardless of the position of the user.
- An information processing device according to any one of (1) to (3) above.
- the traveling direction acquisition unit acquires a traveling direction of a next route following the route
- the sound source localization calculation unit calculates a position a predetermined distance away from the user in a traveling direction of the next route as a sound source localization of a notification sound that guides the user to move to the next route.
- An information processing device according to any one of (1) to (4) above.
- (6) The distance from the user to the sound source location of the notification sound that guides the user to move to the next route is constant regardless of the user's position.
- the information processing device according to (5) above.
- (7) a volume adjustment unit that adjusts volumes of the two notification sounds related to the current course and the next course based on a distance from a midpoint that is a boundary between the current course and the next course to the user, and generates a notification sound obtained by mixing the two notification sounds whose volumes have been adjusted as a notification sound that induces switching from the current course to the next course;
- the information processing device according to (5) or (6) above.
- the volume adjustment unit increases the volume of the notification sound that guides the user to move to the next route as the user approaches the waypoint on the current route.
- the information processing device according to (7) above.
- (9) the volume adjustment unit increases the volume of the notification sound that guides the user to move in the traveling direction of the next route as the user moves away from the waypoint on the next route.
- the information processing device according to (8) above.
- (10) The volume adjustment unit reduces the volume of the notification sound that guides the user to move in a traveling direction of the current course as the user approaches the waypoint on the current course.
- the sound source localization calculation unit calculates a position at the predetermined distance in the direction of the midpoint as the sound source localization of the notification sound when the user passes the midpoint and moves in a direction different from the next route.
- An information processing device according to any one of (7) to (11) above.
- (13) a sound generating unit that, when the moving direction of the user is within a predetermined angle range centered on the traveling direction, adds a sound effect to the notification sound that guides the user to move in the traveling direction;
- the information processing device according to any one of (1) to (12) above.
- (14) the sound source localization calculation unit calculates a position on the boundary between the warning area and the danger area as the sound source localization of the warning sound.
- (15) the sound source localization calculation unit calculates a planar area along the boundary between the warning area and the danger area as the sound source localization of the warning sound.
- the information processing device according to (14) above.
- the warning area is an area that is a certain distance or more away from the flow line that is the center of the path
- the danger area is an area that is a certain distance or more away from the warning area toward the outside.
- the information processing device according to (14) or (15) above.
- (17) the sound source localization calculation unit identifies a dangerous area based on sensor information and sets the warning area and the danger area.
- the information processing device according to any one of (14) to (16) above.
- (18) an angle difference calculation unit that calculates an angle difference between a direction of the user's head and the traveling direction; and a notification sound output unit that outputs the notification sound for guiding movement in the traveling direction when the angular difference is within a notification range;
- (19) the notification sound output unit outputs the notification sound as a pulse sound, and increases the frequency of the pulse sound notification as the angle difference becomes smaller.
- (20) a motion detection unit that detects a motion of the user's head, wherein the notification sound output unit outputs the notification sound for guiding the user to move in the traveling direction when the angular difference is within a notification target range and the user's head is moving in a direction approaching the traveling direction.
- the information processing device according to (18) or (19) above.
- (21) the notification sound output unit starts outputting the notification sound after a preset time interval has elapsed since the user's head starts to move in a direction approaching the traveling direction. The information processing device according to (20) above.
- the route planning unit in response to a trigger operation by the user in response to a facility notification sound indicating a location of a peripheral facility, plans the travel route with the peripheral facility as the destination;
- the facility notification sound is notified as a stereophonic sound having a sound source localization on the location of the peripheral facility or on the passageway facing the peripheral facility.
- the sound source localization calculation unit sets the sound source localization of the facility notification sound on the passageway facing the entrance of the surrounding facility.
- the sound source localization calculation unit sets the sound source localization of the facility notification sound based on a position of the entrance detected based on a flow line of visitors to the peripheral facility.
- (26) an operation determination unit that detects a selection/determination operation that specifies a direction of the sound source localization of the facility notification sound as the trigger operation;
- the selection/confirmation operation includes a selection operation for selecting a direction of the sound source localization of the facility notification sound, and a confirmation operation by the user in response to a notification of detailed information of the surrounding facility performed in response to the selection operation.
- the selection operation includes an action of directing the user's head in the direction of the sound source localization of the facility notification sound within a predetermined time after the facility notification sound is emitted,
- (29) a feedback means for providing feedback to the user when the direction of the user's head is within a predetermined angle range centered on the direction of the sound source localization of the facility notification sound;
- the decision operation includes an action of starting to move toward the surrounding facility.
- the information processing device according to any one of (27) to (29) above.
- (31) a surrounding information acquisition unit that customizes the surrounding facilities to be notified based on the facility usage history of the user;
- (32) the surrounding information acquisition unit customizes the surrounding facilities to be notified based on a detection frequency of the trigger operation for each surrounding facility.
- the facility notification sound is notified as a stereophonic sound having a sound source localization at a position on an overlapping path where a search route searched as a route from the peripheral facility to the user and a self-path on which the user is moving overlap.
- the sound source localization calculation unit sets the sound source localization of the facility notification sound to a branching portion that branches off from the own passage along the search route.
- (37) A computer-implemented information processing method comprising: obtaining the user's direction of travel; and calculating a position that is a predetermined distance away from the user in the traveling direction as a sound source localization of a notification sound that guides the user to move in the traveling direction.
- (38) A program that causes a computer to execute: obtaining the user's direction of travel; and calculating a position that is a predetermined distance away from the user in the traveling direction as a sound source localization of a notification sound that guides the user to move in the traveling direction.
- A navigation system having: a sensor unit for detecting a position and a direction of a user's head; the information processing device according to any one of (1) to (36) above, which calculates a position a predetermined distance away from the user in a traveling direction of the user's path, based on the position and direction of the user's head, as a sound source localization of a notification sound that guides the user to move in the traveling direction; and a speaker that reproduces a sound image of the notification sound at the calculated sound source localization.
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Navigation (AREA)
- Stereophonic System (AREA)
Abstract
Description
本発明は、情報処理装置、情報処理方法およびプログラムに関する。 The present invention relates to an information processing device, an information processing method, and a program.
音声を用いたナビゲーションシステムがカーナビやスマートフォンなどで広く普及している。この種のシステムでは、GPSなどを用いて現在位置が検出され、目的地までの進路が音声によって通知される。 Voice navigation systems are widely used in car navigation systems and smartphones. In this type of system, the current location is detected using GPS or other devices, and the route to the destination is notified by voice.
方向や距離の感覚は人によってばらつきがある。そのため、進路を言葉で正確に伝えることは難しい。目的地までの経由地や案内施設等に音源を設定し、音源から誘導音を発することも考えられるが、自己位置に誤差があると、正しく誘導が行われないという不具合が生じる。 The sense of direction and distance varies from person to person. This makes it difficult to communicate one's course accurately in words. One option would be to set up sound sources at intermediate points or guide facilities along the way to the destination and have the sound emit guidance sounds, but if there is an error in the self-position, this could result in incorrect guidance.
そこで、本開示では、音声によってユーザを精度よく誘導することが可能な情報処理装置、情報処理方法およびプログラムを提案する。 In view of this, this disclosure proposes an information processing device, an information processing method, and a program that can guide a user accurately by voice.
本開示によれば、ユーザの進路の進行方向を取得する進行方向取得部と、前記ユーザから前記進行方向に所定の距離だけ離れた位置を、前記進行方向への移動を誘導する通知音の音源定位として算出する音源定位算出部と、を有する情報処理装置が提供される。また、本開示によれば、前記情報処理装置の情報処理がコンピュータにより実行される情報処理方法、ならびに、前記情報処理装置の情報処理をコンピュータに実現させるプログラムが提供される。 According to the present disclosure, there is provided an information processing device having a direction of travel acquisition unit that acquires the direction of travel of a user's path, and a sound source localization calculation unit that calculates a position a predetermined distance away from the user in the direction of travel as the sound source localization of a notification sound that guides movement in the direction of travel. In addition, according to the present disclosure, there is provided an information processing method in which the information processing of the information processing device is executed by a computer, and a program for causing a computer to realize the information processing of the information processing device.
以下に、本開示の実施形態について図面に基づいて詳細に説明する。以下の各実施形態において、同一の部位には同一の符号を付することにより重複する説明を省略する。 Below, embodiments of the present disclosure will be described in detail with reference to the drawings. In each of the following embodiments, the same parts will be designated by the same reference numerals, and duplicated descriptions will be omitted.
なお、説明は以下の順序で行われる。
[1.概要]
[2.情報処理装置の構成]
[3.情報処理方法]
[4.通知音のミキシング例]
[5.効果音]
[6.警告音]
[7.システム構成例]
[8.通知音選択UI]
[9.ハードウェア構成例]
[10.効果]
[11.変形例]
[12.周辺施設案内に関する変形例]
The explanation will be given in the following order.
[1. Overview]
2. Configuration of information processing device
3. Information Processing Method
[4. Notification sound mixing example]
5. Sound Effects
6. Warning Sound
7. System configuration example
[8. Notification sound selection UI]
[9. Hardware Configuration Example]
[10. Effects]
11. Modifications
[12. Modifications regarding surrounding facility guidance]
[1.概要]
地図上で目的地を設定し、地図表示や音声で目的地まで誘導するナビゲーションシステムが広く普及している。しかし、地図表示の場合は、地図を見ながら移動している間に前方の物体等への注意が緩慢になる可能性がある。音声による経路案内も広く普及しているが、音声ガイドでは、頻繁に音声を出すと煩わしいため数十m~数百m程度の間隔でガイド情報を提示するのが一般的であり、かつ限られた情報しか通知できない。このため、音声による経路案内には以下のような問題があった。
[1. Overview]
Navigation systems that allow users to set a destination on a map and guide them to the destination by map display or voice are widely used. However, when using a map display, there is a possibility that users may not pay enough attention to objects ahead while driving while looking at the map. Voice route guidance is also widely used, but voice guidance generally presents guide information at intervals of tens to hundreds of meters because frequent voice announcements can be irritating, and it can only provide limited information. For these reasons, voice route guidance has the following problems:
図1ないし図7は従来のナビゲーションシステムを説明する図である。 Figures 1 to 7 are diagrams explaining conventional navigation systems.
まず、図1に示すように、音声案内の提示した距離と、移動者(ユーザUS)の感覚に基づいた実際の移動距離が異なる場合があるという課題がある。図1には、「100m先を右に曲がります」という音声が流れてから、進んだ距離を正確に把握できず、実際に曲がりたい道(正しい進路CSR)の近傍の違う道(誤った進路CSE)に入り込んでしまう例が示されている。特に視覚障がい者は道と道以外、例えば道でなく駐車場の入り口かどうかの識別が難しいため、この問題は頻繁に発生してしまう。 First, as shown in Fig. 1, there is a problem that the distance presented by the voice guidance may differ from the actual distance traveled based on the sense of the traveler (user US). Fig. 1 shows an example in which the user is unable to accurately grasp the distance traveled after hearing the voice "Turn right 100 m ahead" and ends up on a different road (incorrect route CS E ) near the road he or she actually wants to turn onto (correct route CS R ). This problem occurs frequently, especially for visually impaired people, who have difficulty distinguishing between roads and non-roads, for example, between the entrance to a parking lot and not a road.
図2に示すように、曲がり道の詳細な角度を提示できないという課題もある。「右です」のような音声では、どの程度の曲率の曲がり道なのかを示すことができない。このため、例えば曲率の少ない曲がり道で「右です」と案内し、移動者のイメージより実際の道(実際の進路CSA)が曲がっていない場合(もしくはその逆の場合)、移動者に対し混乱を生じさせる要因となり得る。特に視覚障がい者は動線が見えないので、「右です」と案内されただけではどのくらい曲がったら良いかが分からない。 As shown in Figure 2, there is also a problem that the detailed angle of the turn cannot be presented. A voice such as "Turn right" cannot indicate the degree of curvature of the turn. For this reason, for example, if "Turn right" is given on a turn road with little curvature, and the actual road (actual course CS A ) is not as curved as the traveler imagines (or vice versa), this can cause confusion for the traveler. In particular, visually impaired people cannot see the flow of traffic, so they cannot know how far they should turn if they are only told "turn right."
図3に示すように、次の音声案内までに自己位置や進行方向TDを見失ってしまうという課題もある。スマホを見ながら歩くなど何かに集中している状態、また開けた場所や目印の少ない場所などで、直前の音声案内に対して自分がどの程度、どの方向に進んだかを見失ってしまう場合がある。特に視覚障がい者の場合、道が広いと真っすぐ進んでいるつもりでも曲がってしまう「偏軌」と呼ばれる現象が生じることが知られている。 As shown in Figure 3, there is also the issue of losing track of one's own position and traveling direction TD before the next voice guidance. When concentrating on something, such as walking while looking at a smartphone, or in an open area or place with few landmarks, there are cases where one loses track of how far and in which direction one has traveled in response to the previous voice guidance. In particular, it is known that visually impaired people can experience a phenomenon called "deviation of track" when the road is wide, causing them to turn even when they think they are going straight.
このような問題を解決するため、定位のある立体音響で目的地の方位と距離を通知する方法が提案されている。例えば、特開2020-188494号公報では、目的地までの方位と距離を定位として持った音源で通知する方法が提案されている。しかし現在地から目的地の間には多くの場合、建物等の遮蔽物があり、目的地の方位と実際の経路の進行方向とは異なるのが普通である。このため目的地までの方位が分かっても遮蔽物を迂回したルートを自身で探して移動しなければならない。また目的地の距離が遠い場合、目的地までの距離を音だけで知覚することは難しい。 To solve these problems, a method has been proposed to notify the direction and distance to the destination using localized stereophonic sound. For example, JP 2020-188494 A proposes a method of notifying the direction and distance to the destination using a sound source with a localized position. However, there are often obstacles such as buildings between the current location and the destination, and it is common for the direction to the destination to differ from the direction of travel on the actual route. For this reason, even if the direction to the destination is known, the user must find a route that bypasses the obstacles and move around it. Furthermore, if the destination is far away, it is difficult to perceive the distance to the destination from sound alone.
これに対し、図4に示すように、地図上に設定した移動経路RT(図6参照)上の特定の位置(ナビゲーションポイント)に音源SCを配置し、地図上の移動体の自己位置と音源SCの位置関係から定位を求め、立体音響で移動経路RT上の進行方向を通知する方法がある。しかしその場合、以下の問題がある。 In response to this, as shown in Figure 4, there is a method in which a sound source SC is placed at a specific position (navigation point) on a travel route RT (see Figure 6) set on a map, the position is determined from the positional relationship between the moving object's own position on the map and the sound source SC, and the direction of travel on the travel route RT is notified using stereophonic sound. However, this method has the following problems.
まず、自己位置や地図情報の精度が悪い場合、実際の進行方向とは異なる方向から通知音が聞こえてしまう場合がある(図4参照)。また、通知音の音量が近づいても大きすぎず、離れていても小さすぎないような適当な通知音の音量、およびナビゲーションポイントの配置間隔を設定することが難しい(図5参照)。さらに、ナビゲーションポイントが移動者の視聴可能範囲内に複数ある場合、互いの通知音が干渉してしまうという課題が生じる(図6参照)。 First, if the accuracy of the user's own position or map information is poor, the notification sound may be heard from a direction different from the actual direction of travel (see Figure 4). It is also difficult to set an appropriate notification sound volume that is not too loud when approaching and not too quiet when far away, as well as the spacing between navigation points (see Figure 5). Furthermore, if there are multiple navigation points within the moving person's hearing range, the notification sounds may interfere with each other, which is an issue (see Figure 6).
図6の課題については、特開2013-47653号公報において、開始地点の音声と目的地点の音声を交互に通知する、いわゆる「異種鳴き交わし方式」の音響信号と同様の通知を立体音響で通知する手法が提案されている。 With regard to the issue in Figure 6, JP 2013-47653 A proposes a method of using stereophonic audio to provide notifications similar to the so-called "cross-talk" audio signals, which alternate between audio of the starting point and audio of the destination point.
また特開2002-5675号公報では、移動者の自己位置が最も近いナビゲーションポイントから通知音が鳴るようにし、そのナビゲーションポイントに到達すると、通知音の音源定位を次のナビゲーションポイントに移動させることで、移動経路RTに沿った進行方向を一つの通知音でガイドする仕組みが提案されている。しかしこの方法でも図4の課題は解決されない。 In addition, Japanese Patent Application Laid-Open No. 2002-5675 proposes a mechanism in which a notification sound is emitted from the navigation point where the traveler's own position is closest, and when that navigation point is reached, the sound source location of the notification sound is moved to the next navigation point, thereby providing guidance along the travel route RT with a single notification sound. However, this method does not solve the problem in Figure 4.
また例えば、図7に示すように、ナビゲーションポイントが曲がり角付近にあるような場合、現在の進路CSC上に音源SCCを配置し、次進路CSN付近に到達したときに次進路CSN上に次進路CSNへの誘導を行うための音源SCNを配置することが考えられる。この場合、位置誤差の影響で例えば移動者の自己位置が曲がり角に達する前に音源定位が曲がり角の先の方向に移動してしまうと、曲がり角までの進行方向が分からなくなる、音源SCの方向と移動経路RT上の進行方向がずれてしまう、といった問題が生じる可能性がある。 7, for example, when the navigation point is near a corner, it is possible to place a sound source SC C on the current route CS C , and when the navigation point reaches the vicinity of the next route CS N , place a sound source SC N on the next route CS N for guiding the user to the next route CS N. In this case, if the sound source localization moves in the direction beyond the corner before the user's own position reaches the corner due to the influence of a position error, problems may occur such as the user being unable to determine the direction of travel up to the corner, or the direction of the sound source SC being misaligned with the direction of travel on the travel route RT.
本開示は、上述した課題に鑑みてなされたものである。本開示は、音声によってユーザを精度よく誘導することが可能なナビゲーションシステムの提供を目的とする。例えば、本開示に係る情報処理装置は、以下の手法により上記課題の一部または全部を解決することを特徴の一つとする。 The present disclosure has been made in consideration of the above-mentioned problems. The present disclosure aims to provide a navigation system that can guide a user with high accuracy by voice. For example, one of the features of the information processing device according to the present disclosure is that it solves some or all of the above-mentioned problems by using the following method.
I.立体音響による通知音により、音声案内と比べて進行方向を分解能高く伝える。
II.移動者と常に一定距離を保った定位から通知音を発し、進行方向のみ伝えることで、自己位置や地図情報の精度が悪い場合でも正しい進行方向を伝える。
III.現在の進行方向を伝える音源と次の進行方向を伝える音源の音量とメロディの変化から次の進行方向、現在の進行方向、曲がり角の位置を伝える。
I. The notification sound using stereophonic sound conveys the direction of travel with higher resolution than voice guidance.
II. By emitting a notification sound from a fixed position that always maintains a constant distance from the moving person and conveying only the direction of travel, the correct direction of travel can be conveyed even if the accuracy of the self-location and map information is poor.
III. The next direction, the current direction, and the position of the corner are conveyed by changes in the volume and melody of the sound source conveying the current direction and the sound source conveying the next direction.
[2.情報処理装置の構成]
図8は、情報処理装置1の構成の一例を示す図である。
2. Configuration of information processing device
FIG. 8 is a diagram showing an example of the configuration of the information processing device 1.
情報処理装置1は、GPS(Global Positioning System)、GNSS(Global Navigation Satellite System)、ビーコン、カメラ、ジャイロセンサ、地磁気センサなどの各種センサから取得したセンサ情報に基づいて、目的地までのナビゲーション情報を生成する。これらのセンサは、ユーザUSの頭部の位置および向きを検出するためのセンサ部として機能する。ナビゲーション情報は、ユーザUSを誘導するための通知音、および、ユーザUSに注意喚起を行うための警告音などの音響情報を含む。
The
詳細は後述するが、情報処理装置1は、移動経路を複数の進路に区画し、進路ごとに、進行方向、通知音および音源を設定する。以下の説明では、進路、進行方向、通知音および音源をそれぞれ符号「CS」、「TD」、「SD」および「SC」で表す。進行方向TD、通知音SDおよび音源SCを進路CSごとに区別する場合には、それぞれの符号に番号または文字を付して区別を行う。
Details will be described later, but the
情報処理装置1は、経路計画部11、進行方向取得部12、頭部方向取得部13、音源定位算出部14,音響生成部15、音量調整部16および通知音出力部17を有する。
The
経路計画部11は、ユーザUSの頭部の位置および向きをユーザ位置情報として取得する。ユーザ位置情報は、例えば上述のGPS、GNSS、またはカメラを用いたVPS(Visual Positioning System)による絶対位置の計測、および、必要に応じてPDR(Pedestrian Dead Reckoning)等による相対位置推定手法を用いてセンサ情報から検出される。経路計画部11は、ユーザ位置情報に基づいて、目的地までの移動経路RTを計画する。頭部方向取得部13は、ユーザUSの頭部の向きをユーザUSの頭部方向として取得する。
The
進行方向取得部12は、ユーザUSの進路CSの進行方向TDを取得する。進路CSの進行方向TDとは、進路CSの中心を通る動線の延びる方向を意味する。例えば、進行方向取得部12は、移動経路RTの曲がり角または曲率などの情報に基づいて、移動経路RTを複数の進路CSに区画する。進路CSは、曲率が許容基準を満たす概ね直線状の経路である。許容基準はシステム開発者により任意に設定される。進行方向取得部12は、移動経路RTに沿って区画された複数の進路CSを取得する。進行方向取得部12は、区画された進路CSごとに、進路CSに沿う方向を進路CSの進行方向TDとして取得する。
The traveling
進行方向取得部12は、移動経路RTから、現在の進路CSC、および、現在の進路CSCの後に続く次の進路CS(次進路CSN)の双方について進行方向TDを取得することができる。例えば、進行方向取得部12は、第1進行方向取得部12Aおよび第2進行方向取得部12Bを有する。第1進行方向取得部12Aは、現在の進路CSCの進行方向TDCを取得する。第2進行方向取得部12Bは、次進路CSNの進行方向TDNを取得する。
The traveling
音源定位算出部14は、ユーザUSから進行方向TDCに所定の距離だけ離れた位置を、進行方向TDCへの移動を誘導する通知音SDC(図11参照)の音源定位として算出する。通知音SDCの音源定位は、ユーザUSの頭部の位置および向きを基準として算出することができる。
The sound source
「頭部の位置および向きを基準として」とは、頭部の位置および向きに対する相対的な位置関係によって位置を特定することを意味する。ユーザUSは通知音SDCの鳴る方向に進むことで、進路CSC上を正確に移動することができる。音源定位までの距離は、システム開発者により任意に設定される。例えば、通知音SDCの音源定位は、ユーザUSが通過すべき進路CSC上の位置に設定することができる。 "Based on the position and orientation of the head" means that the position is specified based on a relative positional relationship to the position and orientation of the head. The user US can move accurately on the course CS C by moving in the direction in which the notification sound SD C is sounded. The distance to the sound source localization is set arbitrarily by the system developer. For example, the sound source localization of the notification sound SD C can be set to a position on the course CS C through which the user US must pass.
音源定位算出部14は、現在の進路CSCと次進路CSNのそれぞれについて通知音SDを生成することができる。例えば、音源定位算出部14は、第1音源定位算出部14Aおよび第2音源定位算出部14Bを有する。第1音源定位算出部14Aは、ユーザUSから現在の進路CSCの進行方向TDCに所定の距離だけ離れた位置を、現在の進路CSCの進行方向TDCへの移動を誘導する通知音SDCの音源定位として算出する。第2音源定位算出部14Bは、ユーザUSから次進路CSNの進行方向TDNに所定の距離だけ離れた位置を、次進路CSNへの移動を誘導する通知音SDN(図11参照)の音源定位として算出する。
The sound source
音響生成部15は、ナビゲーションを行うための各種通知音SDおよび警告音WS(図16参照)を生成する。通知音SDおよび警告音WSは、例えば、一定のリズムもしくは再生時間間隔で出力される音(ループ音)として生成することができる。通知音SDは、ユーザUSに進行方向TDを通知するためのパルス音および効果音などを含むことができる。警告音WSは、ユーザUSに警告域WA(図16参照)および危険域DA(図16参照)などの存在について注意喚起するためのパルス音および効果音などを含むことができる。
The
音響生成部15は、現在の進路CSCおよび次進路CSNに関する2つの通知音SDを生成することができる。例えば、音響生成部15は、第1音響生成部15Aおよび第2音響生成部15Bを有する。第1音響生成部15Aは、現在の進路CSCの進行方向TDCに関する通知音SDCを生成する。第2音響生成部15Bは、次進路CSNの進行方向TDNに関する通知音SDNを生成する。
The
音量調整部16は、2つの通知音SDの音量を調整して出力用の通知音SDを生成する。通知音出力部17は、音響生成部15または音量調整部16で生成された通知音SDをスピーカ等を介して出力する。例えば、現在の進路CSCと次進路CSNとの境界部となる位置を中間点BP(図11参照)とする。音量調整部16は、中間点BPからユーザUSまでの距離に基づいて、現在の進路CSCおよび次進路CSNに関する2つの通知音SDの音量を調整する。音量調整部16は、音量が調整された2つの通知音SDをミキシングして得られる通知音SDを、現在の進路CSCから次進路CSNへの切り替えを誘導する通知音として生成する。
The
「2つの通知音SDをミキシングして得られる通知音SDを生成する」とは、「2つの通知音SDをミキシングしたような音としてユーザUSに感知されるような通知音SDを生成する」ことを意味する。ミキシングは、信号処理の段階で行われてもよいし、視聴時のユーザUSの脳内で行われてもよい。後者の例としては、2つの通知音SDが「異種鳴き交わし方式」のように交互に再生される方式がある。例えば、2つの通知音SDをそれぞれ再生5秒、停止5秒で交互に再生されるように設定し、一方の停止中に他方を再生するような方式が考えられる。 "Generating a notification sound SD obtained by mixing two notification sounds SD" means "generating a notification sound SD that is perceived by the user US as a sound that is a mix of two notification sounds SD." The mixing may be performed at the signal processing stage, or may be performed in the brain of the user US when viewing. An example of the latter is a method in which two notification sounds SD are played alternately, like a "different species calling method." For example, one possible method is to set two notification sounds SD to be played alternately, with 5 seconds of play and 5 seconds of stop, and play the other while one is stopped.
[3.情報処理方法]
図9および図10は、通知音SDの生成処理の一例を説明する図である。
[3. Information Processing Method]
9 and 10 are diagrams illustrating an example of a process for generating a notification sound SD.
進行方向取得部12は現在の進路CSCの進行方向TDCおよび次進路CSNの進行方向TDNを取得する(ステップS1)。進行方向TDは、例えばGPS、GNSS、ビーコン、PDR等を用いて検出された自己位置情報および地図情報に基づいて取得することができる。現在の進路CSCの進行方向TDCは、地図情報を使わずに、ユーザUSがセンサ部を搭載したデバイスを進みたい方向に向けて指定してもよい。
The traveling
進行方向取得部12は、目的地までの移動経路RTを曲がり角の位置に基づいて複数の進路CSに区画する。進行方向取得部12は、次進路CSNに向かう曲がり角の位置を中間点BPとして設定する。進行方向取得部12は、移動経路RTの距離情報に基づいて、現在地から中間点BPまでの現在の進路CSC上の距離を取得する(ステップS2)。
The traveling
立体音響で進行方向TDを通知するために、顔の向きを0°(正面の定位)とした頭部座標系CO上での進行方向TDを取得する必要がある。例えば、地磁気センサをイヤフォン等に内蔵してユーザUSの頭部に装着し、顔の向いている方向を地磁気センサで検出する。頭部方向取得部13は、地磁気センサのセンサ情報に基づいて検出された顔(頭部)の向きを頭部方向として取得する(ステップS3)。
In order to notify the traveling direction TD using stereophonic sound, it is necessary to obtain the traveling direction TD on the head coordinate system CO with the face orientation set to 0° (front orientation). For example, a geomagnetic sensor is built into earphones or the like and attached to the head of the user US, and the geomagnetic sensor detects the direction in which the face is facing. The head
音源定位算出部14は、頭部の位置および向きを基準とする頭部座標系COを設定する。頭部座標系COは、例えば、ユーザUSの頭部の位置を原点とし、x軸、y軸およびz軸のいずれかが頭部方向と一致する3次元座標系である。音源定位算出部14は、頭部の位置から進路CSに沿う方向を頭部座標系CO上での進路CSの進行方向TDとして算出する。音源定位算出部14は、頭部座標系CO上で頭部の位置から現在の進路CSCの進行方向TDCに所定の距離だけ離れた位置を通知音SDCの音源定位として算出する。音響生成部15は、算出された音源定位に定位される通知音SDCを生成する(ステップS4)。
The sound source
進行方向取得部12は、次の曲がり角(中間点BP)までの距離が閾値以下か否かを判定する(ステップS5)。距離が閾値よりも大きい場合には(ステップS5:No)、通知音出力部17は、通知音SDCをスピーカに出力する。距離が閾値以下である場合には(ステップS5:Yes)、音源定位算出部14は、頭部座標系CO上で頭部の位置から次進路CSNの進行方向TDNに所定の距離だけ離れた位置を通知音SDNの音源定位として算出する。音響生成部15は、算出された音源定位に定位される通知音SDNを生成する(ステップS6)。音量調整部16は、距離に応じて通知音SDCと通知音SDNの音量を調整した通知音SDをスピーカに出力する(ステップS7)。
The traveling
図10の例では、ユーザUSは手前の歩道を横断歩道に向かって歩いている。手前の歩道が現在の進路CSCであり、横断歩道が次進路CSNである。現在の進路CSCから次進路CSNに変化するまでまだ遠い場合、次進路CSNへの切り替えを通知する音源SCNは無音であり、現在の進路CSCの進行方向TDCを通知する音源SCCのみ通知音SDCを出力する(図10の左側の図を参照)。 In the example of Fig. 10, the user US is walking on the sidewalk in front of him/her toward the crosswalk. The sidewalk in front of him/her is the current route CS C , and the crosswalk is the next route CS N. If it is still a long way from the current route CS C to the next route CS N , the sound source SC N notifying the switch to the next route CS N is silent, and only the sound source SC C notifying the traveling direction TD C of the current route CS C outputs the notification sound SD C (see the diagram on the left side of Fig. 10).
次進路CSNに近づくに従って、次進路CSNへの誘導を行う通知音SDNが徐々に大きくなる(図10の中央の図を参照)。ユーザUSが中間点BPを過ぎて次進路CSNに進入すると、一つ前の進路CSCの進行方向TDCを通知していた通知音SDCは徐々に小さくなり、やがて無音となる。そして、次進路CSNの進行方向TDNを通知する通知音SDNのみが出力されるようになる(図11の右側の図を参照)。 As the user US approaches the next route CS N , the notification sound SDN that guides the user to the next route CS N gradually becomes louder (see the diagram in the center of FIG. 10). When the user US passes the waypoint BP and enters the next route CS N , the notification sound SDN that notifies the user of the traveling direction TDN of the previous route CS C gradually becomes quieter and eventually becomes silent. Then, only the notification sound SDN that notifies the user of the traveling direction TDN of the next route CS N is output (see the diagram on the right side of FIG. 11).
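Putting steps S1 to S7 together, a per-update sketch might look like the following: the notification source is always placed a fixed distance ahead of the user in the head frame, and around the corner (midpoint BP) the two sounds are cross-faded according to the remaining distance, with equal volumes exactly at the corner. The 5 m source distance, the fade range, and the linear fade are assumptions for illustration and not values from the disclosure.

```python
import math

SOURCE_DISTANCE_M = 5.0   # fixed distance from the user to the sound image
FADE_RANGE_M = 15.0       # distance around the midpoint BP over which to fade

def head_relative_source(head_yaw_deg, travel_dir_deg, dist=SOURCE_DISTANCE_M):
    """Offset of the sound image in the head frame (x = right, y = forward)."""
    rel = math.radians(travel_dir_deg - head_yaw_deg)
    return dist * math.sin(rel), dist * math.cos(rel)

def mix_gains(signed_dist_to_corner_m):
    """Cross-fade the current-route sound SDC and the next-route sound SDN.
    Positive distances are before the corner, negative after; the two gains
    are equal exactly at the midpoint BP."""
    d = signed_dist_to_corner_m
    if d >= FADE_RANGE_M:
        return 1.0, 0.0
    if d <= -FADE_RANGE_M:
        return 0.0, 1.0
    w = (FADE_RANGE_M - d) / (2.0 * FADE_RANGE_M)   # 0 .. 1 across the corner
    return 1.0 - w, w

def update(head_yaw_deg, dir_current_deg, dir_next_deg, signed_dist_to_corner_m):
    """One guidance update: where to render SDC and SDN, and how loud."""
    src_c = head_relative_source(head_yaw_deg, dir_current_deg)
    src_n = head_relative_source(head_yaw_deg, dir_next_deg)
    gain_c, gain_n = mix_gains(signed_dist_to_corner_m)
    return (src_c, gain_c), (src_n, gain_n)
```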
[4.通知音のミキシング例]
図11および図12は、通知音SDのミキシング例を示す図である。
[4. Notification sound mixing example]
11 and 12 are diagrams showing examples of mixing of notification sounds SD.
ユーザUSは、計画された移動経路に沿って、L字状に交差した2つの進路CS(進路CSC、次進路CSN)を移動する。図12は、進路CS上に設定された5つの経由地PTの通知音SDを示す。経由地PT3は、進路CSCと次進路CSNとの境界部(中間点BP)と一致する。 The user US moves along two L-shaped intersecting routes CS (route CS C , next route CS N ) along a planned travel route. Fig. 12 shows notification sounds SD of five waypoints PT set on the route CS. Waypoint PT 3 coincides with the boundary (midpoint BP) between route CS C and next route CS N.
ユーザUSが進路CSCを進行する際には、ユーザUSは前方の音源SCCから出力される通知音SDCにしたがって移動する。ユーザUSが中間点BPの近くの経由地PT2に到達すると、次進路CSNの進行方向TDN側に設定された音源SCNから、次進路CSNへの移動を誘導する通知音SDNが出力され始める。どの程度中間点BPに近づいたときに通知音SDNの出力を開始するかは、システム開発者により任意に設定される。ユーザUSから通知音SDCの音源定位までの距離、および、ユーザUSから通知音SDNの音源定位までの距離は、例えば、ユーザUSの位置によらずに一定である。 When the user US travels along the route CS C , the user US moves according to the notification sound SDC output from the sound source SCC ahead. When the user US reaches the waypoint PT2 near the midpoint BP, the notification sound SDC , which guides the user US to move to the next route CS N , starts to be output from the sound source SCC set on the travel direction TD N side of the next route CS N. The system developer arbitrarily sets how close the user US is to the midpoint BP when the notification sound SDC starts to be output. The distance from the user US to the sound source localization of the notification sound SDC and the distance from the user US to the sound source localization of the notification sound SDC are constant, for example, regardless of the position of the user US.
音量調整部16は、ユーザUSが現在の進路CSC上を中間点BPに近づくにつれて、次進路CSNへの移動を誘導する通知音SDNの音量を大きくする。逆に、音量調整部16は、ユーザUSが現在の進路CSC上を中間点BPに近づくにつれて、現在の進路CSCの進行方向への移動を誘導する通知音SDCの音量を小さくする。例えば、音量調整部16は、ユーザUSが中間点BPに到達したときに、通知音SDCの音量と通知音SDNの音量とが等しくなるように2つの通知音SDのミキシング量を調整する。
The
ユーザUSが中間点BPを越えて次進路CSNに進入すると、誘導すべき進路CSが進路CSCから次進路CSNに切り替わる。音量調整部16は、ユーザUSが次進路CSN上を中間点BPから遠ざかるにつれて、次進路CSNの進行方向TDNへの移動を誘導する通知音SDNの音量を大きくする。逆に、音量調整部16は、ユーザUSが次進路CSN上を中間点BPから遠ざかるにつれて、一つ前の進路CSCの進行方向TDCへの移動を誘導する通知音SDCの音量を小さくする。例えば、音量調整部16は、ユーザUSが経由地PT4に到達したときに、通知音SDCの音量がゼロとなるように音量調整をする。
When the user US passes the waypoint BP and enters the next route CS N , the route CS to be guided is switched from the route CS C to the next route CS N. The
図13および図14は、通知音SDの他のミキシング例を示す図である。 FIGS. 13 and 14 show other mixing examples of notification sounds SD.
ユーザUSは、計画された移動経路に沿って現在の進路CSCから次進路CSNに向かうが、中間点BPで左折せずに、誤ってそのまま直進してしまう。ユーザUSが誤った進路CSEに進むと、経路計画部11は移動経路RTの再設定を行う。経路計画部11は誤って侵入した進路CSEを中間点BPに向かって逆方向に進む移動経路RTを計画する。
The user US heads from the current route CS C to the next route CS N along the planned movement route, but mistakenly continues straight ahead without turning left at the midpoint BP. When the user US moves to the wrong route CS E , the
進行方向取得部12は、中間点BPに戻る方向を進路CSEの進行方向TDとして取得する。音源定位算出部14は、ユーザUSの後方に、中間点BPへの移動を誘導する通知音SDEの音源SCを設定する。例えば、音源定位算出部14は、ユーザUSが中間点BPを通り過ぎて次進路CSNとは異なる方向に移動した場合に、中間点BPの方向に所定の距離だけ離れた位置を通知音SDEの音源定位として算出する。通知音SDEの音量は、中間点BPから遠い位置ほど大きくなり、中間点BPに近い位置ほど小さくなる。通知音SDNの音量は、中間点BPから遠い位置ほど小さくなり、中間点BPに近い位置ほど大きくなる。
The traveling
[5.効果音]
図15は、効果音の説明図である。
5. Sound Effects
FIG. 15 is an explanatory diagram of sound effects.
立体音響による通知においては、特に正面方向はステレオサウンドにおける左右の違いが少なく、定位が分かりにくい場合がある。このため音源SCの方向(進行方向TD)に向かって真っすぐ歩けていることをより正確に伝えたい場合、音源SCの方向からのずれに応じた音響効果を付与することで、どの程度適切に移動できているのかをユーザUSに認識させることができる。 When notifying the user through stereo sound, there is little difference between left and right in stereo sound, especially in the forward direction, and it can be difficult to determine the position. For this reason, if you want to convey more accurately that the user is walking straight in the direction of the sound source SC (travel direction TD), you can make the user US aware of how appropriately they are moving by adding sound effects that correspond to the deviation from the direction of the sound source SC.
例えば、音響生成部15は、移動が許容される角度範囲を誘導範囲RGとして取得する。誘導範囲RGは、進行方向TDを中心とする所定の角度範囲として設定される。誘導範囲RGは、システム開発者により任意に設定される。音響生成部15は、ユーザUSの移動方向が誘導範囲RGに含まれる場合に、進行方向TDへの移動を誘導する通知音SDに音響効果を付与することができる。
For example, the
図15の例では、移動中のユーザUSの頭部の向き(頭部方向HD)がユーザUSの移動方向として検出される。頭部方向HDが、進路CSの進行方向TDを中心として±n°以内の角度範囲に含まれる場合に、通知音SDの音量を増加したり、反響音を強めたり、音の広がりを増加させるといった音響効果が付与される。これにより、進行方向±n°以内に移動方向が収まっていることを、より正確に伝えることができる。 In the example of FIG. 15, the direction of the head of the moving user US (head direction HD) is detected as the direction of movement of the user US. When the head direction HD is within an angular range of ±n° centered on the traveling direction TD of the course CS, sound effects such as increasing the volume of the notification sound SD, strengthening the reverberation, and increasing the sound spread are applied. This makes it possible to more accurately convey that the direction of movement is within ±n° of the traveling direction.
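A compact sketch of this emphasis effect: when the head direction HD is within plus or minus n degrees of the traveling direction TD, boost the notification gain and add a reverberation send, otherwise play the sound unmodified. The specific gains are illustrative only.

```python
def emphasis(head_yaw_deg, travel_dir_deg, n_deg=10.0,
             base_gain=1.0, boost=1.3, reverb_send=0.4):
    """Return (gain, reverb) for the notification sound SD: when the head
    direction HD is within +/- n_deg of the traveling direction TD, the sound
    is boosted and given extra reverberation; otherwise it is left unchanged."""
    diff = abs((head_yaw_deg - travel_dir_deg + 180.0) % 360.0 - 180.0)
    if diff <= n_deg:
        return base_gain * boost, reverb_send
    return base_gain, 0.0
```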
[6.警告音]
図16は、警告音の説明図である。
6. Warning Sound
FIG. 16 is an explanatory diagram of the warning sound.
図15の例では、音響効果によって常時、移動の適正度が通知された。しかし、正確な進行方向TDを伝え続けるのではなく、進行方向TDから外れた場合に警告音WSが出力されるようにしてもよい。 In the example of FIG. 15, the appropriateness of movement is constantly notified through sound effects. However, instead of continuously notifying the accurate direction of travel TD, a warning sound WS may be output if the direction of travel deviates from TD.
例えば、音源定位算出部14は、進路CSの中心に動線を設定する。音源定位算出部14は、動線からやや外れた領域を警告域WAとして設定し、警告域よりも外側にあって進入することが推奨されない領域を危険域DAとして設定する。音源定位算出部14は、警告域WAでも危険域DAでもない動線付近の安全な区域を安全域SAとして設定する。
For example, the sound source
音源定位算出部14は、センサ情報に基づいて危険性のある区域を特定し、警告域WAおよび危険域DAを設定することができる。センサ情報は、カメラで撮影された映像情報、測距センサで計測された距離情報、および、GPS/GNSSもしくはVPS(Visual Positioning System)などから取得された位置情報を含むことができる。安全域SAの幅、警告域WAの幅および危険域DAの幅は、システム開発者により任意に設定される。例えば、警告域WAは、動線から一定距離以上離れた領域であり、危険域DAは、警告域WAから外側に向かって一定距離以上離れた領域である。
The sound source
音源定位算出部14は、警告域WAと危険域DAとの境界BD上の位置を警告音WSの音源定位として算出する。音響生成部15は、算出された音源定位に音像を持つ警告音WSを生成し、ユーザUDが境界BDに近づいたときに通知音出力部17に出力する。ユーザUSがどの程度境界BDに近づいたときに警告音WSを出力するかはシステム開発者により任意に設定される。
The sound source
警告域WAは、動線と並行であって且つ動線と一定距離離れた区域とし、危険域DAは、安全域SAと警告域WAの境界線と並行であって且つ安全域SAと警告域WAの境界線から一定距離以上外側に離れた区域としても良い。更には、カメラや測距センサ等から取得される情報に基づき、例えばホームの端を検知し、ホームの端から一定距離以上の区域を、危険域DAや警告域WAと設定しても良い。また、その他の危険域DAや警告域WAの設定の仕方をしても良い。 The warning area WA may be an area that is parallel to the traffic flow and a certain distance away from the traffic flow, and the danger area DA may be an area that is parallel to the boundary between the safety area SA and the warning area WA and is a certain distance or more away from the boundary between the safety area SA and the warning area WA. Furthermore, for example, the edge of the platform may be detected based on information obtained from a camera or distance sensor, and an area that is a certain distance or more away from the edge of the platform may be set as the danger area DA or warning area WA. The danger area DA and warning area WA may also be set in other ways.
図16の例では、ユーザUSが動線を外れ、動線と平行で一定幅の危険域DA(例えば駅ホームの端、歩道の端など)に近づくと、危険域DAの境界BD上の音源SCWから警告音WSが出力される。音源定位算出部14は、境界BD上の1点、または、境界BDに沿った面状領域を警告音WSの音源定位として算出することができる。境界BDに沿った面状領域を警告音WSの音源SCWとする場合、例えば動線の法線と頭部を結ぶ直線が境界BDと交わる位置を音源SCWの中心とすることができる。なお、動線および危険域DAの境界BDは曲線であってもよい。
In the example of FIG. 16, when the user US deviates from the line of movement and approaches a danger area DA (e.g., the edge of a station platform, the edge of a sidewalk, etc.) that is parallel to the line of movement and has a certain width, a warning sound WS is output from a sound source SC W on a boundary BD of the danger area DA. The sound source
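To illustrate the zone logic, the sketch below classifies the user's lateral offset from the flow line into the safe, warning, and danger areas and, when warranted, places the warning sound on the warning/danger boundary on the user's side; the zone widths are placeholders, not values from the disclosure.

```python
def classify_offset(lateral_offset_m, safe_half_width_m=1.0, warning_width_m=1.0):
    """Classify the user's lateral offset from the flow line into the safe
    area SA, warning area WA, or danger area DA."""
    d = abs(lateral_offset_m)
    if d <= safe_half_width_m:
        return "safe"
    if d <= safe_half_width_m + warning_width_m:
        return "warning"
    return "danger"

def warning_source_lateral_offset(lateral_offset_m, safe_half_width_m=1.0,
                                  warning_width_m=1.0):
    """Lateral position of the warning sound WS: on the boundary BD between
    the warning area WA and the danger area DA, on the user's side."""
    boundary = safe_half_width_m + warning_width_m
    return boundary if lateral_offset_m >= 0 else -boundary
```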
[7.システム構成例]
図17は、ナビゲーションシステムNVのシステム構成例を示す図である。
7. System configuration example
FIG. 17 is a diagram showing an example of the system configuration of the navigation system NV.
ナビゲーションシステムNVは、サーバSV、クライアント端末TMおよびヘッドフォンHPを有する。クライアント端末TMは、スマートフォンなどの携帯型の情報端末である。ヘッドフォンHPは、ユーザUSの頭部の位置および向きを検出するためのセンサ部SEを有する。なお、ヘッドフォンHPの代わりにイヤフォンや眼鏡型デバイス、ヘッドマウントディスプレイ等が使用されてもよい。 The navigation system NV has a server SV, a client terminal TM, and headphones HP. The client terminal TM is a portable information terminal such as a smartphone. The headphones HP have a sensor unit SE for detecting the position and orientation of the head of the user US. Note that earphones, glasses-type devices, head-mounted displays, etc. may be used instead of the headphones HP.
上述した情報処理装置1の機能はサーバSVとクライアント端末TMによって分担される。情報処理装置1は、ユーザUSの頭部の位置および向きを基準としてユーザUSからユーザUSの進路CSの進行方向TDに所定の距離だけ離れた位置を、進行方向TDへの移動を誘導する通知音SDの音源定位として算出する。どの機能をクライアント端末TMに分担させ、どの機能をサーバSVに分担させるかは、システム開発者により任意に設定される。ヘッドフォンHPは、算出された音源定位に通知音SDの音像を再現するスピーカを有する。
The functions of the
例えば、クライアント端末TMは、センサ部SEから取得したセンサ情報に基づいてユーザUSの頭部の位置および向きを検出する。クライアント端末TMは、通知音SDおよび警告音WSを生成し、ヘッドフォンHPに出力する。クライアント端末TMは、自己位置情報をサーバSVに送り、サーバSVから地図情報や進行方向TDに関する情報を取得する。ヘッドフォンHPに高速な演算装置があれば、通知音SDおよび警告音WSをヘッドフォンHPで生成してもよい。 For example, the client terminal TM detects the position and orientation of the head of the user US based on sensor information acquired from the sensor unit SE. The client terminal TM generates a notification sound SD and a warning sound WS and outputs them to the headphones HP. The client terminal TM sends its own position information to the server SV, and acquires map information and information related to the traveling direction TD from the server SV. If the headphones HP have a high-speed calculation device, the notification sound SD and warning sound WS may be generated by the headphones HP.
図18は、ナビゲーションシステムNVの他のシステム構成例を示す図である。 FIG. 18 shows another example of the system configuration of the navigation system NV.
図18の例では、頭部の向きが外部カメラMDまたはデプスセンサのセンサ情報に基づいて取得される。この例では、外部カメラMDやデプスセンサがセンサ部SEとして機能する。顔トラッキング技術により頭部の姿勢推定を行った結果がサーバSVからクライアント端末TMに送信される。クライアント端末TMは、姿勢推定結果を受信した後、図17の例と同様の手法により通知音SDおよび警告音WSを生成する。 In the example of FIG. 18, the head orientation is acquired based on sensor information from an external camera MD or a depth sensor. In this example, the external camera MD or depth sensor functions as a sensor unit SE. The result of head pose estimation performed using face tracking technology is transmitted from the server SV to the client terminal TM. After receiving the pose estimation result, the client terminal TM generates a notification sound SD and a warning sound WS using a method similar to that of the example of FIG. 17.
[8.通知音選択UI]
以下、通知に関するUIの一例を説明する。図19は、進行方向TDの通知に関するUI例を示す図である。図20は、危険域接近通知に関するUI例を示す図である。
[8. Notification sound selection UI]
An example of a UI related to notification will be described below. Fig. 19 is a diagram showing an example of a UI related to notification of the traveling direction TD. Fig. 20 is a diagram showing an example of a UI related to a danger area approach notification.
「通知音の選択」ボタンを押すと、メロディの種類および通知音の種類(音量の増減による通知かテンポの変化による通知か)を選択することができる。「通知ON/OFF」ボタンを押すと、通知の有効状態と無効状態が切り替わる。 Pressing the "Select notification sound" button allows you to select the type of melody and the type of notification sound (whether to be notified by increasing or decreasing the volume or by changing the tempo). Pressing the "Notification ON/OFF" button switches between enabled and disabled notification.
「音源までの距離」の欄では、進行方向TDを通知する音源SCまでの距離を指定することができる。「強調確度範囲」の欄では、図16に示した誘導範囲RGを指定することができる。「曲がり角通知開始距離」の欄では、中間点BPから通知音SDNの出力開始地点(図11の例では経由地PT2)までの距離を指定することができる。「危険域の警告開始距離」の欄では、警告音WSの出力を開始する危険域DAの境界BDまでの距離を指定することができる。その他、通知音SDの音像を定位させる位置や音圧の変化具合をUI上で任意に設定できるようにしてもよい。 In the "distance to sound source" field, the distance to the sound source SC that notifies the traveling direction TD can be specified. In the "emphasis accuracy range" field, the guidance range RG shown in FIG. 16 can be specified. In the "turn notification start distance" field, the distance from the midpoint BP to the output start point of the notification sound SDN (waypoint PT2 in the example of FIG. 11) can be specified. In the "danger zone warning start distance" field, the distance to the boundary BD of the danger zone DA where the output of the warning sound WS starts can be specified. In addition, the position for localizing the sound image of the notification sound SD and the change in sound pressure may be arbitrarily set on the UI.
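The UI items above map naturally onto a small settings object; the field names and default values in the sketch below are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class NotificationSettings:
    melody: str = "melody_1"             # selected notification melody
    notify_by_volume: bool = True        # False = notify by tempo change
    enabled: bool = True                 # "notification ON/OFF"
    source_distance_m: float = 5.0       # distance to the guidance sound source
    emphasis_range_deg: float = 10.0     # guidance range RG for the emphasis effect
    corner_notify_start_m: float = 15.0  # where SDN starts before the midpoint BP
    danger_warning_start_m: float = 3.0  # distance to boundary BD at which WS starts
```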
[9.ハードウェア構成例]
図21は、情報処理装置1のハードウェア構成の一例を示す図である。
[9. Hardware Configuration Example]
FIG. 21 is a diagram illustrating an example of a hardware configuration of the information processing device 1.
情報処理装置1の情報処理は、例えば、コンピュータ1000によって実現される。コンピュータ1000は、CPU(Central Processing Unit)1100、RAM(Random Access Memory)1200、ROM(Read Only Memory)1300、HDD(Hard Disk Drive)1400、通信インターフェイス1500、および入出力インターフェイス1600を有する。コンピュータ1000の各部は、バス1050によって接続される。
The information processing of the information processing device 1 is realized by, for example, a computer 1000. The computer 1000 has a CPU (Central Processing Unit) 1100, a RAM (Random Access Memory) 1200, a ROM (Read Only Memory) 1300, an HDD (Hard Disk Drive) 1400, a communication interface 1500, and an input/output interface 1600. The components of the computer 1000 are connected by a bus 1050.
CPU1100は、ROM1300またはHDD1400に格納されたプログラム(プログラムデータ1450)に基づいて動作し、各部の制御を行う。たとえば、CPU1100は、ROM1300またはHDD1400に格納されたプログラムをRAM1200に展開し、各種プログラムに対応した処理を実行する。
The CPU 1100 operates based on a program (program data 1450) stored in the ROM 1300 or the HDD 1400 and controls each component. For example, the CPU 1100 loads a program stored in the ROM 1300 or the HDD 1400 into the RAM 1200 and executes processing corresponding to the various programs.
ROM1300は、コンピュータ1000の起動時にCPU1100によって実行されるBIOS(Basic Input Output System)などのブートプログラムや、コンピュータ1000のハードウェアに依存するプログラムなどを格納する。
The ROM 1300 stores a boot program such as a BIOS (Basic Input Output System) executed by the CPU 1100 when the computer 1000 starts up, programs that depend on the hardware of the computer 1000, and the like.
HDD1400は、CPU1100によって実行されるプログラム、および、かかるプログラムによって使用されるデータなどを非一時的に記録する、コンピュータが読み取り可能な非一時的記録媒体である。具体的には、HDD1400は、プログラムデータ1450の一例としての、実施形態にかかる情報処理プログラムを記録する記録媒体である。 The HDD 1400 is a computer-readable non-transitory recording medium that non-transitorily records the programs executed by the CPU 1100 and the data used by such programs. Specifically, the HDD 1400 is a recording medium that records the information processing program according to the embodiment, which is an example of the program data 1450.
通信インターフェイス1500は、コンピュータ1000が外部ネットワーク1550(たとえばインターネット)と接続するためのインターフェイスである。たとえば、CPU1100は、通信インターフェイス1500を介して、他の機器からデータを受信したり、CPU1100が生成したデータを他の機器へ送信したりする。
The communication interface 1500 is an interface for connecting the computer 1000 to an external network 1550 (for example, the Internet). For example, the CPU 1100 receives data from other devices and transmits data generated by the CPU 1100 to other devices via the communication interface 1500.
入出力インターフェイス1600は、入出力デバイス1650とコンピュータ1000とを接続するためのインターフェイスである。たとえば、CPU1100は、入出力インターフェイス1600を介して、キーボードやマウスなどの入力デバイスからデータを受信する。また、CPU1100は、入出力インターフェイス1600を介して、表示装置やスピーカやプリンタなどの出力デバイスにデータを送信する。また、入出力インターフェイス1600は、所定の記録媒体(メディア)に記録されたプログラムなどを読み取るメディアインターフェイスとして機能してもよい。メディアとは、たとえばDVD(Digital Versatile Disc)、PD(Phase change rewritable Disk)などの光学記録媒体、MO(Magneto-Optical disk)などの光磁気記録媒体、テープ媒体、磁気記録媒体、または半導体メモリなどである。
The input/output interface 1600 is an interface for connecting an input/output device 1650 and the computer 1000. For example, the CPU 1100 receives data from an input device such as a keyboard or a mouse via the input/output interface 1600. The CPU 1100 also transmits data to an output device such as a display device, a speaker, or a printer via the input/output interface 1600. The input/output interface 1600 may also function as a media interface that reads a program or the like recorded on a predetermined recording medium. Examples of the medium include optical recording media such as a DVD (Digital Versatile Disc) and a PD (Phase change rewritable Disk), magneto-optical recording media such as an MO (Magneto-Optical disk), tape media, magnetic recording media, and semiconductor memories.
たとえば、コンピュータ1000が実施形態にかかる情報処理装置1として機能する場合、コンピュータ1000のCPU1100は、RAM1200上にロードされた情報処理プログラムを実行することにより、前述した各部の機能を実現する。また、HDD1400には、本開示にかかる情報処理プログラム、各種モデルおよび各種データが格納される。なお、CPU1100は、プログラムデータ1450をHDD1400から読み取って実行するが、他の例として、外部ネットワーク1550を介して、他の装置からこれらのプログラムを取得してもよい。
For example, when the computer 1000 functions as the information processing device 1 according to the embodiment, the CPU 1100 of the computer 1000 realizes the functions of the above-described components by executing the information processing program loaded onto the RAM 1200. The HDD 1400 stores the information processing program according to the present disclosure, various models, and various data. Although the CPU 1100 reads the program data 1450 from the HDD 1400 and executes it, as another example, these programs may be acquired from other devices via the external network 1550.
[10.効果]
情報処理装置1は、進行方向取得部12および音源定位算出部14を有する。進行方向取得部12は、ユーザUSの進路CSの進行方向TDを取得する。音源定位算出部14は、ユーザUSから進行方向TDに所定の距離だけ離れた位置を、進行方向TDへの移動を誘導する通知音SDの音源定位として算出する。本開示の情報処理方法は、情報処理装置1の処理がコンピュータ1000により実行される。本開示のプログラムは、情報処理装置1の処理をコンピュータ1000に実現させる。
[10. Effects]
The information processing device 1 has a traveling direction acquisition unit 12 and a sound source localization calculation unit 14. The traveling direction acquisition unit 12 acquires the traveling direction TD of the route CS of the user US. The sound source localization calculation unit 14 calculates a position a predetermined distance away from the user US in the traveling direction TD as the sound source localization of the notification sound SD that guides movement in the traveling direction TD. In the information processing method of the present disclosure, the processing of the information processing device 1 is executed by the computer 1000. The program of the present disclosure causes the computer 1000 to realize the processing of the information processing device 1.
この構成によれば、ユーザUSは、進行方向TDの前方から聞こえてくる通知音SDによって、進行方向TDへの移動を誘導される。そのため、自己位置や地図情報の精度が悪い場合でも、精度よくユーザUSを進行方向TDへ誘導することができる。 With this configuration, the user US is guided to move in the traveling direction TD by the notification sound SD heard from ahead in the traveling direction TD. Therefore, even if the accuracy of the user's own position or map information is poor, the user US can be guided in the traveling direction TD with high accuracy.
音源定位算出部14は、ユーザUSの頭部の位置および向きを基準としてユーザUSから進行方向TDに所定の距離だけ離れた位置を通知音SDの音源定位として算出する。
The sound source localization calculation unit 14 calculates, using the position and orientation of the head of the user US as a reference, a position a predetermined distance away from the user US in the traveling direction TD as the sound source localization of the notification sound SD.
この構成によれば、頭部の向きを基準として進行方向TDが算出される。そのため、進行方向TDが正確に認識される。 With this configuration, the traveling direction TD is calculated based on the direction of the head. Therefore, the traveling direction TD is accurately recognized.
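For illustration only, a minimal Python sketch of this calculation is shown below, assuming a 2D horizontal plane with the traveling direction TD given as an angle; the coordinate conventions and the 5 m default are assumptions, not taken from the disclosure.

    import math

    def notification_source_position(head_pos, travel_dir_rad, distance_m=5.0):
        """Place the notification sound SD a fixed distance ahead of the user's head
        along the traveling direction TD (world frame, x east / y north assumed)."""
        x, y = head_pos
        return (x + distance_m * math.cos(travel_dir_rad),
                y + distance_m * math.sin(travel_dir_rad))

    def to_head_frame(source_pos, head_pos, head_yaw_rad):
        """Convert the source position into the head-relative frame used for binaural
        rendering, so the sound image stays anchored in space as the head turns."""
        dx, dy = source_pos[0] - head_pos[0], source_pos[1] - head_pos[1]
        c, s = math.cos(-head_yaw_rad), math.sin(-head_yaw_rad)
        return (c * dx - s * dy, s * dx + c * dy)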
情報処理装置1は、経路計画部11を有する。経路計画部11は、目的地までの移動経路RTを計画する。進行方向取得部12は、移動経路RTに沿って区画された複数の進路CSを取得する。進行方向取得部12は、区画された進路CSごとに、進路CSに沿う方向を進路CSの進行方向TDとして取得する。
The information processing device 1 has a route planning unit 11. The route planning unit 11 plans a travel route RT to a destination. The traveling direction acquisition unit 12 acquires a plurality of routes CS partitioned along the travel route RT. For each partitioned route CS, the traveling direction acquisition unit 12 acquires the direction along the route CS as the traveling direction TD of that route CS.
この構成によれば、区画された進路CSごとに誘導が行われる。そのため、移動経路RTが複雑に入り組んでいても、目的地まで適切に誘導が行われる。 With this configuration, guidance is provided for each divided route CS. Therefore, even if the travel route RT is complicated and intricate, proper guidance is provided to the destination.
ユーザUSから通知音SDの音源定位までの距離は、ユーザUSの位置によらずに一定である。 The distance from the user US to the sound source location of the notification sound SD is constant regardless of the position of the user US.
この構成によれば、通知音SDの音源SCが見失われにくくなる。そのため、音源SCの方向(進行方向TD)を確実に認識させることができる。 This configuration makes it difficult for the sound source SC of the notification sound SD to be lost. This allows the direction of the sound source SC (travel direction TD) to be reliably recognized.
進行方向取得部12は、進路CSCの後に続く次進路CSNの進行方向TDNを取得する。音源定位算出部14は、ユーザUSから次進路CSNの進行方向TDNに所定の距離だけ離れた位置を、次進路CSNへの移動を誘導する通知音SDNの音源定位として算出する。
The traveling direction acquisition unit 12 acquires the traveling direction TDN of the next route CSN following the route CSC. The sound source localization calculation unit 14 calculates a position a predetermined distance away from the user US in the traveling direction TDN of the next route CSN as the sound source localization of the notification sound SDN that guides movement to the next route CSN.
この構成によれば、進路CSCを移動中に次進路CSNへの切り替わりおよび次進路CSNの進行方向を把握することができる。 According to this configuration, while traveling on the route CS C , it is possible to grasp the switch to the next route CS N and the traveling direction of the next route CS N.
ユーザUSから次進路CSNへの移動を誘導する通知音SDNの音源定位までの距離は、ユーザUSの位置によらずに一定である。 The distance from the user US to the sound source location of the notification sound SDN that guides the user US to move to the next route CSN is constant regardless of the position of the user US.
この構成によれば、通知音SDNの音源SCNが見失われにくくなる。そのため、音源SCNの方向(進路CSが切り替わる方向)を確実に認識させることができる。 According to this configuration, it is difficult to lose sight of the sound source SCN of the notification sound SDN , so that the direction of the sound source SCN (the direction in which the course CS changes) can be reliably recognized.
情報処理装置1は、音量調整部16を有する。音量調整部16は、中間点BPからユーザUSまでの距離に基づいて、現在の進路CSCおよび次進路CSNに関する2つの通知音SDの音量を調整する。中間点BPは、現在の進路CSCと次進路CSNとの境界部となる地点である。音量調整部16は、音量が調整された2つの通知音SDをミキシングして得られる通知音SDを、現在の進路CSCから次進路CSNへの切り替えを誘導する通知音SDとして生成する。
The information processing device 1 has a volume adjustment unit 16. The volume adjustment unit 16 adjusts the volumes of the two notification sounds SD related to the current route CSC and the next route CSN based on the distance from the midpoint BP to the user US. The midpoint BP is the point forming the boundary between the current route CSC and the next route CSN. The volume adjustment unit 16 generates the notification sound SD obtained by mixing the two volume-adjusted notification sounds SD as the notification sound SD that guides the switch from the current route CSC to the next route CSN.
この構成によれば、進路CSの切り替わりが音量の変化によって直感的に把握される。 With this configuration, the change in the route CS can be intuitively understood by the change in volume.
音量調整部16は、ユーザUSが現在の進路CSC上を中間点BPに近づくにつれて、次進路CSNへの移動を誘導する通知音SDNの音量を大きくする。
The volume adjustment unit 16 increases the volume of the notification sound SDN that guides movement to the next route CSN as the user US approaches the midpoint BP on the current route CSC.
この構成によれば、中間点BPへの接近が音量の変化によって直感的に把握される。 With this configuration, the approach of the midpoint BP can be intuitively grasped by the change in volume.
音量調整部16は、ユーザUSが次進路CSN上を中間点BPから遠ざかるにつれて、次進路CSNの進行方向TDNへの移動を誘導する通知音SDNの音量を大きくする。
The volume adjustment unit 16 increases the volume of the notification sound SDN that guides movement in the traveling direction TDN of the next route CSN as the user US moves away from the midpoint BP on the next route CSN.
この構成によれば、次進路へCSNの進入が音量の変化によって直感的に把握される。 According to this configuration, the entrance of the CS N into the next route can be intuitively grasped by the change in volume.
音量調整部16は、ユーザUSが現在の進路CSC上を中間点BPに近づくにつれて、現在の進路CSCの進行方向への移動を誘導する通知音SDCの音量を小さくする。
The volume adjustment unit 16 reduces the volume of the notification sound SDC that guides movement in the traveling direction of the current route CSC as the user US approaches the midpoint BP on the current route CSC.
この構成によれば、中間点BPへの接近が音量の変化によって直感的に把握される。 With this configuration, the approach of the midpoint BP can be intuitively grasped by the change in volume.
音量調整部は、ユーザUSが次進路CSN上を中間点BPから遠ざかるにつれて、一つ前の進路CSCの進行方向への移動を誘導する通知音SDCの音量を小さくする。 The volume adjustment unit reduces the volume of the notification sound SD C that guides the user US to move in the traveling direction of the previous route CS C as the user US moves away from the midpoint BP on the next route CS N.
この構成によれば、次進路へCSNの進入が音量の変化によって直感的に把握される。 According to this configuration, the entrance of the CS N into the next route can be intuitively grasped by the change in volume.
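One way to realize the volume adjustment described above is a distance-based crossfade around the midpoint BP. The sketch below is an assumption about the fade law (a linear fade over a 10 m window); the disclosure only requires that the two volumes change with the distance from BP in the directions stated above.

    def crossfade_volumes(dist_to_bp_m, approaching, window_m=10.0):
        """Return (volume of SDC for the current route CSC, volume of SDN for the next route CSN),
        both in 0..1, based on the distance to the midpoint BP.
        approaching=True while the user US is still on the current route CSC."""
        t = max(0.0, min(1.0, dist_to_bp_m / window_m))  # 1 far from BP, 0 at BP
        if approaching:
            # Nearing BP on CSC: SDN fades in, SDC fades out.
            return 0.5 + 0.5 * t, 0.5 - 0.5 * t
        # Past BP on CSN: SDN keeps rising and SDC keeps falling as the user moves away.
        return 0.5 - 0.5 * t, 0.5 + 0.5 * t

    def mix(sample_c, sample_n, vol_c, vol_n):
        """Mix the two notification sounds into the single guidance sound SD."""
        return vol_c * sample_c + vol_n * sample_n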
音源定位算出部14は、ユーザUSが中間点BPを通り過ぎて次進路CSNとは異なる方向に移動した場合に、中間点BPの方向に所定の距離だけ離れた位置を通知音SDEの音源定位として算出する。
When the user US passes the midpoint BP and moves in a direction different from the next route CSN, the sound source localization calculation unit 14 calculates a position the predetermined distance away in the direction of the midpoint BP as the sound source localization of the notification sound SDE.
この構成によれば、中間点BPを通り過ぎたことが音像の位置の変化により直感的に把握される。 With this configuration, the change in the position of the sound image allows the listener to intuitively understand that they have passed the midpoint BP.
情報処理装置1は、音響生成部15を有する。音響生成部15は、ユーザUSの移動方向が進路CSの進行方向TDを中心とする所定の角度範囲に含まれる場合に、進行方向TDへの移動を誘導する通知音SDに音響効果を付与する。
The information processing device 1 has a sound generation unit 15. The sound generation unit 15 adds a sound effect to the notification sound SD that guides movement in the traveling direction TD when the movement direction of the user US falls within a predetermined angle range centered on the traveling direction TD of the route CS.
この構成によれば、移動が適切に行われているか否かが音響効果によって直感的に把握される。 With this configuration, the sound effects allow the player to intuitively understand whether the movement is being performed appropriately.
音源定位算出部14は、警告域WAと危険域DAとの境界BD上の位置を警告音WSの音源定位として算出する。
The sound source localization calculation unit 14 calculates a position on the boundary BD between the warning area WA and the danger area DA as the sound source localization of the warning sound WS.
この構成によれば、危険域DAへの接近が直感的に把握される。 This configuration allows the approach of the danger zone DA to be intuitively recognized.
音源定位算出部14は、警告域WAと危険域DAとの境界BDに沿った面状領域を警告音WSの音源定位として算出する。
The sound source localization calculation unit 14 calculates a planar area along the boundary BD between the warning area WA and the danger area DA as the sound source localization of the warning sound WS.
この構成によれば、危険域DAの境界BDが直感的に把握される。 This configuration allows the boundary BD of the danger area DA to be intuitively grasped.
警告域WAは、進路CSの中心である動線から一定距離以上離れた領域であり、危険域DAは、警告域WAから外側に向かって一定距離以上離れた領域である。 The warning area WA is an area that is a certain distance or more away from the traffic flow line that is the center of the route CS, and the danger area DA is an area that is a certain distance or more away toward the outside from the warning area WA.
この構成によれば、動線からの距離に応じて段階的に安全度が管理される。 With this configuration, safety levels are managed in stages according to the distance from the traffic flow.
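As a simple numeric illustration of this staged management, the lateral offset of the user from the flow line could be classified as in the following sketch; the distance thresholds are assumptions chosen for the example.

    def classify_zone(dist_from_flow_line_m, warn_at_m=2.0, danger_at_m=4.0):
        """Classify the user's lateral offset from the flow line (the center of the route CS).
        The warning area WA starts warn_at_m from the flow line; the danger area DA starts
        a further (danger_at_m - warn_at_m) outward from the warning area."""
        if dist_from_flow_line_m < warn_at_m:
            return "safe"
        if dist_from_flow_line_m < danger_at_m:
            return "warning"   # inside WA: the warning sound WS is localized on the boundary BD
        return "danger"        # inside DA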
音源定位算出部14は、センサ情報に基づいて危険性のある区域を特定し、警告域WAおよび危険域DAを設定する。
The sound source localization calculation unit 14 identifies areas with potential hazards based on sensor information and sets the warning area WA and the danger area DA.
この構成によれば、警告域WAおよび危険域DAが適切に設定される。 With this configuration, the warning area WA and danger area DA are set appropriately.
なお、本明細書に記載された効果はあくまで例示であって限定されるものでは無く、また他の効果があってもよい。 Note that the effects described in this specification are merely examples and are not limiting, and other effects may also be present.
[11.変形例]
以下、通知制御の他の例を説明する。図22は、頭部方向に基づく通知音SDの制御例を示す図である。以下、上述の実施例との相違点を中心に説明する。
11. Modifications
Another example of notification control will be described below. Fig. 22 is a diagram showing an example of control of the notification sound SD based on the head direction. The following description will focus on the differences from the above embodiment.
上述の実施形態では、立体音響を用いて進行方向TDの通知が行われた。しかし、立体音響のみでは正面の定位を感じにくい場合がある。本変形例では、正面方向へ正確にユーザUSを誘導するために、通知音SD(パルス音)のパルス間隔をユーザUSの頭部の向きに応じて変化させる。進行方向TDと実際の移動方向との差がパルス間隔によって直感的に把握されため、進行方向TDへの誘導が行われやすくなる。 In the above embodiment, notification of the traveling direction TD was performed using stereophonic sound. However, stereophonic sound alone can make it difficult to sense the positioning in front. In this modified example, in order to accurately guide the user US in the forward direction, the pulse interval of the notification sound SD (pulse sound) is changed according to the direction of the head of the user US. The difference between the traveling direction TD and the actual direction of movement can be intuitively grasped from the pulse interval, making it easier to guide the user in the traveling direction TD.
図23は、本変形例の情報処理装置2の構成の一例を示す図である。
FIG. 23 is a diagram showing an example of the configuration of the information processing device 2 of this modified example.
本変形例において、音響生成部15は、角度差算出部151、動き検知部152および通知音出力判定部153を有する。角度差算出部151は、ユーザUSの頭部の向きと進行方向TDとの角度差を算出する。動き検知部152は、ユーザUSの頭部の動きを検知する。通知音出力判定部153は、ユーザUSの頭部の動きに基づいて通知音SDの出力の適否を判定する。通知音出力部17は、通知音出力判定部153の判定結果に基づいて通知音SDを出力する。
In this modified example, the sound generation unit 15 has an angle difference calculation unit 151, a motion detection unit 152, and a notification sound output determination unit 153. The angle difference calculation unit 151 calculates the angle difference between the direction of the head of the user US and the traveling direction TD. The motion detection unit 152 detects the movement of the head of the user US. The notification sound output determination unit 153 determines whether the notification sound SD should be output based on the movement of the head of the user US. The notification sound output unit 17 outputs the notification sound SD based on the determination result of the notification sound output determination unit 153.
図24および図25は、通知方法の一例を示す図である。 FIGS. 24 and 25 show examples of notification methods.
頭部の向きは、頭部方向取得部13によって取得される。頭部方向取得部13は、地磁気センサなどを用いてユーザUSの頭部の向きを頭部方向HDとして取得する。例えば、通知音出力判定部153は、頭部方向HDと進行方向TDとの角度差が通知対象範囲内の角度である場合に、通知を行うことが妥当であると判定する。
The head direction is acquired by the head direction acquisition unit 13. The head direction acquisition unit 13 acquires the direction of the head of the user US as the head direction HD using a geomagnetic sensor or the like. For example, the notification sound output determination unit 153 determines that it is appropriate to perform notification when the angle difference between the head direction HD and the traveling direction TD is an angle within the notification target range.
通知音出力判定部153は、頭部の動きの方向に基づいて通知の適否を判定することもできる。例えば、通知音出力判定部153は、ユーザUSが進行方向TD側に向こうとしているときに通知を行い、進行方向TDとは無関係の方向を向こうとしているときに通知を行わないと判定することができる。
The notification sound output determination unit 153 can also determine whether notification is appropriate based on the direction of the head movement. For example, the notification sound output determination unit 153 can determine to perform notification when the user US is about to turn toward the traveling direction TD, and not to perform notification when the user US is about to turn in a direction unrelated to the traveling direction TD.
通知対象範囲とは、通知の対象となる角度範囲を意味する。通知音出力部17は、頭部方向HDと進行方向TDとの角度差が通知対象範囲内の角度である場合に、進行方向TDへの移動を誘導する通知音SDを出力する。図24の例では、通知が開始される最大の角度差が最大通知角度Θとして示されている。通知対象範囲は、進行方向TDと頭部方向HDとの角度差が-Θ以上Θ以下の角度範囲である。しかし、通知対象範囲の設定方法はこれに限られない。通知対象範囲は、システム開発者により任意に設定される。
The notification target range refers to the angle range that is the subject of notification. The notification sound output unit 17 outputs the notification sound SD that guides movement in the traveling direction TD when the angle difference between the head direction HD and the traveling direction TD is an angle within the notification target range. In the example of FIG. 24, the maximum angle difference at which notification starts is shown as the maximum notification angle Θ. The notification target range is the angle range in which the angle difference between the traveling direction TD and the head direction HD is from -Θ to Θ. However, the method of setting the notification target range is not limited to this. The notification target range is set arbitrarily by the system developer.
通知音出力部17は、通知音SDをパルス音として出力する。通知音出力部17は、進行方向TDと頭部方向HDとの角度差が小さいほどパルス音の通知頻度を高くする。例えば、通知音出力部17は、パルス音の通知間隔を、頭部方向HDと進行方向TDとの角度差に反比例した大きさに設定する。
The notification sound output unit 17 outputs the notification sound SD as a pulse sound. The notification sound output unit 17 increases the notification frequency of the pulse sound as the angle difference between the traveling direction TD and the head direction HD becomes smaller. For example, the notification sound output unit 17 sets the notification interval of the pulse sound to a magnitude inversely proportional to the angle difference between the head direction HD and the traveling direction TD.
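A minimal sketch of a pulse-interval rule is shown below, following the stated behavior that the notification frequency rises as the angle difference shrinks; the proportionality constant and clamping limits are assumptions for illustration only.

    def pulse_interval_s(angle_diff_deg, k=0.05, min_interval_s=0.2, max_interval_s=2.0):
        """Notification interval of the pulse sound: grows with the misalignment between the
        head direction HD and the traveling direction TD, so a small angle difference yields
        a short interval and therefore a high notification frequency."""
        interval = k * max(abs(angle_diff_deg), 1.0)   # avoid a zero interval at 0 degrees
        return max(min_interval_s, min(max_interval_s, interval))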
図26は、通知音SDの生成フローの一例を説明する図である。 Figure 26 is a diagram explaining an example of the generation flow of the notification sound SD.
進行方向取得部12は現在の進路CSCの進行方向TDCを取得する(ステップS11)。頭部方向取得部13は、ユーザUSの頭部方向HDを取得する(ステップS12)。角度差算出部151は、進行方向TDCと頭部方向HDとの角度差を算出する(ステップS13)。動き検知部152は、ユーザUSの頭部の動きを検出する。通知音出力判定部153は、頭部の動く向きを検出し、頭部が進行方向TDに近づく向きに動いているか否かに基づいて通知の適否を判定する(ステップS14)。
The traveling direction acquisition unit 12 acquires the traveling direction TDC of the current route CSC (step S11). The head direction acquisition unit 13 acquires the head direction HD of the user US (step S12). The angle difference calculation unit 151 calculates the angle difference between the traveling direction TDC and the head direction HD (step S13). The motion detection unit 152 detects the movement of the head of the user US. The notification sound output determination unit 153 detects the direction in which the head is moving and determines whether notification is appropriate based on whether the head is moving in a direction approaching the traveling direction TD (step S14).
頭部が進行方向TDに近づく向きに動いている場合には(ステップS14:Yes)、通知音出力判定部153は、進行方向TDCと頭部方向HDとの角度差が通知対象範囲内の角度であるか否かを判定する(ステップS15)。角度差が通知対象範囲内の角度である場合には(ステップS15:Yes)、通知音出力判定部153は、通知を行うことが妥当であると判定する。
When the head is moving in a direction approaching the traveling direction TD (step S14: Yes), the notification sound output determination unit 153 determines whether the angle difference between the traveling direction TDC and the head direction HD is an angle within the notification target range (step S15). If the angle difference is an angle within the notification target range (step S15: Yes), the notification sound output determination unit 153 determines that it is appropriate to perform notification.
通知音出力部17は、通知音出力判定部153による一連の判定の結果に基づいて通知音SDCの出力を行う。すなわち、通知音出力部17は、角度差が通知対象範囲内の角度であり、且つ、ユーザUSの頭部が進行方向TDCに近づく向きに動いている場合に、進行方向TDCへの移動を誘導する通知音SDCを出力する。
The notification sound output unit 17 outputs the notification sound SDC based on the results of the series of determinations by the notification sound output determination unit 153. That is, the notification sound output unit 17 outputs the notification sound SDC that guides movement in the traveling direction TDC when the angle difference is an angle within the notification target range and the head of the user US is moving in a direction approaching the traveling direction TDC.
頭部が進行方向TDCから遠ざかる向きに動いている場合(ステップS14:No)、および、角度差が通知対象範囲外の角度である場合(ステップS15:No)には、通知音出力判定部153は、通知を行うことが妥当ではないと判定する。この場合には、ステップS11に戻り、通知を行うことが妥当であると判定されるまで上述の処理が繰り返される。
If the head is moving in a direction away from the traveling direction TDC (step S14: No), or if the angle difference is an angle outside the notification target range (step S15: No), the notification sound output determination unit 153 determines that it is not appropriate to perform notification. In this case, the process returns to step S11, and the above processing is repeated until it is determined that notification is appropriate.
なお、上述のフローでは、頭部の動きの判定(ステップS14)が行われてから角度差の判定(ステップ15)が行われた。しかし、これらの判定はどちらが先に行われてもよい。通知音出力判定部153は、双方の判定結果を総合的に考慮して、最終的な通知の妥当性を判定する。
In the above flow, the head movement determination (step S14) was performed before the angle difference determination (step S15). However, either of these determinations may be performed first. The notification sound output determination unit 153 comprehensively considers both determination results to decide whether the final notification is appropriate.
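For illustration only, the combined check of steps S14 and S15 might look like the following Python sketch; approximating "moving toward the traveling direction" by the sign of the head turn rate is an assumption made here, and the 45-degree default stands in for the maximum notification angle Θ.

    def should_notify(angle_diff_deg, head_turn_rate_deg_s, max_notify_angle_deg=45.0):
        """Return True when the notification sound SDC should be emitted:
        (a) the head is turning so that the angle difference to TDC shrinks (step S14), and
        (b) the angle difference is within the notification target range [-Θ, Θ] (step S15)."""
        moving_toward = (angle_diff_deg * head_turn_rate_deg_s) < 0   # turning reduces |difference|
        within_range = abs(angle_diff_deg) <= max_notify_angle_deg
        return moving_toward and within_range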
図27は、通知音SDの生成フローの他の例を説明する図である。 FIG. 27 is a diagram illustrating another example of the generation flow of the notification sound SD.
本例において図26の例と異なる点は、通知の適否の判定条件としてステップS26が追加されている点である。ステップS21~ステップS25は、ステップS11~ステップS15と概ね同様である。相違点は、ステップS24~ステップS25の順番がステップS14~ステップS15と異なる点のみである。 This example differs from the example in FIG. 26 in that step S26 has been added as a condition for determining whether or not to notify. Steps S21 to S25 are generally similar to steps S11 to S15. The only difference is that the order of steps S24 to S25 is different from that of steps S14 to S15.
図27の例では、頭部が適正方向へ移動を開始してからの経過時間が判定条件として追加されている(ステップS26)。通知音出力部17は、ステップS24~ステップS25の判定結果が出されてからの経過時間を監視する。通知音出力部17は、ユーザUSの頭部が進行方向TDCに近づく向きに動き始めてから、予め設定された時間間隔以上経過してから通知音SDCの出力を開始する。通知音の出力を開始する時間はシステム開発者により任意に設定される。
In the example of FIG. 27, the elapsed time since the head started moving in the appropriate direction is added as a determination condition (step S26). The notification sound output unit 17 monitors the time elapsed since the determination results of steps S24 to S25 were obtained. The notification sound output unit 17 starts outputting the notification sound SDC after a preset time interval or more has elapsed since the head of the user US began moving in a direction approaching the traveling direction TDC. The time at which output of the notification sound starts is set arbitrarily by the system developer.
図28は、通知対象範囲内の通知に関する設定例を示す図である。 Figure 28 shows an example of settings for notifications within the notification range.
通知対象範囲における通知頻度、通知音量および最大通知角度ΘはUIを用いて設定することができる。「通知頻度」の欄では、通知音SDとして出力されるパルス音の時間間隔を通知頻度として指定することができる。「通知音量」の欄では、通知音SDの音量を指定することができる。「最大通知角度」の欄では、通知が開始される最大の角度差を最大通知角度Θとして指定することができる。通知頻度および通知音量は、例えば0~1の範囲の規格化されたレベルの数値として指定される。 The notification frequency, notification volume, and maximum notification angle Θ within the notification target range can be set using the UI. In the "Notification frequency" field, the time interval of the pulse sound output as the notification sound SD can be specified as the notification frequency. In the "Notification volume" field, the volume of the notification sound SD can be specified. In the "Maximum notification angle" field, the maximum angle difference at which notifications begin can be specified as the maximum notification angle Θ. The notification frequency and notification volume are specified as numerical values with standardized levels ranging from 0 to 1, for example.
[12.周辺施設案内に関する変形例]
以下、周辺施設案内に関する変形例を説明する。図29は、周辺施設SFの通知例を示す図である。以下、上述の実施例との相違点を中心に説明する。
[12. Modifications regarding surrounding facility guidance]
The following describes a modified example of the surrounding facility guidance. Fig. 29 is a diagram showing an example of notification of the surrounding facilities SF. The following describes mainly the differences from the above embodiment.
店の立ち並ぶ繁華街やショッピングモールのような大型商業施設、また区画化された売り場で構成される大型小売店のような場所では、所望の店や売り場がどの位置にあるのか直感的に分かり難い場合が多い。このような場合に、周囲のどちらの方角に何があるかを音声で通知することで、地図を見るために立ち止まる煩わしさや、地図を見ながら歩行した際の衝突や転倒といった事故のリスクを低減することができる。 In places like busy shopping streets lined with shops, large commercial facilities such as shopping malls, and large retail stores with partitioned sales areas, it is often difficult to intuitively know where the desired store or sales area is located. In such cases, by providing audio notification of what is in which direction around you, it is possible to reduce the inconvenience of having to stop to look at a map and the risk of accidents such as bumping into or falling when walking while looking at a map.
なお、ユーザが視覚障がい者の場合、上記のような場所に限らず、視覚の及ばない周囲全ての場所に関する情報を音声等で得る必要があるため、音声等による視覚以外の情報提示の必要性は健常者より高い。 In addition, if the user is visually impaired, they need to receive information about all surrounding locations beyond the reach of their eyesight through audio, etc., not just the locations mentioned above, so the need for non-visual information to be presented through audio, etc. is greater than for able-bodied users.
特許第4315211号公報には、案内候補リストの中からユーザが案内対象物を携帯情報端末のUI操作で選択し、案内対象物まで、もしくは経路途中の位置を立体音響による音声データでヘッドフォンから通知する方法が記載されている。 Patent No. 4315211 describes a method in which the user selects an object to be guided from a list of guide candidates by operating the UI of a mobile information terminal, and is notified of the location of the object or its location along the route via stereophonic audio data through headphones.
しかし、例えば周囲数百メートル以内にある店や施設名の音声通知を受け、ショップの特売品やレストランのメニュー、開催イベントといった、その場所に関連する情報を得たいと思った場合、通知を受けた施設名を例えば携帯情報端末をポケットやカバンから取り出してブラウザを開いて検索したり、専用アプリを開いてリストから選択したりして詳細な情報を調べたりする必要がある。その施設まで遠く移動経路が不明な場合、ナビゲーションアプリを開いて移動経路を調べ、地図を見てその施設まで移動する必要がある。 However, if you receive a voice notification of the names of stores or facilities within a few hundred meters of you and want to obtain information related to that location, such as special offers at the store, restaurant menus, or upcoming events, you will need to take your mobile information device out of your pocket or bag, open a browser to search for the name of the facility, or open a dedicated app and select it from a list to find out more information. If the facility is far away and you do not know the route, you will need to open a navigation app to find the route and look at a map to get to the facility.
このような複雑な操作手順では、音声通知は健常者にとっては操作に用いる携帯情報端末の画面から視覚で情報を得た場合に比べ情報量が少ない等、メリットに乏しいという課題がある。また、視覚障がい者にとっては操作手順が複雑で、視覚に頼らず操作することが難しいという課題がある。 With such complex operating procedures, voice notifications have few benefits for able-bodied people, as the amount of information they provide is less than when information is obtained visually from the screen of a mobile information terminal used to operate the device. In addition, for visually impaired people, the operating procedures are complex, making it difficult to operate the device without relying on vision.
本変形例は、上述の課題を解決するために考案されたものである。本変形例では、周辺施設SFの場所や方向が立体音響で通知される。ユーザUSが特定のトリガ動作を行うことで、周辺施設SFの詳細情報や移動経路RTなどが提示される。これにより、従来携帯情報端末上のアプリ等からの複雑な操作が必要だった処理を少ない動作で実行することができる。特にトリガ動作を音声操作やジェスチャ操作等で行うことができれば、携帯情報端末等を別途必要としないハンズフリーによる操作を実現できる。 This modified example has been devised to solve the above-mentioned problems. In this modified example, the location and direction of the surrounding facilities SF are notified using stereophonic sound. When the user US performs a specific trigger action, detailed information about the surrounding facilities SF and a travel route RT are presented. This makes it possible to execute processes that previously required complex operations from an app on a mobile information terminal with fewer operations. In particular, if trigger actions can be performed using voice or gesture operations, hands-free operation can be realized without the need for a separate mobile information terminal or the like.
なお、詳細情報は、施設名以外の、周辺施設SFに関連する様々な情報を含むことができる。周辺施設SFがショップであれば、営業時間、販売情報、および、周辺施設SFまでの距離などが詳細情報の例として挙げられる。周辺施設SFが駅であれば、路線情報、運行情報、および、周辺施設SFまでの距離などが詳細情報の例として挙げられる。 The detailed information may include various information related to the surrounding facility SF other than the facility name. If the surrounding facility SF is a shop, examples of detailed information include business hours, sales information, and the distance to the surrounding facility SF. If the surrounding facility SF is a station, examples of detailed information include line information, operation information, and the distance to the surrounding facility SF.
図29の例では、周辺施設SFの位置を示す通知音SD(施設通知音SDF)が立体音響でユーザUSに提示される。例えば、施設通知音SDFは、周辺施設SFの位置(例えば周辺施設SFの中心、あるいは、サーバSVの地図データベース等に登録されている位置)、または、周辺施設SFが面する通路PA上に音源定位を有する立体音響として通知される。ユーザUSは、施設通知音SDFに基づいて周辺施設SFの存在および位置を認識することができる。 In the example of Fig. 29, a notification sound SD (facility notification sound SD F ) indicating the position of the peripheral facility SF is presented to the user US in stereophonic sound. For example, the facility notification sound SD F is notified as a stereophonic sound having a sound source localization at the position of the peripheral facility SF (e.g., the center of the peripheral facility SF, or a position registered in a map database of the server SV) or on the passage PA facing the peripheral facility SF. The user US can recognize the existence and position of the peripheral facility SF based on the facility notification sound SD F.
図29の例では、周辺施設SFとして銀行や駅などの屋外の施設が示されているが、周辺施設SFは屋外施設に限定されない。通知対象となる周辺施設SFは、ショッピングモール内の店舗や量販店の区画化された売り場コーナーなどの屋内の施設であってもよい。 In the example of FIG. 29, outdoor facilities such as banks and train stations are shown as the surrounding facilities SF, but the surrounding facilities SF are not limited to outdoor facilities. The surrounding facilities SF that are the subject of notifications may also be indoor facilities such as stores in a shopping mall or partitioned sales corners in a mass retailer.
ユーザUSは、予め設定されたトリガ動作によって、興味のある周辺施設SFを選択することができる。トリガ動作としては、周辺施設SF(施設通知音SDFの音源定位)の方向へ頭部を向ける、あるいは、周辺施設SFの方向へ頭部を向けた状態でUI(User Interface)25を用いて音声操作、ボタン操作またはジェスチャ操作を行う、などが挙げられる。例えば、施設通知音SDFの通知中(例えば通知開始から一定時間内)にトリガ動作が行われると、周辺施設SFの詳細情報や周辺施設SFへの移動経路RTの提示が行われる。 The user US can select a surrounding facility SF of interest by a preset trigger operation. Examples of the trigger operation include turning the head toward the surrounding facility SF (sound source location of the facility notification sound SDF ), or performing a voice operation, a button operation, or a gesture operation using a UI (User Interface) 25 while the head is facing the surrounding facility SF. For example, when a trigger operation is performed during notification of the facility notification sound SDF (for example, within a certain period of time from the start of notification), detailed information of the surrounding facility SF and a travel route RT to the surrounding facility SF are presented.
図30は、本変形例の情報処理装置3の要部構成の一例を示す図である。図30は、周辺施設案内に必要な構成を抜き出して記載したものであり、上述した各実施例の構成(図8に示した情報処理装置1の構成および図23に示した情報処理装置2の構成)を除外するものではない。以下、上述した実施例との相違点を中心に説明する。
FIG. 30 is a diagram showing an example of the main configuration of the information processing device 3 of this modified example. FIG. 30 extracts and shows only the configuration required for the surrounding facility guidance, and does not exclude the configurations of the above-described embodiments (the configuration of the information processing device 1 shown in FIG. 8 and the configuration of the information processing device 2 shown in FIG. 23). The following description focuses on the differences from the above-described embodiments.
情報処理装置3は、前述した各実施例の構成に加えて、周辺情報取得部21、自己位置取得部22、経路情報・施設情報取得部23、操作判定部24およびUI25を有する。
In addition to the configuration of each of the embodiments described above, the information processing device 3 has a surrounding information acquisition unit 21, a self-position acquisition unit 22, a route information and facility information acquisition unit 23, an operation determination unit 24, and a UI 25.
自己位置取得部22は、ユーザUSの所有するヘッドフォンや携帯情報端末等(クライアント端末TM)に搭載されたGPS/GNSS等により経緯度のようなユーザUSの位置を取得する。周辺情報取得部21は、自己位置周辺の地図情報をサーバから取得し、通知対象とする周辺施設SF(通知対象施設)のリストを作成する。通路PA上に音源定位を設定する場合、周辺情報取得部21は、自己位置と施設周辺の通路情報を取得する。 The self-location acquisition unit 22 acquires the location of the user US, such as longitude and latitude, using a GPS/GNSS or the like installed in headphones or a mobile information terminal (client terminal TM) owned by the user US. The surrounding information acquisition unit 21 acquires map information around the self-location from the server and creates a list of surrounding facilities SF (notification target facilities) to be notified. When setting sound source localization on an aisle PA, the surrounding information acquisition unit 21 acquires the self-location and information on the aisle around the facility.
周辺情報取得部21は、ユーザUSの施設利用履歴に基づいて、通知すべき周辺施設SFをカスタマイズすることができる。カスタマイズとは、通知すべき周辺施設SFの取捨選択や優先順位付けを意味する。例えば、施設利用履歴は、周辺施設SFごとのユーザUSの利用日時を規定する。施設利用履歴は、ユーザUSと紐づけられてサーバSVに保存されている。周辺情報取得部21は、周辺施設SFごとのトリガ動作の検出頻度に基づいて、通知すべき周辺施設SFをカスタマイズすることもできる。 The surrounding information acquisition unit 21 can customize the surrounding facilities SF to be notified based on the facility usage history of the user US. Customization means selecting and prioritizing the surrounding facilities SF to be notified. For example, the facility usage history specifies the date and time of use by the user US for each surrounding facility SF. The facility usage history is linked to the user US and stored on the server SV. The surrounding information acquisition unit 21 can also customize the surrounding facilities SF to be notified based on the detection frequency of the trigger action for each surrounding facility SF.
カスタマイズは機械学習モデルを用いて行うことができる。機械学習モデルは、ユーザの属性、利用履歴、買い物の履歴等を入力することでユーザの習慣に最適化した通知設定を出力する。例えば、月曜午前に特定の病院、その後薬局というルートを毎週繰り返すユーザに対して、その曜日には病院と薬局を優先的に通知するように設定する。病院に行ったという情報をトリガとして、薬局の通知を優先するなどが考えられる。 Customization can be done using machine learning models. The machine learning model inputs user attributes, usage history, shopping history, etc., and outputs notification settings optimized for the user's habits. For example, for a user who goes to a specific hospital on Monday morning and then to a pharmacy every week, it can be set to prioritize notifications for the hospital and pharmacy on that day. One possible scenario would be to use the information that the user has visited the hospital as a trigger to prioritize notifications for the pharmacy.
カスタマイズは、他のユーザの設定等を参照しながら行うこともできる。例えば、サーバSVは、過去にナビゲーションシステムNVを利用した様々なユーザの施設利用履歴を保持する。機械学習モデルは、類似する属性のユーザの施設利用履歴を用いて、通知すべき周辺施設SFの選択および優先順位付けなどを学習することができる。 Customization can also be performed by referring to the settings of other users. For example, the server SV holds facility usage histories of various users who have used the navigation system NV in the past. The machine learning model can use the facility usage histories of users with similar attributes to learn the selection and prioritization of surrounding facilities SF to be notified of.
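As a toy illustration of such prioritization, the frequency-and-recency ranking below stands in for the machine learning model mentioned above; the scoring rule, the weekday bonus, and the data layout are assumptions for this example, not the actual learned model.

    from collections import Counter
    from datetime import datetime

    def rank_facilities(candidates, usage_history, now=None, weekday_bonus=2.0):
        """Order notification candidates so that peripheral facilities SF the user habitually
        visits on this weekday come first. usage_history: list of (facility_id, visit_datetime)."""
        now = now or datetime.now()
        counts = Counter(fid for fid, _ in usage_history)
        same_weekday = Counter(fid for fid, ts in usage_history if ts.weekday() == now.weekday())
        def score(fid):
            return counts[fid] + weekday_bonus * same_weekday[fid]
        return sorted(candidates, key=score, reverse=True)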
なお、周辺施設SFや周辺通路の情報は、カメラを用いた画像認識処理により取得することもできる。例えば、施設形状のパターンマッチング、看板の文字認識、路面認識等から、周辺施設SFや周辺通路の情報を取得することができる。周辺施設SFの距離情報はデプスセンサ等から取得してもよい。この場合、周辺情報取得部21はサーバSV側ではなくクライアント端末TM側に設けられる。また、カメラの位置を自己位置とすることで、自己位置取得部22を省略することができる。 In addition, information on the surrounding facilities SF and surrounding passageways can also be obtained by image recognition processing using a camera. For example, information on the surrounding facilities SF and surrounding passageways can be obtained from pattern matching of facility shapes, character recognition on signs, road surface recognition, etc. Distance information on the surrounding facilities SF may also be obtained from a depth sensor, etc. In this case, the surrounding information acquisition unit 21 is provided on the client terminal TM side, not on the server SV side. Also, by setting the position of the camera as the self-position, the self-position acquisition unit 22 can be omitted.
ユーザUSの周囲に複数の通知対象施設が存在する場合には、通知対象施設ごとに施設通知音SDFが通知される。通知される周辺施設SFの順番は任意に設定することができる。例えば、通知音出力部17は、複数の周辺施設SFの施設通知音SDFを、周辺施設SFとユーザUSとの距離が遠い順または近い順に出力することができる。ユーザUSは、音声やジェスチャなどを選択決定操作として用いて所望の周辺施設SFの選択および決定を行う。操作判定部24は、施設通知音SDFの音源定位の方向を特定する選択決定操作をトリガ動作として検出する。
When there are multiple facilities to be notified around the user US, the facility notification sound SDF is notified for each of the facilities to be notified. The order in which the peripheral facilities SF are notified can be set arbitrarily. For example, the notification sound output unit 17 can output the facility notification sounds SDF of the multiple peripheral facilities SF in order of increasing or decreasing distance between the peripheral facility SF and the user US. The user US selects and decides on a desired peripheral facility SF using voice, a gesture, or the like as a selection/decision operation. The operation determination unit 24 detects, as the trigger operation, a selection/decision operation that specifies the direction of the sound source localization of the facility notification sound SDF.
選択決定操作は、選択操作および決定操作を含むことができる。選択操作は、施設通知音SDFの音源定位の方向を選択する操作である。決定操作は、選択された方向に音源定位を有する周辺施設SFを所望の周辺施設SFとして決定する操作である。決定操作は、決定した周辺施設SFに関して必要な処理(詳細情報の読み上げや経路案内など)を情報処理装置3に実施させるためのトリガ動作を含むことができる。
The selection/decision operation can include a selection operation and a decision operation. The selection operation is an operation of selecting the direction of the sound source localization of the facility notification sound SDF. The decision operation is an operation of deciding the peripheral facility SF having the sound source localization in the selected direction as the desired peripheral facility SF. The decision operation can include a trigger operation for causing the information processing device 3 to perform the necessary processing (reading out detailed information, route guidance, and the like) for the decided peripheral facility SF.
例えば、選択操作は、施設通知音SDFが発せられてから所定時間以内に、施設通知音SDFの音源定位の方向にユーザUSの頭部を向ける動作を含むことができる。操作判定部24は、施設通知音SDFによる通知の開始から一定時間内に音源定位がユーザUSの顔や耳の正面を向いた場合に選択操作が行われたと判定することができる。
For example, the selection operation can include an action of turning the head of the user US in the direction of the sound source localization of the facility notification sound SDF within a predetermined time after the facility notification sound SDF is emitted. The operation determination unit 24 can determine that the selection operation has been performed when the sound source localization comes to face the front of the face or ears of the user US within a certain time from the start of the notification by the facility notification sound SDF.
決定操作は、所定のボタン操作、音声操作およびジェスチャ操作を含むことができる。操作判定部24は、センサ情報(タッチパネルやマイクの信号など)に基づいて決定操作の有無を判定することができる。決定操作は、選択操作において音源定位の方向に向けられた頭部の位置を所定時間維持する動作として行うこともできる。これにより、周辺施設SFの選択と決定を、周辺施設SFの方向へ顔を向けるという一つの動作によって行うことができる。たまたま顔が周辺施設SFの方向に向いていることと区別するために、顔の上下の姿勢を水平に近い状態にするという要件を決定操作の要件に加えることもできる。
The decision operation can include a predetermined button operation, voice operation, or gesture operation. The operation determination unit 24 can determine whether the decision operation has been performed based on sensor information (such as signals from a touch panel or a microphone). The decision operation can also be performed as an action of maintaining, for a predetermined time, the head position that was turned in the direction of the sound source localization in the selection operation. This allows the selection and decision of the peripheral facility SF to be performed by the single action of turning the face toward the peripheral facility SF. To distinguish this from the face merely happening to face the direction of the peripheral facility SF, a requirement that the vertical posture of the face be close to horizontal can be added to the requirements of the decision operation.
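For illustration, the head-direction variant of the selection and decision operations could be sketched as below. The 15-degree tolerance, the 5-second selection window, and the 2-second dwell time are assumptions; the disclosure only requires "within a predetermined time" and "for a predetermined time".

    import math
    import time

    def is_facing(head_yaw_rad, bearing_to_source_rad, tolerance_rad=math.radians(15)):
        """True if the head direction is within a tolerance of the bearing to the sound source."""
        diff = (bearing_to_source_rad - head_yaw_rad + math.pi) % (2 * math.pi) - math.pi
        return abs(diff) <= tolerance_rad

    def detect_select_and_decide(poll_head_yaw, bearing_rad, notify_started_at,
                                 select_window_s=5.0, dwell_s=2.0):
        """Selection: the head faces the localization within select_window_s of the notification
        (notify_started_at is a time.monotonic() timestamp). Decision: the head keeps facing it
        for a further dwell_s. poll_head_yaw() returns the current head yaw in radians."""
        deadline = notify_started_at + select_window_s
        while time.monotonic() < deadline:
            if is_facing(poll_head_yaw(), bearing_rad):
                dwell_end = time.monotonic() + dwell_s
                while time.monotonic() < dwell_end:
                    if not is_facing(poll_head_yaw(), bearing_rad):
                        break
                    time.sleep(0.05)
                else:
                    return "decided"    # facing maintained: selection and decision in one action
                return "selected"       # facing detected but not maintained
            time.sleep(0.05)
        return "none"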
決定操作は、選択操作に応答して行われた周辺施設SFの詳細情報の通知に対するユーザUSの操作を含むことができる。例えば、経路計画部11は、詳細情報の通知中(例えば通知開始から一定時間内)に行われた決定操作に応答して、通知中の周辺施設SFを目的地とする移動経路RTの計画を行うことができる。この場合、決定操作は、周辺施設SFへ向けて動き出す動作を含むことができる。これにより、直感的な操作が可能となる。なお、ナビゲーションシステムNVは、ユーザUSが周辺施設SFを選択または訪問した場合に広告収入を得る仕組みを備えていてもよい。
The decision operation can include an operation by the user US in response to the notification of the detailed information of the peripheral facility SF performed in response to the selection operation. For example, the route planning unit 11 can plan a travel route RT with the notified peripheral facility SF as the destination in response to a decision operation performed during the notification of the detailed information (for example, within a certain time from the start of the notification). In this case, the decision operation can include an action of starting to move toward the peripheral facility SF. This enables intuitive operation. The navigation system NV may also have a mechanism for earning advertising revenue when the user US selects or visits a peripheral facility SF.
経路情報・施設情報取得部23は、通知対象施設に関連した情報をサーバSVやクライアント端末TM上の記憶装置から読みだす。経路情報・施設情報取得部23は、経路案内を行うための経路情報をサーバSVやクライアント端末TM上の記憶装置から読み出す。
The route information and facility information acquisition unit 23 reads information related to the notification target facility from a storage device on the server SV or the client terminal TM. The route information and facility information acquisition unit 23 also reads route information for performing route guidance from a storage device on the server SV or the client terminal TM.
図31は、周辺施設案内に関する処理フローの一例を示す図である。 Figure 31 shows an example of a processing flow for providing information about nearby facilities.
周辺情報取得部21は、サーバSVに保存されている地図情報から、通知対象となる周辺施設SF(通知対象施設)の位置を取得する(ステップS31)。自己位置取得部22は、GPSなどを用いてユーザUSの頭部の位置を自己位置として取得する(ステップS32)。頭部方向取得部13は、頭部装着デバイスに搭載した姿勢センサ(地磁気センサ等)を用いて、ユーザUSの頭部の向きを取得する(ステップS33)。
The surrounding information acquisition unit 21 acquires the positions of the peripheral facilities SF (notification target facilities) to be notified from the map information stored in the server SV (step S31). The self-position acquisition unit 22 acquires the position of the head of the user US as the self-position using GPS or the like (step S32). The head direction acquisition unit 13 acquires the direction of the head of the user US using a posture sensor (such as a geomagnetic sensor) mounted on the head-mounted device (step S33).
なお、ユーザUSの頭部の向きは、ユーザUSの周辺に設置された監視カメラの映像から求めることもできる。その場合、サーバSVは、監視カメラに写るユーザUSの映像から頭部の向きを検出し、頭部方向取得部13に出力する。ユーザUSの頭部の向きは、ネックバンド搭載カメラの映像から求めることもできる。
The direction of the head of the user US can also be obtained from the video of a surveillance camera installed around the user US. In that case, the server SV detects the head direction from the video of the user US captured by the surveillance camera and outputs it to the head direction acquisition unit 13. The direction of the head of the user US can also be obtained from the video of a neckband-mounted camera.
音源定位算出部14は、ユーザUSの頭部の位置および向きをユーザ位置情報として取得する。音源定位算出部14は、ユーザ位置情報および通知対象施設の位置情報から、通知対象施設までの方向および距離を計算する(ステップS34)。音源定位算出部14は、算出された方向および距離だけユーザUSから離れた位置を施設通知音SDFの音源定位として設定する。
The sound source localization calculation unit 14 acquires the position and orientation of the head of the user US as user position information. The sound source localization calculation unit 14 calculates the direction of and distance to the notification target facility from the user position information and the position information of the notification target facility (step S34). The sound source localization calculation unit 14 sets a position away from the user US by the calculated direction and distance as the sound source localization of the facility notification sound SDF.
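For illustration, step S34 could be computed as in the sketch below, under the simplifying assumption of a local planar coordinate frame (real latitude/longitude data would be projected first); in this simple form the localization reduces to the registered facility position, before any re-placement onto a passage as described later.

    import math

    def facility_bearing_and_distance(user_xy, facility_xy):
        """Direction (radians from the +x axis) and distance from the user's head position
        to the notification target facility, in a local planar frame."""
        dx, dy = facility_xy[0] - user_xy[0], facility_xy[1] - user_xy[1]
        return math.atan2(dy, dx), math.hypot(dx, dy)

    def facility_source_localization(user_xy, facility_xy):
        """Place the facility notification sound SDF at the calculated direction and distance
        from the user US (step S34)."""
        bearing, dist = facility_bearing_and_distance(user_xy, facility_xy)
        return (user_xy[0] + dist * math.cos(bearing),
                user_xy[1] + dist * math.sin(bearing))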
通知音出力部17は、立体音響として生成された施設通知音SDFを用いて周辺施設SFの施設名等をユーザUSに通知する(ステップS35)。通知を行う際に音楽や他の通知音が鳴っている場合には、音量調整部16は、施設通知音SDF以外の音が小さくなるように施設通知音SDFと施設通知音SDF以外の音とのミキシングを行ってもよい。
The notification sound output unit 17 notifies the user US of the facility name and the like of the peripheral facility SF using the facility notification sound SDF generated as stereophonic sound (step S35). If music or another notification sound is playing when the notification is performed, the volume adjustment unit 16 may mix the facility notification sound SDF with the other sounds so that the sounds other than the facility notification sound SDF become quieter.
操作判定部24は、センサ情報に基づいて、周辺施設SFの選択および決定に関わるトリガ動作の有無を判定する。例えば、操作判定部24は、施設通知音SDFの通知中に、選択操作としてユーザUSが通知中の周辺施設SFの方向に顔等を向けたか否かを判定する(ステップS36)。選択操作が検出されない場合には(ステップS36:No)、情報処理装置3は周辺施設案内を行わない。
The operation determination unit 24 determines, based on sensor information, whether there is a trigger operation related to the selection and decision of a peripheral facility SF. For example, the operation determination unit 24 determines whether the user US has turned his or her face or the like in the direction of the peripheral facility SF being notified as the selection operation during the notification of the facility notification sound SDF (step S36). If the selection operation is not detected (step S36: No), the information processing device 3 does not perform the surrounding facility guidance.
選択操作が検出された場合には(ステップS36:Yes)、情報処理装置3は、通知中の周辺施設SFの方向にユーザUSが向いていることを音や振動で通知(フィードバック)する(ステップS37)。フィードバック手段としては、通知音出力部17や、頭部装着デバイスに搭載したバイブレータなどを用いることができる。例えば、フィードバック手段は、ユーザUSの頭部の向きが施設通知音SDFの音源定位の方向を中心とした所定の角度範囲内に含まれる場合にユーザUSにフィードバックを行う。
When the selection operation is detected (step S36: Yes), the information processing device 3 notifies (feeds back) to the user US, with sound or vibration, that the user US is facing the direction of the peripheral facility SF being notified (step S37). As the feedback means, the notification sound output unit 17, a vibrator mounted on the head-mounted device, or the like can be used. For example, the feedback means provides feedback to the user US when the direction of the head of the user US falls within a predetermined angle range centered on the direction of the sound source localization of the facility notification sound SDF.
操作判定部24は、通知中に決定操作となるトリガ動作が発動されたか否かを判定する(ステップS38)。決定操作としては、音声操作、ボタン操作およびジェスチャ操作などが挙げられる。トリガ動作が検出されない場合には(ステップS38:No)、情報処理装置3は周辺施設案内を行わない。
The operation determination unit 24 determines whether a trigger operation serving as the decision operation has been performed during the notification (step S38). Examples of the decision operation include a voice operation, a button operation, and a gesture operation. If the trigger operation is not detected (step S38: No), the information processing device 3 does not perform the surrounding facility guidance.
トリガ動作が検出された場合には(ステップS38:Yes)、情報処理装置3は、トリガ動作に応じた処理を行う。処理内容としては、周辺施設SFの詳細情報の読み上げや、周辺施設SFへの経路案内などが例示できる(ステップS39)。経路案内に関するトリガ動作が検出された場合には、経路計画部11は、周辺施設SFの位置を示す施設通知音SDFに対するユーザUSのトリガ動作に応答して、周辺施設SFを目的地とする移動経路RTの計画を行う。
When the trigger operation is detected (step S38: Yes), the information processing device 3 performs processing corresponding to the trigger operation. Examples of the processing include reading out detailed information on the peripheral facility SF and route guidance to the peripheral facility SF (step S39). When a trigger operation related to route guidance is detected, the route planning unit 11 plans a travel route RT with the peripheral facility SF as the destination in response to the trigger operation of the user US for the facility notification sound SDF indicating the position of the peripheral facility SF.
図32ないし図35は、周辺施設SFの通知例を示す図である。 Figures 32 to 35 show examples of notifications for surrounding facilities SF.
図32の例では、施設通知音SDFの音源定位は、周辺施設SFが面する通路PA上に設定されている。周辺施設SFがユーザUSの自己通路PACに面している場合には、施設通知音SDFの音源定位は自己通路PAC上に設定することができる。自己通路PACとは、ユーザUSが移動中の通路PAを意味する。通路PA上に複数の分岐がある場合には、直進方向の通路PAが自己通路PACとなる。 In the example of Fig. 32, the sound source location of the facility notification sound SDF is set on the passage PA facing the surrounding facility SF. When the surrounding facility SF faces the user US's own passage PA -C , the sound source location of the facility notification sound SDF can be set on the user US's own passage PA -C . The user US's own passage PA -C means the passage PA through which the user US is moving. When there are multiple branches on the passage PA, the passage PA in the straight direction becomes the user US's own passage PA- C .
通知音出力部17は、自己通路PAC上のユーザUSに対して、自己通路PACに面した周辺施設SFの施設通知音SDFを提示する。音源定位が自己通路PAC上に設定されることで、周辺施設SFの位置が把握しやすくなる。ユーザUSは、施設通知音SDFを頼りに周辺施設SFに向かって移動することができる。
The notification sound output unit 17 presents, to the user US on the own passage PAC, the facility notification sound SDF of a peripheral facility SF facing the own passage PAC. Setting the sound source localization on the own passage PAC makes the position of the peripheral facility SF easier to grasp. The user US can move toward the peripheral facility SF relying on the facility notification sound SDF.
図33の例では、施設通知音SDFの音源定位は、自己通路PACと交差する通路PA上に設定されている。通知音出力部17は、ユーザUSが、音源定位の設定された通路PAと現在の自己通路PACとの交差部分に到達したときに施設通知音SDFを提示する。
In the example of FIG. 33, the sound source localization of the facility notification sound SDF is set on a passage PA that intersects the own passage PAC. The notification sound output unit 17 presents the facility notification sound SDF when the user US reaches the intersection between the passage PA on which the sound source localization is set and the current own passage PAC.
図34の例では、施設通知音SDFの音源定位は、周辺施設SFからユーザUSまでの経路として検索された検索経路SP上に設定されている。施設通知音SDFの音源定位は、地図データベース等に登録された周辺施設SFの位置から、周辺施設SFへアクセスする際に自己通路PACからの分岐点となる検索経路SP上の位置に再配置されている。 In the example of Fig. 34, the sound source location of the facility notification sound SDF is set on a search route SP searched as a route from the surrounding facility SF to the user US. The sound source location of the facility notification sound SDF is relocated from the position of the surrounding facility SF registered in a map database or the like to a position on the search route SP that is a branch point from the own passage PAC when accessing the surrounding facility SF.
例えば、施設通知音SDFは、検索経路SPとユーザUSの自己通路PACとが重複する重複通路OV上の位置に音源定位を有する立体音響として通知される。図34の例では、自己通路PACと大学までの検索経路SPとの重複部分が重複通路OV1として示されている。自己通路PACと銀行または駅までの検索経路SPとの重複部分が重複通路OV2として示されている。 For example, the facility notification sound SDF is notified as a stereophonic sound having a sound source localization at a position on an overlapping passage OV where the search route SP and the user's own passage PAC of the user US overlap. In the example of Fig. 34, the overlapping portion between the user's own passage PAC and the search route SP to the university is shown as an overlapping passage OV1. The overlapping portion between the user's own passage PAC and the search route SP to the bank or station is shown as an overlapping passage OV2.
重複通路OV2上には複数の周辺施設SF(例えば銀行と駅)の施設通知音SDFの音源定位が設定されている。この場合、各施設通知音SDFの通知の順番を周辺施設SFまでの距離に応じて設定することができる。例えば、通知音出力部17は、重複通路OV2上に音源定位が設定された複数の周辺施設SFの施設通知音SDFを、周辺施設SFとユーザUSとの距離が遠い順または近い順に出力することができる。
Sound source localizations of the facility notification sounds SDF of multiple peripheral facilities SF (for example, a bank and a station) are set on the overlapping passage OV2. In this case, the order of notification of each facility notification sound SDF can be set according to the distance to the peripheral facility SF. For example, the notification sound output unit 17 can output the facility notification sounds SDF of the multiple peripheral facilities SF whose sound source localizations are set on the overlapping passage OV2 in order of increasing or decreasing distance between the peripheral facility SF and the user US.
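For illustration only, this distance-based ordering could be as simple as the following sketch; the data layout is an assumption for the example.

    import math

    def notification_order(user_xy, facilities, nearest_first=True):
        """Order the facility notification sounds SDF whose localizations lie on the overlapping
        passage OV by the distance between the user US and each peripheral facility SF.
        facilities: list of (facility_id, (x, y))."""
        def dist(item):
            _, (x, y) = item
            return math.hypot(x - user_xy[0], y - user_xy[1])
        return sorted(facilities, key=dist, reverse=not nearest_first)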
施設通知音SDFの音源定位が重複通路OV上に設定されることで、周辺施設SFの位置が把握しやすくなる。ユーザUSは、施設通知音SDFを頼りに周辺施設SFに向かって移動することができる。この際、音源定位が周辺施設SFに向かう曲がり角の位置(分岐部分)に設定されると、周辺施設SFに向けてユーザUSはスムーズに進路を変更することができる。そのため、音源定位算出部14は、施設通知音SDFの音源定位を、自己通路PACから検索経路SPに沿って分岐する分岐部分に設定することが好ましい。
Setting the sound source localization of the facility notification sound SDF on the overlapping passage OV makes the position of the peripheral facility SF easier to grasp. The user US can move toward the peripheral facility SF relying on the facility notification sound SDF. In this case, if the sound source localization is set at the position of the corner (branch portion) toward the peripheral facility SF, the user US can smoothly change course toward the peripheral facility SF. Therefore, the sound source localization calculation unit 14 preferably sets the sound source localization of the facility notification sound SDF at the branch portion that branches off from the own passage PAC along the search route SP.
図33および図34の例において、施設通知音SDFの音源定位は周辺施設SFの周辺部に設定されている。音源定位は、周辺部のどこに設置してもよいが、周辺施設SFの入り口付近に設定されれば、周辺施設SFに入りやすくなる。そのため、音源定位算出部14は、施設通知音SDFの音源定位を、周辺施設SFの入り口が面する通路PA上に設定することが好ましい。
In the examples of FIGS. 33 and 34, the sound source localization of the facility notification sound SDF is set in the peripheral portion of the peripheral facility SF. The sound source localization may be set anywhere in the peripheral portion, but setting it near the entrance of the peripheral facility SF makes it easier to enter the peripheral facility SF. Therefore, the sound source localization calculation unit 14 preferably sets the sound source localization of the facility notification sound SDF on the passage PA that the entrance of the peripheral facility SF faces.
周辺施設SFの入り口の位置はサーバSVに保存された地図情報から取得することができる。入り口の位置が地図情報に登録されていない場合には、サーバSVは周辺施設SFの訪問者の動線の情報に基づいて入り口の位置を推定し、地図情報に登録することができる。訪問者とは、ナビゲーションシステムNVを利用して周辺施設SFを訪問した過去のユーザUSを意味する。ユーザUSの動線は、サーバSVに送信されたユーザ位置情報に基づいて算出される。音源定位算出部14は、周辺施設SFの訪問者の動線に基づいて検出された入り口の位置に基づいて施設通知音SDFの音源定位を設定することができる。
The position of the entrance of the peripheral facility SF can be acquired from the map information stored in the server SV. If the position of the entrance is not registered in the map information, the server SV can estimate the position of the entrance based on information on the flow lines of visitors to the peripheral facility SF and register it in the map information. A visitor means a past user US who visited the peripheral facility SF using the navigation system NV. The flow line of the user US is calculated based on the user position information transmitted to the server SV. The sound source localization calculation unit 14 can set the sound source localization of the facility notification sound SDF based on the position of the entrance detected from the flow lines of visitors to the peripheral facility SF.
図35の例では、自己通路PACがカーブしている。この場合、音源定位算出部14は、施設通知音SDFの音源定位を、ユーザUSと施設通知音SDFの音源定位とを結んだ直線が自己通路PACからはみ出さないような位置に設定することができる。これにより、ユーザUSは音源定位に向かって真っすぐに移動しても道を外れることがなくなる。
In the example of FIG. 35, the own passage PAC is curved. In this case, the sound source localization calculation unit 14 can set the sound source localization of the facility notification sound SDF at a position such that the straight line connecting the user US and the sound source localization of the facility notification sound SDF does not run off the own passage PAC. This prevents the user US from leaving the path even when moving straight toward the sound source localization.
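One possible way to realize this "stay inside the passage" condition is to sample the straight segment from the user to the candidate localization and test each sample against the passage centerline; modeling the passage as a corridor of fixed half-width, the corridor width, and the sampling density are assumptions made for this sketch.

    import math

    def point_to_polyline_distance(p, polyline):
        """Shortest distance from point p to a polyline given as a list of (x, y) vertices."""
        best = float("inf")
        for (ax, ay), (bx, by) in zip(polyline, polyline[1:]):
            abx, aby = bx - ax, by - ay
            denom = abx * abx + aby * aby
            t = 0.0
            if denom > 0:
                t = max(0.0, min(1.0, ((p[0] - ax) * abx + (p[1] - ay) * aby) / denom))
            cx, cy = ax + t * abx, ay + t * aby
            best = min(best, math.hypot(p[0] - cx, p[1] - cy))
        return best

    def line_stays_in_passage(user_xy, source_xy, passage_centerline, half_width_m=2.0, samples=20):
        """True if every sampled point of the user-to-source segment stays within the passage
        (modeled as a corridor of half_width_m around its centerline)."""
        for i in range(samples + 1):
            t = i / samples
            p = (user_xy[0] + t * (source_xy[0] - user_xy[0]),
                 user_xy[1] + t * (source_xy[1] - user_xy[1]))
            if point_to_polyline_distance(p, passage_centerline) > half_width_m:
                return False
        return True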
図36は、図34の音源定位の設定に関する処理フローの一例を示す図である。 FIG. 36 shows an example of a processing flow for setting the sound source localization in FIG. 34.
周辺情報取得部21は、サーバSVの地図データベース等から周辺施設SFの位置および周辺施設SFの周辺の通路情報を取得する(ステップS41)。自己位置取得部22は、センサ情報等から自己位置を取得する。また、自己位置取得部22は、自己位置と通路情報から自己通路PACの情報を取得する(ステップS42)。頭部方向取得部13は、センサ情報等からユーザUSの頭部方向を取得する(ステップS43)。
The surrounding information acquisition unit 21 acquires the position of the peripheral facility SF and passage information around the peripheral facility SF from the map database or the like of the server SV (step S41). The self-position acquisition unit 22 acquires the self-position from sensor information or the like. The self-position acquisition unit 22 also acquires information on the own passage PAC from the self-position and the passage information (step S42). The head direction acquisition unit 13 acquires the head direction of the user US from sensor information or the like (step S43).
音源定位算出部14は、周辺施設SFが面する通路PA上に施設通知音SDFの音源定位を配置する(ステップS44)。音源定位算出部14は、施設通知音SDFの音源定位が自己通路PAC上に存在するか否かを判定する(ステップS45)。施設通知音SDFの音源定位が自己通路PAC上に存在する場合には(ステップS45:Yes)、通知音出力部17は施設通知音SDFを出力する。
The sound source localization calculation unit 14 places the sound source localization of the facility notification sound SDF on the passage PA that the peripheral facility SF faces (step S44). The sound source localization calculation unit 14 determines whether the sound source localization of the facility notification sound SDF is on the own passage PAC (step S45). If the sound source localization of the facility notification sound SDF is on the own passage PAC (step S45: Yes), the notification sound output unit 17 outputs the facility notification sound SDF.
施設通知音SDFの音源定位が自己通路PAC上に存在しない場合には(ステップS45:No)、音源定位算出部14は、施設通知音SDFの音源定位を、自己通路PACに近く、周辺施設SFへアクセスする際に分岐点となる、検索経路SP上の位置に再配置する(ステップS47)。
If the sound source localization of the facility notification sound SDF is not on the own passage PAC (step S45: No), the sound source localization calculation unit 14 relocates the sound source localization of the facility notification sound SDF to a position on the search route SP that is close to the own passage PAC and that serves as a branch point when accessing the peripheral facility SF (step S47).
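A rough Python sketch of steps S44 to S47 is given below for illustration. Point-set membership with a distance tolerance stands in for real map matching, and the tolerance value is an assumption; the search route SP is assumed to be ordered from the facility toward the user, so its first point lying on the own passage PAC approximates the branch point.

    import math

    def place_facility_sound(facility_passage_point, own_passage_points, search_route_points,
                             tolerance_m=1.0):
        """Keep the localization on the passage facing the facility if that point already lies on
        the user's own passage PAC (step S45: Yes); otherwise move it to the first point of the
        search route SP that lies on PAC, i.e. the branch point toward the facility (step S47)."""
        def on_own_passage(p):
            return any(math.hypot(p[0] - q[0], p[1] - q[1]) <= tolerance_m
                       for q in own_passage_points)
        if on_own_passage(facility_passage_point):
            return facility_passage_point
        for p in search_route_points:
            if on_own_passage(p):
                return p
        return facility_passage_point   # fallback when the search route never meets PAC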
[付記]
なお、本技術は以下のような構成も採ることができる。
(1)
ユーザの進路の進行方向を取得する進行方向取得部と、
前記ユーザから前記進行方向に所定の距離だけ離れた位置を、前記進行方向への移動を誘導する通知音の音源定位として算出する音源定位算出部と、
を有する情報処理装置。
(2)
前記音源定位算出部は、前記ユーザの頭部の位置および向きを基準として前記ユーザから前記進行方向に前記所定の距離だけ離れた位置を前記通知音の音源定位として算出する、
上記(1)に記載の情報処理装置。
(3)
目的地までの移動経路を計画する経路計画部を有し、
前記進行方向取得部は、前記移動経路に沿って区画された複数の進路を取得し、区画された進路ごとに、前記進路に沿う方向を前記進路の前記進行方向として取得する、
上記(1)または(2)に記載の情報処理装置。
(4)
前記ユーザから前記通知音の音源定位までの距離は、前記ユーザの位置によらずに一定である、
上記(1)ないし(3)のいずれか1つに記載の情報処理装置。
(5)
前記進行方向取得部は、前記進路の後に続く次進路の進行方向を取得し、
前記音源定位算出部は、前記ユーザから前記次進路の進行方向に所定の距離だけ離れた位置を、前記次進路への移動を誘導する通知音の音源定位として算出する、
上記(1)ないし(4)のいずれか1つに記載の情報処理装置。
(6)
前記ユーザから前記次進路への移動を誘導する前記通知音の音源定位までの距離は、前記ユーザの位置によらずに一定である、
上記(5)に記載の情報処理装置。
(7)
現在の前記進路と前記次進路との境界部となる中間点から前記ユーザまでの距離に基づいて、現在の前記進路および前記次進路に関する2つの前記通知音の音量を調整し、音量が調整された前記2つの通知音をミキシングして得られる通知音を、現在の前記進路から前記次進路への切り替えを誘導する通知音として生成する音量調整部を有する、
上記(5)または(6)に記載の情報処理装置。
(8)
前記音量調整部は、前記ユーザが現在の前記進路上を前記中間点に近づくにつれて、前記次進路への移動を誘導する前記通知音の音量を大きくする、
上記(7)に記載の情報処理装置。
(9)
前記音量調整部は、前記ユーザが前記次進路上を前記中間点から遠ざかるにつれて、前記次進路の進行方向への移動を誘導する前記通知音の音量を大きくする、
上記(8)に記載の情報処理装置。
(10)
前記音量調整部は、前記ユーザが現在の前記進路上を前記中間点に近づくにつれて、現在の前記進路の進行方向への移動を誘導する前記通知音の音量を小さくする、
上記(8)または(9)に記載の情報処理装置。
(11)
前記音量調整部は、前記ユーザが前記次進路上を前記中間点から遠ざかるにつれて、一つ前の前記進路の進行方向への移動を誘導する前記通知音の音量を小さくする、
上記(10)に記載の情報処理装置。
(12)
前記音源定位算出部は、前記ユーザが前記中間点を通り過ぎて前記次進路とは異なる方向に移動した場合に、前記中間点の方向に前記所定の距離だけ離れた位置を前記通知音の音源定位として算出する、
上記(7)ないし(11)のいずれか1つに記載の情報処理装置。
(13)
前記ユーザの移動方向が前記進行方向を中心とする所定の角度範囲に含まれる場合に、前記進行方向への移動を誘導する前記通知音に音響効果を付与する音響生成部を有する、
上記(1)ないし(12)のいずれか1つに記載の情報処理装置。
(14)
前記音源定位算出部は、警告域と危険域との境界上の位置を警告音の音源定位として算出する、
上記(1)ないし(13)のいずれか1つに記載の情報処理装置。
(15)
前記音源定位算出部は、前記警告域と前記危険域との境界に沿った面状領域を前記警告音の音源定位として算出する、
上記(14)に記載の情報処理装置。
(16)
前記警告域は、前記進路の中心である動線から一定距離以上離れた領域であり、前記危険域は、警告域から外側に向かって一定距離以上離れた領域である、
上記(14)または(15)に記載の情報処理装置。
(17)
前記音源定位算出部は、センサ情報に基づいて危険性のある区域を特定し、前記警告域および前記危険域を設定する、
上記(14)ないし(16)のいずれか1つに記載の情報処理装置。
(18)
前記ユーザの頭部の向きと前記進行方向との角度差を算出する角度差算出部と、
前記角度差が通知対象範囲内の角度である場合に、前記進行方向への移動を誘導する前記通知音を出力する通知音出力部と、
を有する上記(1)ないし(17)のいずれか1つに記載の情報処理装置。
(19)
前記通知音出力部は、前記通知音をパルス音として出力し、
前記通知音出力部は、前記角度差が小さいほど前記パルス音の通知頻度を高くする、
上記(18)に記載の情報処理装置。
(20)
前記ユーザの頭部の動きを検知する動き検知部を有し、
前記通知音出力部は、前記角度差が通知対象範囲内の角度であり、且つ、前記ユーザの頭部が前記進行方向に近づく向きに動いている場合に、前記進行方向への移動を誘導する前記通知音を出力する、
上記(18)または(19)に記載の情報処理装置。
(21)
前記通知音出力部は、前記ユーザの頭部が前記進行方向に近づく向きに動き始めてから、予め設定された時間間隔以上経過してから前記通知音の出力を開始する、
上記(20)に記載の情報処理装置。
(22)
前記経路計画部は、周辺施設の位置を示す施設通知音に対する前記ユーザのトリガ動作に応答して、前記周辺施設を前記目的地とする前記移動経路の計画を行う、
上記(3)に記載の情報処理装置。
(23)
前記施設通知音は、前記周辺施設の位置、または、前記周辺施設が面する通路上に音源定位を有する立体音響として通知される、
上記(22)に記載の情報処理装置。
(24)
前記音源定位算出部は、前記施設通知音の前記音源定位を、前記周辺施設の入り口が面する前記通路上に設定する、
上記(23)に記載の情報処理装置。
(25)
前記音源定位算出部は、前記周辺施設の訪問者の動線に基づいて検出された前記入り口の位置に基づいて前記施設通知音の前記音源定位を設定する、
上記(24)に記載の情報処理装置。
(26)
前記施設通知音の前記音源定位の方向を特定する選択決定操作を前記トリガ動作として検出する操作判定部を有する、
上記(23)ないし(25)のいずれか1つに記載の情報処理装置。
(27)
前記選択決定操作は、前記施設通知音の前記音源定位の方向を選択する選択操作と、前記選択操作に応答して行われた前記周辺施設の詳細情報の通知に対する前記ユーザの決定操作と、を含む、
上記(26)に記載の情報処理装置。
(28)
前記選択操作は、前記施設通知音が発せられてから所定時間以内に、前記施設通知音の前記音源定位の方向に前記ユーザの頭部を向ける動作を含む、
上記(27)に記載の情報処理装置。
(29)
前記ユーザの頭部の向きが前記施設通知音の前記音源定位の方向を中心とした所定の角度範囲内に含まれる場合に前記ユーザにフィードバックを行うフィードバック手段を有する、
上記(28)に記載の情報処理装置。
(30)
前記決定操作は、前記周辺施設へ向けて動き出す動作を含む、
上記(27)ないし(29)のいずれか1つに記載の情報処理装置。
(31)
前記ユーザの施設利用履歴に基づいて、通知すべき前記周辺施設をカスタマイズする周辺情報取得部を有する、
上記(22)ないし(30)のいずれか1つに記載の情報処理装置。
(32)
前記周辺情報取得部は、周辺施設ごとの前記トリガ動作の検出頻度に基づいて、通知すべき前記周辺施設をカスタマイズする、
上記(31)に記載の情報処理装置。
(33)
前記施設通知音は、前記周辺施設から前記ユーザまでの経路として検索された検索経路と、前記ユーザが移動中の自己通路と、が重複する重複通路上の位置に音源定位を有する立体音響として通知される、
上記(22)に記載の情報処理装置。
(34)
前記音源定位算出部は、前記施設通知音の前記音源定位を、前記自己通路から前記検索経路に沿って分岐する分岐部分に設定する、
上記(33)に記載の情報処理装置。
(35)
前記自己通路がカーブしている場合、前記音源定位算出部は、前記施設通知音の前記音源定位を、前記ユーザと前記施設通知音の前記音源定位とを結んだ直線が前記自己通路からはみ出さないような位置に設定する、
上記(33)に記載の情報処理装置。
(36)
前記重複通路上に前記音源定位が設定された複数の周辺施設の施設通知音を、前記周辺施設と前記ユーザとの距離が遠い順または近い順に出力する通知音出力部を有する、
上記(33)ないし(35)のいずれか1つに記載の情報処理装置。
(37)
ユーザの進路の進行方向を取得し、
前記ユーザから前記進行方向に所定の距離だけ離れた位置を、前記進行方向への移動を誘導する通知音の音源定位として算出する、
ことを有する、コンピュータにより実行される情報処理方法。
(38)
ユーザの進路の進行方向を取得し、
前記ユーザから前記進行方向に所定の距離だけ離れた位置を、前記進行方向への移動を誘導する通知音の音源定位として算出する、
ことをコンピュータに実現させるプログラム。
(39)
ユーザの頭部の位置および向きを検出するためのセンサ部と、
前記ユーザの頭部の位置および向きを基準として前記ユーザから前記ユーザの進路の進行方向に所定の距離だけ離れた位置を、前記進行方向への移動を誘導する通知音の音源定位として算出する上記(1)ないし(36)のいずれか1つに記載の情報処理装置と、
算出された前記音源定位に前記通知音の音像を再現するスピーカと、
を有するナビゲーションシステム。
[Additional Notes]
The present technology can also be configured as follows.
(1)
A direction acquisition unit that acquires a direction of travel of a user's route;
a sound source localization calculation unit that calculates a position a predetermined distance away from the user in the traveling direction as a sound source localization of a notification sound that guides the user to move in the traveling direction;
An information processing device having the above configuration.
(2)
The sound source localization calculation unit calculates a position away from the user by the predetermined distance in the traveling direction as a sound source localization of the notification sound based on a position and orientation of the user's head.
The information processing device according to (1) above.
(3)
A route planning unit that plans a route to a destination,
The traveling direction acquisition unit acquires a plurality of routes partitioned along the travel route, and acquires, for each of the partitioned routes, a direction along the route as the traveling direction of the route.
The information processing device according to (1) or (2) above.
(4)
The distance from the user to the sound source location of the notification sound is constant regardless of the position of the user.
An information processing device according to any one of (1) to (3) above.
(5)
The traveling direction acquisition unit acquires a traveling direction of a next route following the route,
The sound source localization calculation unit calculates a position a predetermined distance away from the user in a traveling direction of the next route as a sound source localization of a notification sound that guides the user to move to the next route.
An information processing device according to any one of (1) to (4) above.
(6)
The distance from the user to the sound source location of the notification sound that guides the user to move to the next route is constant regardless of the user's position.
The information processing device according to (5) above.
(7)
a volume adjustment unit that adjusts volumes of the two notification sounds related to the current course and the next course based on a distance from a midpoint that is a boundary between the current course and the next course to the user, and generates a notification sound obtained by mixing the two notification sounds whose volumes have been adjusted as a notification sound that induces switching from the current course to the next course;
The information processing device according to (5) or (6) above.
(8)
The volume adjustment unit increases the volume of the notification sound that guides the user to move to the next route as the user approaches the waypoint on the current route.
The information processing device according to (7) above.
(9)
the volume adjustment unit increases the volume of the notification sound that guides the user to move in the traveling direction of the next route as the user moves away from the waypoint on the next route.
The information processing device according to (8) above.
(10)
The volume adjustment unit reduces the volume of the notification sound that guides the user to move in a traveling direction of the current course as the user approaches the waypoint on the current course.
The information processing device according to (8) or (9) above.
(11)
the volume adjustment unit reduces the volume of the notification sound that guides the user to move in the traveling direction of the previous route as the user moves away from the waypoint on the next route.
The information processing device according to (10) above.
(12)
The sound source localization calculation unit calculates a position at the predetermined distance in the direction of the midpoint as the sound source localization of the notification sound when the user passes the midpoint and moves in a direction different from the next route.
An information processing device according to any one of (7) to (11) above.
(13)
a sound generating unit that, when the moving direction of the user is within a predetermined angle range centered on the moving direction, adds a sound effect to the notification sound that guides the user to move in the moving direction;
13. The information processing device according to any one of (1) to (12) above.
(14)
The sound source localization calculation unit calculates a position on the boundary between the warning area and the danger area as the sound source localization of the warning sound.
14. The information processing device according to any one of (1) to (13) above.
(15)
The sound source localization calculation unit calculates a planar area along a boundary between the warning area and the danger area as a sound source localization of the warning sound.
The information processing device according to (14) above.
(16)
The warning area is an area that is a certain distance or more away from the flow line that is the center of the path, and the danger area is an area that is a certain distance or more away from the warning area toward the outside.
The information processing device according to (14) or (15) above.
(17)
The sound source localization calculation unit identifies a dangerous area based on sensor information and sets the warning area and the danger area.
17. The information processing device according to any one of (14) to (16) above.
(18)
an angle difference calculation unit that calculates an angle difference between a direction of the user's head and the traveling direction;
a notification sound output unit that outputs the notification sound for guiding movement in the traveling direction when the angular difference is within a notification range;
18. The information processing device according to any one of (1) to (17) above,
(19)
the notification sound output unit outputs the notification sound as a pulse sound,
The notification sound output unit increases the frequency of the pulse sound notification as the angle difference becomes smaller.
The information processing device according to (18) above.
(20)
A motion detection unit that detects a motion of the user's head,
the notification sound output unit outputs the notification sound for guiding the user to move in the traveling direction when the angular difference is within a notification target range and the user's head is moving in a direction approaching the traveling direction.
The information processing device according to (18) or (19) above.
(21)
the notification sound output unit starts outputting the notification sound after a preset time interval has elapsed since the user's head starts to move in a direction approaching the traveling direction,
The information processing device described in (20) above.
(22)
the route planning unit, in response to a trigger operation by the user in response to a facility notification sound indicating a location of a peripheral facility, plans the travel route with the peripheral facility as the destination;
The information processing device according to (3) above.
(23)
The facility notification sound is notified as a stereophonic sound having a sound source localization on the location of the peripheral facility or on the passageway facing the peripheral facility.
The information processing device described in (22) above.
(24)
The sound source localization calculation unit sets the sound source localization of the facility notification sound on the passageway facing the entrance of the surrounding facility.
The information processing device described in (23) above.
(25)
The sound source localization calculation unit sets the sound source localization of the facility notification sound based on a position of the entrance detected based on a flow line of visitors to the peripheral facility.
The information processing device according to (24) above.
(26)
an operation determination unit that detects a selection/determination operation that specifies a direction of the sound source localization of the facility notification sound as the trigger operation;
26. The information processing device according to any one of (23) to (25) above.
(27)
The selection/confirmation operation includes a selection operation for selecting a direction of the sound source localization of the facility notification sound, and a confirmation operation by the user in response to a notification of detailed information of the surrounding facility performed in response to the selection operation.
The information processing device described in (26) above.
(28)
the selection operation includes an action of directing the user's head in the direction of the sound source localization of the facility notification sound within a predetermined time after the facility notification sound is emitted,
The information processing device described in (27) above.
(29)
a feedback means for providing feedback to the user when the direction of the user's head is within a predetermined angle range centered on the direction of the sound source localization of the facility notification sound;
The information processing device described in (28) above.
(30)
The decision operation includes an action of starting to move toward the surrounding facility.
30. The information processing device according to any one of (27) to (29) above.
(31)
a surrounding information acquisition unit that customizes the surrounding facilities to be notified based on the facility usage history of the user;
31. The information processing device according to any one of (22) to (30) above.
(32)
The surrounding information acquisition unit customizes the surrounding facilities to be notified based on a detection frequency of the trigger operation for each surrounding facility.
The information processing device described in (31) above.
(33)
The facility notification sound is notified as a stereophonic sound having a sound source localization at a position on an overlapping path where a search route searched as a route from the peripheral facility to the user and a self-path on which the user is moving overlap.
The information processing device described in (22) above.
(34)
The sound source localization calculation unit sets the sound source localization of the facility notification sound to a branching portion that branches off from the own passage along the search route.
The information processing device described in (33) above.
(35)
When the user's path is curved, the sound source localization calculation unit sets the sound source localization of the facility notification sound to a position such that a straight line connecting the user and the sound source localization of the facility notification sound does not extend beyond the user's path.
The information processing device described in (33) above.
(36)
a notification sound output unit that outputs facility notification sounds of a plurality of peripheral facilities whose sound source localization is set on the overlapping passage in order of distance between the peripheral facilities and the user, in ascending or descending order;
36. The information processing device according to any one of (33) to (35) above.
(37)
Obtaining the user's direction of travel;
A position that is a predetermined distance away from the user in the traveling direction is calculated as a sound source localization of a notification sound that guides the user to move in the traveling direction.
23. A computer-implemented information processing method comprising:
(38)
Obtaining the user's direction of travel;
A position that is a predetermined distance away from the user in the traveling direction is calculated as a sound source localization of a notification sound that guides the user to move in the traveling direction.
A program that makes a computer do this.
(39)
A sensor unit for detecting a position and a direction of a user's head;
The information processing device according to any one of (1) to (36) above, which calculates a position a predetermined distance away from the user in a traveling direction of the user's path based on a position and direction of the user's head as a sound source localization of a notification sound that guides the user to move in the traveling direction;
a speaker that reproduces a sound image of the notification sound at the calculated sound source localization;
A navigation system having:
1,2,3 情報処理装置
11 経路計画部
12 進行方向取得部
14 音源定位算出部
15 音響生成部
16 音量調整部
17 通知音出力部
151 角度差算出部
152 動き検知部
BD 境界
BP 中間点
CS,CSC 進路
CSN 次進路
DA 危険域
HD 頭部方向(頭部の向き)
OV 重複通路
PA 通路
PAC 自己通路
RT 移動経路
SD,SDC,SDN 通知音
SDF 施設通知音
SF 周辺施設
SP 検索経路
TD,TDC,TDN 進行方向
US ユーザ
WA 警告域
WS 警告音
1, 2, 3 Information processing device
11 Route planning unit
12 Traveling direction acquisition unit
14 Sound source localization calculation unit
15 Sound generation unit
16 Volume adjustment unit
17 Notification sound output unit
151 Angle difference calculation unit
152 Motion detection unit
BD Boundary
BP Midpoint
CS, CSC Route
CSN Next route
DA Danger area
HD Head direction (orientation of the head)
OV Overlapping passage
PA Passage
PAC Own passage
RT Travel route
SD, SDC, SDN Notification sound
SDF Facility notification sound
SF Peripheral facility
SP Search route
TD, TDC, TDN Traveling direction
US User
WA Warning area
WS Warning sound
Claims (20)
前記ユーザから前記進行方向に所定の距離だけ離れた位置を、前記進行方向への移動を誘導する通知音の音源定位として算出する音源定位算出部と、
を有する情報処理装置。 A direction acquisition unit that acquires a direction of travel of a user's route;
a sound source localization calculation unit that calculates a position a predetermined distance away from the user in the traveling direction as a sound source localization of a notification sound that guides the user to move in the traveling direction;
An information processing device having the above configuration.
請求項1に記載の情報処理装置。 The sound source localization calculation unit calculates a position away from the user by the predetermined distance in the traveling direction as a sound source localization of the notification sound based on a position and a direction of the user's head.
The information processing device according to claim 1 .
前記進行方向取得部は、前記移動経路に沿って区画された複数の進路を取得し、区画された進路ごとに、前記進路に沿う方向を前記進路の前記進行方向として取得する、
請求項1に記載の情報処理装置。 A route planning unit that plans a route to a destination,
The traveling direction acquisition unit acquires a plurality of routes partitioned along the travel route, and acquires, for each of the partitioned routes, a direction along the route as the traveling direction of the route.
The information processing device according to claim 1 .
請求項1に記載の情報処理装置。 The distance from the user to the sound source location of the notification sound is constant regardless of the position of the user.
The information processing device according to claim 1 .
前記音源定位算出部は、前記ユーザから前記次進路の進行方向に所定の距離だけ離れた位置を、前記次進路への移動を誘導する通知音の音源定位として算出する、
請求項1に記載の情報処理装置。 The traveling direction acquisition unit acquires a traveling direction of a next route following the route,
The sound source localization calculation unit calculates a position a predetermined distance away from the user in a traveling direction of the next route as a sound source localization of a notification sound that guides the user to move to the next route.
The information processing device according to claim 1 .
The distance from the user to the sound source localization of the notification sound that guides the user to move to the next route is constant regardless of the user's position.
The information processing device according to claim 5.
A volume adjustment unit that adjusts volumes of the two notification sounds related to the current course and the next course based on a distance from a midpoint that is a boundary between the current course and the next course to the user, and generates a notification sound obtained by mixing the two notification sounds whose volumes have been adjusted as a notification sound that induces switching from the current course to the next course.
The information processing device according to claim 5.
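The cross-fade behaviour recited in this claim and its dependents can be illustrated with a toy sketch; the linear fade, the ±10 m blending window, and the signed-distance convention are assumptions of this example, not the claimed method.

```python
def crossfade_volumes(signed_dist_m, window_m=10.0):
    """Volumes for the two notification sounds around a course change.

    signed_dist_m: distance of the user from the midpoint between the current
    and next course, negative while still on the current course and positive
    once on the next course (sign convention is an assumption of this sketch).

    Returns (vol_current, vol_next), each in [0, 1]; the two adjusted sounds
    are then mixed into one cue that induces the switch to the next course.
    """
    # Linear cross-fade over +/- window_m around the midpoint: the next-course
    # sound grows as the user nears the midpoint and keeps growing afterwards,
    # while the current-course sound fades correspondingly.
    t = (signed_dist_m + window_m) / (2.0 * window_m)
    vol_next = max(0.0, min(1.0, t))
    vol_current = 1.0 - vol_next
    return vol_current, vol_next

# Walking toward the midpoint, through it, and away on the next course:
for d in (-10.0, -5.0, 0.0, 5.0, 10.0):
    print(d, crossfade_volumes(d))
```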
The volume adjustment unit increases the volume of the notification sound that guides the user to move to the next route as the user approaches the midpoint on the current route.
The information processing device according to claim 7.
The volume adjustment unit increases the volume of the notification sound that guides the user to move in the traveling direction of the next route as the user moves away from the midpoint on the next route.
The information processing device according to claim 8.
The volume adjustment unit reduces the volume of the notification sound that guides the user to move in the traveling direction of the current course as the user approaches the midpoint on the current course.
The information processing device according to claim 8.
The volume adjustment unit reduces the volume of the notification sound that guides the user to move in the traveling direction of the previous route as the user moves away from the midpoint on the next route.
The information processing device according to claim 10.
The sound source localization calculation unit calculates a position at the predetermined distance in the direction of the midpoint as the sound source localization of the notification sound when the user passes the midpoint and moves in a direction different from the next route.
The information processing device according to claim 7.
A sound generation unit that, when the moving direction of the user is within a predetermined angle range centered on the traveling direction, adds a sound effect to the notification sound that guides the user to move in the traveling direction.
The information processing device according to claim 1.
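A minimal sketch of the sound-effect gating described above (the 30° half-range, the boolean return, and the function name are illustrative assumptions only):

```python
import math

def should_add_sound_effect(move_dir_rad, travel_dir_rad, half_range_deg=30.0):
    """True when the user's movement direction lies within a predetermined
    angle range centered on the traveling direction, i.e. when a confirming
    sound effect would be added to the notification sound."""
    # Wrap the angular difference into [-180, 180] degrees before comparing.
    diff = math.degrees(math.atan2(math.sin(move_dir_rad - travel_dir_rad),
                                   math.cos(move_dir_rad - travel_dir_rad)))
    return abs(diff) <= half_range_deg

print(should_add_sound_effect(math.radians(10), math.radians(0)))  # True: roughly aligned
print(should_add_sound_effect(math.radians(50), math.radians(0)))  # False: outside the range
```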
The sound source localization calculation unit calculates a position on the boundary between the warning area and the danger area as the sound source localization of the warning sound.
The information processing device according to claim 1.
The sound source localization calculation unit calculates a planar area along the boundary between the warning area and the danger area as a sound source localization of the warning sound.
The information processing device according to claim 14.
The sound source localization calculation unit identifies a dangerous area based on sensor information and sets the warning area and the danger area.
The information processing device according to claim 14.
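As an illustrative sketch only of the three warning-area claims above (the circular danger area, the annular warning area, and all names are assumptions, not the claimed configuration), the warning-sound localization can be taken as the point on the warning/danger boundary closest to the user:

```python
import math

def warning_sound_position(user_xy, danger_center_xy, danger_radius_m, warning_margin_m):
    """Warning-sound localization for a circular danger area: the point on the
    boundary between the warning area and the danger area that is closest to
    the user. Returns None while the user is still outside the warning area."""
    ux, uy = user_xy
    cx, cy = danger_center_xy
    dx, dy = ux - cx, uy - cy
    d = math.hypot(dx, dy)
    if d > danger_radius_m + warning_margin_m:
        return None                              # not yet inside the warning area
    d = d or 1e-9                                # avoid dividing by zero at the centre
    return (cx + dx / d * danger_radius_m,       # project onto the danger boundary
            cy + dy / d * danger_radius_m)

print(warning_sound_position((4.0, 0.0), (0.0, 0.0), danger_radius_m=2.0, warning_margin_m=3.0))
```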
An angle difference calculation unit that calculates an angle difference between a direction of the user's head and the traveling direction;
a notification sound output unit that outputs the notification sound that guides the user to move in the traveling direction when the angle difference is within a notification target range;
The information processing device according to claim 1.
The notification sound output unit outputs the notification sound as a pulse sound, and increases the notification frequency of the pulse sound as the angle difference becomes smaller.
The information processing device according to claim 17.
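A hedged sketch of the two claim elements above (the 90° notification target range and the 1–8 Hz pulse-rate mapping are illustrative assumptions, not values from the disclosure):

```python
import math

def pulse_rate_hz(head_dir_rad, travel_dir_rad,
                  target_range_deg=90.0, min_hz=1.0, max_hz=8.0):
    """Return the pulse-sound notification rate, or None when the angle
    difference between the head direction and the traveling direction falls
    outside the notification target range. Smaller differences give a higher
    notification frequency."""
    diff = abs(math.degrees(math.atan2(math.sin(head_dir_rad - travel_dir_rad),
                                       math.cos(head_dir_rad - travel_dir_rad))))
    if diff > target_range_deg:
        return None                              # outside the notification target range
    return max_hz - (max_hz - min_hz) * (diff / target_range_deg)

for deg in (0, 30, 60, 90, 120):
    print(deg, pulse_rate_hz(math.radians(deg), 0.0))
```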
Acquiring a traveling direction of a user's route;
calculating a position a predetermined distance away from the user in the traveling direction as a sound source localization of a notification sound that guides the user to move in the traveling direction;
A computer-implemented information processing method comprising the above.
Acquiring a traveling direction of a user's route;
calculating a position a predetermined distance away from the user in the traveling direction as a sound source localization of a notification sound that guides the user to move in the traveling direction;
A program that causes a computer to realize the above.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2024575012A JPWO2024162454A1 (en) | 2023-02-02 | 2024-02-02 | |
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2023-014636 | 2023-02-02 | ||
| JP2023014636 | 2023-02-02 | ||
| JPPCT/JP2023/013488 | 2023-03-31 | ||
| PCT/JP2023/013488 WO2024161667A1 (en) | 2023-02-02 | 2023-03-31 | Information processing device, information processing method, and program |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024162454A1 true WO2024162454A1 (en) | 2024-08-08 |
Family
ID=92146030
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2023/013488 Ceased WO2024161667A1 (en) | 2023-02-02 | 2023-03-31 | Information processing device, information processing method, and program |
| PCT/JP2024/003383 Ceased WO2024162454A1 (en) | 2023-02-02 | 2024-02-02 | Information processing device, information processing method, and program |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2023/013488 Ceased WO2024161667A1 (en) | 2023-02-02 | 2023-03-31 | Information processing device, information processing method, and program |
Country Status (2)
| Country | Link |
|---|---|
| JP (1) | JPWO2024162454A1 (en) |
| WO (2) | WO2024161667A1 (en) |
2023
- 2023-03-31 WO PCT/JP2023/013488 patent/WO2024161667A1/en not_active Ceased
2024
- 2024-02-02 WO PCT/JP2024/003383 patent/WO2024162454A1/en not_active Ceased
- 2024-02-02 JP JP2024575012A patent/JPWO2024162454A1/ja active Pending
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH07103781A (en) * | 1993-10-04 | 1995-04-18 | Aqueous Res:Kk | Voice navigation device |
| JP2021156600A (en) * | 2020-03-25 | 2021-10-07 | ヤマハ株式会社 | Moving body position estimation device and moving body position estimation method |
Also Published As
| Publication number | Publication date |
|---|---|
| JPWO2024162454A1 (en) | 2024-08-08 |
| WO2024161667A1 (en) | 2024-08-08 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| AU2020256377B2 (en) | Facilitating interaction between users and their environments using sounds | |
| US12188777B2 (en) | Spatial audio navigation | |
| US6700505B2 (en) | Lane guidance display method, and navigation device and recording medium for realizing the method | |
| US9746338B2 (en) | Binaural navigation cues | |
| JP2602158B2 (en) | Audio output device | |
| EP2842529A1 (en) | Audio rendering system categorising geospatial objects | |
| US10972855B2 (en) | Location information through directional sound provided by mobile computing device | |
| WO2024162454A1 (en) | Information processing device, information processing method, and program | |
| JP2013047653A (en) | Audio processing device, audio processing method, program and guidance system | |
| US20230262413A1 (en) | Information providing device, information providing system, information providing method, and non-transitory computer readable medium | |
| WO2023160794A1 (en) | Navigation device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24750403; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 2024575012; Country of ref document: JP |
| | NENP | Non-entry into the national phase | Ref country code: DE |