WO2022210652A1 - Content reproduction system, information processing device, and content reproduction control application - Google Patents
Content reproduction system, information processing device, and content reproduction control application
- Publication number
- WO2022210652A1 (PCT/JP2022/015307)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- content
- user
- unit
- detection value
- reproduction system
- Prior art date
- Legal status: Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/68—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/687—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/01—Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/63—Querying
- G06F16/635—Filtering based on additional data, e.g. user or group profiles
- G06F16/636—Filtering based on additional data, e.g. user or group profiles by using biological or physiological data
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K15/00—Acoustics not otherwise provided for
- G10K15/02—Synthesis of acoustic waves
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/10—Earpieces; Attachments therefor ; Earphones; Monophonic headphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
Definitions
- the present disclosure relates to a content reproduction system, an information processing device, and a content reproduction control application that control output to a user.
- Patent Document 1 discloses a technology that recognizes utterances and environmental sounds and selects and outputs content such as music based on the recognized sounds.
- However, technology that recognizes speech and environmental sounds is applicable only to environments where sound is present. A user who does not want to make noise, or who is in a situation where making noise is undesirable, may therefore not be able to have appropriate content selected. In addition, natural language processing requires high computational power, making local processing difficult.
- In view of this, an object of the present disclosure is to provide a content reproduction system, an information processing device, and a content reproduction control application that appropriately control output to the user regardless of the situation.
- A content reproduction system according to an embodiment of the present disclosure includes: a wearable device; and an information processing device having a control circuit that executes a content reproduction control application. The content reproduction control application has: a user state estimation unit that estimates a user state of a user wearing the wearable device; an environment estimation unit that estimates an environmental state of the user based on the user state; and a content control unit that generates a cue for a content providing application that provides content to select content based on the environmental state, outputs the cue to the content providing application, causes the content providing application to select content based on the cue, and reproduces the content.
- With this configuration, appropriate content can be played back from the content providing application without the user having to actively select it.
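- As a rough illustration of this flow (and not the patented implementation itself), the Python sketch below strings the estimation steps together; the class names, the `select_content` hook on the content providing application, and the cue format are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical cue passed from the content reproduction control application
# to a content providing application; the real cue format is not specified here.
@dataclass
class Cue:
    environmental_state: str  # e.g. "focus" or "relax"

class ContentProvidingApp:
    """Stand-in for an external application that owns the content catalog."""
    def select_content(self, cue: Cue) -> str:
        catalog = {"focus": "ambient_focus_playlist", "relax": "slow_piano_playlist"}
        return catalog.get(cue.environmental_state, "default_playlist")

def estimate_user_state(context: dict) -> str:
    # Placeholder: map context to a user state level (see the mapping described later).
    return "break_time" if context.get("screen_locked") else "DND"

def estimate_environmental_state(user_state: str) -> str:
    # Placeholder: derive the environmental state to present to the user.
    return "relax" if user_state == "break_time" else "focus"

def content_control(context: dict, app: ContentProvidingApp) -> str:
    user_state = estimate_user_state(context)             # user state estimation unit
    env_state = estimate_environmental_state(user_state)  # environment estimation unit
    cue = Cue(environmental_state=env_state)               # content control unit generates a cue
    return app.select_content(cue)                         # app selects content based on the cue

print(content_control({"screen_locked": True}, ContentProvidingApp()))
```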
- the control circuit of the information processing device executes a plurality of different content providing applications;
- the content control unit may select a predetermined content providing application for reproducing the content based on the environmental state.
- the control circuit of the information processing device executes a plurality of different content providing applications;
- the wearable device has an input device,
- the content control unit may select a predetermined content providing application for reproducing the content based on different operations input by a user to the wearable device.
- the control circuit of the information processing device may execute a preset application that assigns the plurality of different operations to selection of the plurality of different content providing applications.
- the preset application may be included in the content reproduction control application.
- In the preset application, a plurality of different operations (e.g., single tap, double tap, triple tap, button press) input by the user to the input device of the wearable device can be assigned in advance to selection of a plurality of different content providing applications.
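- A minimal sketch of such a preset mapping follows; the operation names and application identifiers are illustrative assumptions, not values defined by the disclosure.

```python
from typing import Optional

# Map input-device operations on the wearable device to content providing applications.
PRESET_ASSIGNMENTS = {
    "double_tap": "content_providing_app_401",
    "triple_tap": "content_providing_app_402",
}

def resolve_app(operation: str) -> Optional[str]:
    """Return the pre-assigned content providing application for an operation, if any."""
    return PRESET_ASSIGNMENTS.get(operation)

print(resolve_app("double_tap"))  # -> content_providing_app_401
print(resolve_app("single_tap"))  # -> None (operation not assigned to app selection)
```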
- the wearable device has a sensor unit
- The content reproduction control application may further have: a user position estimation unit that estimates a user position based on a detection value input from the sensor unit of the wearable device worn by the user; and a location attribute estimation unit that estimates a location attribute, which is an attribute of the location where the user is, based on the user position. The user state estimation unit may estimate the user state based on the location attribute.
- the sensor unit of the wearable device may include at least one of an acceleration sensor, a gyro sensor, a compass, a biosensor, and a geomagnetic sensor.
- the content providing application may select a plurality of content candidates based on the cue, and select content to be reproduced from the plurality of candidates based on the detection value input from the sensor unit.
- the content providing application may select an attribute of content to be played back based on the detected value input from the sensor unit during playback of the content, and play back the selected content.
- For example, the content providing application may select a plurality of content candidates based on the cue from the content reproduction control application, and select the content to be played back from the plurality of candidates based on detection values input from the sensor unit of the wearable device. The content providing application may also select, for example, content with a fast tempo that matches the user's running speed based on the detection values input from the sensor unit.
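- Tempo matching against the running cadence might look like the following sketch; the cadence-to-BPM rule and the candidate metadata are assumptions made for illustration only.

```python
# Candidate tracks returned for a cue, annotated with tempo (BPM).
candidates = [
    {"title": "track_a", "bpm": 90},
    {"title": "track_b", "bpm": 150},
    {"title": "track_c", "bpm": 170},
]

def pick_by_cadence(candidates, steps_per_minute: float) -> dict:
    """Pick the candidate whose tempo is closest to the user's step cadence,
    which can be derived from the wearable's acceleration signal."""
    return min(candidates, key=lambda c: abs(c["bpm"] - steps_per_minute))

print(pick_by_cadence(candidates, steps_per_minute=165))  # -> track_c
```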
- The content control unit may generate a cue for the content providing application to stop playing the content based on the environmental state, output the cue to the content providing application, and cause the content providing application to stop playback of the content based on the cue.
- For example, when a state change such as the start of a meeting occurs, the content reproduction control application can detect the condition and send a stop cue to the content providing application.
- The content reproduction control application may further comprise a context acquisition unit that acquires the context of the user.
- the user state estimation unit may estimate the user state based on the acquired context.
- The user position estimation unit may have: an angle correction unit that calculates a correction value of the azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user; and an angle estimation unit that estimates the azimuth angle of the user based on the detection value of the sensor unit and the correction value.
- the azimuth angle may be used to estimate the user position.
- The angle at which the wearable device is worn differs for each user, so the angles of the sensor axes of the acceleration sensor and the gyro sensor also differ for each user. Accordingly, the user position estimation unit can estimate the angle of the sensor axis of the sensor unit for each user and use this as a correction value to estimate the direction (angle) with high accuracy regardless of individual differences.
- the sensor unit of the wearable device includes an acceleration sensor,
- The angle correction unit may calculate the user's tilt in the Pitch direction and tilt in the Roll direction from the gravitational acceleration, which is the detection value of the acceleration sensor, obtained when the user faces in the Roll direction; calculate the user's tilt in the Yaw direction from the gravitational acceleration obtained when the user faces in the Pitch direction, together with the tilt in the Pitch direction and the tilt in the Roll direction; and use the tilt in the Pitch direction, the tilt in the Roll direction, and the tilt in the Yaw direction as the correction values.
- With this, the correction value of the user's azimuth angle can be calculated using only the acceleration sensor. As a result, the method can be implemented in an environment with few mounted sensors, realizing low cost, low power consumption, and miniaturization.
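- One plausible way to obtain the Pitch and Roll tilts from a gravity-only accelerometer sample is sketched below; the axis convention and the atan2-based formulas are assumptions, since the exact derivation is not given in this text, and the further Yaw step (using a second gravity sample taken while the user looks in the Pitch direction) is omitted here.

```python
import math

def pitch_roll_from_gravity(ax: float, ay: float, az: float):
    """Estimate the Pitch and Roll tilt (radians) of the worn sensor from a
    gravity-only accelerometer sample. The axis convention (X to the user's right,
    Y forward, Z up, with gravity read as +Z when level) is an assumption."""
    pitch = math.atan2(ay, math.sqrt(ax * ax + az * az))
    roll = math.atan2(-ax, az)
    return pitch, roll

# A device tilted slightly forward and rolled slightly to the side:
pitch, roll = pitch_roll_from_gravity(ax=0.8, ay=1.7, az=9.6)
print(math.degrees(pitch), math.degrees(roll))
```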
- the content control unit may continuously reproduce related content across the same environmental state.
- The content reproduction system may further comprise: a database generation unit that associates and registers a detection value for registration detected by the sensor unit and an environmental state to be presented to the user when the detection value for registration is detected; and a matching unit that matches a new detection value detected by the sensor unit against the registered detection value and determines whether or not the difference between the new detection value and the detection value for registration is equal to or less than a matching threshold. When it is determined that the difference is equal to or less than the matching threshold, the content control unit may generate and output the cue based on the environmental state registered in association with the detection value for registration.
- The matching unit may match the new detection value against the detection value for registration when the user remains stopped for a first time. When the stop time until the user starts moving again is longer than a second time, the database generation unit may newly register the new detection value in association with the environmental state to be presented to the user when that detection value occurred.
- the database generation unit may register, as the detection value for registration, an average value of a plurality of detection values detected by the sensor unit within a predetermined period of time.
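- A small sketch of this registration-and-matching idea follows; the per-axis mean over a time window as the registered feature, the Euclidean distance, and the threshold value are assumptions made for illustration.

```python
import statistics

def register_value(samples):
    """Average per-axis sensor samples collected over a predetermined period
    into a single detection value for registration."""
    return tuple(statistics.fmean(axis) for axis in zip(*samples))

def matches(new_value, registered_value, threshold=0.5):
    """True when the difference between a new detection value and the registered
    detection value is at or below the matching threshold (Euclidean distance here)."""
    dist = sum((a - b) ** 2 for a, b in zip(new_value, registered_value)) ** 0.5
    return dist <= threshold

registered = register_value([(0.1, 9.7, 0.0), (0.2, 9.8, 0.1), (0.0, 9.9, 0.0)])
print(matches((0.1, 9.8, 0.05), registered))  # -> True: present the registered environmental state
```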
- A content reproduction system according to another embodiment of the present disclosure includes: a wearable device; and an information processing device having a control circuit that executes a content reproduction control application. The content reproduction control application has: a database generation unit that associates and registers a detection value for registration detected by a sensor unit of the wearable device worn by a user and an environmental state to be presented to the user when the detection value for registration is detected; a matching unit that matches a new detection value detected by the sensor unit against the registered detection value and determines whether or not the difference between the new detection value and the detection value for registration is equal to or less than a matching threshold; and a content control unit that, when it is determined that the difference is equal to or less than the matching threshold, generates a cue for a content providing application that provides content to select content based on the environmental state registered in association with the detection value for registration, outputs the cue to the content providing application, causes the content providing application to select content based on the cue, and reproduces the content.
- An information processing device according to an embodiment of the present disclosure includes a control circuit that executes a content reproduction control application having: a user state estimation unit that estimates a user state of a user wearing a wearable device; an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and a content control unit that generates a cue for a content providing application that provides content to select content based on the environmental state, outputs the cue to the content providing application, causes the content providing application to select content based on the cue, and reproduces the content.
- A content reproduction control application according to an embodiment of the present disclosure causes the control circuit of an information processing device to operate as: a user state estimation unit that estimates a user state of a user wearing a wearable device; an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and a content control unit that generates a cue for a content providing application that provides content to select content based on the environmental state, outputs the cue to the content providing application, causes the content providing application to select content based on the cue, and reproduces the content.
- Shows the configuration of an information processing system according to an embodiment of the present disclosure.
- Schematically shows a worn wearable device.
- Schematically shows individual differences in how the wearable device is worn.
- Schematically shows the concept of angle correction.
- Shows an operation flow of the angle correction unit.
- Schematically shows the user's movement.
- Schematically shows the concept of angle correction.
- Shows specific processing of the angle correction unit.
- Shows a specific calculation example.
- Shows the relationship between initial frames.
- Shows how to specify the natural front.
- Illustrates the processing of the location estimation unit.
- Shows an application example of the processing of the location estimation unit.
- Shows a recognition example of the processing of the location estimation unit.
- Shows an operation flow of the location estimation unit.
- Shows a supplemental operation flow of the location estimation unit.
- Shows the operation when different walking styles are identified for the same route.
- Shows a modification of the method for estimating a location by the location estimation unit.
- Shows a flow for estimating the environmental state presented to the user from the context.
- Shows the operation of the user state estimation unit.
- Shows the mapping relationship between context and user state.
- Shows how the user state estimation unit determines the user state.
- Shows the operation of the environment estimation unit.
- Shows the operation of the content control unit of the output control unit.
- Shows the operation of the notification control unit of the output control unit.
- Shows the configuration of a content reproduction system according to the present embodiment.
- Shows an example of a GUI of the preset application.
- Shows an operation flow of the content reproduction control application.
- Shows an example of a table used to select a content providing application.
- Shows a functional configuration of an angle correction unit according to one embodiment.
- Shows an operation flow of the angle correction unit.
- Shows a method for deriving the angle.
- Shows the axis of gravity when facing forward.
- Shows the axis of gravity when facing downward.
- Shows the Yaw rotation calculation from measurement data and the measurement singularity.
- Shows a flowchart for determining whether the conditions are met.
- Shows a face-front-based definition of Yaw rotation.
- Shows the effect of vertical movement and bending angle on the calculation results.
- Shows the selection of playlists for scenes.
- Shows an example of continuously reproducing a playlist across the same divided scenes.
- Shows another example of a user experiencing scene-appropriate content.
- Shows a first implementation example in which the content reproduction control application controls the content providing application.
- Shows a second implementation example in which the content reproduction control application records information about the content being played back at the end of a scene and specifies a content ID for each context.
- Shows an example of a content information acquisition method.
- Shows one playlist being played by connecting the same scenes.
- Shows an example of a table held by the content reproduction control application.
- Shows an example of a table held by the content reproduction control application.
- Explains the user-front property using search as an example.
- Describes the user-front property of this embodiment.
- Describes the user-front property of this embodiment.
- Shows the configuration of an information processing system according to another embodiment of the present disclosure.
- Shows an overview of the registration and matching unit.
- Shows the configuration of the registration and matching unit.
- Shows an operation flow of registration and confirmation processing.
- Shows an example of a registration and confirmation GUI.
- Shows an example of an instruction screen for moving the head displayed on a smartphone.
- Shows an example of an instruction screen for moving the head displayed on a personal computer.
- Shows an operation flow of multiple-location registration.
- Shows an example of a GUI for multiple-location registration.
- Shows the operation flow of button-type additional registration.
- Shows the operation flow of layout-type additional registration.
- Shows the operation flow of automatic additional registration.
- Shows a method (table type) for registering detection values for registration and performing matching.
- Shows a method (machine learning type) for registering detection values for registration and performing matching.
- Shows the calculation of average values.
- Schematically shows an example of the algorithm of the user state estimation unit and the environment estimation unit.
- Schematically shows an example of the algorithm of the user state estimation unit and the environment estimation unit.
- FIG. 1 shows the configuration of an information processing system according to one embodiment of the present disclosure.
- the information processing system 10 has an information processing device 100 and a wearable device 200 .
- the information processing device 100 is a terminal device used by an end user, such as a smartphone, tablet computer, or personal computer. Information processing apparatus 100 is connected to a network such as the Internet.
- the wearable device 200 is a device worn on the user's head.
- The wearable device 200 is typically a wireless earphone (FIG. 2), but may also be a wireless headphone, a wired headphone, a wired earphone, an HMD (Head Mounted Display) for AR (Augmented Reality) or VR (Virtual Reality), or the like.
- Although FIG. 2 shows an open-ear earphone that does not completely cover the ear canal, the wearable device may be a canal-type earphone, a hearing aid, or a sound collector that closes the ear canal.
- The information processing apparatus 100 and the wearable device 200 are communicably connected to each other by various types of short-range wireless communication such as Bluetooth (registered trademark) (specifically, BLE (Bluetooth Low Energy) GATT (Generic Attribute Profile)) and Wi-Fi (registered trademark).
- Wearable device 200 has sensor section 210 .
- the sensor unit 210 includes an acceleration sensor 211 that detects acceleration, a gyro sensor 212 that detects angular velocity, and a compass 213 that detects azimuth.
- the sensor unit 210 further includes a biosensor 214 such as a heartbeat sensor, blood flow sensor, electroencephalogram sensor, or the like. Sensor unit 210 may further include a geomagnetic sensor.
- the wearable device 200 supplies the detection value of the sensor unit 210 to the information processing device 100 .
- The information processing apparatus 100 has a context acquisition unit 110, a PDR (Pedestrian Dead Reckoning) unit 120 (user position estimation unit), a location estimation unit 130 (location attribute estimation unit), a user state estimation unit 140, an environment estimation unit 150, and an output control unit 160.
- the context acquisition unit 110 acquires the user's context.
- the user's context includes location information and terminal information.
- the context is, for example, a sensor value obtained from the sensor unit 210, user's schedule information obtained from a calendar application, or the like.
- the context acquisition unit 110 has a device such as a GPS sensor 111 and a beacon transmitter/receiver 112 that acquires location information as a context.
- Context acquisition section 110 further includes terminal information acquisition section 113 that acquires terminal information as a context.
- the terminal information acquisition unit 113 acquires screen lock information (locked, unlocked), user behavior information (run, bicycle, stationary, walking, riding, etc.), location (specific location such as home, office, etc.) as terminal information that is context.
- the PDR section 120 (user position estimation section) estimates the user position based on the detection values (acceleration, angular velocity and azimuth angle) of the sensor section 210 of the wearable device 200 worn by the user.
- PDR section 120 has angle correction section 121 , angle estimation section 122 , and user position estimation section 123 .
- the angle correction unit 121 calculates a correction value for the user's azimuth angle based on the detection values (acceleration, angular velocity, and azimuth angle) of the sensor unit 210 of the wearable device 200 worn by the user.
- the angle estimation unit 122 estimates the azimuth angle of the user based on the detection values (acceleration, angular velocity, and azimuth angle) of the sensor unit 210 of the wearable device 200 worn by the user and the correction value.
- the user position estimation unit 123 estimates the user position using the corrected azimuth angle.
- The PDR unit 120 estimates changes in the user position from room to room, that is, the movement route of the user position, based on the acceleration, angular velocity, and azimuth angle detected by the acceleration sensor 211, gyro sensor 212, and compass 213.
- the location estimation unit 130 estimates the attribute of the user's location (location attribute) based on the change in the user's position estimated by the PDR unit 120 . In other words, based on the moving route estimated by the PDR unit 120, the location attribute after the user moves is estimated.
- a location attribute is, for example, a division within a building that is even finer than the building itself.
- the location attribute is living room, bedroom, toilet, kitchen, washroom, etc. within one house.
- the location attribute is a desk, conference room, etc. within one co-working space.
- the location attribute is not limited to this, and the location attribute may indicate the building itself or the like, or may indicate both the building itself and the section within the building.
- The user state estimation unit 140 estimates the user state based on the context acquired by the context acquisition unit 110, the detection values (acceleration, angular velocity, and azimuth angle) of the sensor unit 210 of the wearable device 200, and the location attribute estimated by the location estimation unit 130.
- a user state indicates a user's multi-level activity state. For example, the user state indicates four levels of activity: break time, neutral, DND (Do Not Disturb) and offline. Break time is the most relaxed activity state, Neutral is the normal activity state, DND is the relatively busy activity state, and Offline is the busiest activity state. In addition to the four levels described above, it may be possible to set an arbitrary number of levels on the system, or allow the user to set the number of levels as appropriate.
- the environment estimation unit 150 estimates the environmental state to be presented to the user based on the user state estimated by the user state estimation unit 140 .
- the environment estimation unit 150 may further estimate the environmental state presented to the user based on the location attributes estimated by the location estimation unit 130 .
- the environmental state presented to the user is, for example, an environmental state in which the user can focus (concentrate) or an environmental state in which the user can relax.
- the output control unit 160 controls output based on the environmental state estimated by the environment estimation unit 150 .
- the output control unit 160 has a content control unit 161 and a notification control unit 162 .
- the content control unit 161 reproduces content (music, video, etc.) selected based on the environmental state estimated by the environment estimation unit 150 .
- The content control unit 161 may notify the DSP (Digital Service Provider) of the environmental state via the network, receive content selected by the DSP based on this environmental state (for example, content that helps the user focus or content that helps the user relax), and reproduce it.
- the notification control unit 162 controls the number of notifications to the user based on environmental conditions.
- For example, the notification control unit 162 may reduce or eliminate the number of notifications (e.g., notifications of new application events or messages) so that the user can focus, or keep the number of notifications at the normal level if the user is relaxing.
- Fig. 2 schematically shows the worn wearable device.
- the wearable device 200 is typically a wireless earphone.
- a wearable device 200 which is a wireless earphone, has a speaker 221, a driver unit 222, and a sound conduit 223 connecting them.
- the speaker 221 is inserted into the ear canal to position the wearable device 200 against the ear, and the driver unit 222 is located behind the ear.
- a sensor section 210 including an acceleration sensor 211 and a gyro sensor 212 is built in a driver unit 222 .
- FIG. 3 schematically shows individual differences in how the wearable device is worn.
- the angle of the driver unit 222 of the wearable device 200 with respect to the front of the face differs for each user. Therefore, the angles of the sensor axes of the acceleration sensor 211 and the gyro sensor 212 of the sensor unit 210 built in the driver unit 222 with respect to the front of the face differ for each user.
- (a) shows the case where the user wears the wearable device 200 shallowly hooked on the ear
- (b) shows the case where the user wears the wearable device 200 deeply fixed to the ear.
- the difference between the angle of the user's sensor axis with respect to the front face of (a) and the angle of the user's sensor axis with respect to the front of the face of (b) may be 30° or more. Therefore, the PDR unit 120 estimates the angle of the sensor axis of the sensor unit 210 with respect to the front of the face for each user, and uses this as a correction value to accurately estimate the orientation (angle) of the face without depending on individual differences.
- FIG. 4 schematically shows the concept of angle correction.
- Azimuth E is obtained from the three-dimensional posture obtained by integrating sensor values obtained by the gyro sensor 212 that detects angular velocity.
- the Azimuth Offset differs for each user and cannot be measured just by wearing the device, so it is necessary to estimate the Azimuth Offset for each user.
- Coordinate system (1) is a global frame (fixed), and is composed of a vertical Z-axis extending overhead, an X-axis connecting both ears and positive in the right direction, and a Y-axis orthogonal to the X-axis and Z-axis.
- a coordinate system (2) is a sensor frame, and is a coordinate system (X E , Y E , Z E ) that is fixed with respect to the sensor unit 210 of the wearable device 200 .
- Azimuth Offset which is a correction value, indicates the amount of rotation of the coordinate system (2) with respect to the coordinate system (1).
- FIG. 5 shows the operation flow of the angle corrector.
- FIG. 6 schematically shows user movements.
- FIG. 7 schematically shows the concept of angle correction.
- FIG. 8 shows specific processing of the angle corrector.
- FIG. 9 shows a specific calculation example.
- the user wears the wearable device 200 and moves the head downward so as to look diagonally downward from the front ((a) of FIG. 6) ((b) of FIG. 6) (step S101).
- the angle correction unit 121 calculates Pitch and Roll with respect to the global frame coordinate system (X, Y, Z) from the acceleration value when moving the head downward (step S102).
- the angle correction unit 121 starts collecting angular velocity values of the gyro sensor 212 . Let the time at this time be t0 (step S103) (process (2) in FIG. 8). Next, the user slowly moves his or her head up so as to look up diagonally from the front without blurring left and right ((c) in FIG. 6) (step S104).
- the angle correction unit 121 continues collecting angular velocity values of the gyro sensor 212 (step S105). When the user raises his or her head to the limit, the angle corrector 121 stops collecting the angular velocity values of the gyro sensor 212 . The time at this time is set to t1 (step S106, YES).
- R_Z(θ), R_X(θ), and R_Y(θ) are the rotation matrices about the Z-axis, X-axis, and Y-axis, respectively.
- RotMat*axis is set to [r_X, r_Y, r_Z]^T (step S107). If r_Z deviates from the threshold (that is, its difference from 0 is large), the angle correction unit 121 treats the process as failed and redoes it (step S108, NO). If r_Z is within the threshold, the process proceeds to the next step (step S108, YES).
- The angle correction unit 121 obtains the correction value (Azimuth Offset) from r_X and r_Y (step S109) (process (5) in FIG. 8).
- The angle correction unit 121 obtains a rotation matrix (RotMat) from the Azimuth Offset, Pitch, and Roll (step S110). This RotMat is based on the front-face axis.
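- The step of recovering the Azimuth Offset from r_X and r_Y is not spelled out in this text; one plausible realization, under the assumption that the nod's rotation axis should coincide with the global ear-to-ear X axis, is the sketch below. The integration scheme, threshold value, and example numbers are all assumptions.

```python
import math
import numpy as np

def dominant_rotation_axis(gyro_samples, dt):
    """Integrate angular-velocity samples (rad/s, sensor frame) collected while the
    user nods from looking down to looking up, and return the unit rotation axis."""
    total = np.sum(np.asarray(gyro_samples) * dt, axis=0)
    return total / np.linalg.norm(total)

def azimuth_offset(rot_axis, rz_threshold=0.2):
    """Estimate the Azimuth Offset as the in-plane angle between the measured nod axis
    and the global X axis (ear-to-ear). Fails if the vertical component r_Z is too
    large, mirroring step S108 in the flow above."""
    r_x, r_y, r_z = rot_axis
    if abs(r_z) > rz_threshold:
        raise ValueError("nod was not level enough; redo the calibration motion")
    return math.atan2(r_y, r_x)

# Example: a nod whose axis is rotated roughly 20 degrees in the horizontal plane.
axis = dominant_rotation_axis([[0.9, 0.33, 0.02]] * 100, dt=0.01)
print(math.degrees(azimuth_offset(axis)))
```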
- FIG. 10 shows the relationship between initial frames.
- Fig. 11 shows a method of specifying a natural front view.
- R_t0, which is the posture of the right sensor (Right Sensor Pose), is obtained by the method of FIG.
- R_t2 in the new attitude can be obtained from R_t0 and the acceleration sensor value in the new attitude by the method of FIG.
- FIG. 12 is a diagram for explaining the processing of the location estimation unit.
- (1) is the route from the living room to the bedroom
- (2) is the route from the bedroom to the living room
- (3) is the route from the living room to the toilet
- (4) is the route from the toilet to the living room, (5) is the route from the living room to the kitchen, and (6) is the route from the kitchen to the living room.
- For example, the user wears the wearable device 200 and starts working in the living room. After a while, the user goes to the toilet, washes their hands in the washroom, and returns to their seat. Later, the user moves to the kitchen, gets a drink, and returns to the living room.
- the movement pattern here is as follows. From the living room to the toilet (route (3)). From the toilet to the living room (route (4)). From the living room to the kitchen (route (5)). From the kitchen to the living room (route (6)).
- the place estimation unit 130 stores these four patterns and their order. The next time the user moves, the movement pattern is matched with the stored pattern. If the matching is successful, the place estimating unit 130 can specify the post-movement place, and if the matching is unsuccessful, the place estimating unit 130 adds it to the route list as a new pattern.
- the route list includes movement patterns (top row) of "(1) living room to bedroom, (2) bedroom to living room, (5) living room to kitchen", and "(2) bedroom to living room, (5) living room
- In this way, the location estimation unit 130 holds a plurality of movement routes and matches the movement route estimated by the PDR unit 120 against the held routes to estimate the location attribute after movement (living room, bedroom, toilet, kitchen, washroom, etc.). The location estimation unit 130 may also estimate the location attribute by determining how long the user stays at the current location. By considering the staying time in addition to the movement route, the location attribute can be estimated more accurately.
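- The route-list idea can be pictured as follows; the route representation (a sequence of (x, y) positions from the PDR unit), the distance measure, and the threshold are simplifying assumptions, and a more flexible matcher is sketched after the DTW note below.

```python
ROUTE_LIST = {
    # label of the destination -> previously learned movement route (PDR positions)
    "toilet":  [(0, 0), (1, 0), (2, 0), (2, 1)],
    "kitchen": [(0, 0), (0, 1), (0, 2), (1, 2)],
}

def match_route(new_route, threshold=1.0):
    """Compare a newly estimated route against stored routes; return the destination
    label on success, or None so the route can be added as a new pattern."""
    best_label, best_cost = None, float("inf")
    for label, stored in ROUTE_LIST.items():
        if len(stored) != len(new_route):
            continue  # naive: require equal length (the DTW sketch below relaxes this)
        cost = sum(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
                   for (x1, y1), (x2, y2) in zip(stored, new_route)) / len(stored)
        if cost < best_cost:
            best_label, best_cost = label, cost
    return best_label if best_cost <= threshold else None

print(match_route([(0, 0), (1.1, 0), (2, 0.1), (2, 1)]))  # -> "toilet"
```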
- FIG. 13 shows an application example of the processing of the location estimation unit.
- the coordinate system of FIG. 13 shows the transition of the user position with the origin as the starting point and the user position plotted periodically (eg, every second) as it progresses from the origin (starting point) to another room.
- the axis (1) indicates the moving route from the living room (origin) to the bedroom.
- the axis (2) indicates the movement path (distance) from the bedroom (origin) to the living room.
- the axis (3) indicates the moving route from the living room (origin) to the toilet.
- the axis (4) indicates the moving route from the toilet (origin) to the living room.
- FIG. 14 shows a recognition example of processing by the location estimation unit.
- the location estimation unit 130 attaches labels indicating attributes when learning routes. As a result, the label indicating the attribute can be automatically displayed when the matching is successful. Next, the operation of the location estimation unit 130 will be described more specifically.
- FIG. 15 shows the operation flow of the location estimation unit.
- the PDR unit 120 estimates the change of the user position from room to room, that is, the movement route of the user position (step S201).
- the place estimating unit 130 detects that the user has stopped based on the change in the user's position detected and estimated by the PDR unit 120 (step S202, YES).
- the location estimation unit 130 increments (+1) the stop counter (step S203).
- Matching with the plurality of stored movement routes is then performed (step S205). If the matching is successful (step S206, YES), the location estimation unit 130 identifies the post-movement location (step S207). On the other hand, if the matching fails (step S206, NO), the location estimation unit 130 adds the route to the route list as a new pattern (step S208).
- FIG. 16 shows a supplementary operation flow of the location estimation unit.
- When the matching fails (step S206, NO) and the failure continues (step S209), the route is added to the route list as a new pattern (step S208). Once enough new movement routes have accumulated in the route list for matching to succeed, the matching succeeds (step S206, YES) and the post-movement location can be identified (step S207).
- When matching failures continue for a predetermined number of times (step S209, YES), the location estimation unit 130 outputs a warning indicating that the user may be at another location not registered in the route list (step S210). This makes it possible to notify the user that the post-movement location attribute will be estimated from the new movement route.
- FIG. 17 shows the operation when different walking styles are identified for the same route.
- DTW (Dynamic Time Warping) can be used for this matching, so that the same route can be identified even when the walking speed or style differs.
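- A compact DTW distance is shown below only to illustrate why time warping tolerates different walking speeds on the same route; the choice of feature (per-step heading angles) and the example values are assumptions.

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences,
    e.g. per-step heading angles sampled along a route."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

slow_walk = [0, 0, 10, 10, 20, 20, 30, 30]   # same turns, taken slowly
fast_walk = [0, 10, 20, 30]                  # same turns, taken quickly
print(dtw_distance(slow_walk, fast_walk))    # small despite the different lengths
```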
- FIG. 18 shows a modification of the method for estimating the location by the location estimating unit.
- the location estimation unit 130 may estimate the attribute of the location where the user is located (location attribute), especially outdoors, based on the location information acquired by the GPS sensor 111 and the beacon transmitter/receiver 112 .
- the place estimation unit 130 may estimate the attribute of the place where the user is (place attribute) based on the biometric information acquired by the biosensor 214 . For example, if it is known that the user is falling asleep based on the biometric sensor 214 (heartbeat sensor or the like), the location estimation unit 130 may estimate the bedroom as the location attribute.
- FIG. 19 is a flow for estimating the environmental state presented to the user from the context.
- the context acquisition unit 110 acquires the user's context.
- The user state estimation unit 140 estimates the user state based on the context acquired by the context acquisition unit 110, the detection values (acceleration, angular velocity, and azimuth angle) of the sensor unit 210 of the wearable device 200, and the location attribute estimated by the location estimation unit 130.
- the environment estimation unit 150 estimates the environmental state (focus (concentration), relaxation, etc.) to be presented to the user.
- FIG. 20 shows the operation of the user state estimation unit.
- The user state estimation unit 140 estimates the user state based on the context acquired by the context acquisition unit 110, the detection values (acceleration, angular velocity, and azimuth angle) of the sensor unit 210 of the wearable device 200, and the location attribute estimated by the location estimation unit 130.
- the user's context includes location information and terminal information.
- Terminal information includes screen lock information (lock, unlock), user behavior information (run, bicycle, stationary, walking, riding, etc.), location (specific location such as home or office, unspecified location), calendar application information ( Scheduled meeting, no meeting), time information (during work time, outside work time), phone application information (during a call), voice recognition application information (during speaking), automatic DND (Do Not Disturb) setting (during time frame, time out of frame), manual DND settings (on, offline), etc.
- a user state indicates a user's multi-level activity state. For example, the user state indicates four levels of activity: break time, neutral, DND (Do Not Disturb) and offline. Break time is the most relaxed activity state, Neutral is the normal activity state, DND is the relatively busy activity state, and Offline is the busiest activity state.
- FIG. 21 shows the mapping relationship between context and user state.
- the user state estimation unit 140 estimates the user state by mapping the context to the user state. For example, if the screen lock information as the context is unlocked, the user state estimation unit 140 estimates that the user state is DND, and if the screen lock information is locked, the user state is estimated to be neutral. The user state estimating unit 140 also estimates user states for other contexts. Also, the context is not limited to that shown in FIG. 21, and any context may be used as long as it represents some kind of context.
- FIG. 22 shows how the user state estimation unit determines the user state.
- the user state estimation unit 140 estimates the user state as offline if even one of the contexts includes offline.
- the user state estimation unit 140 estimates the user state as DND if there are no offline contexts and at least one context includes DND.
- the user state estimation unit 140 estimates the user state as neutral if there is no offline, DND and break time for a plurality of contexts.
- the user state estimating unit 140 estimates the user state as the break time if there is no offline or DND and the break time is included.
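- Expressed as code, the precedence just described is only a few lines; the state names follow the text, and everything else is an illustrative sketch.

```python
def determine_user_state(context_states):
    """Combine per-context user-state estimates (see the mapping above) into one
    overall user state, following the precedence described in this embodiment."""
    if "offline" in context_states:
        return "offline"
    if "DND" in context_states:
        return "DND"
    if "break_time" in context_states:
        return "break_time"
    return "neutral"

# e.g. screen unlocked -> DND, calendar shows no meeting -> neutral
print(determine_user_state(["DND", "neutral"]))  # -> "DND"
```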
- FIG. 23 shows the operation of the environment estimation unit.
- the environment estimation unit 150 estimates the environmental state to be presented to the user based on the user state estimated by the user state estimation unit 140 and the location attribute estimated by the location estimation unit 130 .
- the environmental state presented to the user is, for example, an environmental state in which the user can focus (concentrate) or an environmental state in which the user can relax. For example, (1) the environment estimating unit 150 estimates that the environmental state to be presented to the user is the focus when the time period is at work, the user state is neutral, the action is stay, and the place is desk. (2) If the time zone is working and the user state is break time, the environment estimation unit 150 estimates that the environmental state to be presented to the user is relaxed. (3) If the time zone is non-work and the user state is break time, the environment estimation unit 150 estimates that the environmental state to be presented to the user is relaxed.
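- Purely as an illustration of these three examples (the value names and the fall-through behavior are assumptions of this sketch), the rules can be written as:

```python
def estimate_environment(time_zone, user_state, action=None, place=None):
    """Rule-based mapping from (time zone, user state, action, place) to the
    environmental state to present, following examples (1)-(3) above. Unlisted
    combinations fall through to None here, which is an assumption of this sketch."""
    if time_zone == "at_work" and user_state == "neutral" and action == "stay" and place == "desk":
        return "focus"          # example (1)
    if time_zone == "at_work" and user_state == "break_time":
        return "relax"          # example (2)
    if time_zone == "off_work" and user_state == "break_time":
        return "relax"          # example (3)
    return None

print(estimate_environment("at_work", "neutral", "stay", "desk"))  # -> "focus"
```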
- Figures 73 and 74 schematically show an example of the algorithm of the user state estimator and the environment estimator.
- The user state estimation unit 140 acquires data such as wearing state, microphone input, speech, body movement, and vital information (including biological/emotional information) from the wearable device 200, and acquires data such as time, calendar, and location from the context acquisition unit 110.
- the user state estimation unit 140 determines busy/not busy from the acquired data. When not busy, the user state estimation unit 140 determines whether or not the user may output the music content. For example, upon detecting a trigger such as wearing the wearable device 200, ending a call, or moving, the user state estimation unit 140 determines that the user can output music content.
- the user state estimating unit 140 determines the situation (morning commute, wanting to focus/relax, coming home from work, running, walking the dog, meditating, sleeping, etc.).
- the environment estimator 150 estimates the environmental state (stress, focus, relaxation, etc.) based on the situation determined by the user state estimator 140 .
- FIG. 24 shows the operation of the content control section of the output control section.
- the content control unit 161 of the output control unit 160 reproduces content (music, video, etc.) selected based on the environmental state estimated by the environment estimation unit 150 .
- The content control unit 161 notifies the DSP (Digital Service Provider) of the environmental state via the network, receives content selected by the DSP based on this environmental state (content that allows the user to focus, or content that allows the user to relax), and plays it back.
- For example, if the estimated state is focus, the content control unit 161 plays music that helps the user concentrate; if the state is relax, the content control unit 161 plays music that helps the user relax.
- the content control unit 161 reproduces sleep-promoting music if the user state is relaxed, and stops the music when the user falls asleep.
- FIG. 25 shows the operation of the notification control section of the output control section.
- the notification control unit 162 of the output control unit 160 controls the number of notifications to the user based on the environmental conditions. For example, the notification control unit 162 may reduce or eliminate the number of notifications (notifications of new arrivals of applications or messages) so that the user can focus, or may keep the number of notifications normal if the user is relaxing. For example, if the user is at work and the user state is focused, the notification control unit 162 reduces the number of notifications, and if the user state is relaxed, the notification control unit 162 issues the normal number of notifications.
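- A trivial sketch of this throttling policy follows; the suppression factor is an assumption, since the disclosure only says the count is reduced or eliminated for focus and kept normal for relax.

```python
def allowed_notifications(environmental_state, normal_count):
    """Scale the number of notifications delivered to the user based on the
    environmental state; the concrete numbers here are illustrative assumptions."""
    if environmental_state == "focus":
        return 0                 # suppress (or greatly reduce) notifications
    if environmental_state == "relax":
        return normal_count      # keep the normal number of notifications
    return normal_count

print(allowed_notifications("focus", normal_count=5))  # -> 0
```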
- According to the present embodiment, it is possible to output content that encourages focus (concentration) or relaxation based on the user's location in the house and other user contexts, and to appropriately control the output to the user even in situations where the user does not want to make a sound. For example, based on the user context, focus-promoting content can be output when the user is at their desk while teleworking, and relaxing music can be played when the user is at a resting place.
- Further, according to the present embodiment, the position inside the house can be identified using only the sensor unit 210 (the acceleration sensor 211, the gyro sensor 212, and the compass 213) of the wearable device 200, without any external equipment. Specifically, by storing the pattern of visited places and their order, the place after a move can be identified from the N most recent movement patterns.
- Telework has become commonplace, and users spend more time at home, not only relaxing but also focusing on work. There are therefore likely more users who do not want to make noise, and more situations in which making noise is undesirable, than before telework became widespread. It is thus increasingly useful to identify the location within the house, estimate the environmental state to be presented to the user, and control the output to the user without requiring speech, as in the present embodiment.
- the user state is estimated by mapping the context obtained from each sensor information to the user state, so the user state can be estimated without speaking and making a sound.
- the context obtained from each sensor information is mapped to the user state, the amount of calculation is much smaller than that of natural language processing, and local processing is easy.
- FIG. 26 shows the configuration of a content reproduction system according to this embodiment.
- the content reproduction system 20 has an information processing device 100 and a wearable device 200 .
- In the information processing apparatus 100, a processor such as a CPU of the control circuit loads the content reproduction control application 300, the content providing application 400, and the preset application 500, which are recorded in a ROM, into a RAM and executes them.
- the content reproduction control application 300 may be installed in the wearable device 200 instead of the information processing apparatus 100 and executed by the wearable device 200 .
- the wearable device 200 is, as described above, wireless earphones (see FIG. 2), wireless headphones, wired headphones, wired earphones, or the like.
- the wearable device 200 has a sensor section 210 and an input device 220 .
- the sensor unit 210 includes an acceleration sensor 211, a gyro sensor 212, a compass 213, and a biosensor 214 such as a heart rate sensor, a blood flow sensor, an electroencephalogram sensor (see FIG. 1).
- Wearable device 200 inputs the detection value of sensor unit 210 to content reproduction control application 300 and content providing application 400 .
- the input device 220 is a touch sensor, a physical button, a non-contact sensor, or the like, and inputs a contact or non-contact operation by the user.
- the input device 220 is provided on the outer surface of the driver unit 222 (see FIG. 2) of the wearable device 200, for example.
- the content providing application 400 provides content.
- a content providing application 400 is an application group including a plurality of different content providing applications 401 and 402 .
- a plurality of different content providing applications 401 and 402 respectively provide content (specifically, audio content) of different genres such as music, environmental sounds, healing sounds, and radio programs.
- When the different content providing applications 401 and 402 are not distinguished, they are simply referred to as the content providing application 400.
- The content reproduction control application 300 includes the context acquisition unit 110, the PDR (Pedestrian Dead Reckoning) unit 120 (user position estimation unit), the location estimation unit 130 (location attribute estimation unit), the user state estimation unit 140, the environment estimation unit 150, and the content control unit 161 of the output control unit 160 (see FIG. 1).
- the content control unit 161 selects the content providing application 400 based on the environmental state estimated by the environment estimation unit 150 or based on different operations input by the user to the input device 220 of the wearable device 200 .
- The content control unit 161 generates a cue for the content providing application 400 to select content based on the environmental state, outputs the generated cue to the selected content providing application 400, causes the content providing application 400 to select content based on the cue, and reproduces the content from the wearable device 200.
- The preset application 500 pre-assigns a plurality of different operations input by the user to the input device 220 of the wearable device 200 to a plurality of different functions related to services provided by the content providing application 400. Specifically, a plurality of different operations input by the user to the input device 220 of the wearable device 200 are assigned in advance to selection of the plurality of different content providing applications 401 and 402.
- Preset application 500 may be independent of content reproduction control application 300 or may be included in content reproduction control application 300 .
- FIG. 27 shows an example of the GUI of the preset application.
- the preset application 500 has, for example, a playback control GUI 710, a volume control GUI 720, and a quick access control GUI 730. Note that the GUI provided by the preset application 500 and the combination of settable functions and operations differ depending on the model of the wearable device 200 .
- the user can use the playback control GUI 710 to assign a plurality of different operations input by the user to the input devices 220 of the left and right wearable devices 200 to each function during content playback. For example, the user assigns a single-tap operation of the wearable device 200 on the right side to play and pause, assigns a double-tap operation to play the next song, assigns a triple-tap operation to play the previous song, and assigns a long press operation to the voice assistant. Can be assigned to activate a function. Note that the functions assigned to each operation may be functions other than those described above, and the functions may be assigned to each operation by default.
- the user can use the volume control GUI 720 to assign a plurality of different operations that the user inputs to the input devices 220 of the left and right wearable devices 200 to each function of the volume control. For example, the user can assign a single-tap operation of the left wearable device 200 to volume up and a long press operation to volume down.
- the user uses the quick access control GUI 730 to convert a plurality of different operations that the user inputs to the input devices 220 of the left and right wearable devices 200 into a quick access function that selects and activates a plurality of different content providing applications 401 and 402. can be assigned. For example, the user can assign a double tap operation on the left wearable device 200 to launch the content providing application 401 and a triple tap operation to launch the content providing application 402 .
- In this way, with the preset application 500, a plurality of different operations input by the user to the input devices 220 of the left and right wearable devices 200 can be assigned not only to playback control and volume control while the content providing application 400 is running, but also to the selection and activation of the content providing application 400 itself.
- FIG. 28 shows the operational flow of the content playback control application.
- the context acquisition unit 110 acquires the user's context.
- The user state estimation unit 140 estimates the user state (a four-level activity state: break time, neutral, DND (Do Not Disturb), and offline) based on the context acquired by the context acquisition unit 110, the detection values (acceleration, angular velocity, and azimuth angle) of the sensor unit 210 of the wearable device 200, and the location attribute estimated by the location estimation unit 130.
- The environment estimation unit 150 estimates the environmental state (focus (concentration), relaxation, etc.) to be presented to the user (see FIG. 19).
- the content control unit 161 of the output control unit 160 detects an appropriate timing to start reproducing content based on the environmental state estimated by the environment estimation unit 150 (step S301).
- the content control unit 161 of the output control unit 160 selects the content providing application 400 .
- the content control unit 161 selects the content providing application 400 based on different operations input by the user to the input device 220 of the wearable device 200 .
- the content control unit 161 selects the content providing application 401 if the operation input by the user to the input device 220 of the wearable device 200 is a double tap, and selects the content providing application 402 if it is a triple tap.
- the content control unit 161 selects the content providing application 400 based on the environmental state (scenario described later) estimated by the environment estimation unit 150 (step S302).
- The content control unit 161 may also select the content providing application 400 based on user settings, for example a setting that designates the content providing application 400 in advance for each situation, or a setting such that a scenario no longer fires under the same conditions if the user repeatedly rejects it.
- FIG. 29 shows an example of a table used for selecting content providing applications.
- the content control unit 161 refers to the table 600 and selects the content providing application 400 .
- Table 600 has an ID 601, a scenario 602, a user context 603, and a cue 604.
- a scenario 602 corresponds to the environmental state estimated by the environment estimation unit 150 .
- the user context 603 corresponds to the user state estimated by the user state estimation unit 140 based on the user's context acquired by the context acquisition unit 110 .
- A cue 604 is a cue for the content providing application 400 to select content.
- selection flag 605 of content providing application 401 and selection flag 606 of content providing application 402 are recorded in nine records of Music_01 to 09 with ID 601, respectively.
- a record in which only the selection flag 605 is recorded means that the content providing application 401 is selected in the scenario 602 (environmental state).
- A record in which both selection flags 605 and 606 are recorded means that either one of the content providing applications 401 and 402 is selected, depending on further conditions, in the scenario 602 (environmental state).
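- As an illustration only, the selection table might be represented as in the following sketch; the record values and the rule applied when both flags are set are assumptions, not values from the disclosure.

```python
# Minimal sketch of table 600: each record carries a scenario, a user context,
# a cue, and selection flags for the content providing applications 401/402.
# All values below are illustrative placeholders.
from typing import Optional, Tuple

TABLE_600 = [
    {"id": "Music_01", "scenario": "focus", "user_context": "neutral",
     "cue": "calm_instrumental", "app_401": True, "app_402": False},
    {"id": "Music_02", "scenario": "relax", "user_context": "break time",
     "cue": "healing_sounds", "app_401": True, "app_402": True},
]

def select_application(scenario: str, user_context: str) -> Tuple[Optional[str], Optional[str]]:
    """Return (application, cue) for the first record matching scenario/context."""
    for record in TABLE_600:
        if record["scenario"] == scenario and record["user_context"] == user_context:
            if record["app_401"] and record["app_402"]:
                # Both flags set: either application may be chosen under further
                # conditions; preferring 401 here is only a placeholder rule.
                return "app_401", record["cue"]
            return ("app_401" if record["app_401"] else "app_402"), record["cue"]
    return None, None

print(select_application("relax", "break time"))  # ('app_401', 'healing_sounds')
```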
- The content control unit 161 may also learn usage in advance and select, for example, the content providing application 400 most frequently executed at the current time of day or the content providing application 400 most frequently used overall.
- Next, the content control unit 161 of the output control unit 160 generates a cue 604 for the selected content providing application 400 to select content, based on the scenario 602 (environmental state) (step S303).
- the content control unit 161 outputs the generated cue to the selected content providing application 400, causes the content providing application 400 to select content based on the cue, and reproduces the content from the wearable device 200 (step S304).
- The content providing application 400 may select a plurality of content candidates based on the cue from the content reproduction control application 300, and may then choose the content to reproduce from among those candidates based on the detection values input from the sensor unit 210 of the wearable device 200.
- the content providing application 400 may select content with a fast tempo that matches the user's running speed based on the detected value input from the sensor unit 210 .
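- For example, that candidate-narrowing step might look like the following sketch, where the BPM values and the way running cadence is derived from the sensor data are assumptions for illustration.

```python
# Minimal sketch: from candidate songs, pick one whose tempo matches the user's
# running cadence estimated from step counts. Values are illustrative only.

candidates = [
    {"title": "song_a", "bpm": 90},
    {"title": "song_b", "bpm": 140},
    {"title": "song_c", "bpm": 170},
]

def estimate_cadence_spm(step_count: int, window_s: float) -> float:
    """Steps per minute from a step count over a time window."""
    return 60.0 * step_count / window_s

def pick_song(cadence_spm: float) -> dict:
    """Pick the candidate whose BPM is closest to the running cadence."""
    return min(candidates, key=lambda s: abs(s["bpm"] - cadence_spm))

print(pick_song(estimate_cadence_spm(step_count=70, window_s=25.0)))  # ~168 spm -> song_c
```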
- Thereafter, the content control unit 161 of the content reproduction control application 300 detects the timing to start reproducing other content based on the environmental state (step S301), selects the content providing application 400 (step S302; this step can be omitted), generates the cue 604 (step S303), and has the content reproduced from the wearable device 200 (step S304).
- The content reproduction control application 300 holds user information that the content providing application 400 cannot know, namely the user context 603 (user state) and the scenario 602 (environmental state). The content reproduction control application 300 can therefore recognize cases in which it is desirable to change the content being reproduced by the content providing application 400.
- In such cases, the content reproduction control application 300 transmits a cue, based on the information only it knows (the user context 603 (user state) and the scenario 602 (environmental state)), to the content providing application 400 to change the content being reproduced, making it possible to provide the user with more desirable content (music, healing sounds, etc.).
- The content control unit 161 of the content reproduction control application 300 may also generate a cue for the content providing application 400 to stop (rather than change) the reproduction of content based on the scenario 602 (environmental state) (step S303), output it to the content providing application 400, and cause the content providing application 400 to stop the reproduction based on the cue (step S304). For example, there are cases where it is better to stop the music due to a state change such as the start of a meeting.
- the content playback control application 300 detects these states and sends a stop command to the content providing application 400 .
- The content providing application 400 may, for example, select and play content with a fast tempo that matches the user's running speed based on the detection values input from the sensor unit 210, such as predetermined heart rate and acceleration values.
- The content providing application 400 can also actively select content attributes (tempo, pitch, etc.) based on the detection values input from the sensor unit 210 and play back the selected content, without receiving a cue from the content control unit 161 of the content reproduction control application 300. In short, during content playback, the content providing application 400 can actively change the content to be played back.
- the content reproduction control application 300 selects the content providing application 400 and outputs a cue to the content providing application 400 . Therefore, it is not necessary for the content providing application 400 to consider content reproduction conflicts between a plurality of different content providing applications 401 and 402 .
- The content reproduction control application 300 generates the cue for the content providing application 400 to select content based on the environmental state, which is sensitive user information. The content providing application 400 can therefore play back content reflecting the environmental state without the content reproduction control application 300 sharing that sensitive information with it. It is thus possible to improve the user experience while reducing security risk.
- the content reproduction control application 300 selects the content providing application 400, and the selected content providing application 400 reproduces the content. Furthermore, the preset application 500 allows the content reproduction control application 300 to select the content providing application 400 based on different operations input by the user to the input device 220 of the wearable device 200 . This makes it possible to provide a user experience that integrates the services of a plurality of different content providing applications 401 and 402 without requiring active selection by the user.
- the shape of the user's ear, the method of wearing the wearable device 200, and the method of mounting the sensor unit 210 on the wearable device 200 vary depending on the individual and the environment. For this reason, the “front as seen from the user” and the “front of the sensor unit 210 of the wearable device 200” are not the same, and a discrepancy occurs. It is necessary that the wearable device 200 worn on the user's head can indicate the correct direction in an arbitrary coordinate system.
- As described above, the angle correction unit 121 calculates the inclination in the Pitch direction and the inclination in the Roll direction from the acceleration value of the acceleration sensor 211 obtained when the head is moved downward ((b) in FIG. 6, step S101 in FIG. 5) (step S102).
- The angle correction unit 121 can also calculate the tilt in the Yaw direction from the angular velocity value of the gyro sensor 212 when the head is slowly moved upward so as to look up obliquely from the front ((c) in FIG. 6, step S104).
- the angle correction unit 121 can obtain not only the tilt in the pitch direction and the tilt in the roll direction but also the tilt in the yaw direction from only the acceleration value of the acceleration sensor 211 without using the angular velocity value of the gyro sensor 212 .
- a method for calculating the inclination will be described.
- FIG. 30 shows the functional configuration of the angle corrector according to one embodiment.
- FIG. 31 shows the operation flow of the angle corrector.
- The information processing device 100 (a smartphone, tablet computer, personal computer, or the like) has a setting application 800 installed as a user interface, and the user can use the setting application 800 by operating the display device and operation device (touch panel, etc.) of the information processing device 100.
- the user operates the operation device and instructs the start of measurement from the setting application 800 .
- the setting application 800 outputs angle correction operation data 801 to the wearable device 200 (step S400).
- the wearable device 200 receives an instruction (angle correction operation data 801 ) from the setting application 800 and starts transmitting gravitational acceleration, which is a detection value detected by the acceleration sensor 211 , to the angle correction unit 121 .
- the setting application 800 outputs (displays on the display device) an instruction to the user wearing the wearable device 200 to face the front ((a) in FIG. 6) (step S401).
- the angle correction unit 121 calculates the tilt in the pitch direction and the tilt in the roll direction 802 from the gravitational acceleration value when the user faces the front (roll direction) ((a) in FIG. 6) (step S402). A calculation method will be described later in detail.
- Next, the setting application 800 outputs (displays on the display device) an instruction to the user wearing the wearable device 200 to slowly move his or her head up and down without shaking it left and right, and to hold still for about one second ((b) and (c) in FIG. 6) (step S403).
- The angle correction unit 121 calculates the angle that the gravity axis forms with the X, Y, and Z axes (step S404).
- The angle correction unit 121 determines whether the calculated angle satisfies a predetermined condition (step S405). This condition is provided because, when the user faces the front, the X and Y axes of the acceleration sensor become nearly perpendicular to the gravity axis and the measured values approach 0.
- Specifically, the condition is that the angle formed with the axes corresponds to a sufficient bending angle and that errors due to the motion are not being measured (details will be described later). If the condition is not satisfied, the angle correction unit 121 outputs (displays on the display device) measurement progress data 808 instructing the user to redo the up-and-down movement (step S405, No).
- If the condition of step S405 is satisfied, the angle correction unit 121 calculates the user's tilt 803 in the Yaw direction from the gravitational acceleration values measured when the user faces up and down (the Pitch direction) ((b) and (c) in FIG. 6) and from the tilt in the Pitch direction and the tilt in the Roll direction 802 (step S406).
- the angle correction unit 121 stores the tilt in the pitch direction, the tilt in the roll direction 802, and the tilt in the yaw direction 803 as correction values 804 in the nonvolatile storage area 805 (step S407), and completes the measurement (step S408).
- the angle estimating unit 122 reads out the correction values 806 (Pitch direction tilt and Roll direction tilt 802, and Yaw direction tilt 803) stored in the nonvolatile storage area 805 .
- the angle estimation unit 122 estimates the azimuth angle 807 of the user based on the detected value (acceleration) of the acceleration sensor 211 of the sensor unit 210 of the wearable device 200 worn by the user and the read correction value 806 .
- Angle estimator 122 may output azimuth angle 807 to setting application 800 .
- a coordinate system fixed to the user in a certain reference posture is expressed as (X, Y, Z).
- the X axis (Pitch axis) is horizontally rightward
- the Y axis (Roll axis) is horizontally front (forward)
- the Z axis (Yaw axis) is vertically upward.
- the three-dimensional local coordinate system of the acceleration sensor 211 attached to the wearable device 200 is expressed as (x, y, z). All three-dimensional coordinate systems are right-handed.
- the above two coordinate systems (X, Y, Z) and (x, y, z) have a relative deviation of 3 degrees of freedom due to individual differences in how the wearable device 200 is worn by the user. If this deviation can be identified, the user coordinate system (X, Y, Z) can be derived from the local coordinate system (x, y, z) of the wearable device 200 .
- the 2-degree-of-freedom component representing the inclination with respect to the horizontal plane out of the deviation is calculated using the values of the acceleration sensor 211 of the wearable device 200 measured while the user is stationary in the reference posture.
- FIG. 32 shows the definition of the device coordinate system.
- the coordinate axes that match the user coordinate system are rotated in three steps to match the coordinate system of the wearable device 200 so as to be suitable for the quaternion calculation described later.
- rotate ⁇ around the X axis This ⁇ is finally matched with the angle that the y-axis forms with the horizontal plane.
- it is rotated by ⁇ around the rotated y-axis.
- the angle that the x-axis makes with the horizontal plane is finally made to match the angle ( ⁇ ) that the x-axis makes with the horizontal plane.
- rotate ⁇ around the Z axis are rotated in three steps to match the coordinate system of the wearable device 200 so as to be suitable for the quaternion calculation described later.
- This ⁇ is matched with the angle formed by the horizontal plane component of the final y-axis vector and the Y-axis.
- the angles ⁇ , ⁇ are calculated from the values of the acceleration sensor 211 when the user is stationary. Since ⁇ cannot be calculated (all values are solutions), another method is used to obtain ⁇ .
- FIG. 33 shows a method of deriving the angle ⁇ .
- Equation 2 is derived using angles ⁇ and ⁇ between the x and z axes and the horizontal plane.
- the angle ⁇ is obtained as the formula (3) from the formula (2).
- Using the angles α and β, the measured values can be converted to a coordinate system (x', y', Z). Both x' and y' lie on the horizontal plane and correspond to x and y rotated by γ around the Z axis.
- By removing the tilt from the acceleration values in the coordinate system of the wearable device 200 and using the corrected values to calculate γ as described later, highly accurate calculations without axis deviation are possible.
- An example is shown of a rotation calculation, using quaternions, from an acceleration vector (Ax, Ay, Az) in the coordinate system of the wearable device 200 to an acceleration vector (Ax', Ay', Az') in the corrected coordinate system of the wearable device 200. The relationship between the two coordinate systems is regarded as the composition of the first two stages of rotation described above (the α and β rotations). Letting the respective rotation quaternions be Q1 and Q2, they can be expressed by the following equations.
- The quaternion R, which represents the combined rotation, can be expressed by the following formula, where * denotes the conjugate quaternion.
- the calculation for converting the acceleration vector measured in the coordinate system of the wearable device 200 to the corrected coordinate system of the wearable device 200 can be expressed by the following formula using R.
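- Since the formulas themselves are not reproduced in this text, the following is only a generic sketch of a quaternion-based tilt correction consistent with the description above (rotation by α about the X axis, then by β about the rotated y axis); the quaternion convention (order and sign) is an assumption, not the patent's own formula.

```python
# Minimal sketch: removing the tilt of the device coordinate system using
# quaternions. Q1 rotates by alpha about X, Q2 by beta about the rotated y
# axis, R combines them, and a vector v is rotated as R * v * R*. The exact
# convention is an assumption.
import numpy as np

def quat_from_axis_angle(axis, angle_rad):
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    half = angle_rad / 2.0
    return np.concatenate(([np.cos(half)], np.sin(half) * axis))  # (w, x, y, z)

def quat_mul(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def rotate_vector(q, v):
    """Rotate 3-vector v by unit quaternion q: q * v * q*."""
    vq = np.concatenate(([0.0], v))
    return quat_mul(quat_mul(q, vq), quat_conj(q))[1:]

alpha, beta = np.radians(10.0), np.radians(5.0)   # example tilt angles
q1 = quat_from_axis_angle([1, 0, 0], alpha)       # rotation about X
y_rotated = rotate_vector(q1, np.array([0.0, 1.0, 0.0]))
q2 = quat_from_axis_angle(y_rotated, beta)        # rotation about the rotated y axis
R = quat_mul(q2, q1)                              # combined rotation

a_device = np.array([0.1, 0.2, 9.7])              # example measured acceleration
print(rotate_vector(R, a_device))                 # corrected acceleration vector
```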
- Fig. 34 shows the gravity axis when facing forward.
- the Yaw rotation is calculated by converting the gravitational acceleration values (x, y, z) measured on the three axes of the acceleration sensor 211 into polar coordinates. Define the distance from the origin as r, the angle from the Z axis as ⁇ , and the angle from the X axis as ⁇ . At this time, (x, y, z) and (r, ⁇ , ⁇ ) have the following relational expressions.
- r = √(x² + y² + z²), θ = arccos(z / r), φ = arctan(y / x) … (Equation 5) (step S404).
- the deviation between the front direction of the user for which ⁇ is to be obtained and the front of the sensor of the wearable device 200 is the tilt in the Yaw direction (step S406).
- FIG. 35 shows the gravity axis when facing downward.
- FIG. 36 shows Yaw rotation calculation from measurement data and measurement singularity.
- FIG. 37 shows a flow chart for determining whether the conditions are met.
- FIG. 38 shows the Yaw rotation definition on a face-on basis.
- FIG. 39 shows the effect of vertical motion and bending angle on the calculation result.
- the calculation of ⁇ uses the measurement result when the user is facing up and down (Fig. 35). This is to avoid the fact that when the user faces the front, the X and Y axes of the acceleration sensor 211 become nearly perpendicular to the gravity axis, and the measured values approach 0. Since the denominator of the formula of 5 approaches 0, a correct value cannot be calculated (FIG. 36).
- The measurement results are used for the calculation only when the conditions θ > 45° and φ standard deviation < 3 are satisfied, so that the bending angle (θ) is sufficient and errors due to the motion are not measured (step S405) (FIG. 37).
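- A minimal sketch of this step is shown below; it assumes the spherical-coordinate relations given above and the θ > 45°, standard-deviation < 3 conditions, with hypothetical sample data.

```python
# Minimal sketch: compute the yaw offset phi from gravity samples taken while
# the user faces up/down, accepting the result only when theta exceeds 45
# degrees and the spread of phi is small. Sample data are hypothetical.
import numpy as np

def polar_angles(ax, ay, az):
    """Return (theta, phi) in degrees from a gravity vector (ax, ay, az)."""
    r = np.sqrt(ax**2 + ay**2 + az**2)
    theta = np.degrees(np.arccos(az / r))   # angle from the Z axis
    phi = np.degrees(np.arctan2(ay, ax))    # angle from the X axis
    return theta, phi

def yaw_offset(samples, theta_min=45.0, phi_std_max=3.0):
    """Average phi over the samples if the validity conditions hold, else None."""
    angles = np.array([polar_angles(*s) for s in samples])
    thetas, phis = angles[:, 0], angles[:, 1]
    if thetas.mean() > theta_min and phis.std() < phi_std_max:
        return phis.mean()
    return None  # ask the user to redo the up/down movement

# Gravity measured while looking down (device tilted ~60 degrees), slight noise.
samples = [(0.82 + 0.01 * i, 0.30, 0.49) for i in range(5)]
print(yaw_offset(samples))
```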
- Either the upward or the downward orientation may fail to meet the conditions, so both movement patterns are performed. It has been confirmed, as shown in FIGS. 36, 38, and 39, that there is no difference in the calculation results between the upward and downward measurements.
- Patent Document 1 detects and adjusts the user's head rotation.
- the gyro sensor measures the rotation angle and the acceleration sensor measures the gyro inclination, calculates the "user's head rotation", and corrects the sound image localization position.
- the front direction can be set by the user's operation, and the rotational movement from there can be traced, but since all measurements are relative to the "user front" as a reference, it cannot be applied to an absolute coordinate system such as azimuth.
- Patent Document 2 calculates the mounting angle of the navigation device with respect to the vehicle by excluding the influence of the road inclination.
- An acceleration sensor, a gyro sensor in the yaw direction, a running speed sensor, and GPS are used in combination. Data is collected while detecting the state of the vehicle, such as when the vehicle is stopped or running, and acceleration in the vehicle's traveling direction and lateral direction is detected, and the mounting angle is calculated from these. It is a technology that depends on the unique characteristics of automobiles and cannot be applied to devices worn by people.
- the difference between the sensor coordinate system in the device installed on the user's head and the coordinate system set in any direction by the user is measured and corrected. Therefore, the output result can be made constant regardless of the shape of the user's ears and head, or the wearing method. Since the correction is not made within relative coordinates, it can be expanded to an absolute coordinate system such as azimuth.
- the inclination in the Yaw direction is calculated from the gravitational acceleration by the user performing an action (pitch rotation) in which the head is turned up or down.
- When the Yaw axis and the gravity axis are close, it is difficult to calculate the tilt in the Yaw direction from the gravitational acceleration, but by tilting in the Pitch direction, the gravitational acceleration applied to each axis changes and the tilt can be calculated.
- the correction value of the user's azimuth angle can be calculated using only the acceleration sensor.
- the gyro sensor itself drifts depending on the usage environment and continuous use, but the acceleration sensor is not affected by the drift, so it is highly reliable.
- Fig. 40 shows selection of a playlist suitable for a scene.
- The content control unit 161 (FIG. 26) of the output control unit 160 assumes that the wearable device 200 is worn at all times and proposes appropriate timings for content playback. Some specific examples are described below.
- the content control unit 161 may restart (resume) content playback based on a user trigger (tap, gesture, etc.) (upper part of FIG. 40).
- the content control unit 161 may restart content reproduction based on an auto-trigger (wearing, movement, after a call) (middle of FIG. 40).
- the content control unit 161 may restart content playback based on an auto-trigger involving an interaction (morning commute, evening leaving, running, etc.) (lower part of FIG. 40).
- the content control unit 161 may restart the reproduction of the content that was being reproduced during the previous morning's commute through interaction with the user.
- Fig. 45 shows an example of switching and proposing playlists according to the scene.
- When the wearable device 200 is put on in the morning, the content control unit 161 resumes the playback that has been set.
- the content control unit 161 reproduces the playlist set to "go to work” when the user goes to work.
- the content control unit 161 changes the playlist according to the scene by reproducing the playlist set to "office work”.
- the content control unit 161 stops the playback while the user is in a meeting or calling, and restarts the playlist set to "office work" when the meeting or calling ends.
- the content control unit 161 proposes the start of playback of a playlist that matches the scene.
- the content control unit 161 reproduces the playlist set to "go to work”.
- The content control unit 161 stops content reproduction according to the scene.
- FIG. 41 shows an example of continuously reproducing a playlist across the same divided scenes.
- the previous day was as described with reference to FIG.
- The content control unit 161 plays the song following the one that was played last (on the previous day); that is, a new song is played.
- the content control unit 161 restarts the last song (on the previous day) of the playlist set to "go to work". That is, the content control unit 161 reproduces the playlist by connecting it across days.
- That is, the content control unit 161 connects the same scene across time and continues playing the playlist.
- When the user arrives at the office, the content control unit 161 resumes the song that was played last (on the previous day) in the playlist set to "office work". When the user leaves work, the content control unit 161 resumes the song that was played last (on the previous day) in the playlist set to "leave work". That is, the content control unit 161 can continuously play back a playlist across occurrences of the same environmental state (scene).
- FIG. 42 shows another example in which the user experiences content that matches the scene.
- the content control unit 161 reproduces content suitable for the morning.
- the content control unit 161 restarts the song that was played last (the previous day) in the playlist set to "commuting to school" when the user (student in this example) is commuting to school.
- the content control unit 161 reproduces the playlist set to "for work” and turns on noise canceling.
- the content control unit 161 reproduces up-tempo content during running.
- The content control unit 161 reproduces BGM for concentration when the user sits at his or her desk at home and studies.
- the content control unit 161 reproduces BGM that encourages meditation when the user's stress is high.
- the content control unit 161 reproduces sleep BGM when the user lies down in bed at night, and stops the content when the user falls asleep. As a result, the wearable device 200 is worn all the time and the content is automatically reproduced according to the behavior of the user, so that the user can live comfortably.
- FIG. 43 shows a first implementation example (a content control application controls a content providing application).
- the content control application 300 controls the content providing application 400 .
- The content control application 300 determines the scene ID based on the user's status (Not busy), etc., and notifies the content providing application 400 of the scene ID. Not busy means that the user is not occupied (no conversation, call, or scheduled calendar event).
- the content providing application 400 determines a playlist suitable for the scene based on the context, the user's own content table, and the content played last, and plays it.
- FIG. 44 shows a second implementation example (the content control application records the information of the content that was being reproduced at the end of the scene and designates the content ID for each context).
- the content control application 300 records the information of the content that was being reproduced at the end of the scene, and designates the content ID for each context.
- The content control application 300 determines the scene ID from the user's state (Not busy), etc. Based on the scene ID, it obtains the content ID and artist ID from the context, the user's own content table, and the last played content, and notifies the content providing application 400 of them.
- the content providing application 400 selects and reproduces a playlist including content identified by the content ID and artist ID.
- FIG. 46 shows an example of a content information acquisition method.
- the content control unit 161 remembers content information that the user has listened to for 30 seconds or more (when reproduction of 30 seconds or more is counted as 1 reproduction). At that time, it is recorded as a log along with context information such as time, place, action type, etc., as well as linking with the prescribed "scene" classification.
- the content information includes, for example, song information, artist information, album information, playlist information, information on the number of songs in the playlist, playback application information, and information on how many seconds the song has been played back.
- When the content control unit 161 detects a context that matches the scene determination rule, it resumes playback from the point where playback previously stopped in that scene.
- The number of seconds of playback required for the content control unit 161 to remember the content information may be shorter or longer than 30 seconds, may be set by the user as appropriate, or may be set automatically for each content.
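- Purely as an illustration (the data structure and field names are assumptions), the log entry that associates content information with a scene and context might look like this sketch.

```python
# Minimal sketch: remember content listened to for >= 30 s, tagged with the
# scene and context, and look it up later to resume. Field names are
# hypothetical.
from dataclasses import dataclass, field
from typing import Optional

MIN_PLAYBACK_S = 30  # configurable: may be shorter/longer or set per content

@dataclass
class PlaybackLogEntry:
    scene: str
    song: str
    artist: str
    playlist: str
    position_s: int
    context: dict = field(default_factory=dict)

log: list = []

def record_playback(entry: PlaybackLogEntry, played_s: int) -> None:
    """Count a playback only when at least MIN_PLAYBACK_S seconds were played."""
    if played_s >= MIN_PLAYBACK_S:
        log.append(entry)

def resume_point(scene: str) -> Optional[PlaybackLogEntry]:
    """Return the most recent entry for the scene, i.e. where to resume."""
    for entry in reversed(log):
        if entry.scene == scene:
            return entry
    return None

record_playback(PlaybackLogEntry("morning_commute", "YYYY", "artist_x",
                                 "commute_list", 95,
                                 {"time": "08:10", "place": "train"}), played_s=95)
print(resume_point("morning_commute"))
```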
- When content information is acquired via AVRCP (Audio/Video Remote Control Profile), resume playback may be less reproducible.
- For example, because the playback application information cannot be acquired and the song information is text-based meta information, when the content control unit 161 issues a playback request to service B based on the meta information of a song that the user played on service A, a matching song may not be found and playback may not be possible.
- the advantage of acquiring content information via the SDK is that resumes can be reproduced for each song/artist. Since the song ID/artist ID/album ID managed by the content providing application 400 can be obtained, it is possible to reproduce the album containing the song/artist.
- The advantage of acquiring content information via GATT (Generic Attribute Profile) is that the resume can be reproduced on a song/artist/playlist basis, providing the highest quality experience.
- In addition to the song ID/artist ID/album ID managed by the content providing application 400, if the playlist URI (Uniform Resource Identifier) and song order can be obtained, the song can be reproduced from the middle of the playlist.
- Fig. 47 shows the reproduction of one playlist by connecting the same scenes.
- As a first example of a playlist, several categories based on the user's preferences are presented to the user in the form of playlists, which are dynamically generated based on the preferences selected by the user.
- a second example of a playlist is a playlist (fixed songs) generated by selecting songs by a creator.
- When playback of a playlist ends, the content control unit 161 has three options: play the playlist again from the beginning, recommend a playlist that seems related and start playing it if the user accepts, or end playback.
- FIG. 48 shows an example of a table held by the content reproduction control application.
- the content for the morning commute is recommended by voice, and if the user interacts with Yes, they can continue listening to the song they last listened to on the morning commute.
- the content control unit 161 memorizes, as an example, content information that the user has listened to for 30 seconds or more (when reproduction of 30 seconds or more is counted as one reproduction).
- the content information includes, for example, song information, artist information, album information, playlist information, information on the number of songs in the playlist, playback application information, and information on how many seconds the song has been played back.
- the content control unit 161 associates it with the prescribed “scene” classification, and records it as a log together with context information such as time, place, action type, and the like.
- When the content control unit 161 detects a context that matches the "scene" determination rule, it resumes playback from the point where playback stopped last time in that scene.
- The number of seconds of playback required for the content control unit 161 to remember the content information may be shorter or longer than 30 seconds, may be set by the user as appropriate, or may be set automatically for each content.
- FIG. 49 shows an example of a table held by the content reproduction control application.
- Based on the song information, the playlist information, and the information on how many seconds of the song have been played back, the content control unit 161 specifies the last played content "YYYY" of the same playlist and its playback position, and when the same scene reappears it can resume the previously played playlist.
- the user may specify in advance a playlist to be reproduced in the scene. For example, the user sets scene (1): playlist A, scene (2): playlist B, and scene (3): none.
- the content control unit 161 records what the user is playing.
- the content control unit 161 records the playlist C when the playlist C is reproduced as the scene (3).
- the content control unit 161 reproduces the playlist C when the wearable device 200 is attached in scene (3).
- the content control unit 161 does not change the scene and playlist.
- A case where the user manually changes the playlist during a scene is also handled, as sketched below. For example, the content control unit 161 reproduces playlist A in scene (1), and the user changes it to playlist D during scene (1). When scene (1) ends and later occurs again, the content control unit 161 reproduces (proposes) playlist A. If playlist A is rejected, the content control unit 161 proposes playlist C (given higher priority). The change can also be made on the GUI, and this includes the case where the user is prompted to change and accepts the change.
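- As a sketch only: the data structures and the fallback rule (proposing the playlist the user previously switched to manually when the preset is rejected) are assumptions for illustration.

```python
# Minimal sketch: scene-to-playlist presets, a record of manual changes, and a
# proposal order that falls back to the remembered manual choice when the
# preset is rejected. Names and the priority rule are hypothetical.
from typing import Optional

presets = {"scene_1": "playlist_A", "scene_2": "playlist_B", "scene_3": None}
manual_override = {}   # last playlist the user switched to during each scene

def on_manual_change(scene: str, playlist: str) -> None:
    manual_override[scene] = playlist

def propose(scene: str, rejected: set) -> Optional[str]:
    """Propose the preset first; if rejected, propose the remembered override."""
    for candidate in (presets.get(scene), manual_override.get(scene)):
        if candidate and candidate not in rejected:
            return candidate
    return None

on_manual_change("scene_1", "playlist_D")
print(propose("scene_1", rejected=set()))            # playlist_A
print(propose("scene_1", rejected={"playlist_A"}))   # playlist_D
```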
- the content control unit 161 may make recommendations based on the scene.
- the content control unit 161 can reflect the music preferences of the scene in the dynamically generated playlist.
- The content control unit 161 analyzes preferences using Skip and Like operations in each scene, and can reflect the content preferred in each scene in the dynamic playlist generated for that scene.
- The content control unit 161 continues the playlist when the scene is the same across multiple devices. For example, if the user is listening to a playlist for Saturday night on a smartphone, stops the music once at home, and then starts playing music on an audio device after eating, the playlist is resumed.
- The content control application 300 defines a scene based on the user's behavior, information related to the user, and the environment. Behaviors include walking, running, laughing, riding a train, staying at home, feeling good, not feeling well, and so on. Information related to the user includes being in a meeting, being at work, shopping, and the like. The environment includes the weather, the time of day, and the like. A scene is defined by combining the above information (although it does not necessarily include all of them). Scenes include commuting, being in the office, running on holidays, and the like.
- the content control application 300 selects and reproduces a playlist according to the scene. Specifically, a playlist is selected and played back at the playback start time point or at the change point of the scene being played back.
- the user may associate scenes and playlists in advance. If the playlist is changed during the scene, it may be replaced (return to the preset playlist once playback is stopped). For example, when commuting, select and play a playlist that matches your commute, or a playlist that allows you to concentrate when you are at work.
- The song being played is played to the end, and after it ends, the playlist matching the current scene is played. Playback matching the scene is proposed when the device is put on. There is also an option not to play.
- When selecting music in a scene, the content control application 300 reproduces the continuation of the playlist that was reproduced in the same scene in the past. When playback is stopped during a scene, the song can be stored, and when the same scene next appears, the stored song can be resumed.
- the content control application 300 can confirm whether or not the change is allowed at the time of scene switching, and the user can refuse the confirmation.
- a notification sound is superimposed on the song currently being played to notify the user that the playlist will be changed.
- the user can reject or approve the change confirmation by the notification sound by key operation, voice, or gesture input.
- the content control unit 161 can also be applied to present and recommend content other than music.
- the content control unit 161 can provide contents other than music to be viewed depending on the scene.
- the content control unit 161 reproduces a playlist of economic news videos on the train when going to work.
- the content control unit 161 plays videos of favorite YouTubers in a playlist on the train on the way home.
- the content control unit 161 can also switch SNS content and the like to be displayed according to the scene.
- the content control unit 161 selects economic news on the train on the way to work, and selects entertainment news on the train on the way home.
- The content control unit 161 can change the content to be provided according to the detected scene by defining, for each device (or device category), what to present in each scene.
- FIG. 50 shows the concept of user frontality.
- The wearable device 200 and the content playback control application 300 stand at the user front and serve as interfaces to the creators and each company's applications operating in the background.
- FIG. 51 explains the user front property by taking a search as an example.
- FIG. 52 explains the user front property of this embodiment.
- FIG. 53 explains the user front property of this embodiment.
- the content playback control application 300 provides the user's context to the content providing application 400 .
- the content providing application 400 applies to the content reproduction control application 300 for content reproduction.
- the content reproduction control application 300 permits the content providing application 400 to reproduce the content.
- the content providing application 400 provides content to the wearable device 200 and reproduces it.
- Fig. 54 shows playlist designation by a creator.
- the content playback control application 300 transmits the context when the wearable device 200 is worn, and selects a playlist based on tags.
- Creators create playlists and set the context they want to hear.
- Creator-provided playlists are selected when the user is in a particular context.
- FIG. 55 shows a method of providing content in accordance with scenes.
- The content playback control application 300 provides the experience of listening to music content suited to a particular scene, rather than simply having the user listen to music content.
- the content reproduction control application 300 makes it possible to search for content that cannot be identified by its title using tags. Tags can be added by users and creators.
- the content playback control application 300 can make music content searchable in context (Run, Night+Run, etc.).
- the content reproduction control application 300 can search contexts using user behavior as a search key.
- FIG. 56 shows a method of playing music content when the user wants to listen to it.
- the content playback control application 300 acquires the context of the timing at which the music content Like button was pressed, and when the same context appears again, the same content is played back and provided to the user. For example, when the content reproduction control application 300 detects a Night+Run context and Like, it reproduces the same content when the same context (Night+Run) situation occurs again. As a result, the content reproduction control application 300 reproduces songs with common tags and songs detected through cooperation.
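- A minimal sketch of this idea follows; the tag representation and the exact-match rule are assumptions for illustration.

```python
# Minimal sketch: remember the context tags active when the user pressed Like,
# then replay the same content when an identical tag set is detected again.

liked_by_context = {}  # frozenset of context tags -> list of content IDs

def on_like(content_id: str, context_tags: set) -> None:
    liked_by_context.setdefault(frozenset(context_tags), []).append(content_id)

def contents_for_context(context_tags: set) -> list:
    return liked_by_context.get(frozenset(context_tags), [])

on_like("track_123", {"Night", "Run"})
print(contents_for_context({"Night", "Run"}))  # ['track_123']
print(contents_for_context({"Run"}))           # []
```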
- Fig. 57 shows an example of dynamically changing content based on tags.
- the content playback control application 300 detects changes in tags and dynamically changes content to be played back.
- FIG. 58 shows the configuration of an information processing system according to another embodiment of the present disclosure.
- FIG. 59 shows an overview of the registration and matching part.
- the information processing system 30 is obtained by adding a registration and matching unit 170 to the configuration of the information processing system 10 in FIG. Moreover, the sensor unit 210 further has a geomagnetic sensor 215 .
- the registration and matching unit 170 of the information processing system 30 associates and registers in advance the detection values (detection values for registration) of the sensor unit 210 corresponding to the environmental conditions (focus, etc.). For example, as shown in FIG. 59, the registration and matching unit 170 associates the detection value of the sensor unit 210 during teleworking in the living room with the environmental state (focus) and registers them in the database (db0).
- the registration and matching unit 170 associates the detection value of the sensor unit 210 in the bedroom with the environmental state (relaxed) and registers them in the database (db1).
- The information processing system 30 can thus output content based on the associated and registered environmental state, for example content that allows the user to focus on work or content that allows the user to relax and sleep soundly.
- FIG. 60 shows the configuration of the registration and matching unit.
- the registration and matching unit 170 has a sensor reception unit 171, a stop and movement detection unit 172, an average processing unit 173, a database generation unit 174, a database 175, and a matching unit 176.
- the sensor reception unit 171 receives detection values of the sensor unit 210 (the acceleration sensor 211 and the geomagnetic sensor 215) transmitted from the sensor transmission unit 216 of the wearable device 200.
- In this example, the sensor unit 210 is composed of the acceleration sensor 211 and the geomagnetic sensor 215, but other configurations are possible; that is, the number and types of sensors are not limited to the example in FIG.
- the stop and movement detector 172 determines stop and movement of the user based on the detected value.
- the average processing unit 173 calculates an average value of a plurality of detection values detected by the sensor unit 210 within a predetermined period of time as a detection value for registration.
- the database generation unit 174 associates the detection value for registration detected by the sensor unit 210 with the environmental state (such as focus) to be presented to the user when the detection value for registration is detected, and registers them in the database 175 . Note that all of the database generation unit 174 or the part 174A with a large amount of calculation may be installed not locally in the information processing apparatus 100 but on the cloud.
- The matching unit 176 matches a new detection value detected by the sensor unit 210 against the detection values for registration registered in the database 175, and determines whether or not the difference between the new detection value and a registered detection value for registration is equal to or less than the matching threshold.
- When it is determined that the difference between the new detection value and a registered detection value for registration is equal to or less than the matching threshold, the content control unit 161 of the output control unit 160 controls the output based on the environmental state (focus, etc.) that was associated with that detection value for registration and registered in the database 175.
- The processing of the registration and matching unit 170 is described in more detail below.
- FIG. 61 shows the operation flow of registration and confirmation processing.
- FIG. 62 shows an example of a registration and confirmation GUI.
- a user wearing the wearable device 200 stops (for example, takes a seat) at a place where the detected value of the sensor unit 210 is to be registered (for example, a place where telework is performed) (step S501).
- The registration and matching unit 170 displays the status 902 of the acceleration sensor 211, the status 903 of the geomagnetic sensor 215, and the like on the GUI 900 displayed on the information processing device 100 (smartphone or the like).
- the user operates the registration start button 901 of the GUI 900 (step S502).
- the registration and matching unit 170 displays an instruction screen for moving the head on the information processing device 100 (smartphone, etc.) or another information processing device (for example, a personal computer used for telework).
- the user moves the head slowly by gradually changing the position and angle of the head according to the instruction screen (step S503).
- the user operates the registration end button 904 (step S504).
- Fig. 63 shows an example of an instruction screen for moving the head displayed on the smartphone.
- The registration and matching unit 170 displays an icon 905 such as an arrow on the GUI 900 of the information processing apparatus 100 (smartphone, etc.) and shows an animation that smoothly moves the icon 905 so as to trace the head movement expected of the user, at a speed suitable for moving the head slowly.
- a message 906 is displayed to request that the registration end button 904 be operated (step S504).
- FIG. 64 shows an example of an instruction screen for moving the head displayed on a personal computer.
- The registration and matching unit 170 displays an instruction screen 908 including an icon 907 such as a circle, an instruction message, an illustration of the direction of the face, and the like on another information processing device (for example, a personal computer used for telework), and shows an animation that smoothly moves the icon 907 so as to trace the head movement expected of the user, at a speed suitable for moving the head slowly (step S503).
- a message is displayed requesting that the registration end button 904 displayed on the GUI 900 of the information processing apparatus 100 (smartphone or the like) be operated (step S504).
- An instruction screen 908 including an icon 907 such as a circle, an instruction message, an illustration of the orientation of the face, and the like may be displayed on the GUI 900 of the information processing apparatus 100 (smartphone, etc.).
- Fig. 72 shows the calculation of the average value.
- The average processing unit 173 calculates the average of the plurality of detection values detected by the sensor unit 210 within a predetermined time period (from immediately before the stop (step S501) to the end of registration (step S504)) as the detection value for registration, and temporarily stores it in memory.
- the database generation unit 174 associates the detection value for registration detected by the sensor unit 210 with the environmental state (such as focus) to be presented to the user when the detection value for registration is detected, and registers them in the database 175 .
- The matching unit 176 confirms whether or not the detection value for registration registered in the database 175 is valid (step S506). Specifically, the matching unit 176 matches a new detection value detected by the sensor unit 210 after registration against the detection value for registration already registered in the database 175, and determines whether or not the difference between them is equal to or less than the matching threshold. If it is equal to or less than the matching threshold, the detection value for registration registered in the database 175 is valid. The matching unit 176 outputs (for example, displays on the GUI 900) the result of this confirmation, that is, whether or not the registered detection value for registration is valid.
- FIG. 65 shows the operational flow of multiple location registration.
- FIG. 66 shows an example of a GUI for multiple location registration.
- the GUI 910 for multiple location registration includes a location tag input button 911 and a layout image 912 showing the layout of the house in addition to the display of the registration and confirmation GUI 900 (FIG. 62).
- When the user wearing the wearable device 200 takes a seat (step S501), he or she selects the icon 913 of the place to be registered from the icons 913 displayed on the layout image 912 and operates the place tag input button 911 (step S507). After that, the user performs the same head-moving operation as above (steps S502 to S504).
- The database generation unit 174 associates the detection value for registration detected by the sensor unit 210, the environmental state (such as focus) to be presented to the user when the detection value for registration is detected, and the input location tag (step S507) with one another, and registers them in the database 175. Multiple locations can be registered by repeating this procedure.
- the registration and matching unit 170 registers in the database 175 the detection value of the sensor unit 210 when teleworking in the living room, the environmental state (focus), and the location tag "living room” in association with each other.
- the registration and matching unit 170 also associates the detection value of the sensor unit 210 in the bedroom, the environmental state (relaxed), and the location tag “room” and registers them in another database 175 .
- The information processing system 30 can thus output content based on the associated and registered environmental state, for example content that allows the user to focus on work or content that allows the user to relax and sleep soundly.
- FIG. 67 shows the operation flow of button type additional registration.
- Additional registration is performed when registration fails in (1) registration and confirmation or (2) multiple location registration above.
- the button type additional registration GUI is similar to the registration and confirmation GUI 900 (FIG. 62).
- When the user wearing the wearable device 200 takes a seat (step S501), he or she operates the additional registration button 914 (FIG. 62) (step S508). After that, the user performs the same head-moving operation as above (steps S502 to S503).
- the user operates the addition end button (step S509).
- A dedicated addition end button may be displayed on the GUI 900, or the registration end button 904 may also serve as the addition end button (FIG. 62).
- the database generation unit 174 associates the detection value for registration detected by the sensor unit 210 with the environmental state (focus, etc.) to be presented to the user when the detection value for registration is detected, and additionally registers them in the database 175 .
- FIG. 68 shows the operational flow of layout type additional registration.
- the layout type additional registration GUI is the same as the multiple location registration GUI 910 (Fig. 66).
- When the user wearing the wearable device 200 takes a seat (step S501), he or she selects the icon 913 of the place to be added from the icons 913 displayed in the layout image 912 (step S510). After that, the user performs the same head-moving operation as above (step S503).
- the database generation unit 174 associates the detection value for registration detected by the sensor unit 210 with the environmental state (focus, etc.) to be presented to the user when the detection value for registration is detected, and additionally registers them in the database 175 .
- FIG. 69 shows the operational flow of automatic additional registration.
- the stop and movement detection unit 172 determines that the user has stopped based on the detection value of the sensor unit 210 transmitted from the sensor transmission unit 216 of the wearable device 200 (step S511, YES).
- the stop and movement detection unit 172 determines that the user has stopped for a first time (for example, 3 to 4 seconds) (that is, the user has stopped for 3 to 4 seconds) (step S512, YES).
- The average processing unit 173 calculates the average value of the plurality of detection values detected by the sensor unit 210 within a predetermined period (from immediately before stopping (step S511) until the user has remained stopped for the first time (step S512)), and temporarily stores it in memory (step S513).
- The matching unit 176 matches the new detection value temporarily stored in memory against the detection values for registration registered in the database 175 (step S514). Specifically, the matching unit 176 determines whether or not the difference between the average value of the new detection values temporarily stored in memory and a detection value for registration registered in the database 175 is equal to or less than the matching threshold.
- When it is determined that the difference between the average value of the new detection values and a registered detection value for registration is equal to or less than the matching threshold (step S514, YES), the content control unit 161 of the output control unit 160 controls the output based on the environmental state (focus, etc.) that was associated with that detection value for registration and registered in the database 175. If the difference is greater than the matching threshold (step S514, NO), the average value of the new detection values temporarily stored in memory is retained without being deleted, since it may become a candidate for registration in the database 175.
- Next, the cases where the user has not stopped (step S511, NO), where the user's stop time is less than the first time (step S512, NO), and where the difference between the average value of the detection values and the detection value for registration is greater than the matching threshold (step S514, NO) are described.
- the stop and movement detection unit 172 determines that the user has started moving based on the detection value of the sensor unit 210 transmitted from the sensor transmission unit 216 of the wearable device 200 (step S515, YES).
- the matching unit 176 matches the new detection value with the registration detection value registered in the database 175 (step S516).
- Since the user has started moving, the difference between the detection value and the detection value for registration registered in the database 175 is greater than the matching threshold (step S516, NO).
- the database generation unit 174 calculates the stop time until the user starts moving (step S515, YES) (that is, the stop time immediately before the start of movement) (step S517).
- The database generation unit 174 determines whether the calculated stop time (that is, the stop time immediately before the start of movement) is longer than a second time (for example, M minutes), that is, whether or not the user had been stopped for more than M minutes (step S518).
- When the stop time until the user starts moving is longer than the second time (M minutes) (step S518, YES), the database generation unit 174 associates the average value of the plurality of new detection values detected within a predetermined period before the user started moving (step S515, YES) with the environmental state to be presented to the user when these new detection values are detected, and newly registers them in the database 175 (step S519).
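- The following sketch summarizes the automatic additional-registration logic described above (FIG. 69); the threshold values, the distance metric, and the data handling are illustrative assumptions only.

```python
# Minimal sketch of automatic additional registration: when the user starts
# moving after a sufficiently long stop and the averaged detection value does
# not match any registered entry, register it with the environmental state.
# Thresholds are illustrative.
import numpy as np

MATCHING_THRESHOLD = 0.5
SECOND_TIME_S = 5 * 60   # "M minutes" (assumed example value)

database = []  # list of dicts: {"feature": np.ndarray, "state": str}

def matches(feature: np.ndarray) -> bool:
    """True if the feature is within the matching threshold of any entry."""
    return any(np.linalg.norm(feature - e["feature"]) <= MATCHING_THRESHOLD
               for e in database)

def on_movement_start(avg_feature: np.ndarray, stop_time_s: float, state: str) -> None:
    """Steps S515-S519: register a new entry only after a long enough stop."""
    if not matches(avg_feature) and stop_time_s > SECOND_TIME_S:
        database.append({"feature": avg_feature, "state": state})

on_movement_start(np.array([0.1, 9.8, 0.2, 30.0, -5.0, 12.0]),
                  stop_time_s=20 * 60, state="focus")
print(len(database))  # 1
```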
- The database generation unit 174 may also update the database 175 when the degree of focus or relaxation is high, based on the user's activity level obtained from the detection values of the sensor unit 210 (particularly the biosensor 214, such as a heartbeat sensor, blood flow sensor, or electroencephalogram sensor) transmitted from the sensor transmission unit 216 of the wearable device 200.
- FIG. 70 shows a method (table format) for registering detection values for registration and performing matching.
- the sensor reception unit 171 receives the detection value (for example, 25 Hz) of the sensor unit 210 transmitted from the sensor transmission unit 216 of the wearable device 200.
- The average processing unit 173 averages the detection values (for example, 25 Hz) detected by the sensor unit 210 over one-second windows, sliding every 0.5 seconds, and extracts M detection values at 2 Hz (step S522).
- 0.5 seconds and 2 Hz are examples, and are not limited to these.
- the database generator 174 creates a table in which the extracted M detected values are registered, and registers it in the database 175 (step S523).
- The 8-dimensional feature amount includes the 3-axis feature amounts of the acceleration sensor 211, the 3-axis feature amounts of the geomagnetic sensor 215, and the like.
- The sensor reception unit 171 receives a new detection value (for example, 25 Hz) of the sensor unit 210 transmitted from the sensor transmission unit 216 of the wearable device 200 (step S524).
- the matching unit 176 calculates an average value for one second from multiple detection values (for example, 25 Hz) detected by the sensor unit 210 (step S525).
- the matching unit 176 calculates an eight-dimensional feature quantity from the average value for one second.
- the matching unit 176 performs matching by comparing the calculated feature amount with the feature amount registered in the database 175 (step S523) (step S526). Note that the above-described number of seconds for obtaining an average, such as 1 second and 0.5 seconds, and the number of detected values to be extracted, such as 2 Hz, are examples, and are not limited to these.
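- As an illustration of the table-format method, the averaging, feature extraction, and threshold comparison might look like the following sketch; the window lengths, the distance metric, and the use of 6 axes (the description mentions an 8-dimensional feature) are assumptions.

```python
# Minimal sketch: average 25 Hz samples over 1-second windows sliding every
# 0.5 s, stack acceleration (3 axes) and geomagnetism (3 axes) into a feature
# vector, and match a new feature against the registered table by distance.
import numpy as np

FS = 25           # sensor rate (Hz)
WIN = FS          # 1-second window
HOP = FS // 2     # slide every 0.5 s
THRESHOLD = 1.0   # matching threshold (illustrative)

def extract_features(samples: np.ndarray) -> np.ndarray:
    """samples: (N, 6) array of [acc_xyz, mag_xyz]; returns averaged rows."""
    rows = [samples[i:i + WIN].mean(axis=0)
            for i in range(0, len(samples) - WIN + 1, HOP)]
    return np.array(rows)

def match(new_feature: np.ndarray, table: np.ndarray) -> bool:
    """True when the new feature is within THRESHOLD of any registered row."""
    return bool(np.min(np.linalg.norm(table - new_feature, axis=1)) <= THRESHOLD)

rng = np.random.default_rng(0)
registration = extract_features(rng.normal(size=(5 * FS, 6)))   # registered table
new = rng.normal(size=(FS, 6)).mean(axis=0)                     # 1-second average
print(match(new, registration))
```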
- FIG. 71 shows a method (machine learning formula) for registering detection values for registration and performing matching.
- the sensor reception unit 171 receives the detection value (for example, 25 Hz) of the sensor unit 210 transmitted from the sensor transmission unit 216 of the wearable device 200.
- The average processing unit 173 averages the detection values (for example, 25 Hz) detected by the sensor unit 210 over one-second windows, sliding every 0.5 seconds, and extracts M detection values at 2 Hz (step S522).
- The 8-dimensional feature amount includes the 3-axis feature amounts of the acceleration sensor 211, the 3-axis feature amounts of the geomagnetic sensor 215, and the like. The number of feature amounts is not limited to this and may be any number.
- the sensor receiving unit 171 receives a new detection value (for example, 25 Hz) of the sensor unit 210 transmitted from the sensor transmitting unit 216 of the wearable device 200 (step S524).
- the matching unit 176 calculates an average value for one second from multiple detection values (for example, 25 Hz) detected by the sensor unit 210 (step S525).
- the matching unit 176 calculates an eight-dimensional feature quantity from the average value for one second.
- the matching unit 176 performs matching (step S528) by inputting the calculated feature amount to a model using a neural network that has been trained (step S527).
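- A minimal sketch of such a learned matcher is shown below, using scikit-learn's MLPClassifier as a stand-in for the trained neural network; the feature dimensionality, labels, and training data are placeholder assumptions.

```python
# Minimal sketch: train a small neural-network classifier on registered
# 8-dimensional features (label = registered place / environmental state) and
# use it to match a new detection value. Data are random placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Registered features for two places, e.g. "living_room" (focus) and "bedroom" (relax).
X_train = np.vstack([rng.normal(0.0, 0.1, size=(50, 8)),
                     rng.normal(1.0, 0.1, size=(50, 8))])
y_train = np.array(["living_room"] * 50 + ["bedroom"] * 50)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

new_feature = rng.normal(1.0, 0.1, size=(1, 8))   # one-second averaged feature
print(model.predict(new_feature))                  # expected: ['bedroom']
```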
- A model that has been trained at least once may continue to be used.
- learning may be performed at the start of registration (step S502), at the time of additional registration (step S508), or at the time of automatic additional registration (step S511) to update the model.
- As described above, the registration and matching unit 170 associates and registers in advance the detection values of the sensor unit 210 (detection values for registration) with the corresponding environmental states (focus, etc.). Thus, when a new detection value that matches a registered detection value for registration is detected, the information processing system 30 can output content based on the associated and registered environmental state, for example content that allows the user to focus on work or content that allows the user to relax and sleep soundly.
- the present disclosure may have the following configurations.
- (1) An information processing apparatus comprising:
a user state estimation unit that estimates a user state;
an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and
an output control unit that controls output based on the environmental state.
- (2) The information processing apparatus according to (1) above, further comprising:
a user position estimation unit that estimates a user position based on a detection value of a sensor unit of the wearable device worn by the user; and
a location attribute estimation unit that estimates a location attribute, which is an attribute of a location where the user is located, based on the user position,
wherein the user state estimation unit estimates the user state based on the location attribute.
- the user position estimation unit includes: an angle correction unit that calculates a correction value of the azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user; an angle estimation unit that estimates the azimuth angle of the user based on the detection value of the sensor unit and the correction value; and a unit that estimates the user position using the azimuth angle.
- the user position estimation unit estimates a movement route of the user position, and the location attribute estimation unit estimates the location attribute after movement based on the movement route.
- the location attribute estimation unit stores a plurality of movement routes, and estimates the location attribute after movement by matching the estimated movement route against the stored movement routes.
- the matching of movement routes may be performed using DTW (dynamic time warping); a sketch of such route matching is given after this list of configurations.
- the information processing device according to any one of (1) to (11) above further comprises a context acquisition unit that acquires the context of the user, wherein the user state estimation unit estimates the user state based on the acquired context.
- the context includes at least one of location information of the user and terminal information of the information processing device.
- the user state estimation unit estimates the user state based on the detection value of the sensor unit of the wearable device and/or the location attribute.
- the user state indicates a plurality of activity states of the user.
- the output control unit includes a content control unit that reproduces content selected based on the environmental state, and/or a notification control unit that controls the number of notifications to the user based on the environmental state.
- (17) An information processing method comprising: estimating a user state; estimating an environmental state to be presented to the user based on the user state; and controlling output based on the environmental state.
- (18) An information processing program that causes a processor of an information processing device to operate as: a user state estimation unit that estimates a user state; an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and an output control unit that controls output based on the environmental state.
- (19) An information processing system comprising: a wearable device; and an information processing device having a user state estimation unit that estimates a user state of a user wearing the wearable device, an environment estimation unit that estimates an environmental state to be presented to the user based on the user state, and an output control unit that controls output based on the environmental state.
- (20) A non-transitory computer-readable recording medium recording an information processing program that causes a processor of an information processing device to operate as: a user state estimation unit that estimates a user state; an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and an output control unit that controls output based on the environmental state.
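Where the configurations above describe matching an estimated movement route against stored routes, optionally with DTW (dynamic time warping), the following is a minimal sketch of such route matching. The route representation, the function names, and the nearest-route decision rule are assumptions for illustration, not the patented implementation.

```python
import numpy as np

def dtw_distance(route_a: np.ndarray, route_b: np.ndarray) -> float:
    """Dynamic time warping distance between two movement routes.

    Each route is assumed to be an (N, 2) array of estimated user positions;
    this is an illustrative sketch only.
    """
    n, m = len(route_a), len(route_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(route_a[i - 1] - route_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def estimate_destination(observed: np.ndarray, stored_routes: dict) -> str:
    """Match an observed route against stored routes and return the best route's location attribute."""
    return min(stored_routes, key=lambda name: dtw_distance(observed, stored_routes[name]))
```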
- the present disclosure may have the following configurations.
- (1) A content reproduction system comprising:
a wearable device; and
an information processing device having a control circuit that executes a content reproduction control application, the content reproduction control application having:
a user state estimation unit that estimates a user state of a user wearing the wearable device;
an environment estimation unit that estimates an environmental state of the user based on the user state; and
a content control unit that generates, based on the environmental state, a cue for a content providing application that provides content to select content, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue and to reproduce the content.
- (2) The content reproduction system according to (1) above, wherein the control circuit of the information processing device executes a plurality of different content providing applications, and the content control unit selects, based on the environmental state, a predetermined content providing application that is to reproduce the content.
- (3) The content reproduction system according to (1) or (2) above, wherein the control circuit of the information processing device executes a plurality of different content providing applications, the wearable device has an input device, and the content control unit selects a predetermined content providing application that is to reproduce the content based on different operations input by the user to the wearable device.
- (6) The content reproduction system according to any one of (1) to (5) above, wherein the wearable device has a sensor unit, and the content reproduction control application further has:
a user position estimation unit that estimates a user position based on a detection value input from the sensor unit of the wearable device worn by the user; and
a location attribute estimation unit that estimates a location attribute, which is an attribute of a location where the user is located, based on the user position,
wherein the user state estimation unit estimates the user state based on the location attribute.
- (7) The content reproduction system according to (6) above, wherein the sensor unit of the wearable device includes at least one of an acceleration sensor, a gyro sensor, a compass, a biosensor, and a geomagnetic sensor.
- (8) The content reproduction system according to (6) or (7) above, wherein the content providing application selects a plurality of content candidates based on the cue, and selects content to be reproduced from the plurality of candidates based on the detection value input from the sensor unit.
- the content control unit generates, based on the environmental state, a cue for the content providing application to stop reproducing the content, outputs the cue to the content providing application, and causes the content providing application to stop the reproduction of the content based on the cue.
- the content reproduction control application further comprises a context acquisition unit that acquires the context of the user, and the user state estimation unit estimates the user state based on the acquired context.
- the user position estimation unit includes an angle correction unit that calculates a correction value of the azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user, and an angle estimation unit that estimates the azimuth angle of the user based on the detection value of the sensor unit and the correction value, and estimates the user position using the azimuth angle.
- the sensor unit of the wearable device includes an acceleration sensor,
- the angle correction unit calculates the inclination of the user in the Pitch direction and the inclination in the Roll direction from the gravitational acceleration, detected by the acceleration sensor, obtained when the user faces the Roll direction, and calculates the inclination of the user in the Yaw direction from the gravitational acceleration, detected by the acceleration sensor, obtained when the user faces the Pitch direction, together with the inclination in the Pitch direction and the inclination in the Roll direction;
- the inclination in the Pitch direction, the inclination in the Roll direction, and the inclination in the Yaw direction are used as the correction values.
- the content reproduction system according to any one of (1) to (13) above, wherein the content control unit continuously reproduces related content across the same environmental state.
- the content reproduction system further comprises: a database generation unit that associates and registers a detection value for registration detected by the sensor unit and an environmental state to be presented to the user when the detection value for registration is detected; and a matching unit that matches a new detection value detected by the sensor unit against the registered detection value for registration and determines whether or not the difference between the new detection value and the detection value for registration is equal to or less than a matching threshold,
wherein the content control unit generates and outputs the cue based on the environmental state registered in association with the detection value for registration when it is determined that the difference is equal to or less than the matching threshold.
- the matching unit matches the new detection value against the detection value for registration when the user has stopped for a first time, and when the stop time until the user starts moving is longer than a second time, the database generation unit newly registers the new detection values obtained until the user starts moving in association with an environmental state to be presented to the user when those detection values are detected.
- (18) A content reproduction system comprising: a wearable device; and an information processing device having a control circuit that executes a content reproduction control application, the content reproduction control application having:
a database generation unit that associates and registers a detection value for registration detected by a sensor unit of the wearable device worn by the user and an environmental state to be presented to the user when the detection value for registration is detected;
a matching unit that matches a new detection value detected by the sensor unit against the registered detection value for registration and determines whether or not the difference between them is equal to or less than a matching threshold; and
a content control unit that, when it is determined that the difference is equal to or less than the matching threshold, generates a cue for a content providing application that provides content to select content based on the environmental state registered in association with the detection value for registration, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue and to reproduce the content.
- (19) An information processing device comprising a control circuit that executes a content reproduction control application having: a user state estimation unit that estimates a user state of a user wearing a wearable device; an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and a content control unit that generates, based on the environmental state, a cue for a content providing application that provides content to select content, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue and to reproduce the content.
- (20) A content reproduction control application that causes a control circuit of an information processing device to operate as: a user state estimation unit that estimates a user state of a user wearing a wearable device; an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and a content control unit that generates, based on the environmental state, a cue for a content providing application that provides content to select content, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue and to reproduce the content.
- (21) A non-transitory computer-readable recording medium recording a content reproduction control application that causes a control circuit of an information processing device to operate as: a user state estimation unit that estimates a user state of a user wearing a wearable device; an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and a content control unit that generates, based on the environmental state, a cue for a content providing application that provides content to select content, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue and to reproduce the content.
- information processing system
100 information processing device
110 context acquisition unit
111 GPS sensor
112 beacon transceiver
113 terminal information acquisition unit
120 PDR unit
121 angle correction unit
122 angle estimation unit
123 user position estimation unit
130 location estimation unit
140 user state estimation unit
150 environment estimation unit
160 output control unit
161 content control unit
162 notification control unit
200 wearable device
210 sensor unit
211 acceleration sensor
212 gyro sensor
213 compass
214 biosensor
215 geomagnetic sensor
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Acoustics & Sound (AREA)
- Molecular Biology (AREA)
- Physiology (AREA)
- Library & Information Science (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Signal Processing (AREA)
- Automation & Control Theory (AREA)
- Computer Networks & Wireless Communication (AREA)
- Audiology, Speech & Language Pathology (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
a wearable device;
a user state estimation unit that estimates a user state of a user wearing the wearable device;
an environment estimation unit that estimates an environmental state of the user based on the user state;
a content control unit that generates, based on the environmental state, a cue for a content providing application that provides content to select content, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue and to reproduce the content;
a content reproduction control application having the above units; and
an information processing apparatus having a control circuit that executes the content reproduction control application;
are provided.
The content control unit may select, based on the environmental state, a predetermined content providing application that is to reproduce the content.
The wearable device may have an input device, and
the content control unit may select a predetermined content providing application that is to reproduce the content based on different operations input by the user to the wearable device.
The content reproduction control application may further have:
a user position estimation unit that estimates a user position based on a detection value input from a sensor unit of the wearable device worn by the user; and
a location attribute estimation unit that estimates a location attribute, which is an attribute of a location where the user is located, based on the user position,
and the user state estimation unit may estimate the user state based on the location attribute.
A context acquisition unit that acquires a context of the user may further be provided, and
the user state estimation unit may estimate the user state based on the acquired context.
The user position estimation unit may have:
an angle correction unit that calculates a correction value of the azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user; and
an angle estimation unit that estimates the azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user and the correction value,
and may estimate the user position using the azimuth angle.
The angle correction unit may:
calculate the inclination of the user in the Pitch direction and the inclination in the Roll direction from the gravitational acceleration, which is the detection value of the acceleration sensor, obtained when the user faces the Roll direction;
calculate the inclination of the user in the Yaw direction from the gravitational acceleration, which is the detection value of the acceleration sensor, obtained when the user faces the Pitch direction, together with the inclination in the Pitch direction and the inclination in the Roll direction; and
use the inclination in the Pitch direction, the inclination in the Roll direction, and the inclination in the Yaw direction as the correction values.
According to the present embodiment, the correction value of the azimuth angle of the user can be calculated using only the acceleration sensor. This allows the method to be implemented even in environments with few onboard sensors, and makes low cost, low power consumption, and miniaturization achievable.
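As a rough illustration of how an accelerometer-only correction could be computed, a sketch follows. The tilt formulas are the standard gravity-based approximation and the correction step is simplified; both are assumptions for illustration and not the exact procedure described in the specification.

```python
import numpy as np

def tilt_from_gravity(accel: np.ndarray) -> tuple:
    """Estimate the Pitch and Roll inclinations (radians) from a gravity-dominated
    accelerometer sample [ax, ay, az]; the standard tilt approximation, shown only
    as an illustration of what the angle correction unit could compute."""
    ax, ay, az = accel
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    roll = np.arctan2(ay, az)
    return pitch, roll

def apply_correction(raw_azimuth: float, yaw_correction: float) -> float:
    """Apply the Yaw correction value to the raw azimuth estimate and wrap to [0, 2*pi)."""
    return (raw_azimuth - yaw_correction) % (2 * np.pi)
```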
The content control unit may continuously reproduce related content across the same environmental state.
There may further be provided:
a database generation unit that associates and registers a detection value for registration detected by the sensor unit and an environmental state to be presented to the user when the detection value for registration is detected; and
a matching unit that matches a new detection value detected by the sensor unit against the registered detection value for registration and determines whether or not the difference between the new detection value and the detection value for registration is equal to or less than a matching threshold,
and the content control unit may generate and output the cue based on the environmental state registered in association with the detection value for registration when it is determined that the difference is equal to or less than the matching threshold.
The matching unit may match the new detection value against the detection value for registration when the user has stopped for a first time, and the database generation unit may, when the stop time until the user starts moving is longer than a second time, newly register the new detection values obtained until the user starts moving in association with an environmental state to be presented to the user when the new detection values are detected.
The database generation unit may register, as the detection value for registration, an average value of a plurality of detection values detected by the sensor unit within a predetermined time.
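A minimal sketch of this stop-and-register behaviour is given below; the time thresholds, the data layout, and the helper names are assumptions for illustration only.

```python
from dataclasses import dataclass, field

FIRST_STOP_S = 5.0    # assumed "first time" before matching is attempted
SECOND_STOP_S = 60.0  # assumed "second time" before automatic registration

@dataclass
class RegistrationDatabase:
    entries: list = field(default_factory=list)  # (registration value, environmental state)

    def register_average(self, values: list, environment: str) -> None:
        """Register the average of the detection values collected within the window."""
        self.entries.append((sum(values) / len(values), environment))

def on_user_stopped(stop_seconds: float, new_values: list,
                    db: RegistrationDatabase, matcher, current_environment: str):
    """Match after the first stop threshold; auto-register if the stop lasts longer."""
    matched = None
    if stop_seconds >= FIRST_STOP_S:
        matched = matcher(new_values, db.entries)            # look for a registered state
    if matched is None and stop_seconds > SECOND_STOP_S:
        db.register_average(new_values, current_environment)  # automatic additional registration
    return matched
```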
A content reproduction system according to one embodiment of the present disclosure includes:
a wearable device;
a database generation unit that associates and registers a detection value for registration detected by a sensor unit of the wearable device worn by a user and an environmental state to be presented to the user when the detection value for registration is detected;
a matching unit that matches a new detection value detected by the sensor unit against the registered detection value for registration and determines whether or not the difference between the new detection value and the detection value for registration is equal to or less than a matching threshold;
a content control unit that, when it is determined that the difference is equal to or less than the matching threshold, generates a cue for a content providing application that provides content to select content based on the environmental state registered in association with the detection value for registration, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue and to reproduce the content;
a content reproduction control application having the above units; and
an information processing apparatus having a control circuit that executes the content reproduction control application.
There are also provided:
a user state estimation unit that estimates a user state of a user wearing a wearable device;
an environment estimation unit that estimates an environmental state to be presented to the user based on the user state;
a content control unit that generates, based on the environmental state, a cue for a content providing application that provides content to select content, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue and to reproduce the content;
a content reproduction control application having the above units; and
a control circuit that executes the content reproduction control application.
The control circuit of the information processing apparatus is caused to operate as:
a user state estimation unit that estimates a user state of a user wearing a wearable device;
an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and
a content control unit that generates, based on the environmental state, a cue for a content providing application that provides content to select content, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue and to reproduce the content.
For example, the user puts on the wearable device 200 and starts working in the living room. After a while the user goes to the toilet, washes their hands at the washbasin, and returns to their seat. After another while, the user moves to the kitchen, gets a drink, and returns to the living room. The movement patterns here are as follows: living room to toilet (route (3)); toilet to living room (route (4)); living room to kitchen (route (5)); kitchen to living room (route (6)).
a user state estimation unit that estimates a user state;
an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and
an output control unit that controls output based on the environmental state;
An information processing apparatus comprising the above.
(2)
The information processing apparatus according to (1) above, further comprising:
a user position estimation unit that estimates a user position based on a detection value of a sensor unit of a wearable device worn by the user; and
a location attribute estimation unit that estimates a location attribute, which is an attribute of a location where the user is located, based on the user position,
wherein the user state estimation unit estimates the user state based on the location attribute.
(3)
The information processing apparatus according to (2) above,
wherein the user position estimation unit estimates the user position using PDR (Pedestrian Dead Reckoning).
(4)
The information processing apparatus according to (2) or (3) above,
wherein the environment estimation unit estimates the environmental state based on the location attribute.
(5)
The information processing apparatus according to any one of (2) to (4) above,
wherein the sensor unit of the wearable device includes at least one of an acceleration sensor, a gyro sensor, a compass, a biosensor, and a geomagnetic sensor.
(6)
The information processing apparatus according to any one of (3) to (5) above,
wherein the user position estimation unit includes:
an angle correction unit that calculates a correction value of the azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user;
an angle estimation unit that estimates the azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user and the correction value; and
a user position estimation unit that estimates the user position using the azimuth angle.
(7)
The information processing apparatus according to any one of (3) to (6) above,
wherein the user position estimation unit estimates a movement route of the user position, and
the location attribute estimation unit estimates the location attribute after movement based on the movement route.
(8)
The information processing apparatus according to (7) above,
wherein the location attribute estimation unit holds a plurality of movement routes, and estimates the location attribute after movement by matching the estimated movement route against the held movement routes.
(9)
The information processing apparatus according to (8) above,
wherein the location attribute estimation unit outputs a warning when the matching fails a predetermined number of times.
(10)
The information processing apparatus according to (8) or (9) above,
wherein the location attribute estimation unit performs the matching using DTW (dynamic time warping).
(11)
The information processing apparatus according to any one of (1) to (10) above,
wherein the location attribute estimation unit estimates the location attribute by judging the staying time of the user at the location where the user is.
(12)
The information processing apparatus according to any one of (1) to (11) above, further comprising
a context acquisition unit that acquires a context of the user,
wherein the user state estimation unit estimates the user state based on the acquired context.
(13)
The information processing apparatus according to (12) above,
wherein the context includes at least one of position information of the user and terminal information of the information processing apparatus.
(14)
The information processing apparatus according to any one of (1) to (13) above,
wherein the user state estimation unit estimates the user state based on the detection value of the sensor unit of the wearable device and/or the location attribute.
(15)
The information processing apparatus according to any one of (1) to (14) above,
wherein the user state indicates a plurality of activity states of the user.
(16)
The information processing apparatus according to any one of (1) to (15) above,
wherein the output control unit includes:
a content control unit that reproduces content selected based on the environmental state, and/or
a notification control unit that controls the number of notifications to the user based on the environmental state.
(17)
An information processing method comprising:
estimating a user state;
estimating an environmental state to be presented to the user based on the user state; and
controlling output based on the environmental state.
(18)
An information processing program that causes a processor of an information processing apparatus to operate as:
a user state estimation unit that estimates a user state;
an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and
an output control unit that controls output based on the environmental state.
(19)
An information processing system comprising:
a wearable device; and
an information processing apparatus having:
a user state estimation unit that estimates a user state of a user wearing the wearable device;
an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and
an output control unit that controls output based on the environmental state.
(20)
A non-transitory computer-readable recording medium recording an information processing program that causes a processor of an information processing apparatus to operate as:
a user state estimation unit that estimates a user state;
an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and
an output control unit that controls output based on the environmental state.
a wearable device; and
an information processing apparatus having a control circuit that executes
a content reproduction control application having:
a user state estimation unit that estimates a user state of a user wearing the wearable device;
an environment estimation unit that estimates an environmental state of the user based on the user state; and
a content control unit that generates, based on the environmental state, a cue for a content providing application that provides content to select content, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue and to reproduce the content;
A content reproduction system comprising the above.
(2)
The content reproduction system according to (1) above,
wherein the control circuit of the information processing apparatus executes a plurality of different content providing applications, and
the content control unit selects, based on the environmental state, a predetermined content providing application that is to reproduce the content.
(3)
The content reproduction system according to (1) or (2) above,
wherein the control circuit of the information processing apparatus executes a plurality of different content providing applications,
the wearable device has an input device, and
the content control unit selects a predetermined content providing application that is to reproduce the content based on different operations input by the user to the wearable device.
(4)
The content reproduction system according to any one of (1) to (3) above,
wherein the control circuit of the information processing apparatus executes a preset application that assigns the plurality of different operations to selection of the plurality of different content providing applications.
(5)
The content reproduction system according to (4) above,
wherein the preset application is included in the content reproduction control application.
(6)
The content reproduction system according to any one of (1) to (5) above,
wherein the wearable device has a sensor unit, and
the content reproduction control application further has:
a user position estimation unit that estimates a user position based on a detection value input from the sensor unit of the wearable device worn by the user; and
a location attribute estimation unit that estimates a location attribute, which is an attribute of a location where the user is located, based on the user position,
wherein the user state estimation unit estimates the user state based on the location attribute.
(7)
The content reproduction system according to (6) above,
wherein the sensor unit of the wearable device includes at least one of an acceleration sensor, a gyro sensor, a compass, a biosensor, and a geomagnetic sensor.
(8)
The content reproduction system according to (6) or (7) above,
wherein the content providing application selects a plurality of content candidates based on the cue, and selects content to be reproduced from the plurality of candidates based on the detection value input from the sensor unit.
(9)
The content reproduction system according to any one of (6) to (8) above,
wherein the content providing application selects, during reproduction of content, an attribute of content to be reproduced based on the detection value input from the sensor unit, and reproduces the selected content.
(10)
The content reproduction system according to any one of (1) to (9) above,
wherein the content control unit generates, based on the environmental state, a cue for the content providing application to stop the reproduction of the content, outputs the cue to the content providing application, and causes the content providing application to stop the reproduction of the content based on the cue.
(11)
The content reproduction system according to any one of (1) to (10) above,
wherein the content reproduction control application further comprises a context acquisition unit that acquires a context of the user, and
the user state estimation unit estimates the user state based on the acquired context.
(12)
The content reproduction system according to (6) above,
wherein the user position estimation unit includes:
an angle correction unit that calculates a correction value of the azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user; and
an angle estimation unit that estimates the azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user and the correction value,
and estimates the user position using the azimuth angle.
(13)
The content reproduction system according to (12) above,
wherein the sensor unit of the wearable device includes an acceleration sensor, and
the angle correction unit:
calculates the inclination of the user in the Pitch direction and the inclination in the Roll direction from the gravitational acceleration, which is the detection value of the acceleration sensor, obtained when the user faces the Roll direction;
calculates the inclination of the user in the Yaw direction from the gravitational acceleration, which is the detection value of the acceleration sensor, obtained when the user faces the Pitch direction, together with the inclination in the Pitch direction and the inclination in the Roll direction; and
uses the inclination in the Pitch direction, the inclination in the Roll direction, and the inclination in the Yaw direction as the correction values.
(14)
The content reproduction system according to any one of (1) to (13) above,
wherein the content control unit continuously reproduces related content across the same environmental state.
(15)
The content reproduction system according to (7) above, further comprising:
a database generation unit that associates and registers a detection value for registration detected by the sensor unit and an environmental state to be presented to the user when the detection value for registration is detected; and
a matching unit that matches a new detection value detected by the sensor unit against the registered detection value for registration and determines whether or not the difference between the new detection value and the detection value for registration is equal to or less than a matching threshold,
wherein the content control unit generates and outputs the cue based on the environmental state registered in association with the detection value for registration when it is determined that the difference is equal to or less than the matching threshold.
(16)
The content reproduction system according to (15) above,
wherein the matching unit matches the new detection value against the detection value for registration when the user has stopped for a first time, and
the database generation unit, when the stop time until the user starts moving is longer than a second time, newly registers the new detection values obtained until the user starts moving in association with an environmental state to be presented to the user when the new detection values are detected.
(17)
The content reproduction system according to (16) above,
wherein the database generation unit registers, as the detection value for registration, an average value of a plurality of detection values detected by the sensor unit within a predetermined time.
(18)
A content reproduction system comprising:
a wearable device; and
an information processing apparatus having a control circuit that executes
a content reproduction control application having:
a database generation unit that associates and registers a detection value for registration detected by a sensor unit of the wearable device worn by a user and an environmental state to be presented to the user when the detection value for registration is detected;
a matching unit that matches a new detection value detected by the sensor unit against the registered detection value for registration and determines whether or not the difference between the new detection value and the detection value for registration is equal to or less than a matching threshold; and
a content control unit that, when it is determined that the difference is equal to or less than the matching threshold, generates a cue for a content providing application that provides content to select content based on the environmental state registered in association with the detection value for registration, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue and to reproduce the content.
(19)
An information processing apparatus comprising a control circuit that executes
a content reproduction control application having:
a user state estimation unit that estimates a user state of a user wearing a wearable device;
an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and
a content control unit that generates, based on the environmental state, a cue for a content providing application that provides content to select content, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue and to reproduce the content.
(20)
A content reproduction control application that causes a control circuit of an information processing apparatus to operate as:
a user state estimation unit that estimates a user state of a user wearing a wearable device;
an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and
a content control unit that generates, based on the environmental state, a cue for a content providing application that provides content to select content, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue and to reproduce the content.
(21)
A non-transitory computer-readable recording medium recording a content reproduction control application that causes a control circuit of an information processing apparatus to operate as:
a user state estimation unit that estimates a user state of a user wearing a wearable device;
an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and
a content control unit that generates, based on the environmental state, a cue for a content providing application that provides content to select content, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue and to reproduce the content.
100 information processing apparatus
110 context acquisition unit
111 GPS sensor
112 beacon transceiver
113 terminal information acquisition unit
120 PDR unit
121 angle correction unit
122 angle estimation unit
123 user position estimation unit
130 location estimation unit
140 user state estimation unit
150 environment estimation unit
160 output control unit
161 content control unit
162 notification control unit
200 wearable device
210 sensor unit
211 acceleration sensor
212 gyro sensor
213 compass
214 biosensor
215 geomagnetic sensor
Claims (20)
- A content reproduction system comprising:
a wearable device; and
an information processing apparatus having a control circuit that executes
a content reproduction control application having:
a user state estimation unit that estimates a user state of a user wearing the wearable device;
an environment estimation unit that estimates an environmental state of the user based on the user state; and
a content control unit that generates, based on the environmental state, a cue for a content providing application that provides content to select content, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue and to reproduce the content.
- The content reproduction system according to claim 1,
wherein the control circuit of the information processing apparatus executes a plurality of different content providing applications, and
the content control unit selects, based on the environmental state, a predetermined content providing application that is to reproduce the content.
- The content reproduction system according to claim 1,
wherein the control circuit of the information processing apparatus executes a plurality of different content providing applications,
the wearable device has an input device, and
the content control unit selects a predetermined content providing application that is to reproduce the content based on different operations input by the user to the wearable device.
- The content reproduction system according to claim 1,
wherein the control circuit of the information processing apparatus executes a preset application that assigns a plurality of the different operations to selection of the plurality of different content providing applications.
- The content reproduction system according to claim 4,
wherein the preset application is included in the content reproduction control application.
- The content reproduction system according to claim 1,
wherein the wearable device has a sensor unit, and
the content reproduction control application further has:
a user position estimation unit that estimates a user position based on a detection value input from the sensor unit of the wearable device worn by the user; and
a location attribute estimation unit that estimates a location attribute, which is an attribute of a location where the user is located, based on the user position,
wherein the user state estimation unit estimates the user state based on the location attribute.
- The content reproduction system according to claim 6,
wherein the sensor unit of the wearable device includes at least one of an acceleration sensor, a gyro sensor, a compass, a biosensor, and a geomagnetic sensor.
- The content reproduction system according to claim 6,
wherein the content providing application selects a plurality of content candidates based on the cue, and selects content to be reproduced from the plurality of candidates based on the detection value input from the sensor unit.
- The content reproduction system according to claim 6,
wherein the content providing application selects, during reproduction of content, an attribute of content to be reproduced based on the detection value input from the sensor unit, and reproduces the selected content.
- The content reproduction system according to claim 1,
wherein the content control unit generates, based on the environmental state, a cue for the content providing application to stop the reproduction of the content, outputs the cue to the content providing application, and causes the content providing application to stop the reproduction of the content based on the cue.
- The content reproduction system according to claim 1,
wherein the content reproduction control application further comprises a context acquisition unit that acquires a context of the user, and
the user state estimation unit estimates the user state based on the acquired context.
- The content reproduction system according to claim 6,
wherein the user position estimation unit includes:
an angle correction unit that calculates a correction value of the azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user; and
an angle estimation unit that estimates the azimuth angle of the user based on the detection value of the sensor unit of the wearable device worn by the user and the correction value,
and estimates the user position using the azimuth angle.
- The content reproduction system according to claim 12,
wherein the sensor unit of the wearable device includes an acceleration sensor, and
the angle correction unit:
calculates the inclination of the user in the Pitch direction and the inclination in the Roll direction from the gravitational acceleration, which is the detection value of the acceleration sensor, obtained when the user faces the Roll direction;
calculates the inclination of the user in the Yaw direction from the gravitational acceleration, which is the detection value of the acceleration sensor, obtained when the user faces the Pitch direction, together with the inclination in the Pitch direction and the inclination in the Roll direction; and
uses the inclination in the Pitch direction, the inclination in the Roll direction, and the inclination in the Yaw direction as the correction values.
- The content reproduction system according to claim 1,
wherein the content control unit continuously reproduces related content across the same environmental state.
- The content reproduction system according to claim 7, further comprising:
a database generation unit that associates and registers a detection value for registration detected by the sensor unit and an environmental state to be presented to the user when the detection value for registration is detected; and
a matching unit that matches a new detection value detected by the sensor unit against the registered detection value for registration and determines whether or not the difference between the new detection value and the detection value for registration is equal to or less than a matching threshold,
wherein the content control unit generates and outputs the cue based on the environmental state registered in association with the detection value for registration when it is determined that the difference is equal to or less than the matching threshold.
- The content reproduction system according to claim 15,
wherein the matching unit matches the new detection value against the detection value for registration when the user has stopped for a first time, and
the database generation unit, when the stop time until the user starts moving is longer than a second time, newly registers the new detection values obtained until the user starts moving in association with an environmental state to be presented to the user when the new detection values are detected.
- The content reproduction system according to claim 16,
wherein the database generation unit registers, as the detection value for registration, an average value of a plurality of detection values detected by the sensor unit within a predetermined time.
- A content reproduction system comprising:
a wearable device; and
an information processing apparatus having a control circuit that executes
a content reproduction control application having:
a database generation unit that associates and registers a detection value for registration detected by a sensor unit of the wearable device worn by a user and an environmental state to be presented to the user when the detection value for registration is detected;
a matching unit that matches a new detection value detected by the sensor unit against the registered detection value for registration and determines whether or not the difference between the new detection value and the detection value for registration is equal to or less than a matching threshold; and
a content control unit that, when it is determined that the difference is equal to or less than the matching threshold, generates a cue for a content providing application that provides content to select content based on the environmental state registered in association with the detection value for registration, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue and to reproduce the content.
- An information processing apparatus comprising a control circuit that executes
a content reproduction control application having:
a user state estimation unit that estimates a user state of a user wearing a wearable device;
an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and
a content control unit that generates, based on the environmental state, a cue for a content providing application that provides content to select content, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue and to reproduce the content.
- A content reproduction control application that causes a control circuit of an information processing apparatus to operate as:
a user state estimation unit that estimates a user state of a user wearing a wearable device;
an environment estimation unit that estimates an environmental state to be presented to the user based on the user state; and
a content control unit that generates, based on the environmental state, a cue for a content providing application that provides content to select content, outputs the cue to the content providing application, and causes the content providing application to select content based on the cue and to reproduce the content.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2023511341A JPWO2022210652A1 (ja) | 2021-03-30 | 2022-03-29 | |
| US18/551,949 US20240176818A1 (en) | 2021-03-30 | 2022-03-29 | Content playback system, information processing apparatus, and content playback controlling application |
Applications Claiming Priority (10)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2021056342 | 2021-03-30 | ||
| JP2021-056342 | 2021-03-30 | ||
| PCT/JP2021/021261 WO2022208906A1 (ja) | 2021-03-30 | 2021-06-03 | コンテンツ再生システム、情報処理装置及びコンテンツ再生制御アプリケーション |
| JPPCT/JP2021/021261 | 2021-06-03 | ||
| JPPCT/JP2021/043551 | 2021-11-29 | ||
| PCT/JP2021/043551 WO2022209000A1 (ja) | 2021-03-30 | 2021-11-29 | コンテンツ再生システム、情報処理装置及びコンテンツ再生制御アプリケーション |
| PCT/JP2022/007708 WO2022209474A1 (ja) | 2021-03-30 | 2022-02-24 | コンテンツ再生システム、情報処理装置及びコンテンツ再生制御アプリケーション |
| JPPCT/JP2022/007708 | 2022-02-24 | ||
| JPPCT/JP2022/013225 | 2022-03-22 | ||
| PCT/JP2022/013225 WO2022210113A1 (ja) | 2021-03-30 | 2022-03-22 | コンテンツ再生システム、情報処理装置及びコンテンツ再生制御アプリケーション |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2022210652A1 true WO2022210652A1 (ja) | 2022-10-06 |
Family
ID=83455223
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2022/013225 Ceased WO2022210113A1 (ja) | 2021-03-30 | 2022-03-22 | コンテンツ再生システム、情報処理装置及びコンテンツ再生制御アプリケーション |
| PCT/JP2022/015307 Ceased WO2022210652A1 (ja) | 2021-03-30 | 2022-03-29 | コンテンツ再生システム、情報処理装置及びコンテンツ再生制御アプリケーション |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2022/013225 Ceased WO2022210113A1 (ja) | 2021-03-30 | 2022-03-22 | コンテンツ再生システム、情報処理装置及びコンテンツ再生制御アプリケーション |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US20240176818A1 (ja) |
| JP (1) | JPWO2022210652A1 (ja) |
| WO (2) | WO2022210113A1 (ja) |
Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2006008790A1 (ja) * | 2004-07-15 | 2006-01-26 | C & N Inc | 携帯端末装置 |
| JP2011141492A (ja) * | 2010-01-08 | 2011-07-21 | Nec Corp | 音楽配信システム、音楽受信端末、音楽配信方法およびプログラム |
| JP2011259259A (ja) * | 2010-06-10 | 2011-12-22 | Alpine Electronics Inc | 電子機器および操作キーの割当方法 |
| JP2012212234A (ja) * | 2011-03-30 | 2012-11-01 | Kddi Corp | 自律測位に用いる重力ベクトルを補正する携帯装置、プログラム及び方法 |
| WO2014181380A1 (ja) * | 2013-05-09 | 2014-11-13 | 株式会社ソニー・コンピュータエンタテインメント | 情報処理装置およびアプリケーション実行方法 |
| JP2015152559A (ja) * | 2014-02-19 | 2015-08-24 | 株式会社リコー | 慣性装置、制御方法及びプログラム |
| JP2018078398A (ja) * | 2016-11-07 | 2018-05-17 | 株式会社ネイン | 多機能イヤホンによる自律型アシスタントシステム |
| WO2018179644A1 (ja) * | 2017-03-27 | 2018-10-04 | ソニー株式会社 | 情報処理装置、情報処理方法及び記録媒体 |
| JP2019158933A (ja) * | 2018-03-08 | 2019-09-19 | シャープ株式会社 | 音声再生機器、制御装置および制御方法 |
| WO2020208894A1 (ja) * | 2019-04-12 | 2020-10-15 | ソニー株式会社 | 情報処理装置、及び情報処理方法 |
| JP2020201138A (ja) * | 2019-06-11 | 2020-12-17 | 本田技研工業株式会社 | 情報処理装置、情報処理方法、およびプログラム |
Family Cites Families (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2718931A4 (en) * | 2011-06-10 | 2014-11-05 | Aliphcom | MEDIA DEVICE, APPLICATION AND CONTENT MANAGEMENT THROUGH SENSOR INPUT |
| WO2015035098A2 (en) * | 2013-09-04 | 2015-03-12 | Zero360, Inc. | Processing system and method |
| EP2975472A1 (fr) * | 2014-07-15 | 2016-01-20 | The Swatch Group Research and Development Ltd. | Dispositif portable incorporant un dispositif de mesure de la température ambiante |
| US10951973B2 (en) * | 2017-03-09 | 2021-03-16 | Huawei Technologies Co., Ltd. | Headset, terminal, and control method |
| WO2020230458A1 (ja) * | 2019-05-16 | 2020-11-19 | ソニー株式会社 | 情報処理装置、情報処理方法、及びプログラム |
| US10754614B1 (en) * | 2019-09-23 | 2020-08-25 | Sonos, Inc. | Mood detection and/or influence via audio playback devices |
-
2022
- 2022-03-22 WO PCT/JP2022/013225 patent/WO2022210113A1/ja not_active Ceased
- 2022-03-29 JP JP2023511341A patent/JPWO2022210652A1/ja active Pending
- 2022-03-29 US US18/551,949 patent/US20240176818A1/en not_active Abandoned
- 2022-03-29 WO PCT/JP2022/015307 patent/WO2022210652A1/ja not_active Ceased
Patent Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2006008790A1 (ja) * | 2004-07-15 | 2006-01-26 | C & N Inc | 携帯端末装置 |
| JP2011141492A (ja) * | 2010-01-08 | 2011-07-21 | Nec Corp | 音楽配信システム、音楽受信端末、音楽配信方法およびプログラム |
| JP2011259259A (ja) * | 2010-06-10 | 2011-12-22 | Alpine Electronics Inc | 電子機器および操作キーの割当方法 |
| JP2012212234A (ja) * | 2011-03-30 | 2012-11-01 | Kddi Corp | 自律測位に用いる重力ベクトルを補正する携帯装置、プログラム及び方法 |
| WO2014181380A1 (ja) * | 2013-05-09 | 2014-11-13 | 株式会社ソニー・コンピュータエンタテインメント | 情報処理装置およびアプリケーション実行方法 |
| JP2015152559A (ja) * | 2014-02-19 | 2015-08-24 | 株式会社リコー | 慣性装置、制御方法及びプログラム |
| JP2018078398A (ja) * | 2016-11-07 | 2018-05-17 | 株式会社ネイン | 多機能イヤホンによる自律型アシスタントシステム |
| WO2018179644A1 (ja) * | 2017-03-27 | 2018-10-04 | ソニー株式会社 | 情報処理装置、情報処理方法及び記録媒体 |
| JP2019158933A (ja) * | 2018-03-08 | 2019-09-19 | シャープ株式会社 | 音声再生機器、制御装置および制御方法 |
| WO2020208894A1 (ja) * | 2019-04-12 | 2020-10-15 | ソニー株式会社 | 情報処理装置、及び情報処理方法 |
| JP2020201138A (ja) * | 2019-06-11 | 2020-12-17 | 本田技研工業株式会社 | 情報処理装置、情報処理方法、およびプログラム |
Also Published As
| Publication number | Publication date |
|---|---|
| JPWO2022210652A1 (ja) | 2022-10-06 |
| US20240176818A1 (en) | 2024-05-30 |
| WO2022210113A1 (ja) | 2022-10-06 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10915291B2 (en) | User-interfaces for audio-augmented-reality | |
| US11343613B2 (en) | Prioritizing delivery of location-based personal audio | |
| CN109643158B (zh) | 使用多模态信号分析进行命令处理 | |
| JP3834848B2 (ja) | 音情報提供装置、及び音情報選択方法 | |
| US10869154B2 (en) | Location-based personal audio | |
| US20200178017A1 (en) | Directional audio selection | |
| US20200142667A1 (en) | Spatialized virtual personal assistant | |
| CN105190480B (zh) | 信息处理设备和信息处理方法 | |
| JP2019220194A (ja) | 情報処理装置、情報処理方法及びプログラム | |
| US11016723B2 (en) | Multi-application control of augmented reality audio | |
| JP2023503219A (ja) | 複数のデータソースを用いた発話転写 | |
| US20090058611A1 (en) | Wearable device | |
| US10820132B2 (en) | Voice providing device and voice providing method | |
| EP2614631A1 (en) | User device, server, and operating conditions setting system | |
| KR102855455B1 (ko) | 기억 메트릭에 기초한 컨텐츠 생성, 저장 및 제시 | |
| US20200280814A1 (en) | Augmented reality audio playback control | |
| WO2022210652A1 (ja) | コンテンツ再生システム、情報処理装置及びコンテンツ再生制御アプリケーション | |
| WO2022210649A1 (ja) | 情報処理装置、情報処理方法、情報処理プログラム及び情報処理システム | |
| WO2022209474A1 (ja) | コンテンツ再生システム、情報処理装置及びコンテンツ再生制御アプリケーション | |
| WO2022209473A1 (ja) | 情報処理装置、情報処理方法、情報処理プログラム及び情報処理システム | |
| WO2022208906A1 (ja) | コンテンツ再生システム、情報処理装置及びコンテンツ再生制御アプリケーション | |
| WO2022208999A1 (ja) | 情報処理装置、情報処理方法、情報処理プログラム及び情報処理システム | |
| WO2022209000A1 (ja) | コンテンツ再生システム、情報処理装置及びコンテンツ再生制御アプリケーション | |
| US11936718B2 (en) | Information processing device and information processing method | |
| WO2023168064A1 (en) | Composing electronic messages based on speech input |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22780863 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2023511341 Country of ref document: JP |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 18551949 Country of ref document: US |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 22780863 Country of ref document: EP Kind code of ref document: A1 |