WO2015145541A1 - Video display device
- Publication number
- WO2015145541A1 (PCT/JP2014/058074)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- unit
- user
- display device
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/20—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/02—Improving the quality of display appearance
- G09G2320/0261—Improving the quality of display appearance in the context of movement of objects on the screen or movement of the observer relative to the screen
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/04—Changes in size, position or resolution of an image
- G09G2340/0407—Resolution change, inclusive of the use of different resolutions for different screen areas
- G09G2340/0435—Change or adaptation of the frame rate of the video stream
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2354/00—Aspects of interface with display user
Definitions
- the present invention relates to a video display device, and more particularly to a video display device suitable for use by a user while moving.
- HMD (head-mounted display)
- video display devices such as HMDs are based on the premise that video is always displayed regardless of the user's environment and movement conditions.
- a symptom of discomfort similar to motion sickness
- the user may feel uncomfortable and become less alert to the surrounding environment.
- Patent Document 1 discloses an HMD that secures the user's field of view by providing movement detection means that detects the movement of the user and stops image output (display) when the movement of the user is detected.
- in Patent Document 1, since the image display of the HMD is stopped while the user is moving, the user cannot obtain any information from the HMD, which lowers convenience.
- the magnitude of the above-mentioned discomfort and the reduction in attention experienced by a moving user viewing video depend on the type of video displayed, for example the degree of motion in the video. In other words, the user's experience differs greatly between a moving image and a still image, so it is not always necessary to stop the display of the image entirely.
- an object of the present invention is to provide a video display device that displays video without causing discomfort to a moving user or reducing the user's attention to the surroundings.
- the present invention is a video display device usable while the user moves, and includes: an information amount changing unit that changes the information amount per unit time of the video provided from a video providing unit; a video display unit that displays the video whose information amount has been changed; an imaging unit that captures subjects around the user; a movement detection unit that detects the movement state of the user based on the images captured by the imaging unit; and an information amount determination unit that determines, according to the movement state detected by the movement detection unit, the change value of the information amount per unit time to be applied by the information amount changing unit.
- according to the detection result of the movement detection unit, the information amount determination unit sets the information amount per unit time of the video to a smaller value when the user is moving than when the user is not moving.
- this provides a video display device that displays video without causing discomfort to a moving user and without reducing the user's attention.
- FIG. 1 is a block diagram of a video display device according to Embodiment 1.
- FIG. 2 is a diagram illustrating an example of an image captured by the imaging unit 3.
- FIG. 3 is a diagram showing an example of the relationship between the movement state and the frame rate.
- FIG. 4 is a diagram explaining a specific example of frame rate conversion.
- FIG. 9 is a block diagram of a video display apparatus according to a third embodiment.
- FIG. 10 is a block diagram of a video display apparatus according to a fourth embodiment.
- an HMD (head-mounted display) is described as an example of the video display device.
- the user wears the HMD on his or her head.
- FIG. 1 is a block diagram of a video display apparatus according to the first embodiment.
- the video display device (HMD) 10 includes a video receiving unit 2 that receives the video provided from the video providing unit 1, an information amount changing unit 6 that changes the information amount of the received video, and a video display unit 8 that displays the video changed by the information amount changing unit 6.
- the video display device 10 further includes an imaging unit 3 that captures surrounding subjects, a movement detection unit 4 that detects the movement state of the user based on images acquired by the imaging unit 3, and an information amount determination unit 5 that determines the information amount per unit time of the video displayed on the video display unit 8 according to the movement state detected by the movement detection unit 4.
- the information amount changing unit 6 converts the video received by the video receiving unit 2 to the information amount per unit time determined by the information amount determination unit 5.
- it has a mounting part for mounting the HMD 10 on the user's head.
- the video providing unit 1 is a device that provides video to be displayed.
- the video providing unit 1 provides video from a storage device disposed in an external data center.
- the video providing unit 1 may be built in the HMD 10 and may be a storage element such as a flash memory or an imaging element such as a camera.
- the imaging unit 3 is attached to the HMD 10 and captures subjects around the user. For example, it is attached to the front surface of the HMD 10 so that it can image subjects in the forward direction, that is, the user's direction of movement.
- the movement detection unit 4 detects the movement state of the user based on the images acquired by the imaging unit 3; that is, it determines whether the user is moving and, if so, estimates the moving speed.
- FIG. 2 is a diagram illustrating an example of an image captured by the imaging unit 3, and shows a captured image 21 at time T1 and a captured image 22 at time T2.
- the imaging unit 3 images a subject in the forward direction of the user at a predetermined time interval such as every second, and sends the captured image to the movement detection unit 4.
- the movement detection unit 4 recognizes an arbitrary subject existing around the user or a part thereof as a feature around the user from the captured images at each time.
- “tree” and “building” are recognized as feature objects 21a, 21b, 22a, and 22b.
- it is not necessary to recognize the type or name of the feature (for example, a tree or a building);
- the feature only needs to be composed of a specific graphic element (for example, a line or a polygon).
- the movement detection unit 4 tracks the on-screen position of each feature across a plurality of consecutive captured images. When the features move radially outward from near the center of the screen over time, it determines that the user is moving. In the example of FIG. 2, feature 21a moves toward the lower left corner of the screen (22a) and feature 21b moves toward the lower right corner (22b), so the user can be determined to be moving. Conversely, if the features do not move toward the four corners, the user is determined to be stationary.
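The radial-motion test described above can be sketched as follows. This is an illustrative Python sketch, not code from the patent: the function name, the track representation, and the 0.7 agreement threshold are all assumptions.

```python
from math import hypot

def is_user_moving(center, tracks, min_outward=0.7):
    """tracks: list of ((x1, y1), (x2, y2)) feature positions at times T1, T2.
    The user is judged to be moving when most features move radially
    outward, i.e. farther from the screen center over time."""
    cx, cy = center
    outward = 0
    for (x1, y1), (x2, y2) in tracks:
        d1 = hypot(x1 - cx, y1 - cy)
        d2 = hypot(x2 - cx, y2 - cy)
        if d2 > d1:          # feature drifted toward a screen corner
            outward += 1
    return bool(tracks) and outward / len(tracks) >= min_outward

# Features 21a/21b moving toward the lower corners, as in FIG. 2:
tracks = [((300, 240), (120, 400)), ((340, 240), (520, 410))]
print(is_user_moving((320, 240), tracks))  # True: both move away from center
```

A real implementation would obtain the tracks with an optical-flow or feature-matching method; this sketch only shows the decision rule.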
- the attachment position of the imaging unit 3 is not limited to the front surface.
- depending on the attachment position, the criterion for determining the user's movement state differs. For example, when imaging to the user's right, features move from left to right across the screen, and when imaging to the user's left, features move from right to left; such motion may likewise be used to determine that the user is moving.
- the imaging unit 3 may be attached so as to image a subject in the user's foot direction.
- the movement detection unit 4 recognizes, for example, the user's limbs or road surface as a characteristic object, and determines whether or not the user is moving from those movements.
- when the user is moving, the movement detection unit 4 also estimates the moving speed; that is, it distinguishes "walking" at a low moving speed from "running" at a high moving speed.
- the moving speed of the user is estimated based on the moving speed of the feature in the captured image.
- the moving speed of the user can be estimated with higher accuracy by tracking the position of a feature in the screen over multiple times. For example, a moving speed below 6 km/h is judged as "walking", and 6 km/h or more as "running".
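The speed-based classification above can be sketched as follows; the function name and the treatment of zero speed as "stationary" are illustrative assumptions, while the 6 km/h boundary is the value given in the text.

```python
def classify_movement(speed_kmh):
    """Classify the user's movement state from the estimated speed,
    using the 6 km/h walk/run boundary given above."""
    if speed_kmh <= 0:
        return "stationary"
    return "walking" if speed_kmh < 6 else "running"

print(classify_movement(4.5))  # walking
print(classify_movement(9.0))  # running
```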
- another movement state of the user is going up and down stairs; image recognition is used together to detect it.
- the movement detection unit 4 determines that the user is going up or down stairs when it determines that the user is moving and the captured image shows that the user is on stairs.
- the user may also be moving while driving a vehicle.
- image recognition is likewise used to detect this.
- the movement detection unit 4 determines that the user is driving when it detects that the user is moving and that the user is sitting in the driver's seat of a vehicle.
- the movement states are not limited to these; various movement states can be detected by combining image recognition.
- the information amount determination unit 5 receives the detection result of the movement detection unit 4 and, according to the movement state, determines the information amount per unit time of the video displayed to the user by the HMD 10.
- the information amount changing unit 6 changes the video received from the video receiving unit 2 so as to have the information amount per unit time determined by the information amount determining unit 5.
- here, the video frame rate is taken up as the information amount per unit time.
- the frame rate is normally the number of images (frames) per second (unit: fps); here, however, it is defined as the frequency at which the image constituting the video is updated to a different image.
- the information amount determination unit 5 determines the information amount per unit time, that is, it sets the video frame rate to a predetermined frame rate or less.
- the information amount changing unit 6 performs a conversion process of the video frame rate so as to be equal to or less than the frame rate determined by the information amount determining unit 5.
- the information amount determination unit 5 and the information amount change unit 6 determine the frame rate and change the frame rate as follows, for example.
- FIG. 3 is a diagram illustrating an example of the relationship between the movement state and the frame rate. This relationship is stored in advance as a lookup table, and the information amount determination unit 5 determines the frame rate with reference to this table.
- the moving state of the user is classified into “at rest”, “walking”, “running”, “driving”, and the like, and the maximum frame rate of the output video in each state is defined.
- the frame rate is not limited when stationary, but is limited to 10 fps or less while walking and 1 fps or less while running; while driving, it is set to 0 fps (still image).
- the frame rate is lowered when the user is moving compared to when the user is stationary.
- the frame rate is lowered as the moving speed of the user increases.
- at 0 fps, the displayed still image is not updated.
- the setting of the frame rate shown in FIG. 3 is merely an example, and can be appropriately set according to the moving state. For example, even in the same walking state, different frame rates may be set for walking at 3 km / h and walking at 4 km / h. Further, the frame rate may be continuously changed according to the speed.
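The lookup performed by the information amount determination unit 5 can be sketched as below. The table mirrors the FIG. 3 values quoted above; the `None`-means-unlimited convention and the function name are implementation assumptions.

```python
# Lookup table corresponding to FIG. 3 (values as listed above).
MAX_FPS = {"stationary": None, "walking": 10, "running": 1, "driving": 0}

def decide_frame_rate(state, source_fps):
    """Cap the source frame rate by the per-state maximum;
    None means the rate is left unchanged."""
    limit = MAX_FPS[state]
    return source_fps if limit is None else min(source_fps, limit)

print(decide_frame_rate("stationary", 30))  # 30
print(decide_frame_rate("walking", 30))     # 10
print(decide_frame_rate("driving", 30))     # 0  (still image)
```

A continuous speed-to-rate mapping, as suggested above, would simply replace the dictionary with a function of the speed.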
- FIG. 4 is a diagram for explaining a specific example of frame rate conversion.
- (A) is an image before frame rate conversion
- (b) and (c) are images after frame rate conversion.
- the information amount changing unit 6 extracts frames corresponding to the converted frame rate from the video frame sequence (a) received by the video receiving unit 2, and generates a new video frame sequence (b) or (c).
- the extraction times are preferably aligned with the output frame times; when the times do not match, selecting the frame with the smallest time difference yields video with smoother motion.
- in the conversion of (b), the number of images (frames) per second of the output video 31 becomes the value (10 fps) determined by the information amount determination unit 5.
- the conversion of (c) also converts the video 32 to 10 fps, but changes the number of images updated per second without changing the number of images output per second; at timings when the image is not updated, the previous image is output again.
- frames 0, 3, and 6 are extracted from the pre-conversion video 30, and each is repeated three times to reduce the number of image updates. In the conversion of (c) as well, the effective frame rate of the received video is changed to the frame rate determined by the information amount determination unit 5.
- the information amount changing unit 6 stores the image (frame) once output, and repeatedly outputs the same image at the next output timing.
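The two conversions of FIG. 4 can be sketched as follows, assuming a 30 fps source reduced to 10 fps as in the example above and an integer ratio between the rates; the function names are illustrative.

```python
def decimate(frames, src_fps, dst_fps):
    """Conversion (b): keep only every (src_fps // dst_fps)-th frame,
    so the number of frames output per second drops to dst_fps."""
    step = src_fps // dst_fps
    return frames[::step]

def repeat_update(frames, src_fps, dst_fps):
    """Conversion (c): keep the output frame count, but update the
    picture only dst_fps times per second by repeating the last frame."""
    step = src_fps // dst_fps
    return [frames[i - i % step] for i in range(len(frames))]

frames = list(range(9))                  # frames 0..8 of a 30 fps source
print(decimate(frames, 30, 10))          # [0, 3, 6]
print(repeat_update(frames, 30, 10))     # [0, 0, 0, 3, 3, 3, 6, 6, 6]
```

The second output reproduces the FIG. 4(c) example: frames 0, 3, and 6 each repeated three times.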
- switching the frame rate may give the viewing user a sense of discomfort at the moment of switching. Therefore, it is desirable that the information amount changing unit 6 switch the frame rate of the output video gradually, over a certain time width, rather than instantaneously.
- the video output unit 7 receives the video whose information amount has been changed by the information amount changing unit 6 and generates a display signal for driving the video display unit 8.
- the video display unit 8 receives a display signal from the video output unit 7 and displays a video.
- for a moving user, reducing the motion of the displayed video reduces the user's degree of concentration on (immersion in) the video.
- as a result, discomfort similar to motion sickness is reduced, and attention to the surrounding environment improves.
- since the video display continues, the provision of information to the user is not interrupted.
- thus, a video display device that can provide information continuously while suppressing discomfort can be implemented.
- in Embodiment 1, the frame rate of the video is lowered; here, other methods for reducing the information amount per unit time are described.
- A method of changing the displayed image to black-and-white or grayscale display. To reduce the information amount per unit time, the information amount determination unit 5 decides to display the video in black-and-white or grayscale, and the information amount changing unit 6 converts the color video received from the video receiving unit 2 accordingly. The conversion to grayscale proceeds as follows: for the color video received from the video receiving unit 2, a single gray value A is obtained from the (R, G, B) values constituting each pixel of each frame.
- for black-and-white display, a threshold Ath is set for the value A of each pixel: if A ≥ Ath the pixel is displayed as white, and if A < Ath it is displayed as black.
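The per-pixel conversions above can be sketched as follows. The patent text does not fix how A is derived from (R, G, B), so the simple average used here is an assumption, as are the function names and the default Ath of 128.

```python
def to_gray(pixel):
    """Gray value A for an (R, G, B) pixel; a plain average is one
    possible choice for A."""
    r, g, b = pixel
    a = (r + g + b) // 3
    return (a, a, a)

def to_bw(pixel, a_th=128):
    """Binarize with threshold Ath: A >= Ath -> white, A < Ath -> black."""
    a = sum(pixel) // 3
    return (255, 255, 255) if a >= a_th else (0, 0, 0)

print(to_gray((90, 120, 150)))  # (120, 120, 120)
print(to_bw((90, 120, 150)))    # (0, 0, 0): A = 120 < 128
```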
- A method of matching the hue of the video with the surrounding hue. The information amount determination unit 5 decides to match the hue of the video with the surrounding hue, and the information amount changing unit 6 processes the video received from the video receiving unit 2 accordingly. Specifically, the movement detection unit 4 detects, from the surrounding images captured by the imaging unit 3, the hue of the environment in which the user is moving, and sends the detection result to the information amount determination unit 5. The information amount determination unit 5 then sends the presence or absence of a hue change and the surrounding hue to the information amount changing unit 6.
- the information amount changing unit 6 changes the hue of each pixel of each frame of the video received from the video receiving unit 2 to the surrounding hue sent from the information amount determining unit 5.
- by matching the hue of the displayed video with the surrounding hue, the video is assimilated into the surroundings, and the user's degree of concentration on (immersion in) it can be reduced.
- A method of displaying the video at a reduced size. The information amount determination unit 5 decides to display the video reduced, and the information amount changing unit 6 applies reduction processing to the video received from the video receiving unit 2.
- specifically, the received video is reduced in the vertical and/or horizontal direction, and the reduced video is placed in a part of the video display area output by the information amount changing unit 6. The display area where the reduced video is not placed is desirably converted to a single color such as black.
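A minimal sketch of the reduce-and-place step, treating a frame as a nested list of gray values; the integer pixel-skipping reduction, the top-left placement, and the helper name are illustrative choices, not from the patent.

```python
def compose_reduced(frame, scale=2):
    """Shrink a frame by integer factor `scale` (simple pixel skipping)
    and place it at the top-left of a black (0-valued) canvas of the
    original size."""
    h, w = len(frame), len(frame[0])
    small = [row[::scale] for row in frame[::scale]]
    canvas = [[0] * w for _ in range(h)]
    for y, row in enumerate(small):
        canvas[y][:len(row)] = row
    return canvas

frame = [[255] * 4 for _ in range(4)]
out = compose_reduced(frame)
print(out)  # a 2x2 white block in the top-left corner, black elsewhere
```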
- A method of displaying only part of the video, reducing its angle of view. The information amount determination unit 5 decides to display a part of the video, and the information amount changing unit 6 cuts out and outputs part of the video received from the video receiving unit 2. The remaining display area that is not cut out is preferably converted to a single color such as black.
- A method of reducing the luminance of the displayed video. The information amount determination unit 5 decides to lower the luminance of the video, and the information amount changing unit 6 applies luminance-lowering processing to the video received from the video receiving unit 2. Specifically, the luminance, hue, and saturation of each pixel of each frame are obtained, the luminance is lowered while the hue and saturation are maintained, and the result is used as the pixel's color data.
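The per-pixel processing just described (lower the luminance while keeping hue and saturation) can be sketched with Python's standard `colorsys` module; the 0.5 dimming factor and the helper name are illustrative assumptions.

```python
import colorsys

def dim_pixel(pixel, factor=0.5):
    """Lower a pixel's luminance (HSV value component) while keeping
    its hue and saturation, via an RGB -> HSV -> RGB round trip."""
    r, g, b = (c / 255 for c in pixel)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    rgb = colorsys.hsv_to_rgb(h, s, v * factor)
    return tuple(round(c * 255) for c in rgb)

print(dim_pixel((200, 100, 50)))  # (100, 50, 25): same hue, half the value
```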
- the information amount changing unit 6 performs processing for lowering the luminance of the video received from the video receiving unit 2.
- the luminance can be lowered by the video output unit 7 instead of the information amount changing unit 6.
- FIG. 5 is a block diagram showing a modification of the video display device.
- in this configuration, the information amount changing unit 6 of FIG. 1 is removed; the video output unit 7 receives the video from the video receiving unit 2 and changes its luminance according to a luminance control signal determined by the information amount determination unit 5.
- the video display unit 8 includes a light source (backlight or the like) (not shown) for displaying a video, and changes the luminance by changing the intensity of light emitted from the light source.
- the video output unit 7 sends a drive signal to the video display unit 8 so as to reduce the intensity of light emitted from the light source.
- as a result, the luminance of the video display unit 8 is lowered. This configuration has the secondary effect of reducing power consumption in the light source.
- in the embodiments above, the image acquired by the imaging unit 3 is used to detect the user's movement.
- here, measurement by another sensor is used in addition to the image acquired by the imaging unit 3.
- the sensor is, for example, a three-dimensional acceleration sensor, and thereby detects the user's walk.
- FIG. 6 is a block diagram of the video display apparatus according to the third embodiment.
- the video display device 10b newly adds a sensor 11.
- the same elements as those in the first embodiment (FIG. 1) are denoted by the same reference numerals, and redundant description is omitted.
- the sensor 11 is, for example, a three-dimensional acceleration sensor; the movement detection unit 4 detects the user's movement state using both the image acquired by the imaging unit 3 and the measurement result of the sensor 11. From each of the two pieces of information (captured image and sensor measurement), it obtains a probability that the user is moving, and statistically determines the final movement state from the two probability values. Alternatively, the movement determination criterion based on one piece of information can be adjusted according to the other. The operation of this embodiment is described below.
- the detection of the movement of the user using the image from the imaging unit 3 is basically the same as in the first embodiment.
- here, instead of making a binary decision on whether the user is moving, the probability that the user is moving is obtained by tracking the movement of the above-described features in the image.
- the moving speed of the feature can be obtained by comparing two or more captured images obtained at different times acquired from the imaging unit 3, and the probability that the user is moving can be obtained from the moving speed.
- the body motion associated with the user walking (or running) is measured by an acceleration sensor.
- the three-dimensional acceleration sensor is installed on the HMD so that when the HMD is mounted on the head and the user stands up and faces the front, the acceleration in three axial directions can be measured.
- the user's up and down direction (Sz) is the first axis
- the user's left and right direction (Sy) is the second axis
- the user's front and rear direction (Sx) is the third axis.
- FIG. 7 is a diagram illustrating a detection signal of the three-dimensional acceleration sensor with respect to the user's walking motion.
- the user's walking (or running) operation is repeated with the following four states (W1) to (W4) as one cycle.
- (W1) A state in which the right foot is on the front side in the traveling direction from the trunk and the left foot is on the rear side in the traveling direction from the trunk.
- (W2) A state in which both the left and right feet are directly under the torso and the right foot is in contact with the ground while the left foot is not in contact with the ground.
- (W3) A state in which the left foot is on the front side in the traveling direction from the trunk and the right foot is on the rear side in the traveling direction from the trunk.
- (W4) A state in which both the left and right feet are directly under the torso and the left foot is in contact with the ground, while the right foot is not in contact with the ground.
- during one walking cycle, the acceleration sensors in the first and third axis directions (Sz, Sx) output signals of two cycles, while the acceleration sensor in the second axis direction (Sy) outputs a signal of one cycle.
- therefore, the output frequencies of the acceleration sensors in the first and third axis directions (fz, fx) are twice the output frequency in the second axis direction (fy); furthermore, fy equals the reciprocal (1/T) of the time T required for one walking cycle.
- accordingly, the frequency ratio of the outputs in the first and second axis directions (fz/fy), and that in the third and second axis directions (fx/fy), both take the value 2, which is used as the walking determination condition.
- the actual output of the acceleration sensor contains noise at other frequencies. Therefore, when judging from the frequency ratio (fz/fy or fx/fy) whether the user is walking, the condition is widened: the user is judged likely to be walking when the ratio is, for example, in the range 1.7 to 2.3.
- the time required for one walking (running) cycle varies between users and with the walking (running) environment. Therefore, when judging from the output frequency fy of the second-axis acceleration sensor (Sy), the condition is likewise widened: the user is judged likely to be walking when fy is, for example, between 0.8 Hz and 1.2 Hz, and likely to be running when fy is, for example, 2 Hz or more.
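The widened determination conditions above can be combined as in this sketch; the function name and the exact way the ratio test and the frequency test are combined are assumptions, while the numeric ranges are those given in the text.

```python
def judge_gait(fz, fy):
    """Judge walking/running from the acceleration-sensor output
    frequencies: fz/fy near 2 (1.7-2.3) plus fy in 0.8-1.2 Hz suggests
    walking; the same ratio with fy of 2 Hz or more suggests running."""
    ratio_ok = fy > 0 and 1.7 <= fz / fy <= 2.3
    if ratio_ok and 0.8 <= fy <= 1.2:
        return "walking"
    if ratio_ok and fy >= 2.0:
        return "running"
    return "unknown"

print(judge_gait(2.0, 1.0))  # walking  (ratio 2.0, fy 1.0 Hz)
print(judge_gait(4.4, 2.2))  # running  (ratio 2.0, fy 2.2 Hz)
```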
- in the above, both the frequency ratios between the three-axis sensors (fz/fy, fx/fy) and the output frequency fy of the second-axis sensor (Sy) are used, but only one of them may be used.
- instead of fy, the output frequency of the first- or third-axis sensor (Sz, Sx), or the frequencies of several of the first- to third-axis sensors, may be used.
- a one-dimensional acceleration sensor or a two-dimensional acceleration sensor may be used as the sensor 11.
- the movement detection unit 4 determines the movement state of the user based on the probability of movement obtained from the captured image of the imaging unit 3 and the probability of movement obtained from the measurement result of the sensor 11.
- FIG. 8 is a diagram illustrating an example of determination of the movement state of the user.
- let P1 be the walking probability obtained from the captured image of the imaging unit 3, and P2 the walking probability obtained from the measurement result of the sensor 11.
- the arithmetic average Pav of the two probabilities P1 and P2 is computed; if Pav is greater than or equal to the threshold Pth, the user is determined to be walking, and otherwise not walking.
- for example, the threshold Pth is set to 60%. When both P1 and P2 are 80%, the arithmetic average Pav is also 80%, so the user is determined to be walking.
- when the walking probability P1 from the imaging unit 3 is 80% and P2 from the sensor 11 is 20%, Pav is 50%, so the user is determined not to be walking.
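The arithmetic-mean decision of FIG. 8 in sketch form; probabilities are taken as fractions, Pth = 60% as in the example, and the function name is illustrative.

```python
def is_walking(p1, p2, p_th=0.60):
    """Combine the image-based walking probability P1 and the sensor-based
    probability P2 by arithmetic mean and compare with threshold Pth."""
    return (p1 + p2) / 2 >= p_th

print(is_walking(0.8, 0.8))  # True:  Pav = 0.80 >= 0.60
print(is_walking(0.8, 0.2))  # False: Pav = 0.50 <  0.60
```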
- Vibration detection switch: a vibration detection switch is installed in the HMD to measure the user's movement in the first-axis (vertical) direction. Installed this way, the switch detects two vibrations per walking cycle; by detecting these vibrations, the movement detection unit 4 determines whether the user is walking.
- Position measuring device A position measuring device using, for example, a GPS signal is installed in the HMD.
- the movement detection unit 4 obtains the user's moving speed from the position information calculated by the position measuring device; if it is, for example, 3 to 6 km per hour, the user is judged likely to be walking.
- Imaging means for the user's eyes: apart from the imaging unit 3 that acquires surrounding images, an imaging means (camera) that detects the movement of the user's eyes is installed. While walking, the user's eye movement becomes more active in order to grasp the surrounding environment; when active movement of the line of sight is detected, the user is judged likely to be walking.
- Sweating sensor A sweating sensor is attached to the user's body. Since the sweating action is active when the user is walking, it is determined that the possibility that the user is walking is higher as the sweating amount per unit time is larger.
- Pulse meter A pulse meter is attached to the user's body. Since the heart rate increases when the user is walking, it is determined that the higher the heart rate, the higher the possibility that the user is walking.
- in the above description of each sensor, detection of walking was assumed, but running can of course also be detected. The sensor 11 was assumed to be mounted on the HMD, but the sensor measurement result may instead be input from an external device.
- FIG. 9 is a block diagram showing a modification of the video display device.
- a sensor result input unit 13 for inputting a measurement result of a sensor mounted on the external device 12 is provided instead of the sensor 11.
- the movement detection unit 4 determines the movement of the user using the sensor measurement result input to the sensor result input unit 13.
- the external device 12 for example, a smartphone can be used.
- communication with the external device 12 uses, for example, Bluetooth (registered trademark), and the sensor result input unit 13 is a Bluetooth receiver.
- by determining the user's movement using both the image acquired by the imaging unit 3 and the result measured by the sensor 11, the movement detection unit 4 can detect the movement state more accurately than when only the image acquired by the imaging unit 3 is used.
- in the embodiments above, the video receiving unit 2 receives video from the video providing unit 1, and the information amount changing unit 6 changes the information amount per unit time of the received video according to the determination of the information amount determination unit 5.
- here, instead, in accordance with the determination by the information amount determination unit 5, the video providing unit 1 is requested to provide video that already has the determined information amount per unit time.
- FIG. 10 is a block diagram of a video display apparatus according to the fourth embodiment.
- In the video display device 10d, the information amount changing unit 6 is removed and a video request unit 14 is provided in its place.
- The same elements as those in the first embodiment (FIG. 1) are denoted by the same reference numerals, and redundant description is omitted.
- When the information amount determination unit 5 determines the information amount per unit time of the video to be displayed according to the user's movement state, it sets a video specification (for example, a frame rate) that realizes this amount and sends the specification to the video request unit 14.
- Based on the video specification received from the information amount determination unit 5, the video request unit 14 sends a request signal for the video to be provided to the video providing unit 1.
- The video providing unit 1 transmits a video corresponding to the request to the video receiving unit 2.
- As a result, the video output unit 7 and the video display unit 8 can display a video having the information amount per unit time determined by the information amount determination unit 5. That is, the request signal from the video request unit 14 to the video providing unit 1 reflects the movement state of the user detected by the movement detection unit 4.
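The fourth embodiment's flow, movement state to video specification to request signal, can be sketched as follows. The frame-rate values and the dictionary shape of the request are illustrative assumptions; the patent specifies only that the video specification (for example, a frame rate) reflects the detected movement state.

```python
# Hedged sketch of the fourth embodiment's request flow: the information
# amount determination unit 5 maps the movement state to a frame rate, and
# the video request unit 14 builds a request for the video providing unit 1.

def determine_frame_rate(movement_state: str) -> int:
    """Smaller information amount per unit time while the user is moving."""
    return 10 if movement_state == "moving" else 30  # fps values assumed

def build_request(movement_state: str) -> dict:
    """The request signal sent from unit 14 to the video providing unit 1."""
    return {"frame_rate": determine_frame_rate(movement_state)}

print(build_request("moving"))      # -> {'frame_rate': 10}
print(build_request("stationary"))  # -> {'frame_rate': 30}
```

Because the frame rate is decided before the video is sent, the reduction happens at the source rather than at the receiver, which is what distinguishes this embodiment from the earlier ones.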
- The video providing unit 1 is, for example, a storage device arranged in an external data center, and can convert and provide a video in response to a request from the video request unit 14.
- Alternatively, the video providing unit 1 may be built into the video display device 10d, for example as an imaging device.
- The movement detection unit 4 detects the user's movement based on the image acquired by the imaging unit 3; a configuration may also be used in which the user's movement is detected using the result measured by the sensor 11 in addition to the image from the imaging unit 3.
- The fourth embodiment has the effect of minimizing the amount of video data transmitted from the video providing unit 1 to the video receiving unit 2.
- When the video providing unit 1 is a storage device arranged in an external data center, the amount of data transmitted over the network is reduced, lightening the load on the transmission path.
- When the video providing unit 1 is an imaging device, the number of operations of its imaging element is reduced, which has the secondary effects of saving power and extending the life of the device.
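The data-reduction effect described above can be checked with back-of-the-envelope arithmetic: with a roughly constant per-frame size, the transmitted data scales in proportion to the requested frame rate. The numbers below are illustrative assumptions, not figures from the patent.

```python
# Hedged sketch: transmitted data volume scales with the requested frame
# rate, assuming a similar per-frame size. All numbers are illustrative.

def data_rate_mb_per_s(frame_rate: int, mb_per_frame: float) -> float:
    """Approximate transmission rate for a fixed per-frame size."""
    return frame_rate * mb_per_frame

full = data_rate_mb_per_s(30, 0.5)     # full-rate video: 15.0 MB/s
reduced = data_rate_mb_per_s(10, 0.5)  # while moving:     5.0 MB/s
print(full, reduced)  # the lower frame rate cuts the load to one third
```

This is why the fourth embodiment benefits both the network path (storage-device case) and the imaging element (built-in camera case): fewer frames are produced and carried end to end.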
- The present invention is not limited to the above-described embodiments and includes various modifications.
- In the above embodiments, the HMD has been described as an example of a video display device used while the user is moving, but the present invention is not limited to this. For example, the invention can also be applied to a head-up display (hereinafter abbreviated as HUD) mounted on a vehicle. In that case, the traveling state of the vehicle is detected as the user's movement state, and the information amount per unit time of the displayed video is changed accordingly. This reduces the user's degree of gaze at the video while the vehicle is traveling, providing a HUD that ensures safety.
- 1: Video providing unit
- 2: Video receiving unit
- 10a to 10d: Video display device
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
The invention relates to a video display device (10) provided with an information amount changing unit (6), which changes the information amount per unit time of a video supplied by a video providing unit (1); a video display unit (8), which displays the obtained video; an imaging unit (3), which captures images of an object near a user; a movement detection unit (4), which detects the movement state of the user on the basis of the captured images; and an information amount setting unit (5), which sets the changed information amount per unit time for the video according to the detected movement state of the user. If the user is moving, the information amount setting unit (5) adjusts the changed information amount per unit time for the video so that it is smaller than when the user is not moving. This provides a video display device that displays video in a way that neither causes discomfort to a moving user nor reduces that user's attention.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2014/058074 WO2015145541A1 (fr) | 2014-03-24 | 2014-03-24 | Dispositif d'affichage vidéo |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2014/058074 WO2015145541A1 (fr) | 2014-03-24 | 2014-03-24 | Dispositif d'affichage vidéo |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2015145541A1 true WO2015145541A1 (fr) | 2015-10-01 |
Family
ID=54194143
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2014/058074 Ceased WO2015145541A1 (fr) | 2014-03-24 | 2014-03-24 | Dispositif d'affichage vidéo |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2015145541A1 (fr) |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2017125883A (ja) * | 2016-01-12 | 2017-07-20 | 株式会社デンソー | メガネ型情報表示装置 |
| WO2017145753A1 (fr) * | 2016-02-22 | 2017-08-31 | シャープ株式会社 | Dispositif, procédé et programme de commande d'affichage |
| CN107343392A (zh) * | 2015-12-17 | 2017-11-10 | 松下电器(美国)知识产权公司 | 显示方法以及显示装置 |
| WO2020044916A1 (fr) * | 2018-08-29 | 2020-03-05 | ソニー株式会社 | Dispositif de traitement d'informations, procédé de traitement d'informations et programme |
| US10819428B2 (en) | 2016-11-10 | 2020-10-27 | Panasonic Intellectual Property Corporation Of America | Transmitting method, transmitting apparatus, and program |
| US10951309B2 (en) | 2015-11-12 | 2021-03-16 | Panasonic Intellectual Property Corporation Of America | Display method, non-transitory recording medium, and display device |
| CN112650212A (zh) * | 2019-10-11 | 2021-04-13 | 丰田自动车株式会社 | 远程自动驾驶车辆及车辆远程指示系统 |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH10206787A (ja) * | 1997-01-20 | 1998-08-07 | Honda Motor Co Ltd | 車両用ヘッドマウントディスプレイ装置 |
| JP2006217520A (ja) * | 2005-02-07 | 2006-08-17 | Konica Minolta Photo Imaging Inc | 映像表示装置、及び眼鏡型映像表示装置 |
| JP2007101618A (ja) * | 2005-09-30 | 2007-04-19 | Konica Minolta Photo Imaging Inc | 表示装置 |
| JP2007336211A (ja) * | 2006-06-14 | 2007-12-27 | Mitsubishi Electric Corp | 車載用放送受信装置 |
| JP2011091789A (ja) * | 2009-09-24 | 2011-05-06 | Brother Industries Ltd | ヘッドマウントディスプレイ |
| JP2012222628A (ja) * | 2011-04-08 | 2012-11-12 | Brother Ind Ltd | 画像表示装置 |
| WO2013111185A1 (fr) * | 2012-01-25 | 2013-08-01 | 三菱電機株式会社 | Appareil d'informations de carrosserie mobile |
- 2014
  - 2014-03-24 WO PCT/JP2014/058074 patent/WO2015145541A1/fr not_active Ceased
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH10206787A (ja) * | 1997-01-20 | 1998-08-07 | Honda Motor Co Ltd | 車両用ヘッドマウントディスプレイ装置 |
| JP2006217520A (ja) * | 2005-02-07 | 2006-08-17 | Konica Minolta Photo Imaging Inc | 映像表示装置、及び眼鏡型映像表示装置 |
| JP2007101618A (ja) * | 2005-09-30 | 2007-04-19 | Konica Minolta Photo Imaging Inc | 表示装置 |
| JP2007336211A (ja) * | 2006-06-14 | 2007-12-27 | Mitsubishi Electric Corp | 車載用放送受信装置 |
| JP2011091789A (ja) * | 2009-09-24 | 2011-05-06 | Brother Industries Ltd | ヘッドマウントディスプレイ |
| JP2012222628A (ja) * | 2011-04-08 | 2012-11-12 | Brother Ind Ltd | 画像表示装置 |
| WO2013111185A1 (fr) * | 2012-01-25 | 2013-08-01 | 三菱電機株式会社 | Appareil d'informations de carrosserie mobile |
Cited By (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10951309B2 (en) | 2015-11-12 | 2021-03-16 | Panasonic Intellectual Property Corporation Of America | Display method, non-transitory recording medium, and display device |
| CN107343392B (zh) * | 2015-12-17 | 2020-10-30 | 松下电器(美国)知识产权公司 | 显示方法以及显示装置 |
| CN107343392A (zh) * | 2015-12-17 | 2017-11-10 | 松下电器(美国)知识产权公司 | 显示方法以及显示装置 |
| JP2017125883A (ja) * | 2016-01-12 | 2017-07-20 | 株式会社デンソー | メガネ型情報表示装置 |
| JPWO2017145753A1 (ja) * | 2016-02-22 | 2018-08-02 | シャープ株式会社 | 表示制御装置、表示制御方法及びプログラム |
| WO2017145753A1 (fr) * | 2016-02-22 | 2017-08-31 | シャープ株式会社 | Dispositif, procédé et programme de commande d'affichage |
| US10819428B2 (en) | 2016-11-10 | 2020-10-27 | Panasonic Intellectual Property Corporation Of America | Transmitting method, transmitting apparatus, and program |
| WO2020044916A1 (fr) * | 2018-08-29 | 2020-03-05 | ソニー株式会社 | Dispositif de traitement d'informations, procédé de traitement d'informations et programme |
| JPWO2020044916A1 (ja) * | 2018-08-29 | 2021-09-24 | ソニーグループ株式会社 | 情報処理装置、情報処理方法及びプログラム |
| US11726320B2 (en) | 2018-08-29 | 2023-08-15 | Sony Corporation | Information processing apparatus, information processing method, and program |
| JP7400721B2 (ja) | 2018-08-29 | 2023-12-19 | ソニーグループ株式会社 | 情報処理装置、情報処理方法及びプログラム |
| CN112650212A (zh) * | 2019-10-11 | 2021-04-13 | 丰田自动车株式会社 | 远程自动驾驶车辆及车辆远程指示系统 |
| JP2021064118A (ja) * | 2019-10-11 | 2021-04-22 | トヨタ自動車株式会社 | 遠隔自動運転車両、及び車両遠隔指示システム |
| JP7310524B2 (ja) | 2019-10-11 | 2023-07-19 | トヨタ自動車株式会社 | 遠隔自動運転車両、及び車両遠隔指示システム |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10559065B2 (en) | Information processing apparatus and information processing method | |
| WO2015145541A1 (fr) | Dispositif d'affichage vidéo | |
| US10984756B2 (en) | Adaptive parameters in image regions based on eye tracking information | |
| US10474411B2 (en) | System and method for alerting VR headset user to real-world objects | |
| US11037532B2 (en) | Information processing apparatus and information processing method | |
| US10437060B2 (en) | Image display device and image display method, image output device and image output method, and image display system | |
| JP7173126B2 (ja) | 情報処理装置、情報処理方法、および記録媒体 | |
| US20140152530A1 (en) | Multimedia near to eye display system | |
| US20170083084A1 (en) | Information processing apparatus, information processing method, computer program, and image processing system | |
| JP2015114757A (ja) | 情報処理装置、情報処理方法及びプログラム | |
| CN110998666A (zh) | 信息处理装置、信息处理方法以及程序 | |
| JP7078568B2 (ja) | 表示装置、表示制御方法、及び表示システム | |
| EP3877899A1 (fr) | Plateforme de réduction de charge cognitive ayant une amélioration de bord d'image | |
| US11589001B2 (en) | Information processing apparatus, information processing method, and program | |
| US12229334B2 (en) | Video display device and video display method | |
| WO2022004130A1 (fr) | Dispositif de traitement d'informations, procédé de traitement d'informations et support de stockage | |
| US20200258316A1 (en) | Method of garbling real-world image for see-through head mount display and see-through head mount display with realworld image garbling function | |
| US10602116B2 (en) | Information processing apparatus, information processing method, and program for performing display control | |
| JP6591667B2 (ja) | 画像処理システム、画像処理装置、及びプログラム | |
| EP4086102B1 (fr) | Procédé et appareil de navigation, dispositif électronique, support de stockage lisible et produit de programme informatique | |
| KR101331055B1 (ko) | 시각 주의 분석에 근거한 시각 보조 시스템 및 상기 시각 주의 분석을 이용한 시각 보조 방법 | |
| JPH11237581A (ja) | ヘッドマウントディスプレイ装置 | |
| JP2018088604A (ja) | 画像表示装置、画像表示方法、システム | |
| KR102360557B1 (ko) | 실세계 영상 왜곡 기능을 가지는 씨스루 헤드 마운트 디스플레이 | |
| KR102185519B1 (ko) | 다이렉트 인코딩 방식의 씨스루 헤드 마운트 디스플레이를 위한 실세계 영상 왜곡 방법 및 실세계 영상 왜곡 기능을 가지는 다이렉트 인코딩 방식의 씨스루 헤드 마운트 디스플레이 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14887312; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | NENP | Non-entry into the national phase | Ref country code: JP |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 14887312; Country of ref document: EP; Kind code of ref document: A1 |