WO2023032173A1 - Virtual space providing device, virtual space providing method, and computer-readable storage medium - Google Patents
Virtual space providing device, virtual space providing method, and computer-readable storage medium
- Publication number
- WO2023032173A1 (PCT/JP2021/032507)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- virtual space
- predetermined range
- image
- avatar
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/011—Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/24—Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/024—Multi-user, collaborative environment
Definitions
- the present disclosure relates to technology for controlling virtual space.
- Patent Literatures 1 and 2 disclose techniques for displaying a three-dimensional virtual space in which a user can freely move and communicate with other users by operating an avatar.
- Patent Literatures 1 and 2 describe drawing a line in the line-of-sight direction of the avatar and specifying the first object that the line hits as the object pointed to by the user.
- In the image showing the inside of the virtual space that is displayed to the user, there are cases where not only the object ahead of the avatar's line of sight but also surrounding objects and the like are shown. Therefore, the user is not necessarily looking at the object ahead of the avatar's line of sight. There is room for improvement in this respect.
- The present disclosure has been made in view of the above problem, and an object thereof is to provide a virtual space providing device and the like that can more accurately estimate the line of sight of a user who uses a virtual space through an avatar.
- A virtual space providing apparatus includes: detection means for detecting the orientation of an avatar in a virtual space, the orientation changing according to a user's operation; output control means for performing control to output to the user an output image that is an image corresponding to the orientation of the avatar and in which the display mode outside a predetermined range on the image has been changed; and estimation means for estimating the line of sight of the user based on the predetermined range of the output image.
- A virtual space providing method detects the orientation of an avatar in a virtual space, the orientation changing according to a user's operation; performs control to output to the user an output image that is an image corresponding to the orientation of the avatar in the virtual space and in which the display mode outside a predetermined range on the image has been changed; and estimates the line of sight of the user based on the predetermined range of the output image.
- A computer-readable storage medium stores a program that causes a computer to execute: a process of detecting the orientation of an avatar in a virtual space, the orientation changing according to a user's operation; a process of performing control to output to the user an output image that is an image corresponding to the orientation of the avatar in the virtual space and in which the display mode outside a predetermined range on the image has been changed; and a process of estimating the line of sight of the user based on the predetermined range of the output image.
- FIG. 1 is a diagram schematically showing an example of a configuration including a virtual space providing device according to the first embodiment of the present disclosure.
- FIG. 2 is a diagram schematically showing an example of the virtual space displayed on the user terminal according to the first embodiment of the present disclosure.
- FIG. 3 is a block diagram showing an example of the functional configuration of the virtual space providing device according to the first embodiment of the present disclosure.
- FIG. 4 is a flowchart showing an example of the operation of the virtual space providing device according to the first embodiment of the present disclosure.
- FIG. 5 is a block diagram showing an example of the functional configuration of the virtual space providing device according to the second embodiment of the present disclosure.
- FIG. 6 is a diagram showing an example of an output image according to the second embodiment of the present disclosure.
- FIG. 7 is a diagram showing another example of an output image according to the second embodiment of the present disclosure.
- FIG. 8 is a flowchart showing an example of the operation of the virtual space providing device according to the second embodiment of the present disclosure.
- FIG. 9 is a block diagram showing an example of the functional configuration of the virtual space providing device according to the third embodiment of the present disclosure.
- FIG. 10 is a diagram showing an example of an output image according to the third embodiment of the present disclosure.
- FIG. 11 is a flowchart showing an example of the operation of the virtual space providing device according to the third embodiment of the present disclosure.
- FIG. 12 is a block diagram showing an example of the functional configuration of the virtual space providing device according to the fourth embodiment of the present disclosure.
- FIG. 13 is a diagram showing an example of an output image according to the fourth embodiment of the present disclosure.
- FIG. 14 is a flowchart showing an example of the operation of the virtual space providing device according to the fourth embodiment of the present disclosure.
- FIG. 15 is a block diagram showing an example of the hardware configuration of a computer device that implements the virtual space providing device according to the first, second, third, and fourth embodiments of the present disclosure.
- FIG. 1 is a diagram schematically showing an example of a configuration including a virtual space providing device 100.
- The virtual space providing apparatus 100 is communicably connected to a plurality of user terminals 200 (200-1, 200-2, ...) via a wireless or wired network.
- the user terminal 200 is a device operated by a user.
- the user terminal 200 is, for example, a personal computer, but is not limited to this example.
- the user terminal 200 may be a smartphone or a tablet terminal, or may be a device including a goggle-type wearable terminal (also referred to as a head-mounted display) having a display.
- The user terminal 200 also includes an input device such as a keyboard, a mouse, a microphone, or a wearable device that operates based on the user's actions, and an output device such as a display and a speaker. Furthermore, the user terminal 200 may be equipped with a photographing device.
- a virtual space is a virtual space shared by a plurality of users, and is a space in which user operations are reflected.
- the virtual space is also called VR (Virtual Reality) space.
- the virtual space is provided by the virtual space providing device 100 .
- the user terminal 200 displays an image showing the virtual space.
- FIG. 2 is a diagram schematically showing an example of the virtual space displayed on the user terminal 200. In the example of FIG. 2, the virtual space is displayed on the display of the user terminal 200.
- An avatar is an object operated by a user. A user utilizes the virtual space by operating an avatar.
- the user terminal 200 displays an image of the virtual space from the viewpoint of the avatar operated by the user.
- the image displayed on the user terminal 200 may be updated according to the action of the avatar.
- a user may be able to communicate with another user by performing an action on an avatar operated by the other user.
- the device that provides the virtual space may not be the virtual space providing device 100 .
- an external device (not shown) may provide the virtual space.
- FIG. 3 is a block diagram showing an example of the functional configuration of the virtual space providing device 100 of the first embodiment.
- The virtual space providing apparatus 100 includes a detection section 110, an output control section 120, and an estimation section 130.
- the detection unit 110 detects the orientation of an avatar in the virtual space that changes in accordance with the user's operation.
- the detection unit 110 is an example of detection means.
- the output control unit 120 controls to output various data to the user.
- the output control unit 120 controls the user terminal 200 used by the user to output an image representing the virtual space.
- the image representing the virtual space that is output to the user is also called an output image.
- The output image is, for example, an image showing the inside of the virtual space from the viewpoint of the avatar. Since the orientation of the avatar is changed by the user's operation, the output image differs depending on the orientation of the avatar. Therefore, the output control unit 120 may, for example, update the output image according to the orientation of the avatar. The output control unit 120 then blurs the area outside the predetermined range on the output image. For example, the output control unit 120 defines the predetermined range as a range including the center of the output image.
- the output control unit 120 sets, for example, an image obtained by changing the display mode of the outside of the predetermined range as an output image.
- the output control unit 120 outputs an image in which objects appearing outside the predetermined range are not displayed, or an image in which objects appearing outside the predetermined range are blurred.
- the image in which the object is not shown may be an image in which the object outside the predetermined range is not displayed.
- the blurred image may be an image with low resolution.
- the resolution of the predetermined range on the output image is higher than the resolution outside the predetermined range on the output image.
- the example of blurring an image is not limited to this example.
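To make the display-mode change concrete, the following is a minimal Python sketch, assuming OpenCV and NumPy are available; the function name and the rectangular predetermined range are illustrative assumptions, not part of the disclosure. It keeps a central range at its original resolution and applies a Gaussian blur everywhere else, which corresponds to the lower resolution outside the predetermined range described above.

```python
import cv2
import numpy as np

def blur_outside_range(frame: np.ndarray, box: tuple, kernel: int = 31) -> np.ndarray:
    """Return a copy of `frame` in which everything outside `box` is blurred.

    `box` is (x, y, width, height) of the predetermined range in pixels; the
    region inside the box keeps its original resolution, so the resolution
    inside the range is higher than outside it.
    """
    x, y, w, h = box
    blurred = cv2.GaussianBlur(frame, (kernel, kernel), 0)   # low-detail version
    output = blurred.copy()
    output[y:y + h, x:x + w] = frame[y:y + h, x:x + w]       # restore the range
    return output

# Example: keep a centered region sharp on a 1280x720 frame.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
h, w = frame.shape[:2]
predetermined_range = (w // 4, h // 4, w // 2, h // 2)       # includes the image center
out = blur_outside_range(frame, predetermined_range)
```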
- the method of determining the predetermined range is not limited to the above example.
- the output control unit 120 performs control to output to the user an output image that corresponds to the orientation of the avatar in the virtual space and that blurs the outside of a predetermined range on the image.
- the output control unit 120 is an example of output control means.
- The estimation unit 130 estimates the line of sight of the user. For example, the estimating unit 130 may estimate that the user is looking in the direction of the predetermined range of the output image. The example of estimation is not limited to this example. In this manner, the estimation unit 130 estimates the line of sight of the user based on the predetermined range of the output image.
- the estimating unit 130 is an example of estimating means.
- each step of the flowchart is expressed using a number attached to each step, such as “S1”.
- FIG. 4 is a flowchart explaining an example of the operation of the virtual space providing device 100.
- the detection unit 110 detects the orientation of the avatar in the virtual space (S1).
- The output control unit 120 performs control to output, to the user operating the avatar, an output image that represents the virtual space corresponding to the orientation of the avatar and in which the display mode outside a predetermined range on the image has been changed (S2).
- the estimation unit 130 estimates the line of sight of the user based on the predetermined range of the output image (S3).
- The virtual space providing apparatus 100 of the first embodiment detects the orientation of the avatar in the virtual space, which changes in accordance with the user's operation, performs control to output to the user an output image corresponding to that orientation in which the display mode outside a predetermined range on the image is changed, and estimates the line of sight of the user based on the predetermined range of the output image.
- a portion of the image output to the user is, for example, blurred. Therefore, the user operates the avatar, for example, so that the part that the user wants to see is not obscured.
- the virtual space providing apparatus 100 can prompt the user to perform an operation to display the part the user wants to see at a specific position on the output image. This makes it more likely that the user is looking at a specific location on the output image. Therefore, the virtual space providing apparatus 100 can more accurately estimate the line of sight of the user who uses the virtual space using the avatar.
- FIG. 5 is a block diagram showing an example of the functional configuration of the virtual space providing device 100 of the second embodiment.
- The virtual space providing apparatus 100 includes a detection section 110, an output control section 120, and an estimation section 130.
- the detection unit 110 detects the orientation of the avatar. For example, the detection unit 110 detects the orientation of the face of the avatar as the orientation of the avatar. Note that the example of the orientation of the avatar detected by the detection unit 110 is not limited to this example.
- The output control unit 120 displays a viewpoint image from the avatar operated by the user. That is, when a part of the avatar is used as a camera, the user terminal 200 displays the virtual space as captured by that camera. Therefore, the detection unit 110 may detect the orientation of the part of the avatar that serves as the camera as the orientation of the avatar.
- the output control unit 120 includes an image generation unit 121 and an image transmission unit 122 .
- the image generator 121 generates an output image.
- the image generator 121 determines the field of view of the avatar according to the detected orientation of the avatar.
- the image generation unit 121 determines the range within the virtual space that is captured by the camera when a part of the avatar is used as the camera, according to the orientation of the avatar.
- the image generator 121 generates an output image in which the display mode of the area outside the predetermined range on the image showing the determined range is changed.
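As one illustration of how the range visible to the avatar could be derived from its detected orientation, the sketch below uses a simple yaw/pitch camera model in Python. The angle convention, the 90-degree field of view, and the function names are assumptions made for this example; the disclosure does not prescribe a particular camera model.

```python
import math

def view_direction(yaw_deg: float, pitch_deg: float) -> tuple:
    """Unit vector the avatar is facing, from yaw (around the vertical axis)
    and pitch (up/down), both in degrees."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

def in_field_of_view(avatar_pos, avatar_dir, obj_pos, fov_deg: float = 90.0) -> bool:
    """True if `obj_pos` lies inside the avatar's viewing cone."""
    dx = [o - a for o, a in zip(obj_pos, avatar_pos)]
    norm = math.sqrt(sum(c * c for c in dx)) or 1.0
    cos_angle = sum(d * v for d, v in zip(dx, avatar_dir)) / norm
    return cos_angle >= math.cos(math.radians(fov_deg / 2))

# Example: an object straight ahead of an avatar facing +z is visible.
direction = view_direction(yaw_deg=0.0, pitch_deg=0.0)    # (0, 0, 1)
print(in_field_of_view((0, 0, 0), direction, (0, 0, 5)))  # True
```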
- a predetermined range is also called an attention range.
- FIG. 6 is a diagram showing an example of an output image.
- the output image is a viewpoint image from a predetermined avatar.
- the range of interest is defined as a range including the center of the output image.
- a hatched portion outside the attention range indicates a range in which the display mode has been changed.
- avatar A and the table are displayed without being blurred, and avatar B and avatar C are displayed with being blurred.
- blurring may be processing that lowers the resolution.
- the resolution of the hatched portion is lower than that of the non-hatched portion.
- the blurring process is not limited to this example, and may be, for example, a process of making the color lighter, a process of lowering the contrast, or a mask process.
- When processing to lighten the color is performed, the color of the hatched portion is lighter than that of the non-hatched portion.
- the color contrast of the hatched portion is lower than that of the non-hatched portion.
- The image generation unit 121 may also generate an output image in which the objects in the hatched portion can be seen through another superimposed image (for example, in the case of mask processing). Note that the position, shape, and size of the attention range are not limited to the example in FIG. 6.
- the image generation unit 121 generates an image as an output image, which is an image of the viewpoint from the avatar and in which the display mode of the outside of the predetermined range is changed.
- The image generator 121 is an example of image generation means.
- FIG. 7 is a diagram showing another example of the output image.
- avatar A appears near the center of the attention range, but appears to protrude outside the attention range.
- Objects near the center of the attention range are likely to be the focus of the user's attention. Therefore, when an object that appears at the center of the attention range also appears outside the attention range, the image generation unit 121 may blur only the part of the area outside the attention range in which that object does not appear. Note that the image generation unit 121 may perform this processing even for an object that does not appear at the center of the attention range.
- For example, the image generation unit 121 determines the range in which the object appears outside the attention range, and may blur the area outside the attention range excluding that range. In this way, when an object appears within a predetermined distance from the center of the predetermined range and the object appearing within the predetermined distance also appears outside the predetermined range, the image generation unit 121 may generate an output image in which the display mode is changed for the range that is outside the predetermined range and does not include the object appearing within the predetermined distance.
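A hedged sketch of how this exclusion could be realized is shown below, assuming integer axis-aligned bounding boxes for objects and OpenCV for the blur. The box representation, the margin that stands in for the "predetermined distance", and the function name are illustrative assumptions.

```python
import cv2
import numpy as np

def blur_with_object_exclusion(frame, attention_box, object_boxes,
                               center_margin=0.25, kernel=31):
    """Blur outside `attention_box`, but leave unblurred any object whose box
    overlaps the neighborhood of the attention-range center and also extends
    outside the attention range.

    Boxes are integer (x, y, w, h) tuples. `center_margin` defines "within a
    predetermined distance from the center" as a fraction of the range size.
    """
    ax, ay, aw, ah = attention_box
    cx, cy = ax + aw / 2, ay + ah / 2
    keep = np.zeros(frame.shape[:2], dtype=bool)
    keep[ay:ay + ah, ax:ax + aw] = True                # the attention range stays sharp

    for ox, oy, ow, oh in object_boxes:
        ocx, ocy = ox + ow / 2, oy + oh / 2            # object center
        near_center = (abs(ocx - cx) <= aw * center_margin and
                       abs(ocy - cy) <= ah * center_margin)
        fully_inside = (ox >= ax and oy >= ay and
                        ox + ow <= ax + aw and oy + oh <= ay + ah)
        if near_center and not fully_inside:
            keep[oy:oy + oh, ox:ox + ow] = True        # keep the whole object sharp

    blurred = cv2.GaussianBlur(frame, (kernel, kernel), 0)
    return np.where(keep[..., None], frame, blurred)
```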
- the image generation unit 121 may generate an image that does not display objects appearing outside the attention range as the output image.
- the image generation unit 121 may generate an image in which all objects outside the attention range are not displayed, or may generate an image in which a specific object among the objects outside the attention range is not displayed.
- the specific object may be, for example, another avatar or an object different from the background, such as a screen in virtual space.
- the attention range may be a range along the shape of the object existing in the center of the image.
- the image transmission unit 122 transmits the generated output image to the user terminal 200.
- the image transmission unit 122 transmits the output image to a display device such as the user terminal 200 having a display or the like, whereby the output image is displayed on the display device.
- the image transmission unit 122 transmits the generated output image to the display device used by the user.
- the image transmission unit 122 is an example of image transmission means.
- The estimation unit 130 estimates the line of sight of the user based on the output image. Specifically, the estimation section 130 estimates that the user is gazing at the attention range. The estimation unit 130 may also estimate that the user is gazing at an object appearing in the attention range. When a plurality of objects appear in the attention range, the estimation section 130 may estimate that the user is gazing at the plurality of objects, or may estimate that the user is gazing at any one of the plurality of objects.
- In the example of FIG. 6, the attention range includes avatar A and the table. In this case, the estimation unit 130 may estimate that the user is gazing at avatar A, which is the object closer to the center of the attention range.
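Where the attention range contains several objects, the selection described above reduces to a nearest-to-center test. A minimal Python sketch follows; the bounding boxes, object names, and example coordinates are illustrative assumptions rather than values from the disclosure.

```python
def gaze_target(attention_box, objects):
    """Pick the object the user is most likely gazing at: the one whose center
    is closest to the center of the attention range.

    `attention_box` is (x, y, w, h); `objects` maps an object name to its
    bounding box in the same format. Returns None if nothing is in the range.
    """
    ax, ay, aw, ah = attention_box
    cx, cy = ax + aw / 2, ay + ah / 2

    def overlaps(box):
        x, y, w, h = box
        return x < ax + aw and x + w > ax and y < ay + ah and y + h > ay

    candidates = {name: box for name, box in objects.items() if overlaps(box)}
    if not candidates:
        return None
    return min(candidates,
               key=lambda n: (candidates[n][0] + candidates[n][2] / 2 - cx) ** 2
                           + (candidates[n][1] + candidates[n][3] / 2 - cy) ** 2)

# In a scene like FIG. 6, an avatar sitting nearer the center wins over the table.
print(gaze_target((400, 200, 480, 320),
                  {"avatar_A": (600, 300, 120, 200), "table": (420, 420, 300, 90)}))
```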
- FIG. 8 is a flowchart showing an example of the operation of the virtual space providing device 100. Specifically, FIG. 8 shows an operation example when the virtual space providing apparatus 100 estimates the line of sight of the user.
- the detection unit 110 detects the orientation of the avatar (S101).
- the image generation unit 121 determines the range within the virtual space that is visible to the avatar according to the orientation of the avatar (S102). Then, the image generation unit 121 generates an output image that is an image showing the determined range and blurs the outside of the attention range on the image (S103).
- the image transmission unit 122 transmits the generated output image to the user terminal 200 (S104).
- the estimation unit 130 estimates the line of sight based on the predetermined range of the output image (S105). For example, the estimation unit 130 estimates that the user is gazing at the attention range of the output image.
- the process of S105 may be performed at any timing after the process of S103.
- the virtual space providing apparatus 100 of the second embodiment detects the orientation of the avatar in the virtual space that changes in accordance with the user's operation, and detects the orientation of the avatar in the virtual space. Control is performed so as to output to the user an output image in which the display mode of the outside of a predetermined range on the image is changed, and the line of sight of the user is estimated based on the output image.
- a portion of the image output to the user is, for example, blurred. Therefore, the user operates the avatar, for example, so that the part that the user wants to see is not obscured.
- the virtual space providing apparatus 100 can prompt the user to perform an operation to display the part the user wants to see at a specific position on the output image. This makes it more likely that the user is looking at a specific location on the output image. Therefore, the virtual space providing apparatus 100 can more accurately estimate the line of sight of the user who uses the virtual space using the avatar.
- Compared with a method that estimates the user's line of sight from a photographed image showing the user's face, the virtual space providing apparatus 100 estimates the line of sight from the output image corresponding to the orientation of the avatar, so the calculation load for line-of-sight estimation can be reduced. Moreover, in the former method, a photographed image showing the user's face must be transmitted to a line-of-sight estimation device via a network, which may increase the amount of communication. In contrast, since the virtual space providing apparatus 100 does not need to transmit captured images for line-of-sight estimation, the amount of communication can be reduced.
- When a plurality of objects are included in the predetermined range, the virtual space providing apparatus 100 of the second embodiment may estimate that the user's line of sight is directed toward an object closer to the center of the predetermined range. As a result, the virtual space providing apparatus 100 can identify which object the user is directing his or her line of sight to.
- When an object appears within a predetermined distance from the center of the predetermined range and the object appearing within the predetermined distance also appears outside the predetermined range, the virtual space providing apparatus 100 of the second embodiment may generate an output image in which the display mode is changed for the range that is outside the predetermined range and does not include the object appearing within the predetermined distance.
- the virtual space providing apparatus 100 can clarify the range in which the object that the user may be paying attention to appears.
- FIG. 9 is a block diagram showing an example of the functional configuration of the virtual space providing device 101 of the third embodiment. Similar to the virtual space providing device 100, the virtual space providing device 101 is communicably connected to a plurality of user terminals 200 via a wireless or wired network.
- the virtual space providing device 101 includes a detection unit 110, an output control unit 120, an estimation unit 131, and a setting reception unit 140.
- The estimating unit 131 further performs the following processing. Specifically, the estimation unit 131 may estimate the line of sight according to the user's operation.
- FIG. 10 is a diagram showing an example of an output image. The output image shown in FIG. 10 differs from the output image shown in FIG. 6 in that a cursor is superimposed thereon.
- the user performs various operations using a device such as a mouse provided in the user terminal 200 .
- the cursor shown in FIG. 10 is, for example, a mouse-operated cursor.
- the cursor points to avatar A in the example of FIG.
- For example, the estimation unit 131 may estimate that the user is gazing at avatar A, which is indicated by the cursor. Also, in the example of FIG. 10, when the cursor points to the table, the estimation unit 131 may estimate that the user is gazing at the table. In addition, when the cursor is outside the attention range, the estimation unit 131 may estimate that the user is gazing at the object in the attention range rather than the object indicated by the cursor. In this way, when the cursor indicated by the user operating the device is within the predetermined range, the estimation unit 131 estimates that the user is facing the object indicated by the cursor.
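The decision just described can be written as a small rule. The sketch below reuses the hypothetical `gaze_target` helper from the earlier sketch and only illustrates the branching; it is not the claimed implementation.

```python
def estimate_gaze(attention_box, cursor, objects):
    """If the cursor is inside the attention range, assume the user faces the
    object the cursor points at; otherwise fall back to the object(s) in the
    attention range.

    `cursor` is an (x, y) position; boxes are (x, y, w, h) tuples.
    """
    ax, ay, aw, ah = attention_box
    cx, cy = cursor
    cursor_inside = ax <= cx < ax + aw and ay <= cy < ay + ah

    if cursor_inside:
        for name, (x, y, w, h) in objects.items():
            if x <= cx < x + w and y <= cy < y + h:
                return name                          # object pointed at by the cursor
    return gaze_target(attention_box, objects)       # object(s) in the attention range
```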
- the setting reception unit 140 receives settings regarding the attention range from the user terminal 200 .
- the settings related to the attention range are, for example, the position, size, shape, etc. on the output image.
- the setting reception unit 140 receives setting information including at least one of the position, size, and shape of the attention range input by the user from the user terminal 200 . Then, the setting reception unit 140 sets the attention range based on the received setting information. In this way, the setting reception unit 140 receives the setting of at least one of the position, size and shape of the predetermined range.
- the setting receiving unit 140 is an example of setting receiving means.
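One plausible shape for the setting information exchanged here is sketched below in Python. The field names, the relative coordinates, and the defaults are assumptions made for illustration; the disclosure only states that at least one of the position, size, and shape is received.

```python
from dataclasses import dataclass

@dataclass
class AttentionRangeSetting:
    """User-supplied setting for the attention range; any of the position,
    size, and shape may be specified."""
    center_x: float = 0.5     # relative position of the range center (0..1)
    center_y: float = 0.5
    width: float = 0.5        # relative size of the range (0..1)
    height: float = 0.5
    shape: str = "rectangle"  # e.g. "rectangle" or "ellipse"

def to_pixel_box(setting: AttentionRangeSetting, img_w: int, img_h: int) -> tuple:
    """Convert a relative setting received from the user terminal into a
    pixel-coordinate (x, y, w, h) box on the output image."""
    w = int(setting.width * img_w)
    h = int(setting.height * img_h)
    x = int(setting.center_x * img_w - w / 2)
    y = int(setting.center_y * img_h - h / 2)
    return x, y, w, h

# Example: a user asks for a slightly larger, centered rectangle on 1280x720.
box = to_pixel_box(AttentionRangeSetting(width=0.6, height=0.6), 1280, 720)
```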
- FIG. 11 is a flowchart showing an example of the operation of the virtual space providing device 101. Specifically, FIG. 11 shows an operation example when the virtual space providing device 101 estimates the line of sight of the user.
- the setting reception unit 140 receives a setting related to the attention range from the user (S201). Specifically, the setting reception unit 140 receives setting information including at least one of the position, size, and shape of the attention range from the user terminal 200 . Then, the setting reception unit 140 sets the attention range based on the received setting information (S202).
- The processing from S203 to S206 is the same as the processing from S101 to S104 in FIG. 8, so the description is omitted.
- When the cursor does not exist within the attention range ("No" in S207), the estimation unit 131 estimates the line of sight based on the attention range of the output image (S208), similarly to the process of S105. When the cursor exists within the attention range ("Yes" in S207), the estimation unit 131 estimates the line of sight based on the position pointed to by the cursor (S209).
- The virtual space providing device 101 of the third embodiment may accept a setting of at least one of the position, size, and shape of the predetermined range. Thereby, the virtual space providing apparatus 101 can set the range desired by the user as the predetermined range.
- When the cursor indicated by the user operating the device is within the predetermined range, the virtual space providing apparatus 101 of the third embodiment can estimate that the user is facing the object indicated by the cursor. There is a high possibility that the user is paying attention to the location indicated by the user's own operation. With the configuration described above, the virtual space providing apparatus 101 can therefore estimate the line of sight of the user more accurately.
- FIG. 12 is a block diagram showing an example of the functional configuration of the virtual space providing device 102 of the fourth embodiment.
- the virtual space providing device 102 like the virtual space providing devices 100 and 101, is communicably connected to a plurality of user terminals 200 via a wireless or wired network.
- the virtual space providing device 102 includes a detection unit 110, an output control unit 120, an estimation unit 131, a setting reception unit 140, and an emotion estimation unit 150.
- the emotion estimation unit 150 acquires a photographed image taken by a photographing device and estimates the user's emotion reflected in the photographed image.
- the emotion estimating unit 150 for example, extracts a feature amount from an area in which the user's face appears in the captured image.
- Emotion estimation section 150 estimates an emotion based on the extracted feature amount and data indicating the relationship between the feature amount and emotion. Data indicating the relationship between the feature amount and emotion may be stored in advance in a storage device (not shown) included in the virtual space providing device 102 .
- the data indicating the relationship between the feature quantity and emotion may be stored in an external device that is communicably connected to the virtual space providing device 102 .
- the estimated emotions are predetermined emotions such as “happy”, “angry”, “sad”, “enjoyable”, “impatient”, and “tensed”. Further, when the emotion estimation unit 150 cannot estimate a characteristic emotion from the user, the emotion estimation unit 150 may estimate "calmness” indicating that the user is calm. The emotion estimation unit 150 may also estimate motions caused by emotions such as “laughing” and “crying”. Note that these are examples of estimated emotions, and emotions other than these may be estimated.
- A method of estimating the user's emotion from a captured image may be, for example, a method of estimating the emotion by pattern matching between the area of the captured image in which the user's face appears and images registered in an image database, each of which is associated with information indicating a person's emotion.
- the image database is stored in a storage device (not shown) of the virtual space providing device 102, for example.
- The method of estimating the user's emotion from a photographed image may also be a method of extracting a feature amount of the user from the area of the photographed image in which the user's face appears, and outputting an emotion corresponding to the feature amount using an estimation model, such as a neural network, that takes the extracted feature amount as input.
- the emotion estimating unit 150 estimates the user's emotion with respect to an object appearing within a predetermined range, based on the photographed image of the user captured by the photographing device.
- Emotion estimation unit 150 is an example of an emotion estimation means.
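To make the pipeline concrete, the following is a hedged Python sketch of the feature-extraction-and-model approach. OpenCV's bundled Haar cascade is used only as a stand-in face detector, and `classifier` is a placeholder for whatever trained model maps a face region to an emotion label; none of these choices are specified by the disclosure.

```python
import cv2

EMOTIONS = ["happy", "angry", "sad", "enjoyable", "impatient", "tense", "calm"]

def estimate_emotion(captured_bgr, classifier):
    """Estimate the user's emotion from a captured image of the user.

    `classifier` stands in for any trained model (e.g. a neural network) that
    maps a face crop to one of EMOTIONS; its interface is an assumption.
    """
    gray = cv2.cvtColor(captured_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        # Fallback when no face region is found (an assumption; the disclosure's
        # "calmness" fallback applies when no characteristic emotion is estimated).
        return "calm"
    x, y, w, h = faces[0]                   # use the first detected face region
    face_crop = captured_bgr[y:y + h, x:x + w]
    return classifier.predict(face_crop)    # e.g. a model trained on emotion labels
```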
- FIG. 13 is a diagram showing an example of an output image.
- a product shelf on which products A, B, and C are arranged appears in the output image.
- the estimation unit 131 estimates that the user is gazing at the product B, and the emotion estimation unit 150 estimates that the user's emotion is "happy". In this case, it can be seen that the user has a positive reaction to product B.
- the emotion estimation unit 150 may store information that associates the object being gazed with the user's emotion.
- the emotion estimation unit 150 may add information indicating the estimated user's emotion to the avatar operated by the user. At this time, the emotion estimation unit 150 may add characters, symbols, colors, and the like according to the emotion to the avatar. Emotion estimation section 150 may change the facial expression of the avatar or change the shape of the avatar according to the emotion. Furthermore, when adding information indicating the user's emotion to the avatar, the emotion estimation unit 150 may further add to the avatar information indicating what the emotion is for.
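A minimal sketch of what adding information indicating the emotion to the avatar might look like in code is shown below; the dictionary-based avatar record, the key names, and the color mapping are assumptions for illustration.

```python
from typing import Optional

def annotate_avatar(avatar: dict, emotion: str, target: Optional[str] = None) -> dict:
    """Attach the estimated emotion (and, optionally, what it is directed at)
    to the avatar's display attributes, e.g. as a label and a color.

    `avatar` is a plain dict standing in for whatever avatar record the
    platform keeps; the keys and the color mapping are illustrative.
    """
    colors = {"happy": "yellow", "angry": "red", "sad": "blue"}
    avatar["emotion_label"] = emotion                     # e.g. text shown near the avatar
    avatar["emotion_color"] = colors.get(emotion, "gray")
    if target is not None:
        avatar["emotion_target"] = target                 # what the emotion is about
    return avatar

# Example: mark that the user reacted positively to product B.
annotate_avatar({"id": "user-1"}, "happy", target="product B")
```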
- FIG. 14 is a flowchart showing an example of the operation of the virtual space providing device 102. Specifically, FIG. 14 shows an operation example when the virtual space providing device 102 estimates the user's emotion.
- the processing from S301 to S305 is the same as the processing from S101 to S105 in FIG. 8, so the description is omitted.
- the emotion estimation unit 150 estimates the user's emotion based on the captured image captured by the user terminal 200 (S306). Emotion estimation unit 150 then adds information indicating the estimated emotion to the avatar (S307).
- the operation example of FIG. 14 is merely an example of the operation of the virtual space providing device 102.
- the virtual space providing apparatus 102 may perform the operations of S306 and S307 after the processing of S208 or S209.
- the virtual space providing apparatus 102 of the fourth embodiment may estimate the user's emotion with respect to the object appearing within a predetermined range, based on the photographed image of the user captured by the photographing device. Thereby, the virtual space providing apparatus 102 can acquire the user's emotion with respect to the estimated target to which the user's gaze is directed.
- the virtual space providing device 102 estimates the line of sight of the user based on the predetermined range of the output image. Therefore, the virtual space providing apparatus 102 can reduce the computational load for estimating the line of sight compared to the method of estimating the line of sight of the user from the photographed image showing the user's face.
- In the embodiments described above, examples have been described in which the process of estimating the line of sight and the process of estimating the emotion are performed by the virtual space providing apparatus.
- the process of estimating a line of sight and the process of estimating an emotion may be performed in the user terminal 200, for example.
- the estimator 130 or 131 and the emotion estimator 150 may also be provided in the user terminal 200 .
- the user terminal 200 estimates the line of sight of the user based on the attention range of the output image. Then, the user terminal 200 may transmit information about the estimated line of sight of the user to the virtual space providing apparatus.
- the user terminal 200 captures the user's face and estimates the user's emotion based on the captured image. Then, the user terminal 200 may transmit information indicating the estimated user's emotion to the virtual space providing apparatus.
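If the estimation is moved to the user terminal as described here, the terminal only needs to report compact results to the virtual space providing device, so the captured face image itself never has to leave the terminal. The sketch below uses only the Python standard library; the endpoint, field names, and transport are assumptions, since the disclosure does not specify a protocol.

```python
import json
from urllib import request

def report_estimates(server_url: str, user_id: str, gazed_object: str, emotion: str) -> int:
    """Send the gaze target and emotion estimated on the user terminal to the
    virtual space providing device as a small JSON payload."""
    payload = json.dumps({
        "user_id": user_id,
        "gazed_object": gazed_object,   # e.g. "product B"
        "emotion": emotion,             # e.g. "happy"
    }).encode("utf-8")
    req = request.Request(server_url, data=payload,
                          headers={"Content-Type": "application/json"},
                          method="POST")
    with request.urlopen(req) as resp:  # report the estimation result
        return resp.status
```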
- the virtual space providing device presumes that the user is paying attention to the product B, and also presumes the user's feelings toward the product B.
- the manager of the virtual store can estimate which users have feelings for which products.
- the administrator can obtain, for example, the content of the product and the reaction of the customer (user) according to the description of the product. Therefore, the manager can analyze for product improvement and examine sales methods, etc., based on the customer's reaction.
- FIG. 15 is a block diagram showing an example of the hardware configuration of a computer that implements the virtual space providing device in each embodiment.
- the computer device 90 implements the virtual space providing device and the virtual space providing method described in each embodiment and each modified example.
- the computer device 90 includes a processor 91, a RAM (Random Access Memory) 92, a ROM (Read Only Memory) 93, a storage device 94, an input/output interface 95, a bus 96, and a drive device 97.
- the virtual space providing device may be realized by a plurality of electric circuits.
- the storage device 94 stores a program (computer program) 98.
- Processor 91 uses RAM 92 to execute program 98 of the virtual space providing apparatus.
- the program 98 includes a program that causes a computer to execute the processes shown in FIGS. 4, 8, 11 and 14.
- As the processor 91 executes the program 98, the function of each component of the virtual space providing apparatus is realized.
- Program 98 may be stored in ROM 93 .
- the program 98 may be recorded on the storage medium 80 and read using the drive device 97, or may be transmitted from an external device (not shown) to the computer device 90 via a network (not shown).
- the input/output interface 95 exchanges data with peripheral devices (keyboard, mouse, display device, etc.) 99 .
- the input/output interface 95 functions as means for acquiring or outputting data.
- a bus 96 connects each component.
- the virtual space providing device can be implemented as a dedicated device.
- the virtual space providing device can be realized based on a combination of a plurality of devices.
- A processing method in which a program for realizing the components of each embodiment is recorded on a storage medium, and the program recorded on the storage medium is read as code and executed by a computer, is also included in the scope of each embodiment. That is, a computer-readable storage medium is also included in the scope of each embodiment. Furthermore, each embodiment includes the storage medium on which the above-described program is recorded, as well as the program itself.
- the storage medium is, for example, a floppy (registered trademark) disk, hard disk, optical disk, magneto-optical disk, CD (Compact Disc)-ROM, magnetic tape, non-volatile memory card, or ROM, but is not limited to this example.
- The programs recorded on the storage medium are not limited to programs that execute processing by themselves; programs that run on an OS (Operating System) and execute processing in cooperation with other software or the functions of an expansion board are also included in the scope of each embodiment.
- A virtual space providing device including: detection means for detecting the orientation of an avatar in a virtual space, the orientation changing according to a user's operation; output control means for performing control to output to the user an output image that is an image corresponding to the orientation of the avatar in the virtual space and in which the display mode outside a predetermined range on the image has been changed; and estimation means for estimating the line of sight of the user based on the predetermined range of the output image.
- The output control means performs control to output to the user the output image in which the outside of the predetermined range is blurred (the virtual space providing device according to Appendix 1).
- When the predetermined range includes a plurality of objects, the estimation means estimates that the user's line of sight is directed toward an object closer to the center of the predetermined range (Appendix 3).
- The output control means includes: image generation means for generating, as the output image, an image of the viewpoint from the avatar in which the display mode outside the predetermined range has been changed; and image transmission means for transmitting the generated output image to a display device used by the user (Appendix 4; the virtual space providing device according to any one of Appendices 1 to 3).
- Appendix 6: setting reception means for receiving a setting of at least one of the position, size, and shape of the predetermined range (the virtual space providing device according to any one of Appendices 1 to 5).
- Appendix 8: emotion estimation means for estimating the user's emotion with respect to an object appearing in the predetermined range, based on a captured image of the user captured by a photographing device (the virtual space providing device according to any one of Appendices 1 to 7).
- The emotion estimation means adds information indicating the estimated emotion of the user to the avatar operated by the user.
- The predetermined range is a range including the center of the output image (Appendix 10; the virtual space providing device according to any one of Appendices 1 to 9).
- Appendix 13: in the step of estimating the user's line of sight, when the predetermined range includes a plurality of objects, it is estimated that the user's line of sight is directed toward an object closer to the center of the predetermined range (the virtual space providing method according to Appendix 11 or 12).
- Appendix 16: receiving a setting of at least one of the position, size, and shape of the predetermined range (the virtual space providing method according to any one of Appendices 11 to 15).
- Appendix 18: estimating the user's emotion with respect to an object appearing in the predetermined range, based on a captured image of the user captured by a photographing device (the virtual space providing method according to any one of Appendices 11 to 17).
- The predetermined range is a range including the center of the output image (Appendix 20).
- Appendix 21: a computer-readable storage medium storing a program that causes a computer to execute: a process of detecting the orientation of an avatar in a virtual space, the orientation changing according to a user's operation; a process of performing control to output to the user an output image that is an image corresponding to the orientation of the avatar in the virtual space and in which the display mode outside a predetermined range on the image has been changed; and a process of estimating the line of sight of the user based on the predetermined range of the output image.
- Appendix 23: in the process of estimating the user's line of sight, when the predetermined range includes a plurality of objects, it is estimated that the user's line of sight is directed toward an object closer to the center of the predetermined range.
- Appendix 24: in the controlling process, an image of the viewpoint from the avatar in which the display mode outside the predetermined range has been changed is generated as the output image, and the generated output image is transmitted to a display device used by the user (the computer-readable storage medium according to any one of Appendices 21 to 23).
- Appendix 26: a program that causes a computer to further execute a process of receiving a setting of at least one of the position, size, and shape of the predetermined range is stored (the computer-readable storage medium according to any one of Appendices 21 to 25).
- Appendix 28: a program that causes a computer to further execute a process of estimating the user's emotion with respect to an object appearing in the predetermined range, based on a captured image of the user captured by a photographing device, is stored (the computer-readable storage medium according to any one of Appendices 21 to 27).
- The predetermined range is a range including the center of the output image (Appendix 30; the computer-readable storage medium according to any one of Appendices 21 to 29).
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
Description
An overview of the virtual space providing device of the present disclosure will be described.
Next, the virtual space providing device of the second embodiment will be described. In the second embodiment, the virtual space providing device 100 described in the first embodiment will be described in more detail.
FIG. 5 is a block diagram showing an example of the functional configuration of the virtual space providing device 100 of the second embodiment. As shown in FIG. 5, the virtual space providing device 100 includes a detection unit 110, an output control unit 120, and an estimation unit 130.
Next, an example of the operation of the virtual space providing device 100 of the second embodiment will be described with reference to FIG. 8. FIG. 8 is a flowchart showing an example of the operation of the virtual space providing device 100. Specifically, FIG. 8 shows an operation example when the virtual space providing device 100 estimates the line of sight of the user.
Next, the virtual space providing device of the third embodiment will be described. The third embodiment mainly describes processing in response to user operations. Part of the description that overlaps with the first and second embodiments is omitted.
FIG. 9 is a block diagram showing an example of the functional configuration of the virtual space providing device 101 of the third embodiment. Like the virtual space providing device 100, the virtual space providing device 101 is communicably connected to a plurality of user terminals 200 via a wireless or wired network.
Next, an example of the operation of the virtual space providing device 101 of the third embodiment will be described with reference to FIG. 11. FIG. 11 is a flowchart showing an example of the operation of the virtual space providing device 101. Specifically, FIG. 11 shows an operation example when the virtual space providing device 101 estimates the line of sight of the user.
Next, the virtual space providing device of the fourth embodiment will be described. The fourth embodiment mainly describes an example that uses processing for estimating the user's emotion. Part of the description that overlaps with the first, second, and third embodiments is omitted.
FIG. 12 is a block diagram showing an example of the functional configuration of the virtual space providing device 102 of the fourth embodiment. Like the virtual space providing devices 100 and 101, the virtual space providing device 102 is communicably connected to a plurality of user terminals 200 via a wireless or wired network.
Next, an example of the operation of the virtual space providing device 102 of the fourth embodiment will be described with reference to FIG. 14. FIG. 14 is a flowchart showing an example of the operation of the virtual space providing device 102. Specifically, FIG. 14 shows an operation example when the virtual space providing device 102 estimates the user's emotion.
In the first to fourth embodiments, examples have been described in which the process of estimating the line of sight and the process of estimating the emotion are performed by the virtual space providing device. These processes may instead be performed in the user terminal 200, for example. In other words, the estimation unit 130 or 131 and the emotion estimation unit 150 may also be provided in the user terminal 200. For example, the user terminal 200 estimates the user's line of sight based on the attention range of the output image, and then transmits information about the estimated line of sight to the virtual space providing device. Also, for example, the user terminal 200 photographs the user's face, estimates the user's emotion based on the captured image, and then transmits information indicating the estimated emotion to the virtual space providing device.
Next, examples of scenes to which the virtual space providing device of the present disclosure is applied will be described. The following descriptions are merely examples, and the scenes to which the virtual space providing device of the present disclosure is applied are not limited to them.
When a company or the like engages in telework, employees communicate by, for example, e-mail or chat. However, with e-mail and chat alone, it is difficult for an employee to grasp the state of other employees.
For example, suppose a seminar is held in the virtual space. In such a case, a user who is the speaker of the seminar can grasp where the users in the audience are looking during the seminar. The speaker can also grasp what emotions the audience felt. By using this information, the speaker can obtain feedback on the content of the lecture, for example. Therefore, if the feedback shows that the audience did not understand the talk well, the speaker can add explanations as necessary.
For example, suppose a virtual store imitating a real store is built in the virtual space. In this case, users shop in the virtual store using their avatars.
The hardware constituting the virtual space providing devices of the first, second, third, and fourth embodiments described above will be described. FIG. 15 is a block diagram showing an example of the hardware configuration of a computer device that implements the virtual space providing device in each embodiment. The computer device 90 implements the virtual space providing device and the virtual space providing method described in each embodiment and each modified example.
<Supplementary notes>
[Supplementary note 1]
A virtual space providing device comprising: detection means for detecting the orientation of an avatar in a virtual space, the orientation changing according to a user's operation; output control means for performing control to output, to the user, an output image that is an image corresponding to the orientation of the avatar in the virtual space and in which a display mode outside a predetermined range on the image has been changed; and estimation means for estimating a line of sight of the user based on the predetermined range of the output image.
[Supplementary note 2]
The virtual space providing device according to supplementary note 1, wherein the output control means performs control to output, to the user, the output image in which the outside of the predetermined range is blurred.
[Supplementary note 3]
The virtual space providing device according to supplementary note 1 or 2, wherein, when a plurality of objects are included in the predetermined range, the estimation means estimates that the line of sight of the user is directed toward an object closer to the center of the predetermined range.
[Supplementary note 4]
The virtual space providing device according to any one of supplementary notes 1 to 3, wherein the output control means comprises: image generation means for generating, as the output image, an image of a viewpoint from the avatar in which the display mode outside the predetermined range has been changed; and image transmission means for transmitting the generated output image to a display device used by the user.
[Supplementary note 5]
The virtual space providing device according to supplementary note 4, wherein, when an object appears within a predetermined distance from the center of the predetermined range and the object appearing within the predetermined distance also appears outside the predetermined range, the image generation means generates the output image in which the display mode is changed for a range that is outside the predetermined range and does not include the object appearing within the predetermined distance.
[Supplementary note 6]
The virtual space providing device according to any one of supplementary notes 1 to 5, further comprising setting reception means for receiving a setting of at least one of a position, a size, and a shape of the predetermined range.
[Supplementary note 7]
The virtual space providing device according to any one of supplementary notes 1 to 6, wherein, when a cursor indicated by the user operating a device is within the predetermined range, the estimation means estimates that the user is facing an object pointed to by the cursor.
[Supplementary note 8]
The virtual space providing device according to any one of supplementary notes 1 to 7, further comprising emotion estimation means for estimating an emotion of the user with respect to an object appearing in the predetermined range, based on a captured image of the user captured by a photographing device.
[Supplementary note 9]
The virtual space providing device according to supplementary note 8, wherein the emotion estimation means adds information indicating the estimated emotion of the user to the avatar operated by the user.
[Supplementary note 10]
The virtual space providing device according to any one of supplementary notes 1 to 9, wherein the predetermined range is a range including the center of the output image.
[Supplementary note 11]
A virtual space providing method comprising: detecting the orientation of an avatar in a virtual space, the orientation changing according to a user's operation; performing control to output, to the user, an output image that is an image corresponding to the orientation of the avatar in the virtual space and in which a display mode outside a predetermined range on the image has been changed; and estimating a line of sight of the user based on the predetermined range of the output image.
[Supplementary note 12]
The virtual space providing method according to supplementary note 11, wherein, in the controlling, control is performed to output, to the user, the output image in which the outside of the predetermined range is blurred.
[Supplementary note 13]
The virtual space providing method according to supplementary note 11 or 12, wherein, in the estimating of the line of sight of the user, when a plurality of objects are included in the predetermined range, it is estimated that the line of sight of the user is directed toward an object closer to the center of the predetermined range.
[Supplementary note 14]
The virtual space providing method according to any one of supplementary notes 11 to 13, wherein the controlling includes: generating, as the output image, an image of a viewpoint from the avatar in which the display mode outside the predetermined range has been changed; and transmitting the generated output image to a display device used by the user.
[Supplementary note 15]
The virtual space providing method according to supplementary note 13, wherein, in the generating, when an object appears within a predetermined distance from the center of the predetermined range and the object appearing within the predetermined distance also appears outside the predetermined range, the output image is generated in which the display mode is changed for a range that is outside the predetermined range and does not include the object appearing within the predetermined distance.
[Supplementary note 16]
The virtual space providing method according to any one of supplementary notes 11 to 15, further comprising receiving a setting of at least one of a position, a size, and a shape of the predetermined range.
[Supplementary note 17]
The virtual space providing method according to any one of supplementary notes 11 to 16, wherein, in the estimating of the line of sight of the user, when a cursor indicated by the user operating a device is within the predetermined range, it is estimated that the user is facing an object pointed to by the cursor.
[Supplementary note 18]
The virtual space providing method according to any one of supplementary notes 11 to 17, further comprising estimating an emotion of the user with respect to an object appearing in the predetermined range, based on a captured image of the user captured by a photographing device.
[Supplementary note 19]
The virtual space providing method according to supplementary note 18, wherein, in the estimating of the emotion of the user, information indicating the estimated emotion of the user is added to the avatar operated by the user.
[Supplementary note 20]
The virtual space providing method according to any one of supplementary notes 11 to 19, wherein the predetermined range is a range including the center of the output image.
[Supplementary note 21]
A computer-readable storage medium storing a program that causes a computer to execute: a process of detecting the orientation of an avatar in a virtual space, the orientation changing according to a user's operation; a process of performing control to output, to the user, an output image that is an image corresponding to the orientation of the avatar in the virtual space and in which a display mode outside a predetermined range on the image has been changed; and a process of estimating a line of sight of the user based on the predetermined range of the output image.
[Supplementary note 22]
The computer-readable storage medium according to supplementary note 21, wherein, in the controlling process, control is performed to output, to the user, the output image in which the outside of the predetermined range is blurred.
[Supplementary note 23]
The computer-readable storage medium according to supplementary note 21 or 22, wherein, in the process of estimating the line of sight of the user, when a plurality of objects are included in the predetermined range, it is estimated that the line of sight of the user is directed toward an object closer to the center of the predetermined range.
[Supplementary note 24]
The computer-readable storage medium according to any one of supplementary notes 21 to 23, wherein, in the controlling process, an image of a viewpoint from the avatar in which the display mode outside the predetermined range has been changed is generated as the output image, and the generated output image is transmitted to a display device used by the user.
[Supplementary note 25]
The computer-readable storage medium according to supplementary note 24, wherein, in the generating process, when an object appears within a predetermined distance from the center of the predetermined range and the object appearing within the predetermined distance also appears outside the predetermined range, the output image is generated in which the display mode is changed for a range that is outside the predetermined range and does not include the object appearing within the predetermined distance.
[Supplementary note 26]
The computer-readable storage medium according to any one of supplementary notes 21 to 25, storing a program that causes the computer to further execute a process of receiving a setting of at least one of a position, a size, and a shape of the predetermined range.
[Supplementary note 27]
The computer-readable storage medium according to any one of supplementary notes 21 to 26, wherein, in the process of estimating the line of sight of the user, when a cursor indicated by the user operating a device is within the predetermined range, it is estimated that the user is facing an object pointed to by the cursor.
[Supplementary note 28]
The computer-readable storage medium according to any one of supplementary notes 21 to 27, storing a program that causes the computer to further execute a process of estimating an emotion of the user with respect to an object appearing in the predetermined range, based on a captured image of the user captured by a photographing device.
[Supplementary note 29]
The computer-readable storage medium according to supplementary note 28, wherein, in the process of estimating the emotion of the user, information indicating the estimated emotion of the user is added to the avatar operated by the user.
[Supplementary note 30]
The computer-readable storage medium according to any one of supplementary notes 21 to 29, wherein the predetermined range is a range including the center of the output image.
110 detection unit
120 output control unit
121 image generation unit
122 image transmission unit
130, 131 estimation unit
140 setting reception unit
150 emotion estimation unit
200 user terminal
Claims (30)
- A virtual space providing device comprising: detection means for detecting the orientation of an avatar in a virtual space, the orientation changing according to a user's operation; output control means for performing control to output, to the user, an output image that is an image corresponding to the orientation of the avatar in the virtual space and in which a display mode outside a predetermined range on the image has been changed; and estimation means for estimating a line of sight of the user based on the predetermined range of the output image.
- The virtual space providing device according to claim 1, wherein the output control means performs control to output, to the user, the output image in which the outside of the predetermined range is blurred.
- The virtual space providing device according to claim 1 or 2, wherein, when a plurality of objects are included in the predetermined range, the estimation means estimates that the line of sight of the user is directed toward an object closer to the center of the predetermined range.
- The virtual space providing device according to any one of claims 1 to 3, wherein the output control means comprises: image generation means for generating, as the output image, an image of a viewpoint from the avatar in which the display mode outside the predetermined range has been changed; and image transmission means for transmitting the generated output image to a display device used by the user.
- The virtual space providing device according to claim 4, wherein, when an object appears within a predetermined distance from the center of the predetermined range and the object appearing within the predetermined distance also appears outside the predetermined range, the image generation means generates the output image in which the display mode is changed for a range that is outside the predetermined range and does not include the object appearing within the predetermined distance.
- The virtual space providing device according to any one of claims 1 to 5, further comprising setting reception means for receiving a setting of at least one of a position, a size, and a shape of the predetermined range.
- The virtual space providing device according to any one of claims 1 to 6, wherein, when a cursor indicated by the user operating a device is within the predetermined range, the estimation means estimates that the user is facing an object pointed to by the cursor.
- The virtual space providing device according to any one of claims 1 to 7, further comprising emotion estimation means for estimating an emotion of the user with respect to an object appearing in the predetermined range, based on a captured image of the user captured by a photographing device.
- The virtual space providing device according to claim 8, wherein the emotion estimation means adds information indicating the estimated emotion of the user to the avatar operated by the user.
- The virtual space providing device according to any one of claims 1 to 9, wherein the predetermined range is a range including the center of the output image.
- A virtual space providing method comprising: detecting the orientation of an avatar in a virtual space, the orientation changing according to a user's operation; performing control to output, to the user, an output image that is an image corresponding to the orientation of the avatar in the virtual space and in which a display mode outside a predetermined range on the image has been changed; and estimating a line of sight of the user based on the predetermined range of the output image.
- The virtual space providing method according to claim 11, wherein, in the controlling, control is performed to output, to the user, the output image in which the outside of the predetermined range is blurred.
- The virtual space providing method according to claim 11 or 12, wherein, in the estimating of the line of sight of the user, when a plurality of objects are included in the predetermined range, it is estimated that the line of sight of the user is directed toward an object closer to the center of the predetermined range.
- The virtual space providing method according to any one of claims 11 to 13, wherein the controlling includes: generating, as the output image, an image of a viewpoint from the avatar in which the display mode outside the predetermined range has been changed; and transmitting the generated output image to a display device used by the user.
- The virtual space providing method according to claim 14, wherein, in the generating, when an object appears within a predetermined distance from the center of the predetermined range and the object appearing within the predetermined distance also appears outside the predetermined range, the output image is generated in which the display mode is changed for a range that is outside the predetermined range and does not include the object appearing within the predetermined distance.
- The virtual space providing method according to any one of claims 11 to 15, further comprising receiving a setting of at least one of a position, a size, and a shape of the predetermined range.
- The virtual space providing method according to any one of claims 11 to 16, wherein, in the estimating of the line of sight of the user, when a cursor indicated by the user operating a device is within the predetermined range, it is estimated that the user is facing an object pointed to by the cursor.
- The virtual space providing method according to any one of claims 11 to 17, further comprising estimating an emotion of the user with respect to an object appearing in the predetermined range, based on a captured image of the user captured by a photographing device.
- The virtual space providing method according to claim 18, wherein, in the estimating of the emotion of the user, information indicating the estimated emotion of the user is added to the avatar operated by the user.
- The virtual space providing method according to any one of claims 11 to 19, wherein the predetermined range is a range including the center of the output image.
- A computer-readable storage medium storing a program that causes a computer to execute: a process of detecting the orientation of an avatar in a virtual space, the orientation changing according to a user's operation; a process of performing control to output, to the user, an output image that is an image corresponding to the orientation of the avatar in the virtual space and in which a display mode outside a predetermined range on the image has been changed; and a process of estimating a line of sight of the user based on the predetermined range of the output image.
- The computer-readable storage medium according to claim 21, wherein, in the controlling process, control is performed to output, to the user, the output image in which the outside of the predetermined range is blurred.
- The computer-readable storage medium according to claim 21 or 22, wherein, in the process of estimating the line of sight of the user, when a plurality of objects are included in the predetermined range, it is estimated that the line of sight of the user is directed toward an object closer to the center of the predetermined range.
- The computer-readable storage medium according to any one of claims 21 to 23, wherein, in the controlling process, an image of a viewpoint from the avatar in which the display mode outside the predetermined range has been changed is generated as the output image, and the generated output image is transmitted to a display device used by the user.
- The computer-readable storage medium according to claim 24, wherein, in the generating process, when an object appears within a predetermined distance from the center of the predetermined range and the object appearing within the predetermined distance also appears outside the predetermined range, the output image is generated in which the display mode is changed for a range that is outside the predetermined range and does not include the object appearing within the predetermined distance.
- The computer-readable storage medium according to any one of claims 21 to 25, storing a program that causes the computer to further execute a process of receiving a setting of at least one of a position, a size, and a shape of the predetermined range.
- The computer-readable storage medium according to any one of claims 21 to 26, wherein, in the process of estimating the line of sight of the user, when a cursor indicated by the user operating a device is within the predetermined range, it is estimated that the user is facing an object pointed to by the cursor.
- The computer-readable storage medium according to any one of claims 21 to 27, storing a program that causes the computer to further execute a process of estimating an emotion of the user with respect to an object appearing in the predetermined range, based on a captured image of the user captured by a photographing device.
- The computer-readable storage medium according to claim 28, wherein, in the process of estimating the emotion of the user, information indicating the estimated emotion of the user is added to the avatar operated by the user.
- The computer-readable storage medium according to any one of claims 21 to 29, wherein the predetermined range is a range including the center of the output image.
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2021/032507 WO2023032173A1 (ja) | 2021-09-03 | 2021-09-03 | Virtual space providing device, virtual space providing method, and computer-readable storage medium |
| JP2023544953A JPWO2023032173A5 (ja) | 2021-09-03 | Virtual space providing device, virtual space providing method, and computer program | |
| US18/685,017 US12475652B2 (en) | 2021-09-03 | 2021-09-03 | Virtual space providing device, virtual space providing method, and computer-readable storage medium |
| US19/293,015 US20250363750A1 (en) | 2021-09-03 | 2025-08-07 | Virtual space providing device, virtual space providing method, and computer-readable storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2021/032507 WO2023032173A1 (ja) | 2021-09-03 | 2021-09-03 | Virtual space providing device, virtual space providing method, and computer-readable storage medium |
Related Child Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US18/685,017 A-371-Of-International US12475652B2 (en) | 2021-09-03 | 2021-09-03 | Virtual space providing device, virtual space providing method, and computer-readable storage medium |
| US19/293,015 Continuation US20250363750A1 (en) | 2021-09-03 | 2025-08-07 | Virtual space providing device, virtual space providing method, and computer-readable storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023032173A1 true WO2023032173A1 (ja) | 2023-03-09 |
Family
ID=85411786
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2021/032507 Ceased WO2023032173A1 (ja) | 2021-09-03 | 2021-09-03 | Virtual space providing device, virtual space providing method, and computer-readable storage medium |
Country Status (2)
| Country | Link |
|---|---|
| US (2) | US12475652B2 (ja) |
| WO (1) | WO2023032173A1 (ja) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2019128683A (ja) * | 2018-01-22 | 2019-08-01 | システムインテリジェント株式会社 | オフィス用バーチャルリアリティシステム、及びオフィス用バーチャルリアリティプログラム |
| WO2020217326A1 (ja) * | 2019-04-23 | 2020-10-29 | マクセル株式会社 | ヘッドマウントディスプレイ装置 |
| WO2020250377A1 (ja) * | 2019-06-13 | 2020-12-17 | マクセル株式会社 | ヘッドマウント情報処理装置およびその制御方法 |
| JP2021021889A (ja) * | 2019-07-30 | 2021-02-18 | セイコーエプソン株式会社 | 表示装置および表示方法 |
| JP2021507355A (ja) * | 2017-12-14 | 2021-02-22 | マジック リープ, インコーポレイテッドMagic Leap,Inc. | 仮想アバタのコンテキストベースのレンダリング |
| WO2021153577A1 (ja) * | 2020-01-31 | 2021-08-05 | ソニーグループ株式会社 | 視線検出装置のキャリブレーション |
Family Cites Families (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2005018212A (ja) | 2003-06-24 | 2005-01-20 | Sora Technology Corp | ネットワーク上の情報コンテンツに対するユーザの反応を把握する情報収集方法及びシステム |
| US8793620B2 (en) * | 2011-04-21 | 2014-07-29 | Sony Computer Entertainment Inc. | Gaze-assisted computer interface |
| JP5296337B2 (ja) * | 2007-07-09 | 2013-09-25 | 任天堂株式会社 | 画像処理プログラム、画像処理装置、画像処理システムおよび画像処理方法 |
| JP5294612B2 (ja) | 2007-11-15 | 2013-09-18 | インターナショナル・ビジネス・マシーンズ・コーポレーション | 仮想共有空間における参照マークの自動生成方法、装置及びプログラム |
| JP4850885B2 (ja) * | 2008-10-02 | 2012-01-11 | 株式会社コナミデジタルエンタテインメント | ゲーム装置、ゲーム装置の制御方法及びプログラム |
| US9897805B2 (en) * | 2013-06-07 | 2018-02-20 | Sony Interactive Entertainment Inc. | Image rendering responsive to user actions in head mounted display |
| US20180061084A1 (en) * | 2016-08-24 | 2018-03-01 | Disney Enterprises, Inc. | System and method of bandwidth-sensitive rendering of a focal area of an animation |
| JP6298561B1 (ja) * | 2017-05-26 | 2018-03-20 | 株式会社コロプラ | ヘッドマウントデバイスと通信可能なコンピュータによって実行されるプログラム、当該プログラムを実行するための情報処理装置、およびヘッドマウントデバイスと通信可能なコンピュータによって実行される方法 |
| JP6878350B2 (ja) | 2018-05-01 | 2021-05-26 | グリー株式会社 | ゲーム処理プログラム、ゲーム処理方法、および、ゲーム処理装置 |
| JP2020004060A (ja) | 2018-06-27 | 2020-01-09 | 株式会社コロプラ | プログラム、情報処理装置および方法 |
| US10719127B1 (en) * | 2018-08-29 | 2020-07-21 | Rockwell Collins, Inc. | Extended life display by utilizing eye tracking |
| US20220237849A1 (en) * | 2019-06-28 | 2022-07-28 | Tobii Ab | Method and system for reducing processor load in a computer |
-
2021
- 2021-09-03 US US18/685,017 patent/US12475652B2/en active Active
- 2021-09-03 WO PCT/JP2021/032507 patent/WO2023032173A1/ja not_active Ceased
-
2025
- 2025-08-07 US US19/293,015 patent/US20250363750A1/en active Pending
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2021507355A (ja) * | 2017-12-14 | 2021-02-22 | マジック リープ, インコーポレイテッドMagic Leap,Inc. | 仮想アバタのコンテキストベースのレンダリング |
| JP2019128683A (ja) * | 2018-01-22 | 2019-08-01 | システムインテリジェント株式会社 | オフィス用バーチャルリアリティシステム、及びオフィス用バーチャルリアリティプログラム |
| WO2020217326A1 (ja) * | 2019-04-23 | 2020-10-29 | マクセル株式会社 | ヘッドマウントディスプレイ装置 |
| WO2020250377A1 (ja) * | 2019-06-13 | 2020-12-17 | マクセル株式会社 | ヘッドマウント情報処理装置およびその制御方法 |
| JP2021021889A (ja) * | 2019-07-30 | 2021-02-18 | セイコーエプソン株式会社 | 表示装置および表示方法 |
| WO2021153577A1 (ja) * | 2020-01-31 | 2021-08-05 | ソニーグループ株式会社 | 視線検出装置のキャリブレーション |
Non-Patent Citations (1)
| Title |
|---|
| KAORU TANAKA: "The Tracking System of the Visual Information by HMD Restricted with the Mask", TRANSACTIONS OF THE VIRTUAL REALITY SOCIETY OF JAPAN, vol. 7, no. 2, 1 January 2002 (2002-01-01), pages 257 - 266, XP093042411, ISSN: 1344-011X, DOI: 10.18974/tvrsj.7.2_257 * |
Also Published As
| Publication number | Publication date |
|---|---|
| JPWO2023032173A1 (ja) | 2023-03-09 |
| US12475652B2 (en) | 2025-11-18 |
| US20250131654A1 (en) | 2025-04-24 |
| US20250363750A1 (en) | 2025-11-27 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP6782846B2 (ja) | Collaborative manipulation of objects in virtual reality | |
| EP3769509B1 (en) | Multi-endpoint mixed-reality meetings | |
| WO2019005332A1 (en) | PROVISION OF LIVING AVATARS IN VIRTUAL MEETINGS | |
| JP7408068B2 (ja) | Computer program, server device, and method | |
| US12243120B2 (en) | Content distribution system, content distribution method, and content distribution program | |
| CN107210949A (zh) | Message service method using a character, user terminal executing the method, and message application including the method | |
| JP2023524119A (ja) | Face image generation method and apparatus, electronic device, and readable storage medium | |
| US20210311608A1 (en) | Displaying Representations of Environments | |
| US20240420400A1 (en) | Terminal, information processing method, program, and recording medium | |
| JPWO2018135057A1 (ja) | Information processing device, information processing method, and program | |
| JP2023067892A (ja) | Information processing program, information processing method, and information processing device | |
| CN113413600A (zh) | Information processing method and apparatus, computer device, and storage medium | |
| US10788887B2 (en) | Image generation program, image generation device, and image generation method | |
| Nijholt | Capturing obstructed nonverbal cues in augmented reality interactions: a short survey | |
| WO2023032173A1 (ja) | Virtual space providing device, virtual space providing method, and computer-readable storage medium | |
| US20250355489A1 (en) | Virtual space providing device, virtual space providing method, and computer-readable storage medium | |
| WO2024116529A1 (ja) | System and system control method | |
| JP2021002145A (ja) | Virtual space providing system, virtual space providing method, and program | |
| US11740773B2 (en) | Information processing device and method | |
| US20240259528A1 (en) | Provision system and control method for same | |
| WO2025100190A1 (ja) | Information processing device, method, and program | |
| GB2639137A (en) | Electronic device | |
| JP2024044908A (ja) | Method, program, and terminal device | |
| WO2024252695A1 (ja) | Program, information processing method, and information processing system | |
| WO2024083302A1 (en) | Virtual portal between physical space and virtual space in extended reality environments |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21956060 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2023544953 Country of ref document: JP |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 18685017 Country of ref document: US |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 21956060 Country of ref document: EP Kind code of ref document: A1 |
|
| WWP | Wipo information: published in national office |
Ref document number: 18685017 Country of ref document: US |