WO2021023364A1 - Virtual reality system and associated method - Google Patents

Virtual reality system and associated method

Info

Publication number
WO2021023364A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
real
scene
virtual
movement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/EP2019/070951
Other languages
English (en)
Inventor
Alessandro CIERRO
Gwenaël LAURENT
José ZURSTRASSEN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Selfsun SA
Original Assignee
Selfsun SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Selfsun SA filed Critical Selfsun SA
Priority to PCT/EP2019/070951 priority Critical patent/WO2021023364A1/fr
Publication of WO2021023364A1 publication Critical patent/WO2021023364A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304 Detection arrangements using opto-electronic means
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object

Definitions

  • the present invention concerns a virtual reality (VR) system.
  • VR virtual reality
  • the present invention relates to, in particular, a VR system which prevents its users from experiencing physical discomfort such as motion sickness due to an inconsistency between the visual and the vestibular or proprioceptive senses.
  • Virtual reality systems aim at creating a sensory illusion allowing a user to interact with objects or characters which are in a virtual scene and are not physically present in the real surrounding of the user.
  • each user has a corresponding avatar to represent his existence in the virtual scene so as to be recognized by the other users.
  • the avatar is in general therefore a graphical representation of a user in the virtual scene.
  • AR augmented reality
  • OS desktop and mobile operating systems
  • some VR applications may allow the avatar of a user in the virtual scene to perform rotations such as turning his head.
  • no VR application on a smartphone is yet capable of allowing displacements performed by the avatar of the user in the virtual scene.
  • the present invention aims at providing a robust and cost-effective solution which prevents the user from experiencing the physical discomfort during the execution of the VR application.
  • the invention concerns a virtual reality system configured to display to a user a portion of a virtual scene, the user being physically in a real scene.
  • the virtual reality system comprises a movement tracking module and a calculation module.
  • the movement tracking module is configured to generate sampling data regarding a real movement performed by the user in the real scene.
  • the calculation module is configured to calculate an estimated real movement of the user in the real scene according to said sampling data; to generate, according to the estimated real movement of the user, a corresponding virtual movement to be performed by an avatar of the user in the virtual scene; and to select, according to the generated virtual movement, the portion of the virtual scene to be displayed to the user.
  • the present invention thus allows a real-time synchronization between the real movement performed by the user in the real scene and the selected portion of the virtual scene to be displayed to the user.
  • the real-time synchronization allows the user to perform rotations and/or displacements in the real scene and to experience the corresponding rotations and/or displacements in the virtual scene, which prevents thus the user from experiencing the physical discomfort during the execution of the VR system and method.
  • the movement tracking module comprises at least one of following sampling devices: an image generator configured to generate, from the perspective of the user in the real scene, images of the real scene respectively generated at a time instant; an accelerometer configured to generate, by detecting magnitude and direction of an acceleration relative to a displacement of the user’s motion in the real scene, a set of displacement data regarding the displacement of the user; a gyroscope configured to generate a set of rotation data regarding a rotation performed by the user in the real scene.
  • Said sampling data regarding the real movement performed by the user in the real scene comprises at least one of following types of sampling data: generated images, the set of displacement data, and the set of rotation data.
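As an illustration of how such sampling data could be organized, the following Python sketch groups the three types of samples into simple containers; all class and field names, and the units, are illustrative assumptions rather than terms defined by the text.

```python
# Hypothetical containers for the three types of sampling data produced by the
# movement tracking module: images, displacement (acceleration) data and
# rotation data. Names and units are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ImageSample:
    timestamp: float                                   # time instant of the generated image
    pixels: List[List[Tuple[int, int, int]]]           # rows of RGB pixels, user's perspective

@dataclass
class DisplacementSample:
    timestamp: float
    acceleration: Tuple[float, float, float]           # magnitude/direction along X, Y, Z (m/s^2)

@dataclass
class RotationSample:
    timestamp: float
    angular_velocity: Tuple[float, float, float]       # rad/s around X, Y, Z

@dataclass
class SamplingData:
    images: List[ImageSample] = field(default_factory=list)
    displacements: List[DisplacementSample] = field(default_factory=list)
    rotations: List[RotationSample] = field(default_factory=list)
```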
  • the calculation module is configured to estimate a position of a real object in the real scene by analyzing the generated images.
  • the calculation module is configured to detect one or plural areas in each of the generated images, by grouping pixels with similar or identical property values, wherein the property values of a pixel may be the intensity and/or the color information of the pixel; to compare property values of the detected areas belonging to different generated images, in order to identify which of the detected areas of the different images correspond to a same real object in the real scene; and to calculate an estimated position of the real object according to the property values of the detected areas which belong to different generated images but correspond to the same real object in the real scene.
  • the detected areas are considered to correspond to the same real object.
  • the calculation module is configured to calculate an estimated position of the user in the real scene by taking into account a relative relationship between the real object and the user in the real scene.
  • the calculation module is configured to calculate estimated positions and the estimated real movement of the user in the real scene according to at least one of three types of following data: one or plural real objects’ estimated positions, one or plural sets of displacement data generated by the accelerometer, and one or plural sets of rotation data generated by the gyroscope.
  • the calculation module is configured to generate a virtual scene image depicting the selected portion of the virtual scene to be displayed to the user; the virtual reality system further comprising a human-machine interface configured to display the virtual scene image to the user.
  • the calculation module, the image generator, the accelerometer and the gyroscope are components built into a mobile computing device of the user.
  • the mobile computing device is a smartphone; the calculation module and the image generator being respectively a processor and a camera both built into the smartphone of the user.
  • the invention also concerns a virtual reality method configured to display to a user a portion of a virtual scene, the user being physically in a real scene.
  • the virtual reality method comprises a tracking data generation step configured to generate sampling data regarding a real movement performed by the user in the real scene; a movement calculation step configured to calculate, according to the sampling data, an estimated real movement of the user in the real scene; and a movement synchronization step.
  • the movement synchronization step is configured to generate, according to the estimated real movement of the user, a corresponding virtual movement to be performed by an avatar of the user in the virtual scene, and to select, according to the generated virtual movement, the portion of the virtual scene to be displayed to the user.
  • the tracking data generation step comprises at least one of following steps: an image generation step configured to generate, from the perspective of the user in the real scene, images of the real scene respectively generated at a time instant; a displacement data generation step configured to generate, by detecting magnitude and direction of an acceleration relative to a displacement of the user’s motion in the real scene, a set of displacement data regarding the displacement of the user; a rotation data generation step configured to generate a set of rotation data regarding a rotation performed by the user in the real scene.
  • Said sampling data regarding the real movement performed by the user in the real scene comprises at least one of following types of sampling data: generated images, the set of displacement data, and the set of rotation data.
  • the virtual reality method further comprises an image processing step configured to estimate a position of a real object in the real scene by analyzing said generated images.
  • the image processing step comprises an area detection step configured to detect one or plural areas in each of the generated images, by grouping pixels with similar or identical property values, wherein the property values of a pixel may be the intensity and/or the color information of the pixel; an area comparison step configured to compare property values of the detected areas belonging to different generated images, in order to identify which of the detected areas of the different images correspond to a same real object in the real scene; an object position estimation step configured to calculate an estimated position of the real object according to the property values of the detected areas which belong to different generated images but correspond to the same real object in the real scene.
  • According to an embodiment, if a difference value between the property values of two detected areas belonging to two of the generated images is in a pre-determined confidence interval, the detected areas are considered to correspond to the same real object.
  • an estimated position of the user in the real scene is calculated by taking into account a relative relationship between the real object and the user in the real scene.
  • the movement calculation step is configured to calculate estimated positions and the estimated real movement of the user in the real scene according to at least one of three types of following data: one or plural real objects’ estimated positions, one or plural sets of displacement data generated in the displacement data generation step, and one or plural sets of rotation data generated in the rotation data generation step.
  • the movement synchronization step is further configured to generate a virtual scene image depicting the selected portion of the virtual scene to be displayed to the user; the virtual reality method further comprising a display step configured to display the virtual scene image to the user.
  • Figure 1 illustrates a virtual reality system according to an embodiment of the invention.
  • Figure 2 illustrates steps of a virtual reality method according to an embodiment of the invention.
  • Figure 3 illustrates an example of two images generated by an image generator of the virtual reality system of the invention.
  • FIG. 1 illustrates a virtual reality (VR) system 1 according to an embodiment of the invention.
  • the VR system 1, utilized by at least one user, is configured to execute a virtual reality method 200 so as to allow the at least one user to experience himself being in a virtual scene.
  • the real scene and the virtual scene are each a three-dimensional (3D) scene.
  • Each of the users is physically in the real scene and has his avatar to represent his existence in the virtual scene.
  • a field of view of his avatar means the field of view of the user when experiencing himself in the virtual scene.
  • Virtual objects in the virtual scene are preferably pre-rendered in order to enhance the efficiency of the VR system 1.
  • the virtual scene can be totally unrelated to the real scene.
  • the VR method 200 is configured to generate, according to a detected real movement of the user in the real scene, a corresponding virtual movement to be performed by the avatar of the user in the virtual scene.
  • a movement of the user can be a displacement or a rotation of the user.
  • the VR system 1 comprises the following modules, with which the user is equipped: a calculation module 10, a movement tracking module 3, a human-machine interface ("HMI") 2 and, preferably, a data storage module 7 and/or a transmission module 8.
  • Figure 2 illustrates steps 210 to 250 of the VR method 200 according to an embodiment of the invention.
  • At least one of the above-mentioned modules is preferably a component built into a mobile computing device 5 of the user of the VR system 1.
  • Said mobile computing device 5 can be a smartphone.
  • the calculation module 10, the movement tracking module 3, the data storage module 7, and the transmission module 8 are all built in the smartphone 5 of the user.
  • the calculation module 10 is configured, in cooperation with other modules of the VR system 1, to perform the steps of the VR method 200 in order to generate the virtual movement to be performed by the avatar of the user in the virtual scene, and to select a portion of the virtual scene to be displayed by the HMI interface 2 to the user.
  • the selected portion of the virtual scene is seen from the perspective of the avatar of the user in the virtual scene, which is what the user in the real scene sees by using the HMI interface 2.
  • the calculation module 10 is implemented by a processor and an operating system (OS) of the mobile computing device 5 (e.g. a smartphone), in terms of hardware together with software.
  • OS operating system
  • the calculation module 10 can be a cloud computing server. Even more advantageously, the cloud computing server has the ability to perform big data computation.
  • the calculation module 10 comprises the processor of the mobile computing device 5 and the cloud computing server respectively utilized to execute a part of the steps of the VR method 200.
  • the HMI interface 2 is configured to display to the user said portion of the virtual scene selected by the calculation module 10.
  • the HMI interface 2 is preferably a virtual reality head-mounted display mount, such as the “Samsung Gear VR” headset developed by Samsung Electronics.
  • the data storage module 7 is configured to store at least a portion of data generated and/or received during one of the steps of the VR method 200, by at least one of the following modules: the HMI interface 2, the movement tracking module 3 and the calculation module 10. As mentioned above, the data storage module 7 is built in the mobile computing device 5 of the user. In an alternative embodiment, the data storage module 7 may be a remote data storage means accessible by the calculation module 10 and by the mobile computing device 5.
  • the transmission module 8 is configured to perform data transmission between at least two of the following modules of the VR system 1: the calculation module 10, the movement tracking module 3, the HMI interface 2, and the data storage module 7. In an embodiment where the data storage module 7 is not electrically coupled to the calculation module 10, data transmission between the calculation module 10 and the data storage module 7 can be performed by the transmission module 8.
  • the transmission module 8 is configured to execute real-time data transmission performed in a pre-selected wireless communication mode, for example data transmission according to the Wi-Fi standard of the IEEE 802.11 group (ISO/IEC 8802-11), or data transmission via a cellular network according to 2G, 3G, 4G or 5G telecommunications standards.
  • the transmission module 8 comprises at least one Wi-Fi connection means or at least one 2G/3G/4G/5G telecommunication means. It is noted that other wireless connection means performing data transmission according to another communication protocol different from the above-mentioned ones can be utilized as the transmission module 8 without departing from the scope of the present invention.
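A minimal sketch of such real-time transmission, assuming a Wi-Fi link and a UDP datagram carrying a JSON payload; the address, port and payload layout are illustrative choices, not prescribed by the text.

```python
# Hypothetical sketch: send one sampling-data record between modules over a
# pre-selected wireless link. The UDP/JSON choice, host and port are assumptions.
import json
import socket

def send_sample(sample: dict, host: str = "192.168.1.10", port: int = 9000) -> None:
    payload = json.dumps(sample).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

# Example: one gyroscope sample (rad/s around X, Y, Z) taken at t = 0.033 s.
send_sample({"t": 0.033, "rotation": [0.0, 0.5, 0.0]})
```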
  • the invention is nevertheless not limited to the means utilized as the calculation module 10, the HMI interface 2, the data storage module 7, and the transmission module 8.
  • the movement tracking module 3, being configured to generate sampling data regarding a real movement performed by the user in the real scene, comprises at least one of following sampling devices: an image generator 31, an accelerometer 32 and a gyroscope 33.
  • said sampling devices, such as the image generator 31, the accelerometer 32 and the gyroscope 33, are components built into the mobile computing device 5 (e.g. the smartphone 5) of the user.
  • the accelerometer 32 and the gyroscope 33 are respectively a three-axis accelerometer and a three-axis gyroscope. The functions of said sampling devices will be described in detail in the following paragraphs.
  • the image generator 31 is configured to define a perspective of the user in the real scene and to generate an image depicting what the user sees from his perspective in the real scene at a time instant. More precisely, the image depicts what’s in the field of view of the image generator 31 worn by the user.
  • the image generator 31 is preferably a camera of the smartphone 5 which generates 30 successive images per second.
  • the generated image is represented by a corresponding set of image data comprising visual information on the real scene.
  • the visual information, comprising preferably intensity and/or color information, is pixel-based data or pel-based data. “Pixel” or “pel” is the smallest controllable element of an image generated by the image generator 31. More precisely, each pixel of the generated image has its intensity and color information. Color information is represented by three or four component intensities such as red, green, and blue, or cyan, magenta, yellow, and black.
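For illustration, the sketch below computes a per-pixel property value from the red, green and blue components of a generated image; using their average as the intensity is an assumption made here for simplicity, not a definition taken from the text.

```python
import numpy as np

def pixel_property_values(image_rgb: np.ndarray) -> np.ndarray:
    """Return a per-pixel intensity map for an H x W x 3 RGB image.

    Averaging the three colour components is only one possible choice of
    property value; the exact definition is left open by the text.
    """
    return image_rgb.astype(float).mean(axis=2)

# Example: a 2 x 2 image with one bright pixel and three dark pixels.
img = np.array([[[255, 255, 255], [10, 10, 10]],
                [[12, 10, 11],    [9, 11, 10]]], dtype=np.uint8)
print(pixel_property_values(img))
```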
  • the image generator 31 is preferably placed around the eyes of the user.
  • the smartphone 5 comprising the image generator 31 is placed in the HMI interface 2 (e.g. the headset) worn by the user.
  • the VR method 200 comprises a tracking data generation step 210, an image processing step 220, a movement calculation step 230, a movement synchronization step 240, and a display step 250.
  • the tracking data generation step 210, performed by the movement tracking module 3, comprises at least one of following steps 211 to 213: an image generation step 211, a displacement data generation step 212 and a rotation data generation step 213.
  • the displacement data generation step 212, performed by the accelerometer 32, is configured to generate, by detecting magnitude and direction of an acceleration relative to a displacement of the user’s motion in the real scene, a set of displacement data regarding the displacement of the user.
  • the rotation data generation step 213, performed by the gyroscope 33, is configured to generate a set of rotation data regarding a rotation performed by the user, e.g. the user turning his head.
  • the set of rotation data comprises rotation values measured in radians per second around each of the three axes X-Y-Z.
  • the rotation values may be positive or negative depending on the direction of rotation.
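A minimal sketch of how such rotation data could be accumulated into an orientation estimate, assuming a fixed sampling interval and simple per-axis integration; a real implementation would also handle drift and axis coupling.

```python
# Integrate gyroscope samples (wx, wy, wz in rad/s, signed by rotation
# direction) over a fixed time step dt; purely illustrative.
def integrate_rotation(samples, dt):
    angles = [0.0, 0.0, 0.0]                    # accumulated rotation around X, Y, Z (rad)
    for wx, wy, wz in samples:
        angles[0] += wx * dt
        angles[1] += wy * dt
        angles[2] += wz * dt
    return tuple(angles)

# Example: the user turns his head around the vertical axis at 0.5 rad/s for 2 s,
# sampled 30 times per second: roughly 1.0 rad of accumulated rotation.
print(integrate_rotation([(0.0, 0.5, 0.0)] * 60, dt=1 / 30))
```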
  • the sampling data regarding the real movement performed by the user in the real scene comprises at least one of following types of sampling data: the images generated by the image generator 31 , sets of displacement data generated by the accelerometer 32, and sets of rotation data generated by the gyroscope 33.
  • the image generation step 211, performed by the image generator 31, is configured to generate, from the perspective of the user in the real scene, successive images of the real scene (where the user is) respectively generated at a time instant.
  • two images 61 and 62 shown in Figure 3, which depict the field of view of the user in the real scene, are generated at two successive time instants.
  • the images 61 and 62 correspond respectively to a pixel-based set of image data.
  • the calculation module 10 is configured to perform the image processing step 220 to analyze said generated images in order to estimate a position of a real object (e.g. the above-mentioned real objects A and B) in the real scene.
  • the real object is preferably a static object presenting one or plural surfaces that are easy to detect, such as a table, etc.
  • the estimated position of the real object will be utilized to calculate an estimated real movement of the user corresponding to the real movement performed by the user in the real scene.
  • the image processing step 220 comprises an area detection step 221, an area comparison step 222 and an object position estimation step 223.
  • the area detection step 221 is configured to detect one or plural areas in said image, by grouping pixels with similar or identical property values.
  • the property values of a pixel (or a pel) may be the intensity and/or the color information of the pixel (or the pel).
  • in the generated image 61, two areas 61a and 61b are detected.
  • in the generated image 62, two areas 62a and 62b are detected.
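The area detection step can be pictured as a simple flood fill that groups neighbouring pixels whose property values stay close to each other, as in the sketch below; the tolerance and minimum-size thresholds are illustrative assumptions.

```python
import numpy as np
from collections import deque

def detect_areas(intensity: np.ndarray, tolerance: float = 10.0, min_size: int = 4):
    """Group neighbouring pixels whose intensity stays within `tolerance` of the
    area's seed pixel; returns a list of areas, each a list of (y, x) pixels.
    A deliberately simple illustration of the area detection step."""
    h, w = intensity.shape
    visited = np.zeros((h, w), dtype=bool)
    areas = []
    for sy in range(h):
        for sx in range(w):
            if visited[sy, sx]:
                continue
            seed = float(intensity[sy, sx])
            queue, area = deque([(sy, sx)]), []
            visited[sy, sx] = True
            while queue:
                y, x = queue.popleft()
                area.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not visited[ny, nx]
                            and abs(float(intensity[ny, nx]) - seed) <= tolerance):
                        visited[ny, nx] = True
                        queue.append((ny, nx))
            if len(area) >= min_size:
                areas.append(area)
    return areas
```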
  • the area comparison step 222 is then performed to compare property values of detected areas belonging to different generated images, in order to identify which of the detected areas of the different images correspond to a same real object in the real scene. For example, if a difference value between the property values of the detected area 61a of the image 61 and the property values of the detected area 62a of the image 62 is in a pre-determined confidence interval, the detected areas 61a and 62a can be considered to correspond to a same real object (which is the object A that can be a table for example).
  • It is noted that at the current stage, it is not known and it is not important to know which real object in the real scene the detected areas 61a and 62a both correspond to.
  • the purpose of performing the area comparison step 222 is to identify that the detected areas 61a and 62a may correspond to a same real object. Similarly, if a difference value between the property values of the detected area 61b of the image 61 and the property values of the detected area 62b of the image 62 is in the pre-determined confidence interval, the detected areas 61b and 62b can be considered to correspond to another real object (which is the object B that can be a chair for example).
  • the pre-determined confidence interval is determined by taking into account effects of the environment of the real scene (e.g. the lighting) as well as those of the different types of image generators utilized in the image generation step 211.
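A sketch of the comparison itself, assuming the property value of a detected area is summarized by its mean intensity and that the confidence interval is expressed as a maximum allowed difference; both choices are assumptions made for illustration.

```python
# Two areas from two successive images are taken to correspond to the same real
# object when the difference between their mean property values lies within a
# pre-determined confidence bound (the value used here is assumed).
def mean_property(intensity, area):
    return sum(float(intensity[y, x]) for y, x in area) / len(area)

def same_object(intensity_1, area_1, intensity_2, area_2, confidence=15.0):
    difference = abs(mean_property(intensity_1, area_1) - mean_property(intensity_2, area_2))
    return difference <= confidence
```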
  • the object position estimation step 223 is configured to calculate an estimated position of said real object.
  • the object position estimation step 223 comprises performing a depth estimation method in order to improve the precision of the estimated position of the real object. In this way, the estimated position of the real object corresponding to the detected areas 61a and 62a is obtained, and the estimated position of another real object corresponding to the detected areas 61b and 62b is obtained as well.
  • the real objects are static in the real scene when the images are generated by the image generator 31.
  • the detected areas occupy different regions of different images because the user is performing a real movement (e.g. a displacement or a rotation) in the real scene, which accordingly changes the perspective of the user in the real scene.
  • the detected areas 61a and 62a are considered to correspond to a same real object (without knowing the real object being the object A).
  • the detected area 61a is located in a lower left part of the image 61 while the detected area 62a is located in a centered part of the image 62, which results from the change of the field of view of the user.
  • the relative relationship between the real object(s) and the user can be estimated by using the above-mentioned estimated positions of these static real objects, and the estimated position of the user in the real scene can thus be obtained in the following step 230.
  • Said relative relationship means that when objects are static and do not perform an absolute motion in the real scene while the user may be performing a movement, the objects are considered, from the user’s perspective, to perform relative displacements.
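The idea can be sketched as follows: because the real object is static, its apparent displacement relative to the user mirrors the user's own displacement with the opposite sign. The coordinates and frame of reference used below are illustrative assumptions.

```python
# Estimate the user's new position from the apparent (relative) motion of a
# static real object between two time instants; purely illustrative.
def estimate_user_position(user_prev, object_prev_rel, object_curr_rel):
    """All arguments are (x, y, z) tuples; *_rel are the object's position
    relative to the user at the previous and current time instants."""
    dx = object_prev_rel[0] - object_curr_rel[0]
    dy = object_prev_rel[1] - object_curr_rel[1]
    dz = object_prev_rel[2] - object_curr_rel[2]
    return (user_prev[0] + dx, user_prev[1] + dy, user_prev[2] + dz)

# The table appears to have moved 0.5 m closer along X: the user moved +0.5 m along X.
print(estimate_user_position((0.0, 0.0, 0.0), (2.0, 0.0, 1.0), (1.5, 0.0, 1.0)))
```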
  • the image processing step 220 further comprises an object identification step configured to recognize what the real object (which corresponds to detected areas belonging to different generated images) is.
  • the detected areas 61a and 62a belonging to the generated images 61, 62 are considered to correspond to a same real object, without knowing that the real object is a table.
  • the YOLO (“You Only Look Once”) algorithm can be performed to recognize that the real object corresponding to the detected areas 61a and 62a is a table.
  • the VR system 1 also recognizes in this way that the real object corresponding to the detected areas 61b and 62b is a chair.
  • a convolutional neural network can be applied to the generated images 61 and 62.
  • Such convolutional neural network utilizes bounding boxes to divide the generated images 61 and 62.
  • the bounding boxes are weighted by predicted probabilities.
  • the predicted probabilities can be generated by using a pre-trained model.
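As a hedged illustration of this optional identification step, the snippet below runs a publicly available pre-trained YOLO model on the two generated images; the choice of the yolov5s weights loaded through torch.hub, and the image file names, are assumptions and not part of the described method.

```python
# Illustrative only: recognize objects (e.g. a dining table, a chair) in the
# two generated images with a pre-trained YOLO model. Model source and image
# paths are assumptions.
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
results = model(['image_61.jpg', 'image_62.jpg'])   # the two generated images
results.print()                                      # class labels, confidences, boxes
```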
  • the invention does not rely on the GPS (Global Positioning System) to calculate the estimated position of the user in the real scene because the position estimated by the GPS system is less accurate than that estimated by performing the above-mentioned steps of the invention.
  • the present invention has a great tolerance to light variation and does not require the real scene to be an environment pre-defined according to certain conditions such as the lighting of the environment, the number and/or the colors of real objects, etc.
  • the real scene where the user physically is can be indoors or outdoors.
  • the VR system 1 and the VR method 200 have a great performance when the user performs high-speed motions, which can be up to at least 25 km/hour.
  • the VR method 200 further comprises a materialization step configured to utilize the estimated position of the real object to calculate a corresponding virtual position of the augmented-reality object to be rendered in the virtual scene.
  • the augmented-reality object corresponding to the real object is a graphical representation to be rendered and be displayed to the user.
  • the augmented-reality object can be for example a polyhedron or a plane.
  • if the VR system 1 performs the above-mentioned object identification step to recognize what the real object is (e.g. a table or a chair), a virtual object which sufficiently visually represents the real object can be rendered in the virtual scene and displayed to the user.
  • the movement calculation step 230 is configured to calculate an estimated real movement of the user according to at least one of three types of following data: estimated positions of real objects calculated in the image processing step 220, sets of displacement data generated by the accelerometer 32 (in the step 212), and sets of rotation data generated by the gyroscope 33 (in the step 213).
  • the above three types of data are all utilized to calculate the estimated positions and estimated real movements of the user in the real scene.
  • the above-mentioned estimated positions of real objects calculated by analyzing the successive images generated by the image generator 31 may be sufficient to estimate positions of the user at different time moments in the real scene.
  • An estimated real movement of the user can be calculated according to at least two estimated positions of the user in the real scene.
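For instance, an estimated real movement can be reduced to the vector between two estimated positions of the user, as in this sketch (coordinates in metres, frame of reference assumed).

```python
import math

# An estimated real movement derived from two estimated user positions
# (starting and arrival points), which may or may not lie on the same plane.
def estimated_movement(start, arrival):
    vector = tuple(a - s for s, a in zip(start, arrival))
    distance = math.sqrt(sum(c * c for c in vector))
    return vector, distance

# Example: the user climbs one stair step, 0.3 m forward and 0.18 m up.
print(estimated_movement((0.0, 0.0, 0.0), (0.3, 0.18, 0.0)))
```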
  • a movement of the user can be, as mentioned above, a displacement or a rotation performed by the user.
  • a displacement of the user is thus defined by both its starting and arrival points.
  • the starting and arrival points can either be on a same plane (e.g. the user walking on the floor), or be with different heights (e.g. the user climbing up the stairs).
  • a rotation can be, for example, the user turning his head with or without walking in the real scene.
  • when the user’s field of view does not change and is just full of a wide surface, the generated images comprise limited information which can be used to determine the estimated positions and the estimated real movement (e.g. a displacement) of the user in the real scene. Therefore, the set(s) of displacement data generated by the accelerometer 32 will be useful to calculate the estimated positions and the estimated real movement of the user in the real scene (see the sketch below).
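A minimal sketch of that fallback, assuming plain Euler integration of the acceleration samples; a practical implementation would additionally compensate gravity and sensor drift.

```python
# Double-integrate accelerometer samples (ax, ay, az in m/s^2) over a fixed
# time step dt to obtain a displacement estimate; illustrative only.
def displacement_from_acceleration(samples, dt):
    velocity = [0.0, 0.0, 0.0]
    position = [0.0, 0.0, 0.0]
    for accel in samples:
        for i in range(3):
            velocity[i] += accel[i] * dt
            position[i] += velocity[i] * dt
    return tuple(position)

# Example: one second of constant 0.5 m/s^2 forward acceleration at 30 samples/s
# gives roughly 0.26 m of forward displacement.
print(displacement_from_acceleration([(0.5, 0.0, 0.0)] * 30, dt=1 / 30))
```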
  • the real movements of the user in the real scene comprising the displacements and rotations performed by the user in the real scene, can be estimated in real-time.
  • the real-time tracking of the user’s orientation and position in the real scene can thus be performed.
  • the calculation module 10 utilizes an SDK (software development kit), such as ARCore developed by Google, for the execution of the above-mentioned steps of the VR method 200.
  • the movement synchronization step 240 is then performed by the calculation module 10 in order to generate, according to the estimated real movement of the user at a first time instant in the real scene, a corresponding virtual movement to be performed by the avatar of the user in the virtual scene.
  • a synchronization between the real movement performed by the user in the real scene and the virtual movement performed by (the avatar of) the user in the virtual scene is performed.
  • the virtual scene is preferably pre-rendered, and a starting position or a starting orientation of the avatar of the user in the virtual scene is either given by the VR system 1 , or is an end position or an end orientation of a previous virtual movement performed by the avatar of the user in the virtual scene.
  • the calculation module 10 is also configured to select, according to the generated virtual movement, a portion of the virtual scene to be depicted in a virtual scene image to be displayed at a second time instant by the HMI interface 2 to the user.
  • the virtual scene image, generated by the calculation module 10, presents what the avatar of the user should see in the virtual scene at the second time instant; in other words, the virtual scene image depicts the field of view of the avatar of the user.
  • the second time instant can be only 1/30 second later than the first time instant in order to avoid latency and lack of smoothness which may result in the user experiencing physical discomfort.
  • the HMI interface 2 controlled by the calculation module 10, is configured to display to the user the virtual scene image depicting the portion of the virtual scene selected by the calculation module 10.
  • the synchronization is thus performed, which prevents the user from experiencing the physical discomfort while the user experiences himself in the virtual scene.
  • the HMI interface 2 successively displays 30 virtual scene images per second. In this way, the user can feel a consistency between the visual and the vestibular or proprioceptive senses because the way the user’s avatar moves in the virtual scene is similar to the way the user moves in the real scene. Therefore, no physical discomfort, such as motion sickness, would occur to the user during the execution of the VR method 200.
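The timing constraint can be pictured as a 30 Hz loop in which each displayed frame follows the corresponding real movement by about 1/30 s, as in the sketch below; the helper functions track_real_movement, render_virtual_view and hmi_display are hypothetical placeholders, not APIs from the text.

```python
import time

FRAME_BUDGET = 1.0 / 30.0   # the second time instant follows the first by ~1/30 s

def run_sync_loop(track_real_movement, render_virtual_view, hmi_display, frames=30):
    """Illustrative 30 Hz synchronization/display loop; the callbacks are assumed."""
    for _ in range(frames):
        start = time.monotonic()
        real_movement = track_real_movement()          # first time instant
        virtual_movement = real_movement               # avatar mirrors the real movement
        image = render_virtual_view(virtual_movement)  # selected portion of the virtual scene
        hmi_display(image)                             # second time instant
        elapsed = time.monotonic() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)         # keep the 30 images/second pace
```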
  • each of the users is equipped with the above-mentioned modules 2, 3, 10 and preferably the modules 7 and/or 8.
  • the VR system 1 allows each of the users to have his own avatar and a corresponding starting position of the virtual scene.
  • the estimated position of the user in the real scene and a corresponding virtual position of his avatar in the virtual scene can be calculated by the calculation module 10 by performing the above-mentioned steps of the VR method 200.
  • the VR system 1 further comprises a virtual position server configured to receive from each of the users a set of virtual presence data comprising information about the graphical representation of his avatar and the virtual position of his avatar in the virtual scene. Therefore, for one of the users, his calculation module 10 is configured to communicate, via the transmission module 8, with the virtual position server so as to receive the sets of virtual presence data of the other users in the same virtual scene. While performing the movement synchronization step 240, the calculation module 10 is configured to generate a virtual scene image depicting a selected portion of the virtual scene to be displayed to the user, by taking into account the sets of virtual presence data of the other users in the same virtual scene. Some of the avatars of the other users may thus appear in the field of view of (the avatar of) the user. The HMI interface 2 then displays such a virtual scene image to the user.
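A sketch of the multi-user exchange under stated assumptions: the presence record, its JSON layout and the in-memory stand-in for the virtual position server below are illustrative and not defined by the text.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class VirtualPresence:
    user_id: str
    avatar: str                 # graphical representation of the user's avatar
    position: tuple             # virtual position (x, y, z) in the virtual scene

class VirtualPositionServer:
    """Hypothetical in-memory stand-in for the virtual position server."""
    def __init__(self):
        self._presences = {}

    def publish(self, presence: VirtualPresence):
        self._presences[presence.user_id] = presence

    def others(self, user_id: str):
        """Presence data of all other users sharing the same virtual scene."""
        return [p for uid, p in self._presences.items() if uid != user_id]

server = VirtualPositionServer()
server.publish(VirtualPresence("alice", "avatar_a", (1.0, 0.0, 2.0)))
server.publish(VirtualPresence("bob", "avatar_b", (3.0, 0.0, 1.0)))
print(json.dumps([asdict(p) for p in server.others("alice")]))
```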
  • the present invention is capable of providing to the user a real-time synchronization between the real movement performed by the user in the real scene and a selected portion of the virtual scene being displayed to the user by the HMI interface 2.
  • Said real-time synchronization allows the user to perform rotations and/or displacements in the real scene and to experience the corresponding rotations and/or displacements in the virtual scene.
  • the present invention thus provides a robust and cost-effective solution which prevents the user from experiencing the physical discomfort during the execution of the VR system and method.
  • the user does not need to wear numerous expensive and cumbersome sensors or other pieces of equipment, which greatly improves the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention concerns a virtual reality system (1) configured to display to a user a portion of a virtual scene, the user being physically in a real scene. The virtual reality system (1) comprises a movement tracking module (3; 31, 32, 33) and a calculation module (10). The movement tracking module (3) is configured to generate sampling data regarding a real movement performed by the user in the real scene. The calculation module (10) is configured to calculate an estimated real movement of the user in the real scene according to said sampling data; to generate, according to the estimated real movement of the user, a corresponding virtual movement to be performed by an avatar of the user in the virtual scene; and to select, according to the generated virtual movement, the portion of the virtual scene to be displayed to the user.
PCT/EP2019/070951 2019-08-03 2019-08-03 Système de réalité virtuelle et procédé associé Ceased WO2021023364A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2019/070951 WO2021023364A1 (fr) 2019-08-03 2019-08-03 Système de réalité virtuelle et procédé associé

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2019/070951 WO2021023364A1 (fr) 2019-08-03 2019-08-03 Système de réalité virtuelle et procédé associé

Publications (1)

Publication Number Publication Date
WO2021023364A1 true WO2021023364A1 (fr) 2021-02-11

Family

ID=67688732

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2019/070951 Ceased WO2021023364A1 (fr) 2019-08-03 2019-08-03 Système de réalité virtuelle et procédé associé

Country Status (1)

Country Link
WO (1) WO2021023364A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115098005A (zh) * 2022-06-24 2022-09-23 北京华建云鼎科技股份公司 一种控制目标对象移动的数据处理系统
CN115811623A (zh) * 2022-11-18 2023-03-17 上海哔哩哔哩科技有限公司 基于虚拟形象的直播方法和系统
CN119337461A (zh) * 2024-09-14 2025-01-21 湖北第二师范学院 虚拟仿真景观设计沟通系统及方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9417689B1 (en) * 2013-05-17 2016-08-16 Amazon Technologies, Inc. Robust device motion detection
WO2018020568A1 (fr) * 2016-07-26 2018-02-01 三菱電機株式会社 Dispositif d'affichage de zones mobiles de câbles, procédé d'affichage de zones mobiles de câbles, et programme d'affichage de zones mobiles de câbles
US20190088018A1 (en) * 2016-03-11 2019-03-21 Sony Interactive Entertainment Europe Limited Virtual Reality

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9417689B1 (en) * 2013-05-17 2016-08-16 Amazon Technologies, Inc. Robust device motion detection
US20190088018A1 (en) * 2016-03-11 2019-03-21 Sony Interactive Entertainment Europe Limited Virtual Reality
WO2018020568A1 (fr) * 2016-07-26 2018-02-01 三菱電機株式会社 Dispositif d'affichage de zones mobiles de câbles, procédé d'affichage de zones mobiles de câbles, et programme d'affichage de zones mobiles de câbles

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115098005A (zh) * 2022-06-24 2022-09-23 北京华建云鼎科技股份公司 一种控制目标对象移动的数据处理系统
CN115098005B (zh) * 2022-06-24 2023-01-24 北京华建云鼎科技股份公司 一种控制目标对象移动的数据处理系统
CN115811623A (zh) * 2022-11-18 2023-03-17 上海哔哩哔哩科技有限公司 基于虚拟形象的直播方法和系统
CN119337461A (zh) * 2024-09-14 2025-01-21 湖北第二师范学院 虚拟仿真景观设计沟通系统及方法

Similar Documents

Publication Publication Date Title
US20180012330A1 (en) Dynamic Multi-View Interactive Digital Media Representation Lock Screen
US20170359570A1 (en) Multi-View Interactive Digital Media Representation Lock Screen
US20180101989A1 (en) Headset removal in virtual, augmented, and mixed reality using an eye gaze database
TW202113428A (zh) 用於針對頭戴式顯示器產生動態障礙物碰撞警告之系統和方法
JP2022524683A (ja) 深度情報を使用して実世界の物体をレンダリングするシステムおよび方法
US20230316810A1 (en) Three-dimensional (3d) facial feature tracking for autostereoscopic telepresence systems
US11589024B2 (en) Multi-dimensional rendering
WO2021023364A1 (fr) Système de réalité virtuelle et procédé associé
US20210097749A1 (en) Stylized image painting
US20180227601A1 (en) Client-server communication for live filtering in a camera view
JP2015114905A (ja) 情報処理装置、情報処理方法およびプログラム
WO2020209199A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et support d'enregistrement
US20180364799A1 (en) Systems and methods to simulate user presence in a real-world three-dimensional space
US20220030204A1 (en) Artistic and other photo filter light field effects for images and videos utilizing image disparity
US20180089876A1 (en) System to identify and use markers for motion capture
JP2021530816A (ja) 角検出のための方法および装置
US20230316659A1 (en) Traveling in time and space continuum
US20240185511A1 (en) Information processing apparatus and information processing method
US11416975B2 (en) Information processing apparatus
US20200211275A1 (en) Information processing device, information processing method, and recording medium
US9600939B1 (en) Augmented reality platform using captured footage from multiple angles
US12254131B1 (en) Gaze-adaptive image reprojection
US12482193B2 (en) Independent scene movement based on mask layers
US10762715B2 (en) Information processing apparatus
US11272171B1 (en) Systems and methods for fallback tracking based on real-time tracking performance

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19756119

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10/05/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 19756119

Country of ref document: EP

Kind code of ref document: A1