WO2022097271A1 - Virtual space experience system - Google Patents
Virtual space experience system
- Publication number
- WO2022097271A1 (PCT/JP2020/041524)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- state
- virtual space
- avatar
- shape
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
Definitions
- the present invention relates to a virtual space experience system that can generate an avatar in a virtual space corresponding to a user existing in the real space, and change the state of a predetermined object generated in the virtual space through the operation of the avatar.
- in a conventional system of this kind, a virtual space is generated by a server or the like, and the user is made to recognize an image of the virtual space and an image of an avatar corresponding to the user via a head-mounted display (hereinafter sometimes referred to as "HMD").
- in such a system, a motion capture device or the like recognizes a change in the user's state in the real space (for example, body movement, movement of coordinates, or a change of posture), and the state of the avatar in the virtual space is changed according to the recognized change (see, for example, Patent Document 1).
- in the system of Patent Document 1, a hand-shaped avatar corresponding to the user's hand existing in the real space and an object to be operated are generated in the virtual space. When the hand-shaped avatar performs an action of grabbing and moving the object, the coordinates of the object in the virtual space are changed according to the action.
- in such a system, each of the multiple users participating in one virtual space can hold an object corresponding to the same article in their own hands (strictly speaking, in the hands of the corresponding avatars), making possible a form of observation that cannot be performed in the real space: each user holds one copy and observes it independently at the same time. If the states of the plurality of objects are configured to change synchronously, each of the plurality of users can independently observe the changes in the states of the objects at the same time.
- however, the states of the multiple users (and thus of the corresponding avatars) usually differ from one another. Therefore, when the states of the multiple objects are configured to change synchronously and the state of the object corresponding to a specific user changes due to that user's action, the state of the object corresponding to another user may change in a way the other user did not intend, making it difficult for the other user to observe.
- the present invention has been made in view of the above points, and its purpose is to provide a virtual space experience system in which a plurality of users can easily observe each of a plurality of objects modeled on one object in the virtual space.
- the "state” refers to something that can change according to the user's intention or operation.
- it refers to the shape, size, color, posture, direction, coordinate position, operating state of the body or mechanism, and the like. Therefore, the “change” and “change” of the state refer to a change or change in shape, size, color, posture, direction, movement of coordinates, start, progress and stop of operation, and the like.
- the "corresponding shape” includes not only the same shape but also a similar shape (that is, a shape having the same shape but a different size), a deformed shape, the same shape as a partially cut out shape, and the like.
- the virtual space experience system of the present invention is a virtual space experience system capable of changing, through the operation of either a first avatar or a second avatar generated in the virtual space corresponding to a first user and a second user existing in the real space, the state of a predetermined object generated in the virtual space, the state including at least a shape as well as at least one of coordinates, posture, and direction. The system includes:
- a virtual space generation unit that generates the virtual space, as well as the first avatar, the second avatar, and the predetermined object existing in the virtual space;
- a user state recognition unit that recognizes the state of the first user and the state of the second user;
- an avatar control unit that controls the state of the first avatar according to the state of the first user and controls the state of the second avatar according to the state of the second user;
- an object control unit that controls the state of the predetermined object; and
- an image determination unit that determines the image of the virtual space to be recognized by the first user and the second user based on the state of the first avatar, the state of the second avatar, and the state of the predetermined object.
- the first user and the second user are each provided with an image display for recognizing the image of the virtual space.
- the predetermined object includes a first object observed by the first user and a second object observed by the second user.
- the second object is located in the virtual space at coordinates different from the coordinates of the first object, and has a shape corresponding to the first object.
- the state of the first object can be changed by the first user through the operation of the first avatar.
- the object control unit changes the shape of the second object according to the change of the shape of the first object when the shape of the first object is changed.
- the system is characterized in that, when a state of the first object including states other than its shape is changed, at least one of the coordinates, the posture, and the direction of the second object is kept fixed, while the states of the second object other than the fixed state are changed according to the change of the state of the first object, as sketched below.
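- as an illustration only (the patent discloses no code, and all class and attribute names below are hypothetical), this synchronization rule of the object control unit can be sketched in Python: the second object's shape follows the first object's, while its coordinates, posture, and direction are deliberately left untouched.

```python
# Sketch of the claimed synchronization rule; names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ObjectState:
    shape: dict = field(default_factory=dict)  # e.g. {"roof": "attached"}
    coords: tuple = (0.0, 0.0, 0.0)
    posture: float = 0.0    # tilt with respect to the yaw axis
    direction: float = 0.0  # phase around the yaw axis

class ObjectControlUnit:
    def __init__(self, first: ObjectState, second: ObjectState):
        self.first = first
        self.second = second

    def on_first_object_changed(self, new_state: ObjectState) -> None:
        self.first = new_state
        # Shape changes are mirrored onto the second object...
        self.second.shape = dict(new_state.shape)
        # ...but the second object's coordinates, posture, and direction
        # are intentionally NOT updated, so the observing user's view
        # remains stable.
```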
- with this configuration, in the virtual space experience system of the present invention, the first object has a shape corresponding to the second object. That is, those objects are generated by using one object (for example, an article existing in the real space) as a model. Further, in this system, when the shape of the first object is changed, the shape of the second object is changed (synchronously) according to the change in the shape of the first object.
- therefore, for example, the first user, who plays the role of a teacher, can operate the first object in his or her own hands, while the second user, who plays the role of a student, observes the second object in his or her own hands.
- moreover, when a state of the first object other than its shape is changed, the system keeps at least one of the coordinates, posture, and direction of the second object fixed, while changing the states of the second object outside that fixed set according to the change of the state of the first object.
- therefore, the state of the second object observed by the second user changes depending on the result of the first user's operation on the first object, but it is not completely synchronized: at least one of the coordinates, the posture, and the direction does not change. That is, the state relevant to observation remains fixed to some extent.
- as a result, the first user can observe the first object while freely moving it, while the second user can also observe the second object in a somewhat stable state.
- that is, in this system, each of a plurality of objects modeled on one object in the virtual space can be observed either in a state freely controlled by the corresponding user or in a somewhat stable state. As a result, each of the plurality of users can easily observe the object corresponding to him or her among the plurality of objects modeled on one object.
- further, in the virtual space experience system of the present invention, the state of the second object may be changeable by the second user through the operation of the second avatar. In that case, it is preferable that the object control unit changes at least one of the coordinates, posture, and direction of the second object independently of the corresponding coordinates, posture, and direction of the first object.
- with this configuration, at least one of the coordinates, posture, and direction of the second object can be changed to make it easier to see, regardless of the state of the first object corresponding to the first user. This makes it easier for the second user to observe the second object.
- the predetermined object includes a third object operated by the second user.
- the color of the second object is translucent or different from the color of the third object.
- in this case, it is preferable that the third object has a shape corresponding to the second object.
- the shape of the third object can be changed by the second user through the operation of the second avatar. It is preferable that the coordinates, posture and direction of the third object match the coordinates, posture and direction of the second object.
- the second object and the third object are superimposed in the virtual space.
- the shape of the second object changes according to the change in the shape of the first object corresponding to the first user (that is, the operation of the first user).
- the shape of the third object changes according to the operation of the second user. Therefore, when the shape of the first object changes, only the shape of the second object out of the superimposed second object and the third object changes in response to the change.
- as a result, when the shape of the first object changes, only the changed part (that is, only the difference between the shape of the first object and the shape of the third object) appears superimposed.
- that difference is shown to the second user as a translucent part or a part having a color different from that of the third object.
- the second user can intuitively grasp the difference between the shape of the first object and the shape of the third object (and by extension, the operation of the first user on the first object).
- for example, in assembly work, the second user can easily grasp the assembly procedure of the first user by observing that difference.
- further, it is preferable that the virtual space experience system of the present invention is equipped with an item state recognition unit that recognizes the state of an item existing in the real space.
- the predetermined object includes a third object operated by at least one of the first user and the second user. It is preferable that the virtual space generation unit generates the third object in correspondence with the item.
- when the third object to be operated is generated corresponding to an item that actually exists in the real space, the result of the operation that the user performs on the third object via the avatar in the virtual space is reflected in the corresponding item in the real space. This makes the user feel as if the result of the avatar's operation in the virtual space were reflected in the item in the real space.
- moreover, when the user touches the third object in the virtual space, the user simultaneously touches the item in the real space. This gives the user the physical sensation of touching the third object. As a result, the user's sense of immersion in the virtual space can be enhanced.
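- a minimal sketch of this virtual-real correspondence, assuming the item's pose is tracked every control cycle (the function and attribute names are hypothetical): the third object simply adopts the recognized pose of the real item, so the user's hand meets the physical item exactly where the virtual object appears.

```python
# Hypothetical sketch: the third object mirrors the tracked real item,
# so touching the virtual object coincides with touching the item.
def align_third_object(third_obj, item_pose):
    """item_pose: (coords, posture, direction) recognized from the
    markers attached to the real item."""
    third_obj.coords, third_obj.posture, third_obj.direction = item_pose
```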
- a schematic diagram of the virtual space that a VR system according to a modification causes the user to recognize when an object is operated.
- hereinafter, the VR system S, which is a virtual space experience system according to the embodiment, will be described with reference to the drawings.
- the VR system S is a system that allows the user to experience virtual reality (so-called VR (virtual reality)) by recognizing that the user himself / herself exists in the virtual space.
- the VR system S is configured so that the first user U1 existing in the first real space RS1 and the second user U2 existing in the second real space RS2, which is located at a position different from the first real space RS1, can communicate with each other via the same virtual space VS (see FIG. 3).
- the first real space RS1 and the second real space RS2 may be collectively referred to as a real space RS.
- the first user U1 and the second user U2 may be collectively referred to as a user U.
- the first camera 2a is installed in the first real space RS1
- the second camera 2b is installed in the second real space RS2.
- the first camera 2a and the second camera 2b may be collectively referred to as a camera 2.
- the "state” refers to a state that can change according to the intention or operation of the user U.
- specifically, it refers to the shape, size, color, posture, direction, coordinate position, operating state of a body or mechanism, and the like. Accordingly, a "change" of the state refers to a change in shape, size, color, posture, or direction, movement of coordinates, or the start, progress, or stop of an operation, and the like.
- the "corresponding shape” includes similar shapes (that is, shapes of the same shape but different sizes), deformed shapes, and the same shapes as those cut out in part.
- in the present embodiment, a case where the first user U1, who is a teacher, uses the VR system S to give a lesson via the virtual space VS to the second user U2, who is a student located away from the first user U1, will be described as an example.
- however, the virtual space experience system of the present invention is not limited to such a configuration; a user may exist in each of three or more real spaces, or a plurality of users may exist in the same real space.
- the first camera 2a and the HMD 4 mounted by the first user U1 can wirelessly transmit and receive information to and from the server 3.
- the second camera 2b and the HMD 4 mounted by the second user U2 can wirelessly transmit and receive information to and from the server 3.
- any one of them may be configured to be able to send and receive information to and from each other by wire instead of wirelessly.
- of the markers 1, those attached to the user U are attached to the head, both hands, and both feet of the user U via the HMD 4, the gloves, and the shoes worn by the user U.
- those attached to the item I and the table T are attached at positions that serve as feature points in the images of the item I and the table T taken by the second camera 2b.
- specifically, the item I is a model of a car to be assembled, placed on the table T in the second real space RS2, and the markers 1 are attached to the central portion, the edges, the vicinity of the corners, and the like of each part of the model.
- the table T is composed of a circular top plate and four legs, and the markers 1 are attached to the edge of the top plate, the ends of the legs, and the like.
- the markers 1 are used to recognize the shape, posture, coordinates, and direction of the user U, the item I, or the table T in the real space RS, as described later. Therefore, the mounting positions of the markers 1 may be changed as appropriate according to the other devices constituting the VR system S.
- a plurality of first cameras 2a installed in the first real space RS1 are installed so that the operable range of the first user U1 (that is, the range in which changes of posture, movement of coordinates, and changes of direction can be performed) can be photographed from multiple directions.
- likewise, a plurality of second cameras 2b installed in the second real space RS2 are installed so that the operable ranges of the second user U2, the item I, and the table T (that is, the ranges in which changes of posture, movement of coordinates, and changes of shape and direction can be performed) can be photographed from multiple directions.
- the server 3 recognizes the markers 1 from the images taken by the cameras 2, and based on the coordinates of the recognized markers 1 in the real space RS, recognizes the shape, posture, coordinates, and direction of the user U, the item I, or the table T. Further, the server 3 determines the image and the sound to be recognized by the user U based on that shape, posture, coordinates, and direction.
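- as a rough illustration of this marker-based recognition (a simplified sketch with hypothetical names, not the patent's method), the pose of a tracked body can be derived from triangulated marker positions: the centroid gives the coordinates, and a reference marker gives the direction.

```python
import math

def pose_from_markers(marker_points):
    """marker_points: dict mapping marker id -> (x, y, z) world position,
    assumed already triangulated from the multi-camera images.
    Returns (coordinates, direction) as a simplified stand-in for the
    full shape/posture/coordinates/direction recognition."""
    pts = list(marker_points.values())
    n = len(pts)
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    cz = sum(p[2] for p in pts) / n
    # Direction: yaw phase of the first marker about the centroid.
    rx, ry, _ = pts[0]
    direction = math.atan2(ry - cy, rx - cx)
    return (cx, cy, cz), direction
```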
- the HMD 4 is attached to the head of the user U. As shown in FIG. 2, the HMD 4 has a monitor 40 (image display) for causing the user U to recognize the image of the virtual space VS determined by the server 3, and a speaker 41 (sound generator) for causing the user U to recognize the sound of the virtual space VS determined by the server 3.
- when the user U is made to experience virtual reality using this VR system S, only the image and the sound of the virtual space VS are recognized by the user U, so that the user U recognizes himself or herself, as the first avatar 5a or the second avatar 5b described later, to exist in the virtual space. That is, the VR system S is configured as a so-called immersive system.
- the VR system S includes a so-called motion capture device, composed of the markers 1, the cameras 2, and the server 3, as a system for recognizing the states of the user U, the item I, and the table T in the real space RS.
- the virtual space experience system of the present invention is not limited to such a configuration.
- for example, the number of markers and cameras may differ from the above configuration (for example, one may be provided in each real space).
- a device other than the motion capture device may be used to recognize the state of the user, item or table in the real space.
- a sensor such as GPS may be mounted on the HMD so that the state of the user, item or table can be recognized based on the output from the sensor.
- a sensor may be used in combination with a motion capture device as described above.
- the server 3 is composed of one or a plurality of electronic circuit units including a CPU, RAM, ROM, an interface circuit, and the like. As shown in FIG. 2, the server 3 includes, as functions realized by the implemented hardware configuration or programs, a virtual space generation unit 30, a user state recognition unit 31, an avatar control unit 32, an item state recognition unit 33, an object control unit 34, and an output information determination unit 35.
- the virtual space generation unit 30 generates the image (strictly speaking, the background image) of the virtual space VS corresponding to the first real space RS1 in which the first user U1 exists and to the second real space RS2 in which the second user U2 exists, as well as the related audio.
- the virtual space generation unit 30 also generates, as the avatars existing in the virtual space VS, the images of the first avatar 5a corresponding to the first user U1 and of the second avatar 5b corresponding to the second user U2, and the voices related to them.
- the first avatar 5a and the second avatar 5b may be collectively referred to as the avatar 5.
- each of the avatars 5 is an anthropomorphized animal, and operates in response to the operation of the corresponding user U.
- the virtual space generation unit 30 generates a predetermined object existing in the virtual space VS.
- the predetermined objects include the first object 6 that the first user U1 observes and operates, the second object 7 observed by the second user U2, the third object 8 operated by the second user U2, and the fourth object 9 on which the third object 8 is placed.
- the first object 6 is generated in the virtual space VS at a position near the first avatar 5a corresponding to the first user U1.
- the shape, coordinates, posture, and direction of the first object 6 can be changed according to the movement of the first avatar 5a (that is, the first user U1).
- the second object 7 is generated in the virtual space VS at a position near the second avatar 5b corresponding to the second user U2. That is, the second object 7 is located at a coordinate different from the coordinates of the first object 6.
- the coordinates, posture, and direction of the second object 7 can be changed according to the movement of the second avatar 5b (that is, the second user U2).
- the third object 8 is generated in the virtual space VS at the position corresponding to the item I existing in the second real space. That is, the third object 8 is located at a coordinate different from the coordinates of the first object 6 and the coordinates of the second object 7. Specifically, the third object 8 is generated at the coordinates that are placed on the fourth object 9 corresponding to the table T.
- the shape, coordinates, posture, and direction of the third object 8 can be changed according to the operation of the second avatar 5b with respect to the third object 8 (that is, the operation of the second user U2 with respect to the item I).
- the shape of the third object 8 is a shape corresponding to item I.
- at the time of generation, the shapes of the first object 6 and the second object 7 are made to correspond to the shape of the third object 8. After that, the shapes of the first object 6 and the second object 7 change independently of the shape of the third object 8.
- here, the item I existing in the second real space RS2 is a model of a car to be assembled, and is composed of four kinds of parts: a main body, a roof, a windshield, and tires. The third object 8 is generated so as to correspond to the item I.
- to correspond to the parts of the item I, the third object 8 is composed of a first part 80, which is the main body portion, a second part 81, which is the roof portion, a third part 82, which is the windshield, and a fourth part 83, which is the tires (see FIG. 12).
- similarly, the first object 6 corresponding to the third object 8 is composed of a first part 60, which is the main body portion, a second part 61, which is the roof portion, a third part 62, which is the windshield, and a fourth part 63, which is the tires (see FIG. 6 and the like).
- the second object 7 is likewise composed of a first part 70, which is the main body portion, a second part 71, which is the roof portion, a third part 72, which is the windshield, and a fourth part 73, which is the tires (see FIG. 6 and the like).
- the fourth object 9 is generated in the virtual space VS at the position corresponding to the table T existing in the second real space.
- the coordinates, posture, and direction of the fourth object can be changed according to the operation of the second avatar 5b with respect to the fourth object 9 (that is, the operation of the second user U2 with respect to the table T).
- the shape of the fourth object 9 is a shape corresponding to the table T.
- the user state recognition unit 31 recognizes the state of the first user U1 based on the image data of the first user U1, including the markers 1, taken by the first camera 2a, and recognizes the state of the second user U2 based on the image data of the second user U2, including the markers 1, taken by the second camera 2b.
- the user state recognition unit 31 has a user coordinate recognition unit 31a, a user posture recognition unit 31b, and a user direction recognition unit 31c.
- the user coordinate recognition unit 31a, the user posture recognition unit 31b, and the user direction recognition unit 31c extract the markers 1 attached to the user U from the image data of the user U, and recognize the coordinates, posture, and direction of the user U based on the extraction result.
- the avatar control unit 32 controls the state (specifically, the coordinates, posture, and direction) of the first avatar 5a according to the change in the state of the first user U1 recognized by the user state recognition unit 31, and controls the state (specifically, the coordinates, posture, and direction) of the second avatar 5b according to the change in the state of the second user U2 recognized by the user state recognition unit 31.
- the item state recognition unit 33 recognizes the states of the item I and the table T based on the image data of the item I and the table T, including the markers 1, taken by the second camera 2b.
- the item state recognition unit 33 has an item shape recognition unit 33a, an item coordinate recognition unit 33b, an item posture recognition unit 33c, and an item direction recognition unit 33d.
- the item shape recognition unit 33a, the item coordinate recognition unit 33b, the item posture recognition unit 33c, and the item direction recognition unit 33d extract the markers 1 attached to the item I and the table T from the image data of the item I and of the table T, and recognize the shape, coordinates, posture, and direction of the item I and the table T based on the extraction result.
- the object control unit 34 controls the states of the third object 8 and the fourth object 9 according to the operation of the avatar 5 recognized by the avatar control unit 32.
- the object control unit 34 has a first object control unit 34a, a second object control unit 34b, a third object control unit 34c, and a fourth object control unit 34d.
- the first object control unit 34a controls the shape (assembled state), coordinates, posture (tilt with respect to the yaw axis), and direction (phase around the yaw axis) of the first object 6 according to the movement of the first avatar 5a (that is, the first user U1).
- the second object control unit 34b controls the coordinates, posture, and direction of the second object 7 according to the operation of the second avatar 5b (that is, the second user U2). Further, the second object control unit 34b controls the shape of the second object 7 according to the change in the shape of the first object 6.
- the third object control unit 34c controls the shape, coordinates, posture, and direction of the third object 8 according to the operation of the second avatar 5b (that is, the second user U2) and the state of the item I.
- the fourth object control unit 34d controls the coordinates, posture, and direction of the fourth object 9 according to the operation of the second avatar 5b (that is, the second user U2) and the state of the table T. However, in this embodiment, the state of the fourth object 9 is not changed.
- the output information determination unit 35 determines the information regarding the virtual space VS to be recognized by the user U via the HMD4.
- the output information determination unit 35 has an image determination unit 35a and an audio determination unit 35b.
- the image determination unit 35a determines the image of the virtual space VS to be recognized, via the monitor 40 of the HMD 4, by the user U corresponding to the avatar 5, based on the state of the avatar 5 and the states of the first object 6, the second object 7, the third object 8, and the fourth object 9.
- the voice determination unit 35b determines the sound related to the image of the virtual space VS to be recognized, via the speaker 41 of the HMD 4, by the user U corresponding to the avatar 5, based on the state of the avatar 5 and the states of the first object 6, the second object 7, the third object 8, and the fourth object 9.
- each processing unit constituting the virtual space experience system of the present invention is not limited to the above configuration.
- a part of the processing unit provided in the server 3 in the present embodiment may be provided in the HMD 4.
- a plurality of servers may be used, or the server 3 may be omitted and the CPUs mounted on the HMD 4 may be linked to each other.
- first, the virtual space generation unit 30 of the server 3 generates the virtual space VS, the avatars 5 to exist in the virtual space VS, and the various objects, based on the images taken by the cameras 2 (FIG. 4 / STEP100).
- specifically, the virtual space generation unit 30 generates an image serving as the background of the virtual space VS. Further, it generates the images of the avatars 5 existing in the virtual space VS based on the images of the user U among the images taken by the cameras 2, generates the images of the first object 6, the second object 7, and the third object 8 based on the image of the item I, and generates the image of the fourth object 9 based on the image of the table T.
- next, the object control unit 34 of the server 3 determines the states of the first object 6, the second object 7, and the third object 8 in the virtual space VS based on the state of the item I in the second real space RS2 (FIG. 4 / STEP101).
- the item state recognition unit 33 recognizes the state of the item I in the second real space RS2 based on the state of the item I taken by the camera 2.
- the third object control unit 34c of the object control unit 34 determines the state of the third object 8 based on the recognized state of the item I.
- after that, the first object control unit 34a and the second object control unit 34b of the object control unit 34 determine the states of the first object 6 and the second object 7 so as to correspond to the state of the third object 8.
- the item state recognition unit 33 recognizes the state of the table T in the second real space RS2 based on the state of the table T taken by the camera 2. After that, the fourth object control unit 34d of the object control unit 34 determines the state of the fourth object 9 based on the recognized state of the table T.
- next, the avatar control unit 32 of the server 3 determines the state of the avatar 5 corresponding to each user U in the virtual space VS based on the state of each user U in the real space RS (FIG. 4 / STEP102).
- the user state recognition unit 31 recognizes the state of the user U in the real space RS based on the state of the user U taken by the camera 2.
- the avatar control unit 32 determines the state of the avatar 5 based on the recognized state of the user U.
- next, the image determination unit 35a and the voice determination unit 35b of the output information determination unit 35 of the server 3 determine the image and the sound to be recognized by the user U based on the states of the first object 6, the second object 7, the third object 8, and the fourth object 9, and the state of the avatar 5 (FIG. 4 / STEP103).
- next, the HMD 4 worn by the user U displays the determined image on the monitor 40 and generates the determined voice from the speaker 41 (FIG. 4 / STEP104), and the current process ends. Taken together, these steps form one pass of a fixed control cycle, as sketched below.
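- a schematic loop for the STEP100-STEP104 flow, with hypothetical method names standing in for the units described above (the patent itself discloses no code), might look like this:

```python
def control_cycle(server, hmds, cameras):
    """One pass of the STEP100-STEP104 flow, repeated every cycle."""
    images = [cam.capture() for cam in cameras]
    # STEP101: item and table states -> object states.
    item_state = server.item_state_recognition.recognize(images)
    server.object_control.update(item_state)
    # STEP102: user states -> avatar states.
    user_states = server.user_state_recognition.recognize(images)
    server.avatar_control.update(user_states)
    # STEP103-104: per-user image and sound, sent to each HMD.
    for hmd in hmds:
        frame, sound = server.output_information.determine(hmd.user)
        hmd.monitor.show(frame)
        hmd.speaker.play(sound)
```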
- through these processes, the first avatar 5a corresponding to the first user U1 and the second avatar 5b corresponding to the second user U2 are arranged in the virtual space VS so as to face each other.
- the first object 6 is arranged in the vicinity of the first avatar 5a corresponding to the first user U1.
- the second object 7 is arranged in the vicinity of the second avatar 5b corresponding to the second user U2.
- the third object 8 and the fourth object 9 are arranged in the vicinity of the second avatar 5b, at positions corresponding to the item I and the table T in the second real space RS2.
- the user U who is made to recognize this virtual space VS perceives himself or herself as the avatar 5 corresponding to him or her, and recognizes that the objects existing in the vicinity in the virtual space VS can be operated by his or her own hands (strictly speaking, via the avatar 5).
- the user state recognition unit 31 determines whether or not the state of the first user U1 has changed (FIG. 5 / STEP200).
- if the state has changed, the avatar control unit 32 changes the state of the first avatar 5a based on the change in the state of the first user U1 (FIG. 5 / STEP201).
- next, the object control unit 34 determines whether or not the change in the state (that is, the operation) of the first avatar 5a executed by the avatar control unit 32 is an operation for operating the first object 6 or the like (FIG. 5 / STEP202).
- specifically, the object control unit 34 determines whether or not the operation of the first avatar 5a is an operation for operating the first object 6 or the like, based on whether or not the change in the coordinates, posture, and direction of the first avatar 5a with respect to the first object 6 in the virtual space VS corresponds to a predetermined change in coordinates, posture, and direction.
- more specifically, it is determined whether or not the operation is one in which the hand of the first avatar 5a grabs or moves any of the parts included in the first object 6, as sketched below.
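- this determination can be read as a simple geometric predicate. The sketch below assumes a grab is a closed avatar hand within a threshold distance of a part; the threshold and all names are our own, not the patent's.

```python
def grabbed_part(hand_pos, hand_closed, parts, grab_radius=0.05):
    """Return the part the avatar's hand is grabbing, or None.
    hand_pos: (x, y, z) of the hand; parts: objects with a .coords
    attribute; grab_radius: assumed threshold in meters."""
    if not hand_closed:
        return None
    for part in parts:
        dx, dy, dz = [h - p for h, p in zip(hand_pos, part.coords)]
        if (dx * dx + dy * dy + dz * dz) ** 0.5 <= grab_radius:
            return part
    return None
```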
- if the operation is one for operating the first object 6 (YES in STEP202), the first object control unit 34a of the object control unit 34 changes the shape, coordinates, posture, and direction of the first object 6 based on the operation by the first avatar 5a (FIG. 5 / STEP203).
- next, the second object control unit 34b of the object control unit 34 changes the shape of the second object 7 located in the vicinity of the second avatar 5b based on the change in the shape of the first object 6 (FIG. 5 / STEP204).
- for example, suppose that the first avatar 5a performs an operation of attaching the second part 61, which is the roof portion of the first object 6 (a model of a car), to the first part 60, which is the main body portion, thereby changing the shape of the first object 6.
- the shape of the second object 7 having the shape corresponding to the first object 6 also changes in synchronization with the change in the shape of the first object 6.
- that is, the second part 71, which is the roof portion of the second object 7, is automatically attached to the first part 70, which is the main body portion.
- further, suppose that the first avatar 5a changes the posture (tilt with respect to the yaw axis) and the direction (phase around the yaw axis) of the first object 6 in the direction of the arrow shown in FIG. 7.
- the posture and direction of the second object 7 do not change in synchronization with the change of the posture and direction of the first object 6. That is, the posture and direction of the second object 7 do not change before and after the change of the state of the first object 6.
- after that, the image determination unit 35a and the voice determination unit 35b of the output information determination unit 35 of the server 3 determine the image and the sound to be recognized by the user U based on the states of the first object 6 and the second object 7 in the virtual space VS and the state of the avatar 5 (FIG. 5 / STEP205).
- on the other hand, when the operation of the first avatar 5a is not an operation for operating the first object 6 (NO in STEP202), the image determination unit 35a and the voice determination unit 35b of the output information determination unit 35 of the server 3 determine the image and the sound to be recognized by the user U based on the current states of the first object 6 and the second object 7 in the virtual space VS and the state of the avatar 5 (FIG. 5 / STEP205).
- next, the HMD 4 worn by the user U displays the determined image on the monitor 40 and generates the determined voice from the speaker 41 (FIG. 5 / STEP206), and the current process ends.
- the VR system S repeatedly executes the above processing in a predetermined control cycle until the end instruction by the user U is recognized.
- in this way, in the VR system S, the first object 6 has a shape corresponding to the second object 7. That is, those objects are generated by using one object (in this embodiment, the item I, a model of a car to be assembled, existing in the second real space RS2) as a model. Further, in the VR system S, when the shape of the first object 6 is changed, the shape of the second object 7 is changed (synchronously) according to the change in the shape of the first object 6.
- therefore, the first user U1, who plays the role of a teacher, can operate the first object 6 in his or her own hands, while the second user U2, who plays the role of a student, observes the second object 7 in his or her own hands.
- that is, the first user U1 acting as a teacher can teach the second user U2 acting as a student, via the second object 7, how to assemble the object serving as the model of the first object 6 and the second object 7 (in this embodiment, a model of a car).
- moreover, this VR system S is configured so that, when any of the coordinates, posture, and direction of the first object 6 is changed along with its shape, the coordinates, posture, and direction of the second object 7 remain fixed, while the shape of the second object 7 is changed according to the change in the shape of the first object 6.
- therefore, the shape of the second object 7 observed by the second user U2 changes according to the result of the first user U1's operation on the first object 6, but its coordinates, posture, and direction do not change. That is, the state relevant to observation remains fixed to some extent.
- as a result, the first user U1 can observe the first object 6 while freely moving it, while the second user U2 can also observe the second object 7 in a somewhat stable state.
- that is, in the VR system S, each of the first object 6 and the second object 7 modeled on the item I in the second real space RS2 can be observed by the corresponding first user U1 or second user U2 either in a freely chosen state or in a somewhat stable state.
- as a result, each of the first user U1 and the second user U2 can easily observe the first object 6 or the second object 7 corresponding to him or her.
- however, the virtual space experience system of the present invention is not limited to such a configuration. It suffices that, when a state of the first object including states other than its shape is changed, at least one of the coordinates, posture, and direction of the second object is kept fixed, while the states of the second object other than the fixed state are changed according to the change in the state of the first object.
- that is, it is not always necessary to keep all of the coordinates, posture, and direction of the second object fixed regardless of the change in the state of the first object; any one or two of the coordinates, posture, and direction may be changed according to the corresponding state of the first object.
- the states to be changed correspondingly may be set as appropriate according to the shape and type of the object to be observed, the purpose of observation, and the like, as sketched below.
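- this configurability can be expressed as a mask over which pose components stay fixed, extending the earlier sketch (again, purely illustrative):

```python
def sync_second_object(second, first_state,
                       fixed=("coords", "posture", "direction")):
    """Copy the first object's state onto the second object, except the
    components named in `fixed`, which keep their current values."""
    for attr in ("shape", "coords", "posture", "direction"):
        if attr not in fixed:
            setattr(second, attr, getattr(first_state, attr))
```

- with the default mask this reproduces the behavior of the embodiment (only the shape is synchronized); passing, say, fixed=("coords",) would let the posture and direction follow the first object as well.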
- the user state recognition unit 31 determines whether or not the state of the second user U2 has changed (FIG. 8 / STEP300).
- if the state has changed, the avatar control unit 32 changes the state of the second avatar 5b based on the change in the state of the second user U2 (FIG. 8 / STEP301).
- next, the object control unit 34 determines whether or not the change in the state (that is, the operation) of the second avatar 5b executed by the avatar control unit 32 is an operation for operating the second object 7 or the like (FIG. 8 / STEP302).
- specifically, the object control unit 34 determines whether or not the operation of the second avatar 5b is an operation for operating the second object 7 or the like, based on whether or not the change in the coordinates, posture, and direction of the second avatar 5b with respect to the second object 7 in the virtual space VS corresponds to a predetermined change in coordinates, posture, and direction.
- more specifically, it is determined whether or not the operation is one in which the hand of the second avatar 5b grabs or moves any of the parts included in the second object 7.
- if the operation is one for operating the second object 7 (YES in STEP302), the second object control unit 34b of the object control unit 34 changes the coordinates, posture, and direction of the second object 7 based on the operation by the second avatar 5b (FIG. 8 / STEP303).
- here, the second object 7 is an object for the second user U2 to observe. Therefore, in the VR system S, the second user U2 cannot change the shape of the second object 7 via the corresponding second avatar 5b.
- the state of the first object 6 does not change in synchronization with the change of the state of the second object 7.
- for example, suppose that the second avatar 5b changes the posture (tilt with respect to the yaw axis) and the direction (phase around the yaw axis) of the second object 7 in the direction of the arrow shown in the figure.
- even in this case, the posture and direction of the first object 6 do not change in synchronization with the change in the posture and direction of the second object 7. That is, the posture and direction of the first object 6 do not change before and after the change in the state of the second object 7.
- after that, the image determination unit 35a and the voice determination unit 35b of the output information determination unit 35 of the server 3 determine the image and the sound to be recognized by the user U based on the states of the first object 6 and the second object 7 in the virtual space VS and the state of the avatar 5 (FIG. 8 / STEP304).
- on the other hand, when the operation of the second avatar 5b is not an operation for operating the second object 7 (NO in STEP302), the image and the sound to be recognized by the user U are likewise determined based on the states of the first object 6 and the second object 7 in the virtual space VS and the state of the avatar 5 (FIG. 8 / STEP304).
- next, the HMD 4 worn by the user U displays the determined image on the monitor 40 and generates the determined voice from the speaker 41 (FIG. 8 / STEP305), and the current process ends.
- the VR system S repeatedly executes the above processing in a predetermined control cycle until the end instruction by the user U is recognized.
- in this way, in the VR system S, the coordinates, posture, and direction of the second object 7 can be changed independently of the coordinates, posture, and direction of the first object 6.
- thereby, the coordinates, posture, and direction of the second object 7 can be changed to make it easier to see, regardless of the state of the first object 6 corresponding to the first user U1.
- however, the virtual space experience system of the present invention is not limited to such a configuration. It suffices that at least one of the corresponding coordinates, posture, and direction of the second object can be changed independently of at least one of the coordinates, posture, and direction of the first object.
- also, the state of the first object need not necessarily be left unchanged when the state of the second object is changed; the state of the first object may be configured to change according to the change in the state of the second object.
- for example, when the first user plays the role of a teacher and the second user plays the role of a student, as in the present embodiment, changing the state of the first object according to the change in the state of the second object allows the first user to estimate the degree of understanding of the second user.
- the user state recognition unit 31 determines whether or not the state of the second user U2 has changed (FIG. 10 / STEP400).
- if the state has changed, the avatar control unit 32 changes the state of the second avatar 5b based on the change in the state of the second user U2 (FIG. 10 / STEP401).
- next, the object control unit 34 determines whether or not the change in the state (that is, the operation) of the second avatar 5b executed by the avatar control unit 32 is an operation for operating the third object 8 or the like (FIG. 10 / STEP402).
- specifically, the object control unit 34 determines whether or not the operation of the second avatar 5b is an operation for operating the third object 8 or the like, based on whether or not the change in the coordinates, posture, and direction of the second avatar 5b with respect to the third object 8 in the virtual space VS corresponds to a predetermined change in coordinates, posture, and direction.
- more specifically, it is determined whether or not the operation is one in which the hand of the second avatar 5b grabs or moves any of the parts included in the third object 8.
- if the operation is one for operating the third object 8 (YES in STEP402), the third object control unit 34c of the object control unit 34 changes the shape, coordinates, posture, and direction of the third object 8 based on the operation by the second avatar 5b (FIG. 10 / STEP403).
- here, the third object 8 is an object for the second user U2 to operate, and changes in its state according to the result of the operation are independent of the first object 6 and the second object 7. Therefore, as shown in FIG. 11, even if the state of the third object 8 changes, the states of the first object 6 and the second object 7 do not change in synchronization with it.
- after that, the image determination unit 35a and the voice determination unit 35b of the output information determination unit 35 of the server 3 determine the image and the sound to be recognized by the user U based on the state of the third object 8 and the state of the avatar 5 in the virtual space VS (FIG. 10 / STEP404).
- on the other hand, when the operation of the second avatar 5b is not an operation for operating the third object 8 (NO in STEP402), the image and the sound are likewise determined based on the state of the third object 8 and the state of the avatar 5 in the virtual space VS (FIG. 10 / STEP404).
- next, the HMD 4 worn by the user U displays the determined image on the monitor 40 and generates the determined voice from the speaker 41 (FIG. 10 / STEP405), and the current process ends.
- the VR system S repeatedly executes the above processing in a predetermined control cycle until the end instruction by the user U is recognized.
- here, the third object 8 is generated in the virtual space VS so as to overlap the item I actually existing in the second real space RS2. Further, changes in the state of the third object 8 correspond to the item I; that is, a change in the state of the third object 8 corresponds to a change in the state of the item I.
- therefore, when the second user U2 observes the assembly method via the second object 7 and then operates the third object 8 according to that method, the item I, which was unassembled before the VR system S was used, ends up in the assembled state after use.
- in this way, the result of the operation performed by the second user U2 on the third object 8 via the second avatar 5b in the virtual space VS is reflected in the corresponding item I in the second real space RS2.
- as a result, the second user U2 can feel as if the result of the operation of the second avatar 5b in the virtual space VS were reflected in the item I in the second real space RS2.
- moreover, when the second user U2 touches the third object 8 in the virtual space VS, the second user U2 also touches the item I in the second real space RS2.
- this makes it possible to give the second user U2 the physical sensation of touching the third object 8.
- the immersive feeling of the second user U2 in the virtual space VS can be enhanced.
- when the first user U1 and the second user U2 exist in the same real space, or when the third object 8 is generated corresponding to an item existing in the first real space RS1, the first user U1 can also touch the third object 8.
- in that case, the first user U1 can feel as if the result of the operation of the first avatar 5a in the virtual space VS were reflected in the item in the real space.
- the immersive feeling of the first user U1 in the virtual space VS can be enhanced.
- in the present embodiment, when the state of the third object 8 changes, the states of the first object 6 and the second object 7 do not change in synchronization with it.
- however, the state of the first object 6 may also be changed (synchronously) in response to the change in the state of the third object 8.
- in that configuration, the operation of the second user U2 on the third object 8 is displayed, and the first user U1 can grasp it through the corresponding first object 6.
- in the present embodiment, the second object 7 and the third object 8 are generated using the item I, which is a model of a car to be assembled, as a model.
- the shape of the second object 7 changes according to the change in the shape of the first object 6 corresponding to the first user U1 (that is, the operation of the first user U1).
- the shape of the third object 8 changes according to the operation of the second user U2.
- for example, when the second part 61, which is the roof portion, is attached to the first part 60, which is the main body portion of the first object 6, by the first user U1, the second part 71, which is the roof portion of the second object 7, is also automatically attached to the first part 70, which is the main body portion.
- here, the second object 7 is generated as a translucent object. Therefore, when the shape of the first object 6 changes, only the changed part (that is, only the difference between the shape of the first object 6 and the shape of the third object 8) is shown to the second user U2 as a translucent part.
- thereby, the second user U2 can intuitively grasp the difference between the shape of the first object 6 and the shape of the third object 8 (and, by extension, the first user U1's operation on the first object 6). Specifically, the second user U2 can easily grasp the assembly procedure of the first user U1 by observing the difference, as sketched below.
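- the difference display can be thought of as a set difference over part states. A short sketch with a hypothetical part-state representation:

```python
def difference_parts(first_shape, third_shape):
    """Shapes as dicts mapping part name -> assembly state. Returns the
    parts whose state differs, i.e. what is rendered translucent (or in
    a distinct color) over the third object for the second user."""
    return {name: state for name, state in first_shape.items()
            if third_shape.get(name) != state}

# Example: the teacher has attached the roof; the student has not yet.
first = {"body": "base", "roof": "attached"}
third = {"body": "base", "roof": "detached"}
assert difference_parts(first, third) == {"roof": "attached"}
```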
- the color of the second object 7 does not necessarily have to be translucent, and may be a color different from the color of the third object 8.
- as described above, the VR system S generates in the virtual space VS not only the first object 6 observed by the first user U1 and the second object 7 observed by the second user U2, but also the third object 8, operated by the second user U2 and corresponding to the item I, and the fourth object 9 corresponding to the table T on which the item I is placed.
- thereby, the second user U2, who is the student, can operate the third object 8 (and thus the actually existing item I) according to the method shown by the first user U1, who is the teacher, via the first object 6 (and thus the second object 7).
- however, the virtual space experience system of the present invention is not limited to such a configuration; it suffices that a first object observed by the first user and a second object observed by the second user are generated in the virtual space.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
When the state of a first object is changed by a first avatar corresponding to a first user, an object control unit (34) of a VR system S, in response to the change in the state of the first object, changes the states of a second object observed via a second avatar corresponding to a second user other than at least one of its coordinates, posture, and orientation, while keeping said coordinates and/or posture and/or orientation fixed.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2020/041524 WO2022097271A1 (fr) | 2020-11-06 | 2020-11-06 | Système d'expérience d'espace virtuel |
| JP2021520445A JP6933850B1 (ja) | 2020-11-06 | 2020-11-06 | 仮想空間体感システム |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2020/041524 WO2022097271A1 (fr) | 2020-11-06 | 2020-11-06 | Système d'expérience d'espace virtuel |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2022097271A1 true WO2022097271A1 (fr) | 2022-05-12 |
Family
ID=77550013
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2020/041524 Ceased WO2022097271A1 (fr) | 2020-11-06 | 2020-11-06 | Système d'expérience d'espace virtuel |
Country Status (2)
| Country | Link |
|---|---|
| JP (1) | JP6933850B1 (fr) |
| WO (1) | WO2022097271A1 (fr) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2020515891A (ja) * | 2017-03-24 | 2020-05-28 | サージカル シアター エルエルシー | 仮想環境における訓練および共同作業のためのシステムおよび方法 |
| JP6739611B1 (ja) * | 2019-11-28 | 2020-08-12 | 株式会社ドワンゴ | 授業システム、視聴端末、情報処理方法及びプログラム |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019230852A1 (fr) * | 2018-06-01 | 2019-12-05 | ソニー株式会社 | Dispositif de traitement d'informations, procédé de traitement d'informations et programme |
-
2020
- 2020-11-06 JP JP2021520445A patent/JP6933850B1/ja active Active
- 2020-11-06 WO PCT/JP2020/041524 patent/WO2022097271A1/fr not_active Ceased
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2020515891A (ja) * | 2017-03-24 | 2020-05-28 | サージカル シアター エルエルシー | 仮想環境における訓練および共同作業のためのシステムおよび方法 |
| JP6739611B1 (ja) * | 2019-11-28 | 2020-08-12 | 株式会社ドワンゴ | 授業システム、視聴端末、情報処理方法及びプログラム |
Also Published As
| Publication number | Publication date |
|---|---|
| JP6933850B1 (ja) | 2021-09-08 |
| JPWO2022097271A1 (fr) | 2022-05-12 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP3425481B1 (fr) | Dispositif de commande | |
| EP2705435B1 (fr) | Monde de présence numérique à distance simultané massif | |
| JP2022549853A (ja) | 共有空間内の個々の視認 | |
| TW202101172A (zh) | 用於人工實境系統的手臂視線驅動的使用者介面元件閘控 | |
| TW202101170A (zh) | 用於人工實境系統的角點識別手勢驅動的使用者介面元件閘控 | |
| TW202105129A (zh) | 具有用於閘控使用者介面元件的個人助理元件之人工實境系統 | |
| JP2010253277A (ja) | ビデオゲームにおいてオブジェクトの動きを制御する方法およびシステム | |
| JP2010257461A (ja) | ネットワークゲーム用の共有ゲーム空間を創出する方法およびシステム | |
| JP2010257081A (ja) | 画像処理方法及び画像処理装置 | |
| WO2019087564A1 (fr) | Dispositif de traitement d'informations, procédé de traitement d'informations, et programme | |
| WO2021261593A1 (fr) | Système d'apprentissage de réalité virtuelle pour aéronef, procédé d'apprentissage de réalité virtuelle pour aéronef et programme d'apprentissage de réalité virtuelle pour aéronef | |
| JP7530754B2 (ja) | 教育支援システム、方法およびプログラム | |
| US20230135138A1 (en) | Vr training system for aircraft, vr training method for aircraft, and vr training program for aircraft | |
| JP7138392B1 (ja) | 仮想空間体感システム | |
| JP6933850B1 (ja) | 仮想空間体感システム | |
| JP2019193705A (ja) | ゲームプログラム及びゲーム装置 | |
| US20250363752A1 (en) | Experience space switching method, experience space switching system, experience space switching apparatus, and experience space switching program and recording medium with the same recorded therein | |
| WO2021240601A1 (fr) | Système de sensation corporelle d'espace virtuel | |
| JP7115697B2 (ja) | アニメーション制作システム | |
| JP7055527B1 (ja) | 仮想空間体感システム | |
| JP7413472B1 (ja) | 情報処理システムおよびプログラム | |
| JP7412497B1 (ja) | 情報処理システム | |
| JP2022180478A (ja) | アニメーション制作システム | |
| WO2024090303A1 (fr) | Dispositif de traitement d'informations et procédé de traitement d'informations | |
| Mahendran | REPORT NUMBER: IE-PR-05-12 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| ENP | Entry into the national phase |
Ref document number: 2021520445 Country of ref document: JP Kind code of ref document: A |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20960824 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 20960824 Country of ref document: EP Kind code of ref document: A1 |