WO2025100190A1 - Information processing device, method, and program - Google Patents
Information processing device, method, and program
- Publication number: WO2025100190A1 (PCT/JP2024/037006)
- Authority: WIPO (PCT)
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
Description
- This technology relates to an information processing device, method, and program, and in particular to an information processing device, method, and program that allow information within a virtual space to be easily acquired.
- It is known in the past to capture images of a virtual space with a virtual camera (see Patent Document 1).
- However, the shooting range within the virtual space has had to be set by the user each time a photo is taken.
- An information processing device includes: a sensing object space position update unit that updates the sensing object space position in response to a change in the tracking target object space position so that a positional relationship between the tracking target object space position, which is the spatial position of a tracking target object in a virtual space, and the sensing object space position, which is the spatial position of a sensing object for sensing a sensing range set for the tracking target object in the virtual space, is maintained; and a sensing result information generation unit that generates sensing result information, which is the result of sensing the sensing range by the sensing object at the sensing object space position.
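- As a rough, non-authoritative sketch of the idea in the preceding paragraph (all names below are hypothetical and not taken from the disclosure), the update can be pictured as shifting the sensing object by the same displacement as the tracking target object so that their positional relationship is preserved:

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def __add__(self, other: "Vec3") -> "Vec3":
        return Vec3(self.x + other.x, self.y + other.y, self.z + other.z)

    def __sub__(self, other: "Vec3") -> "Vec3":
        return Vec3(self.x - other.x, self.y - other.y, self.z - other.z)

def update_sensing_object_position(previous_target_pos: Vec3,
                                   current_target_pos: Vec3,
                                   sensing_object_pos: Vec3) -> Vec3:
    """Move the sensing object by the same amount the tracking target object
    moved, so the spatial relationship between the two objects is maintained."""
    return sensing_object_pos + (current_target_pos - previous_target_pos)
```

- The sensing result information would then be generated by sensing the set sensing range from the updated sensing object space position.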
- FIG. 1 is a diagram illustrating an example of virtual space sensing according to an embodiment of the present technology.
- FIG. 2 is a diagram illustrating another example of virtual space sensing according to an embodiment of the present technology.
- FIG. 3 is a block diagram showing a configuration example of a virtual space sensing system according to a first embodiment of the present technology.
- FIG. 4 is a diagram illustrating an example of a registration UI.
- FIG. 5 is a diagram showing another example of the registration UI of FIG. 4.
- FIG. 6 is a diagram showing a display example of a sensing object detail setting screen of the registration UI.
- FIG. 7 is a diagram showing another display example of the sensing object detail setting screen of FIG. 6.
- FIG. 8 is a diagram illustrating other display examples of the sensing range setting area.
- FIG. 9 is a diagram illustrating an example of object information.
- FIG. 10 is a diagram showing an example of sensing object information.
- FIG. 11 is a diagram illustrating a relative shooting position.
- FIG. 12 is a diagram illustrating a sensing range.
- FIG. 13 is a diagram showing another example of the sensing object detail setting screen.
- FIG. 14 is a diagram illustrating a shooting direction.
- FIG. 15 is a diagram showing an example of a sensing range when a sensing object has a camera function.
- FIG. 16 is a diagram showing another example of the sensing range when the sensing object has a camera function.
- FIG. 17 is a diagram showing another example of the sensing object information.
- FIG. 18 is a flowchart illustrating a process of the virtual space sensing system of FIG. 3.
- FIG. 19 is a diagram illustrating a configuration example of a virtual space sensing system according to a second embodiment of the present technology.
- FIG. 20 is a diagram illustrating an example of a service provided by the present technology.
- FIG. 21 is a diagram showing another example of a service provided by the present technology.
- FIG. 22 is a block diagram showing an example of the configuration of a computer.
- FIG. 1 is a diagram illustrating an example of virtual space sensing according to an embodiment of the present technology.
- In FIG. 1, an example is shown in which a sensing object 11a in a virtual space 1 has a camera function and captures an image of the entire body 13a of a tracking target object 12a from the front.
- the tracked object 12a is an object that is the subject of tracking by the sensing object 11a.
- the tracked object 12a is an avatar, but it may be a person other than an avatar in the virtual space 1 (NPC: Non-player character), an animal, a plant, a product, a background, etc.
- the sensing object 11a has a sensing function of acquiring a representation in the three-dimensional virtual space 1 from the virtual space 1.
- the representation is, for example, an image or sound of an object.
- the sensing object 11a is set in correspondence with the tracked object 12a. Note that the sensing object 11a only needs to be functionally realized by software, and may or may not have an actual entity in the virtual space 1 (display an actual state in the virtual space 1).
- the sensing object 11a in FIG. 1 has a camera function among its sensing functions.
- When the sensing object 11a is set to sense the entire body 13a from the front of the tracked object 12a, it moves to an appropriate spatial position according to the spatial position of the tracked object 12a, as shown by the dotted arrow, so as to maintain its spatial positional relationship with the tracked object 12a, and senses (photographs) the entire body 13a from the front of the tracked object 12a.
- the three-dimensional spatial position in the virtual space 1 will be referred to simply as the position, and the spatial positional relationship will be referred to simply as the positional relationship.
- Since the sensing object 11a has a camera function, the entire body of the tracked object 12a is the sensing range, and processing to obtain an image as the sensing result of that sensing range is performed as the virtual space sensing.
- the sensing range is not limited to the entire body of the tracked object 12a, but can also be set to a part of the tracked object, the field of view as seen from the tracked object, etc.
- FIG. 2 shows another example of virtual space sensing according to an embodiment of the present technology.
- In FIG. 2, a sensing object 11b has a microphone function in the three-dimensional virtual space 1, and captures audio around a tracked object 12b, i.e., within a certain spherical range 13b centered on a point in the vicinity of the tracked object 12b.
- the vicinity may be any position within the certain range 13b that includes the position of the tracked object 12b, and the tracked object 12b may be set to be the center of the sphere.
- sensing object 11b has a microphone function among its sensing functions. Note that, like sensing object 11a, sensing object 11b only needs to be functionally realized by software, and may or may not have an actual entity in virtual space 1 (displaying an actual entity in virtual space 1).
- When the sensing object 11b is set to sense audio within the certain spherical range 13b centered on the vicinity of the tracked object 12b, it moves to an appropriate position according to the position of the tracked object 12b, as shown by the dotted arrow, maintains its positional relationship with the tracked object 12b, and senses within the certain spherical range 13b centered on the vicinity of the tracked object 12b.
- the sensing object 11b has a microphone function, so the sensing range is a certain spherical range 13b centered near the tracked object 12b, and processing is performed to obtain sound as a sensing result in the sensing range as virtual space sensing.
- Hereinafter, when there is no need to distinguish between the sensing objects 11a and 11b, they will be referred to as the sensing object 11.
- Similarly, when there is no need to distinguish between the tracked objects 12a and 12b, they will be referred to as the tracked object 12.
- FIG. 3 is a diagram illustrating an example of the configuration of a virtual space sensing system according to the first embodiment of the present technology.
- the virtual space sensing system 51 in Figure 3 is a system that realizes the virtual space sensing of Figures 1 and 2 described above, and provides services to a virtual space user terminal using virtual space sensing result information (images, audio, etc.), analysis result information obtained by analyzing the virtual space sensing result information, or information based on the virtual space sensing result information and analysis result information.
- the virtual space sensing system 51 is configured to include a virtual space construction unit 60, a virtual space object analysis server 61, a virtual space user terminal 62, and an object information registrant terminal 63.
- the virtual space system is made up of a virtual space construction unit 60 and an information storage unit 72 in a virtual space object analysis server 61, and the virtual space 1 in FIG. 1 and FIG. 2 is provided to a virtual space user terminal 62.
- the virtual space sensing system 51 and the virtual space system are managed by the same provider (administrator), but are different systems. That is, the virtual space sensing system 51 in FIG. 3 can provide sensing results or analysis results of the virtual space 1 to the outside.
- the virtual space sensing system 51 and the virtual space system may each be managed by, for example, different providers with which they have a business partnership.
- the virtual space construction unit 60 acquires virtual space construction information (e.g., object three-dimensional (3D) data, spatial positions, etc.), which is information necessary for constructing the virtual space 1, from the information storage unit 72, and constructs the three-dimensional virtual space 1.
- The information of the constructed virtual space 1 (hereinafter referred to as virtual space information) is output to, and thereby provided to, the virtual space user terminal 62.
- the virtual space construction unit 60 also outputs the virtual space information to the object information registration unit 73 of the virtual space object analysis server 61, and outputs the virtual space construction information to the sensing unit 74 of the virtual space object analysis server 61.
- the virtual space construction unit 60 not only constructs the virtual space 1, but also receives user input (user object information in the virtual space 1 or real space) supplied from the virtual space user terminal 62 and reconstructs the virtual space 1.
- the virtual space information and user object information are output to the information storage unit 72, and the virtual space construction information is updated.
- User object information in virtual space 1 is, for example, information related to the user's avatar.
- User object information in real space is, for example, when the virtual space user terminal 62 is an HMD (Head Mount Display), facial direction information and gaze information measured by the HMD worn by the user, images captured by a camera or the like that captures the user, gesture information acquired from the captured images, communication information (audio information) input from a microphone, and UI (User Interface) operation information.
- the virtual space construction unit 60 receives voice information and the like from the virtual space user terminal 62, and recognizes when the user starts, ends, or is currently communicating with other avatars, etc.
- the communication recognition information is output to and stored in the information storage unit 72.
- As an example of providing (feedback) to the virtual space user terminal 62, the virtual space construction unit 60 generates an In Play advertisement based on the analysis information analyzed by the information analysis unit 75 and stored in the information storage unit 72, and displays the generated In Play advertisement in the virtual space 1.
- the analysis information is, for example, the detection result of the reaction (smile) of the avatar who viewed the In Play advertisement, the degree of excitement, In Play advertisement generation information used to generate the In Play advertisement, etc.
- the virtual space construction unit 60 also outputs, as an example of providing to the virtual space user terminal 62, music corresponding to the analysis information analyzed by the information analysis unit 75 stored in the information storage unit 72 to the virtual space 1 in order to provide the music to the virtual space user terminal 62.
- the analysis information in this case is, for example, emotion estimation result information of the avatar based on the facial expression and voice of the avatar.
- the virtual space construction unit 60 outputs relaxing music in the virtual space 1.
- the virtual space construction unit 60 not only constructs the virtual space 1, but also functions as an information providing unit that provides (feeds back) the analysis results of the sensing results acquired from the virtual space 1 or information based on the analysis results to the virtual space 1 or the virtual space user terminal 62 that uses the virtual space 1.
- The virtual space construction unit 60 also provides the sensing results stored in the information storage unit 72, such as images, audio, and photo albums made up of images, to the virtual space 1 or to the virtual space user terminal 62 that uses the virtual space 1. In this case, the virtual space construction unit 60 also functions as an information providing unit that provides sensing result information to the virtual space 1 or to the virtual space user terminal 62 that uses the virtual space 1.
- the virtual space object analysis server 61 is configured to include an information storage unit 72, an object information registration unit 73, a sensing unit 74, and an information analysis unit 75.
- the information storage unit 72 is a database that stores virtual space construction information, which is information necessary for the virtual space construction unit 60 to construct the virtual space 1, as well as information on the object of interest (object of interest information), information on the tracked object including the object of interest (tracked object information), and information on the sensing object (sensing object information).
- The object of interest is an object selected, in response to an operation by the registrant, as the object to be tracked from among the objects in the virtual space 1, and information on the object of interest is registered in the information storage unit 72 by the object information registration unit 73.
- The object to be tracked (tracking target object) is the object that is the tracking target of a sensing object, and it is set by the registrant or by the virtual space sensing system 51.
- the object to be tracked also includes the object of interest.
- the object of interest information, the tracked object information, and the sensing object information are registered by the object information registration unit 73, stored and managed by the information storage unit 72, and updated by the sensing unit 74.
- the information storage unit 72 also stores sensing result information (e.g., RGB image information, audio information, etc.) acquired by the sensing unit 74.
- the information storage unit 72 also stores analysis result information analyzed by the information analysis unit 75.
- the object information registration unit 73 is configured to include a registration UI generation unit 81 and a registration information generation unit 82.
- the registration UI generating unit 81 generates a registration UI such as a GUI for the registrant to register a target object and a sensing object from among the objects in the virtual space 1. Details of the registration UI will be described later with reference to FIG. 4 onwards.
- the registration UI generating unit 81 is supplied with virtual space information from the virtual space construction unit 60, and in the registration UI, a part of a three-dimensional area of the virtual space 1 is projected (displayed) as two-dimensional information based on the virtual space information, for example, in the virtual space display area 113 in FIG. 4 described later.
- the registration UI information is output to the object information registrant terminal 63.
- the part of the virtual space displayed in the registration UI may be three-dimensional information instead of two-dimensional information, and the part of the virtual space that is the three-dimensional information may be displayed as the registration UI on a display capable of three-dimensional display, such as an HMD.
- Based on UI operation information from the registrant supplied from the object information registrant terminal 63, the registration information generating unit 82 sets the TrackingTargetObjectID, described later, in the sensing object information as information indicating which object selected by the registrant from among the objects in the virtual space 1 is the object of interest, and which sensing object has been selected to perform sensing on that object of interest.
- The object information of the object having the ID corresponding to TrackingTargetObjectID is generated as object of interest information (tracking target object information) and sent to the information storage unit 72 for registration.
- the registration information generating unit 82 also generates sensing object information including Setting information, which will be described later, for setting the sensing range of the selected sensing object. As a result, sensing object information including TrackingTargetObjectID and Setting information is generated and sent to the information storage unit 72 for registration.
- Although the sensing object information and the tracking target object information are registered here based on UI operation information from the registrant, they may instead be registered in advance in the virtual space sensing system 51. Also, all objects may be set as tracking target objects.
- the sensing unit 74 is configured to include a sensing necessity determination unit 91, a sensing object space position update unit 92, and a sensing result information generation unit 93.
- the sensing necessity determination unit 91 periodically acquires the tracking target object information and sensing object information registered in the information storage unit 72. The sensing necessity determination unit 91 determines whether sensing is necessary based on the acquired tracking target object information and sensing object information.
- Each sensing object is set with a sensing condition that, for example, an image from the front of the avatar is acquired every frame, every few frames, every 30 seconds, or every minute.
- the sensing necessity determination unit 91 determines whether sensing and movement are necessary. Sensing is performed continuously under the sensing conditions described above. However, sensing may be started, for example, when the user's avatar starts communicating with another avatar as an additional sensing condition, or may be performed until the user's avatar finishes communicating with the other avatar.
- If the sensing object needs to move, the sensing necessity determination unit 91 outputs the tracked object information and the sensing object information to the sensing object space position update unit 92.
- The information necessary for this determination is held in the sensing necessity determination unit 91 as needed. For example, if the coordinates (position) of the tracked object indicated by KeyPointCoord, described later, have moved from the previous coordinates (the coordinates at the time of the last sensing) by an amount equal to or greater than the threshold for determining that the tracked object has moved, the sensing object is moved in accordance with the movement of the tracked object. Note that movement here also includes rotation.
- If the sensing object does not need to move, the sensing necessity determination unit 91 outputs the sensing object information and the tracked object information (for example, the sensing object ID and KeyPointCoord, which is the spatial position information of the corresponding tracked object) to the sensing result information generation unit 93.
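- As a sketch of how such a determination might look (the thresholds, the yaw-only rotation check, and all function names here are assumptions, not part of the disclosure):

```python
import math
from typing import Sequence

def needs_sensing(last_sensing_time: float, now: float,
                  interval_s: float = 30.0, is_communicating: bool = True) -> bool:
    """Sense periodically (e.g. every 30 seconds), optionally only while the
    avatar is communicating with another avatar."""
    return is_communicating and (now - last_sensing_time) >= interval_s

def needs_movement(prev_keypoint: Sequence[float], curr_keypoint: Sequence[float],
                   prev_yaw_deg: float, curr_yaw_deg: float,
                   distance_threshold: float = 10.0,
                   rotation_threshold_deg: float = 5.0) -> bool:
    """The sensing object must follow the tracked object when the tracked
    object's KeyPointCoord has moved (or the object has rotated) by at least
    the threshold since the last sensing."""
    moved = math.dist(prev_keypoint, curr_keypoint) >= distance_threshold
    delta = abs(curr_yaw_deg - prev_yaw_deg) % 360.0
    rotated = min(delta, 360.0 - delta) >= rotation_threshold_deg
    return moved or rotated
```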
- the sensing object spatial position update unit 92 moves the sensing object to an appropriate position depending on the tracked object. For example, when setting a sensing range (photography range) for sensing the front of an avatar, which is the tracked object, the sensing object must always be positioned in front of the avatar, so the sensing object spatial position update unit 92 updates the sensing object position information (KeyPointCoord) indicating the position of the sensing object depending on the position, facial orientation, line of sight, etc. of the avatar in question so that the positional relationship between the tracked object and the sensing object is maintained. Information supplied from the sensing necessity determination unit 91 is referenced for the position of the tracked object and the sensing object in the virtual space 1.
- Note that the real-space user object information (such as facial direction information and gaze information) supplied to the information storage unit 72 via the virtual space construction unit 60 may also be used as supplementary information for this update.
- After updating the position information, the sensing object space position update unit 92 outputs the information supplied from the sensing necessity determination unit 91, together with the updated information, to the sensing result information generation unit 93. At that time, the sensing object space position update unit 92 also sends the sensing object position information indicating the position of the sensing object included in the sensing object information to the information storage unit 72, and updates the sensing object position information stored in the information storage unit 72 to the latest position.
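- One way to picture this update (a hypothetical helper that assumes relative_position is expressed in the tracked object's local frame and that only the yaw of the tracked object is taken into account):

```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

def sensing_object_position(tracked_center: Vec3, tracked_yaw_deg: float,
                            relative_position: Vec3) -> Vec3:
    """Place the sensing object at a fixed offset from the tracked object,
    rotating the offset with the tracked object's facing direction so that,
    for example, a camera set in front of an avatar stays in front of it as
    the avatar turns."""
    yaw = math.radians(tracked_yaw_deg)
    rx, ry, rz = relative_position
    # Rotate the offset around the vertical (z) axis by the avatar's yaw.
    wx = rx * math.cos(yaw) - ry * math.sin(yaw)
    wy = rx * math.sin(yaw) + ry * math.cos(yaw)
    return (tracked_center[0] + wx, tracked_center[1] + wy, tracked_center[2] + rz)
```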
- the sensing result information generation unit 93 senses (photographs) the sensing range (photographing range) within the virtual space 1 based on the virtual space construction information supplied from the virtual space construction unit 60, based on the supplied sensing object information and tracking target object information, and generates sensing result information.
- the sensing result information will be a two-dimensional RGB image (color image) that captures the entire tracked object from the three-dimensional virtual space 1.
- the sensing result information is output to and saved in the information storage unit 72. Note that although this has been described as sensing, the actual processing is rendering, and in this case, generating (rendering) a two-dimensional RGB image of the entire tracked object based on the virtual space construction information corresponds to sensing.
- the information analysis unit 75 analyzes the sensing result information stored in the information storage unit 72. For example, if the sensing result information is a facial image of an avatar, the information analysis unit 75 analyzes (estimates) joy, anger, sadness, and happiness from the facial image. For example, if the sensing result information is audio in a space, the information analysis unit 75 analyzes (estimates) the degree of excitement from the audio in the space.
- the analysis results are output to and stored in the information storage unit 72.
- the information analysis unit 75 may additionally use real-space user object information (facial direction information, gaze information, gesture information, image information, etc. in the real space) acquired from the virtual space user terminal 62 using the virtual space 1 as the real-space user object information described above.
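- Purely as an illustrative sketch of this analysis step (the estimator functions are placeholders standing in for whatever facial-expression or audio analysis an actual system would use):

```python
from typing import Any, Dict

def estimate_emotion(face_image: bytes) -> str:
    # Placeholder: a real system would run facial-expression recognition here.
    return "neutral"

def estimate_excitement(spatial_audio: bytes) -> float:
    # Placeholder: a real system would estimate the degree of excitement here.
    return 0.0

def analyze_sensing_result(result: Dict[str, Any]) -> Dict[str, Any]:
    """Dispatch sensing result information to an analyzer based on its kind
    (e.g. a face image of an avatar, or audio captured in a space)."""
    if result["kind"] == "face_image":
        return {"emotion": estimate_emotion(result["data"])}
    if result["kind"] == "spatial_audio":
        return {"excitement": estimate_excitement(result["data"])}
    return {}
```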
- the information stored in the information storage unit 72 may be provided (feedback) by the virtual space construction unit 60 to the virtual space 1 or the virtual space user terminal 62 using the virtual space 1, or may be provided to the object information registrant terminal 63, or may be provided to the outside, as will be described in detail later in FIG. 19.
- the virtual space construction unit 60 may generate and display an In-Play advertisement in the virtual space 1, as an example of information provided to the virtual space 1.
- the virtual space user terminal 62 is a terminal used by a user who uses the virtual space 1, and is composed of, for example, an HMD, a smartphone, a tablet terminal, a personal computer, or a game device.
- the user uses the virtual space 1 by accessing the virtual space 1 of the virtual space system provided by the virtual space construction unit 60 using the virtual space user terminal 62.
- the object information registrant terminal 63 is a terminal used by a registrant to register object information and sensing object information, and may be, for example, an HMD, a smartphone, a tablet terminal, a personal computer, or a game device.
- the registrant uses the object information registrant terminal 63 to access a registration UI of the virtual space object analysis server 61, thereby registering information such as an object of interest (object to be tracked) and a sensing object.
- the object information registrant terminal 63 may obtain sensing results and analysis results from the registered object of interest (object to be tracked) and sensing object from the information storage unit 72.
- the information storage unit 72 also functions as an information provider that provides sensing results and analysis results to the object information registrant terminal 63.
- Although FIG. 3 shows an example in which the virtual space user terminal 62 and the object information registrant terminal 63 are configured separately, they may be the same terminal. That is, a user who uses the virtual space 1 may use the virtual space user terminal 62 to register an object of interest (object to be tracked), a sensing object, and the like.
- Operation authority for the sensing object, such as the authority to register an object of interest (object to be tracked) and a sensing object, may also be set. For example, it may be possible for a user to refuse sensing when the user does not want others to sense his or her avatar.
- In FIG. 3, the virtual space object analysis server 61 is provided with the information storage unit 72, and the information storage unit 72 stores the virtual space construction information used by the virtual space construction unit 60, which is outside the virtual space object analysis server 61; however, the virtual space construction unit 60 may instead be configured inside the virtual space object analysis server 61. Also, a virtual space server other than the virtual space object analysis server 61 may manage the virtual space construction information, and the virtual space construction information managed by that virtual space server may be used by the virtual space object analysis server 61.
- FIG. 4 is a diagram showing an example of a registration UI.
- The registration UI 101 is configured to include an object of interest registration area 111 located on the upper left, a sensing object registration area 112 located on the lower left, and a virtual space display area 113 located on the right.
- the object of interest registration area 111 is an area for manually or automatically registering an object of interest as an object to be tracked from among the objects in the virtual space 1.
- In the object of interest registration area 111, radio buttons for indicating whether to use an ID or pointing as the method for manually selecting the object of interest, icons for indicating whether to prioritize the object located at the forefront or the object located at the center of the field of view as the method for automatically selecting the object of interest, and a registration button for instructing registration of the object information are displayed.
- the radio button indicating that IDs are to be used in the manual case is selected, and so to the right of it is displayed a list of the IDs of selectable objects (5yeg5t, 23g7ra, yq234h, 4y82f5).
- the ID list may also display the IDs of selectable objects that exist in the area where images are displayed in the virtual space display area 113. If there are many IDs of selectable objects, the ID list can also be scrolled.
- When an object of interest is selected in the object of interest registration area 111 and the registration button is selected, the object of interest information for the ID of that object of interest is registered in the information storage unit 72.
- the sensing object registration area 112 is configured to include an ID selection field, a type selection field, a detailed settings button, and a registration button.
- the ID selection field is a selection field for selecting the ID of the tracking target object including the object of interest targeted by the sensing object to be registered.
- the type selection field is a selection field for selecting which type of sensing object to specify.
- the detailed settings button is a button for configuring detailed settings for the sensing object.
- the register button is a button for registering the settings of the sensing object.
- When the ID selection field is selected, the triangle displayed in the ID selection field changes from pointing to the right to pointing downwards, and a list of the IDs of already registered objects of interest or of objects in the virtual space 1 stored in the information storage unit 72 is displayed.
- the ID of the desired object can be selected as the object to be tracked from the list displayed in the ID selection field.
- "5yeg5t" has been selected.
- the type selection field displays a list of sensing object types that can be set for the tracking target object of the ID selected in the ID selection field.
- the desired sensing object type can be selected from the list displayed in the type selection field.
- "2D RGB Camera” has been selected.
- the minimum necessary sensing object information is registered by just the tracking target object and sensing object type, but by selecting the detailed settings button, the sensing object detailed settings screen shown in Figure 6 (described below) is displayed, allowing further detailed settings of the sensing object.
- When a tracking target object and a sensing object type are selected in the sensing object registration area 112 and the registration button is selected, sensing object information of the selected object type corresponding to the selected tracking target object is registered in the information storage unit 72.
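- As a sketch of the record that pressing the registration button might produce (the field names follow the sensing object information of FIG. 10, described later; the function, class, and storage names are assumptions):

```python
import uuid
from dataclasses import dataclass, field
from typing import Any, Dict, Optional

@dataclass
class SensingObjectInfo:
    object_id: str
    object_type: str                  # e.g. "2d_rgb_camera", "microphone"
    tracking_target_object_id: str    # the tracked object selected in the UI
    settings: Dict[str, Any] = field(default_factory=dict)

def register_sensing_object(tracking_target_object_id: str, object_type: str,
                            settings: Optional[Dict[str, Any]] = None,
                            store: Optional[Dict[str, SensingObjectInfo]] = None
                            ) -> SensingObjectInfo:
    """Build minimal sensing object information from the two selections made in
    the registration UI and put it into a store standing in for the information
    storage unit 72."""
    info = SensingObjectInfo(object_id=uuid.uuid4().hex[:7],
                             object_type=object_type,
                             tracking_target_object_id=tracking_target_object_id,
                             settings=settings or {})
    if store is not None:
        store[info.object_id] = info
    return info

# Usage corresponding to FIG. 4: track the avatar "5yeg5t" with a 2D RGB camera.
storage: Dict[str, SensingObjectInfo] = {}
register_sensing_object("5yeg5t", "2d_rgb_camera", store=storage)
```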
- the virtual space display area 113 displays a portion of the virtual space 1 (actually, two-dimensional information onto which the space is projected).
- the field of view of the space displayed in the virtual space display area 113 can be moved by operating the keyboard on the object information registrant terminal 63 or by operating the HMD.
- In the virtual space display area 113, a frame 121 indicating that the object is selected is displayed around the avatar of the object of interest (object to be tracked) "5yeg5t" (the avatar in the foreground in the figure) whose ID has been selected in the object of interest registration area 111.
- the registration screen for the object of interest and the registration screen for the sensing object may be provided separately. Furthermore, the person who registers the object of interest and the person who registers the sensing object may be different people (such as a user of the virtual space) or may be the same person.
- a tracking target object (object of interest) can be set that is specific to the registrant, i.e., the user of this virtual space sensing system 51.
- It is possible for sensing objects to be preset for all objects and for sensing results to be obtained based on those settings, but this would place a heavy load on the system and would also produce unnecessary sensing results.
- FIG. 5 is a diagram showing another example of the registration UI of FIG. 4.
- The object of interest registration area 111 in FIG. 5 differs from the object of interest registration area 111 in FIG. 4 in that, instead of the radio button indicating that an ID is to be used for manual selection, the radio button indicating that pointing is to be used for manual selection is selected.
- The virtual space display area 113 displays a frame 121 indicating that the object of interest is being selected, an arrow for selecting the object of interest, and a pointer 131 indicating that the object of interest has been selected with the arrow.
- When the registration button is selected, the position of the object of interest in the virtual space 1 is calculated from the pointed position in the virtual space display area 113, and the object of interest information is registered in the information storage unit 72.
- FIG. 6 is a diagram showing a display example of a sensing object detail setting screen of the registration UI.
- a sensing object detailed setting screen 141 is displayed when the detailed setting button is selected in the sensing object registration area 112 in FIG. 4.
- the sensing object detail setting screen 141 is configured to include a position/direction setting area 151 for setting the position and sensing direction of the sensing performed by the sensing object, a sensing range setting area 152 for setting the sensing range, and a toggle switch 153 for visualizing existing sensing objects.
- the position/direction setting area 151 displays a portion of the virtual space 1, i.e., the space in which the sensing object to be set exists (projected two-dimensional information).
- a movement button 161 and a rotation button 162 are provided at the top of the position/direction setting area 151. By selecting the movement button 161, the position/direction setting area 151 becomes an area for setting the position of the sensing object, and by selecting the rotation button 162, the position/direction setting area 151 becomes an area for setting the sensing direction.
- In FIG. 6, the position/direction setting area 151 is the sensing object position setting area, and it displays a camera icon 163 representing the sensing object with a camera function that is currently being set, and a cursor 164 for setting its position.
- the person registering the sensing object can operate cursor 164 with a mouse or the like to move camera icon 163 to the desired position, thereby setting the position of the sensing object corresponding to camera icon 163 on the plane (x, y, z) that constitutes position/direction setting area 151 within virtual space 1.
- the toggle switch 153 for visualizing existing sensing objects is ON, so an icon representing an existing sensing object (for example, a microphone icon 165 representing a sensing object with a microphone function) is displayed.
- the sensing range setting area 152 displays a space cut out to a specified size at the position and sensing direction set for the sensing object in the position/direction setting area 151.
- a range setting button 171, a range movement button 172, and a range change button 173 are provided at the top of the sensing range setting area 152.
- Figure 6 shows the sensing range setting area 152 with no buttons selected.
- When the range setting button 171 is selected, the sensing range setting area 152 becomes the area for setting the sensing range of the sensing object.
- When the range movement button 172 is selected, the sensing range setting area 152 becomes the area for moving the sensing range of the sensing object.
- When the range change button 173 is selected, the sensing range setting area 152 becomes the area for changing the sensing range of the sensing object.
- The space displayed in the sensing range setting area 152 is a certain range from the position set in the position/direction setting area 151. For example, if a sensing object is placed right next to the face of an avatar (object to be tracked), a close-up of the face will be displayed in the sensing range setting area 152.
- For example, if the entire body is to be sensed, the range displayed in the sensing range setting area 152 can be adjusted by adjusting the position in the position/direction setting area 151 so that the entire body is displayed in the sensing range setting area 152.
- the value of the specified size itself (for example, the maximum size of the frame displayed in the sensing range setting area 152) is defined as the maximum range to be cut out in this virtual space sensing system 51.
- the registrant cannot sense at an image size larger than this, but if the registrant wishes to sense a wider range, this can be achieved by adjusting the position.
- FIG. 7 shows another example of the display of the sensing object detailed setting screen in FIG. 6.
- FIG. 7 shows the position/direction setting area 151 when the rotation button 162 is selected, and the sensing range setting area 152 when the range setting button 171 is selected.
- a camera icon 163 is displayed, which represents the sensing object for which settings are currently being made, and a sphere 166 with three axes (Yaw/Pitch/Roll) centered on the camera icon 163.
- the camera icon 163 can be rotated by sliding the mouse over the Yaw axis indicated by a thick line, the Pitch axis indicated by a dashed line, and the Roll axis indicated by a dotted diagonal line.
- the sensing direction of the sensing object corresponding to the camera icon 163 is set, and the display of the sensing range setting area 152 is also updated.
- the sensing range setting area 152 in FIG. 7 displays a space cut out at a specified size in the position and sensing direction set for the sensing object in the position/direction setting area 151.
- the sensing range can be set by setting a rectangle 175, indicated by an upper left point 174-1 and a lower right point 174-2, in the displayed space.
- FIG. 8 is a diagram showing another display example of the sensing range setting area.
- In A of FIG. 8, the sensing range setting area 152 is shown with the range movement button 172 selected.
- a cursor (arrow) 176 is displayed, and the sensing range indicated by the rectangle 175 can be moved by moving the rectangle 175 using the cursor 176.
- In B of FIG. 8, the sensing range setting area 152 is shown with the range change button 173 selected.
- a desired corner 177 on the rectangle 175 can be highlighted by specifying it with a cursor (arrow) 176. Then, as shown by arrow P, the sensing range indicated by the rectangle 175 can be transformed by enlarging (reducing) the corner 177 to a desired position using cursor 178.
- FIG. 9 is a diagram showing an example of object information regarding all objects in the virtual space 1.
- the object information in FIG. 9 is information about all objects, and is information exchanged between the information storage unit 72 and the object information registration unit 73, for example, when registering an object of interest and a sensing object, or when using a service provided by the virtual space sensing system 51.
- the object information is configured to include ObjectID, UserID, ObjectType, IsSensing, KeyPointCoord1, and KeyPointCoord2.
- ObjectID is a unique ID for each software object registered in the virtual space system.
- the UserID is a user-specific ID when an object belongs to the operating user (virtual space user terminal 62). For example, when operating an avatar to walk around the virtual space 1, the user operates his or her own avatar, but in the virtual space 1, there is no user who is the subject of operation for certain objects, i.e., objects that are not assigned a UserID. For example, UserIDs are not assigned to objects such as people other than the avatar (non-player characters), animals, plants, products, backgrounds, etc.
- ObjectType indicates the type of object. Note that no distinction is made here between non-sensing objects and sensing objects. Examples of ObjectType include human, dog, tree, flower, house, 2D RGB camera, and microphone.
- IsSensing is information that indicates whether or not an object is a sensing object.
- microphone and 2d_rgb_camera are sensing objects, while human, dog, and tree are normal objects (non-sensing objects) that are not sensing objects.
- KeyPointCoord1 and KeyPointCoord2 are object position information and are the three-dimensional coordinates of feature points defined by each ObjectType (for example, skeleton points in the case of a human).
- For example, the object position information may be represented by KeyPointCoord1, which indicates an elbow position, and KeyPointCoord2, which indicates a hand position.
- For other ObjectTypes, the object position information may be represented only by KeyPointCoord1, which indicates a center point.
- the entire position range of the corresponding object is identified based on multiple KeyPointCoords according to the ObjectType. These KeyPointCoord values change sequentially as the position of each object moves within the virtual space.
- For example, the UserID of ObjectID 5yeg5t is tfq34f, ObjectType is human, IsSensing is 0, KeyPointCoord1(x,y,z) is 571,1215,5213, and KeyPointCoord2(x,y,z) is 592,42,5213.
- The UserID of ObjectID 242tqa is asdf3, ObjectType is dog, IsSensing is 0, KeyPointCoord1(x,y,z) is 3421,-1231,456, and KeyPointCoord2(x,y,z) is 4756,-4256,57.
- The UserID of ObjectID ar24rfa is none, ObjectType is tree, IsSensing is 0, KeyPointCoord1(x,y,z) is 435,567,8654, and KeyPointCoord2(x,y,z) is 8675,765,534.
- The UserID of ObjectID gfas5s is none, ObjectType is microphone, IsSensing is 1, indicating that it is a sensing object, KeyPointCoord1(x,y,z) is 571,1215,5213, and KeyPointCoord2(x,y,z) is - (none).
- Another entry has ObjectType 2d_rgb_camera and IsSensing 1, indicating that it is a sensing object, with KeyPointCoord1(x,y,z) of 3421,-1231,456 and KeyPointCoord2(x,y,z) of - (none).
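- The rows above can be read as records of the following shape (a sketch only; the field names mirror FIG. 9, everything else is an assumption):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Coord = Tuple[float, float, float]

@dataclass
class ObjectInfo:
    object_id: str                    # unique ID of the object in the virtual space
    user_id: Optional[str]            # None for objects with no operating user
    object_type: str                  # e.g. "human", "dog", "tree", "microphone"
    is_sensing: bool                  # True (1 in FIG. 9) only for sensing objects
    keypoint_coord1: Optional[Coord]  # feature point defined per ObjectType
    keypoint_coord2: Optional[Coord]  # second feature point, if any

# The first example row of FIG. 9 expressed as such a record.
avatar = ObjectInfo("5yeg5t", "tfq34f", "human", False,
                    (571, 1215, 5213), (592, 42, 5213))
```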
- FIG. 10 is a diagram showing an example of the sensing object information that is set for the sensing objects among the object information in FIG. 9.
- the sensing object information for each ObjectID of the sensing object may be a single table of information that combines the information in FIG. 9 and FIG. 10.
- the information shown in FIG. 9 and FIG. 10 may be information that is set in advance in the virtual space sensing system 51, rather than being set by a registrant as shown in FIG. 4 to FIG. 8.
- The sensing object information is configured to include ObjectID, ObjectType, TrackingTargetObjectID, and a number of Setting fields (Setting1, Setting2, and so on).
- the ObjectID and ObjectType are the same as in Figure 9.
- In FIG. 9, information on non-sensing objects and information on sensing objects was included without distinction, but in FIG. 10, only information on sensing objects is shown.
- Examples of object types for sensing objects include 2d_rgb_camera, 2d_gray_camera, 3d_rgb_camera, 3d_gray_camera, and microphone.
- TrackingTargetObjectID is the ObjectID of the tracking target object (or of the object of interest, if one has been selected by the registrant via the object information registrant terminal 63) that is registered as the tracking target of each sensing object.
- For a camera-type sensing object, the Setting fields include, for example, image_size, relative_position, angle_of_fov, yaw, roll, and pitch.
- image_size is the image size.
- relative_position is the relative sensing position, i.e. the difference (offset) between the position of the sensing object and the position of the tracked object.
- angle_of_fov is the field of view.
- yaw, roll, and pitch are the sensing angles of the sensing object.
- For a microphone-type sensing object, the Setting fields include, for example, radius and relative_position.
- radius is the radius of sound collection.
- relative_position is the relative sound collection (sensing) position.
- the sensing range is the spherical area around the relative_position position, with the radius set to radius.
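- To make the two setting patterns concrete (a sketch; the keys mirror the Setting fields above, and the sphere-membership helper is an assumption about how the sound-collection range could be evaluated):

```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

# Camera-type sensing object (values taken from the FIG. 10 example below).
camera_settings = {
    "image_size": (256, 256),
    "relative_position": (-5, 40, 10),
    "angle_of_fov": 60,
    "yaw": 15, "roll": 49, "pitch": 330,
}

# Microphone-type sensing object (values taken from the FIG. 10 example below).
microphone_settings = {
    "radius": 1000,
    "relative_position": (0, 15, 15),
}

def in_collection_sphere(sound_source: Vec3, sensing_position: Vec3,
                         radius: float) -> bool:
    """A sound source is within the sensing range when it lies inside the sphere
    of the given radius centred on the microphone's sensing position."""
    return math.dist(sound_source, sensing_position) <= radius
```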
- The ObjectType of ObjectID j211234 is 3d_rgb_camera.
- the key and value of Setting1 are image_size and (256,256).
- the key and value of Setting2 are relative_position and (-5,40,10), and the key and value of Setting3 are angle_of_fov and 60.
- the key and value of Setting4 are yaw and 15, the key and value of Setting5 are roll and 49, and the key and value of Setting6 are pitch and 330.
- the ObjectType of ObjectID gfas5s is microphone, the key and value of Setting1 are radius and 1000, and the key and value of Setting2 are relative_position and (0,15,15).
- the ObjectType of ObjectID tw3tgf is 2d_gray_camera, and the key and value of Setting1 are image_size and (256,256).
- the key and value of Setting2 are relative_position and (0,0,0), and the key and value of Setting3 are angle_of_fov and 40.
- the key and value of Setting4 are yaw and 0, the key and value of Setting5 are roll and 90, and the key and value of Setting6 are pitch and 0.
- the ObjectType of ObjectID 234j8s is 2d_gray_camera, and the key and value of Setting1 are image_size and (256,256).
- the key and value of Setting2 are relative_position and (0,0,57), and the key and value of Setting3 are angle_of_fov and 40.
- the key and value of Setting4 are yaw and 0, the key and value of Setting5 are roll and 90, and the key and value of Setting6 are pitch and 0.
- FIG. 11 is a diagram illustrating the relative shooting position.
- the sensing object 11 is shown in a coordinate system with the tracked object 12 at its center, the x-axis being the two-dimensional horizontal direction, the y-axis being the depth direction, and the z-axis being the two-dimensional vertical direction.
- sensing object 11 is actually a point coordinate and has no width, but for convenience it is shown as a square and a round shape in Figure 11.
- the tracked object 12 is shown with a central position CP.
- Which location of the tracked object 12 is used as the central position CP is defined for each ObjectType.
- the central position CP is defined as the center of gravity of all KeyPointCoords.
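- A minimal way to express the central position CP and the resulting placement of the sensing object (the centroid rule is the one stated above; the helper names are assumptions):

```python
from typing import Sequence, Tuple

Vec3 = Tuple[float, float, float]

def central_position(keypoint_coords: Sequence[Vec3]) -> Vec3:
    """Central position CP of the tracked object, defined here as the centroid
    (centre of gravity) of all of its KeyPointCoord values."""
    n = len(keypoint_coords)
    return (sum(p[0] for p in keypoint_coords) / n,
            sum(p[1] for p in keypoint_coords) / n,
            sum(p[2] for p in keypoint_coords) / n)

def sensing_position(cp: Vec3, relative_position: Vec3) -> Vec3:
    """The sensing object sits at CP plus the relative shooting position offset."""
    return (cp[0] + relative_position[0],
            cp[1] + relative_position[1],
            cp[2] + relative_position[2])
```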
- FIG. 12 is a diagram showing an example of a sensing range.
- the space of image size (w, h) in the direction seen from the sensing object 11 is the sensing range.
- the yaw angle is a parameter that indicates whether the sensing object 11 shoots in the direction of the tracked object, or in a direction other than the tracked object (the line of sight direction as seen from the tracked object). For example, when the yaw angle is between 0 degrees and 90 degrees and between 270 degrees and 360 degrees, the tracking object direction is the shooting direction, and when the yaw angle is between 91 degrees and 269 degrees, the line of sight direction as seen from the tracked object is the shooting direction.
- When the tracked object direction is the shooting direction, the sensing range is the tracked object itself.
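- The yaw rule quoted above can be written directly (a sketch; the degree ranges are the ones given in the text):

```python
def shooting_direction(yaw_deg: float) -> str:
    """Map the yaw angle of the sensing object to its shooting direction:
    0-90 and 270-360 degrees -> the tracked object direction,
    91-269 degrees           -> the line-of-sight direction seen from it."""
    yaw = yaw_deg % 360.0
    if yaw <= 90.0 or yaw >= 270.0:
        return "tracked_object"   # the sensing range is the tracked object itself
    return "line_of_sight"        # the sensing range is the view from the tracked object
```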
- the screen for setting the details of a sensing object is not limited to the configuration of the sensing object details setting screen 141 in FIG. 6.
- the screen for setting the details of a sensing object may be configured, for example, like the sensing object details setting screen 181 in FIG. 13, which will be described next.
- FIG. 13 is a diagram showing another example of the sensing object detail setting screen.
- the sensing object detailed setting screen 181 is configured to include a display field for the sensing object ID, a field for selecting the ID of the object to be tracked, a field for selecting the sensing object type, a setting button for visualizing the sensing object in the registration UI, an input field for the image size, an input field for the viewing angle, a selection field for the shooting direction, a selection field for the part to be shot, and an input field for the relative shooting position.
- the sensing object ID is an identifier for the sensing object.
- "j211234" is displayed in the sensing object ID display field.
- the sensing object ID is set appropriately by the system.
- the ID selection field for the tracking target object and the type selection field for the sensing object type are the same as those in Figure 4, so their explanation will be omitted.
- the setting button for visualizing the sensing object in the registration UI is a YES button or a NO button that sets whether or not to display the camera icon 163 or microphone icon 165 described above in FIG. 6 in the space displayed in the virtual space display area 113.
- When YES is selected, a camera icon 163 for the sensing object set for the corresponding tracking target object (the avatar in the foreground in the figure) is displayed on the two-dimensional information in the virtual space display area 113.
- Similarly, a microphone icon 165 for the sensing object set for the corresponding tracking target object (the avatar at the back in the figure) is displayed on the two-dimensional information in the virtual space display area 113.
- the position and direction of the displayed icon may correspond to the position and direction of the corresponding sensing object.
- the image size is the size of the image sensed by the sensing object.
- (256) is entered as the height and (256) is entered as the width in the image size input field.
- the viewing angle is the viewing angle sensed by the sensing object.
- 45 degrees is entered for the top and bottom, and 45 degrees for the left and right.
- the shooting direction is the direction in which the sensing object takes the image.
- In FIG. 13, the subject itself (inside) is selected in the shooting direction selection field.
- The photographed part is the part that the sensing object photographs. If the subject itself (inside) is selected in the shooting direction selection field, then the whole body (all) or the face (face) can be selected in the photographed part selection field. In the case of FIG. 13, the face (face) is selected in the photographed part selection field. Note that if the outward direction (outside) is selected in the shooting direction selection field, no selection is possible in the photographed part selection field.
- The relative shooting position is the relative position of the center of the sensing object with respect to the center of the object to be tracked.
- the relative shooting position can be specified by inputting the x, y, and z coordinates of the relative shooting position.
- (-5, 40, 40) has been entered in the (x, y, z) of the relative shooting position input field.
- The image size input field, the viewing angle input field, the shooting direction selection field, the photographed part selection field, and the relative shooting position input field are included in the parameter setting area 182.
- the configuration of the parameter setting area 182 differs depending on the sensing object type.
- In FIG. 13, the configuration of the parameter setting area 182 when the sensing object type is "2D RGB Camera" is shown. That is, the tracking target object is set in the sensing object registration area 112, and the parameters for setting the sensing range are set in this parameter setting area 182, thereby setting the sensing range targeted by the sensing object.
- In FIG. 13, an example is shown in which the x, y, and z coordinates of the relative shooting position are input numerically in the relative shooting position input field, but the relative shooting position may instead be input intuitively by operating the camera icon 163 of the sensing object displayed in the virtual space display area 113.
- For example, the camera icon 163 of the sensing object displayed behind the tracked object is selected and moved to the desired position (for example, in front of the tracked object), and the relative position of the sensing object with respect to the tracked object during and after the movement is calculated and entered in real time.
- The two-dimensional coordinates are calculated and entered in response to the movement of the camera icon 163, and the depth coordinate can be entered by controlling the display of the virtual space in the virtual space display area 113, for example by scrolling the mouse.
- The x, y, and z coordinates of the relative position calculated in response to the movement of the camera icon 163 and the control of the virtual space display are shown in real time in the relative shooting position input field.
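- One plausible way to derive the value shown in the input field while the icon is dragged (a hypothetical helper; the depth component is assumed to come from the mouse-scroll control described above):

```python
from typing import Tuple

Vec3 = Tuple[float, float, float]

def relative_shooting_position(icon_position: Vec3, tracked_center: Vec3) -> Vec3:
    """Relative shooting position shown in the input field: the offset of the
    dragged sensing-object icon's position in the virtual space from the centre
    of the tracked object."""
    return (icon_position[0] - tracked_center[0],
            icon_position[1] - tracked_center[1],
            icon_position[2] - tracked_center[2])
```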
- FIG. 14 is a diagram for explaining the shooting direction in FIG. 13.
- the sensing object 11 is shown as a square and a round shape for convenience, similar to FIG. 11.
- the sensing object 11 is shown in a coordinate system with the tracked object 12 at its center, the x-axis being the two-dimensional horizontal direction, the y-axis being the depth direction, and the z-axis being the two-dimensional vertical direction.
- the forward direction of the tracked object 12 (for example, the face direction or line of sight direction in the case of a person) indicated by the arrow P is taken as the positive direction of the x-axis, as shown in A of Figure 14. This way of taking the coordinates is defined by the settings.
- For example, when relative_position is (20, 0, 10), the sensing object 11 is placed in front of the tracked object 12.
- the relative coordinates are input as the relative shooting position on the sensing object detailed setting screen described above in FIG. 13.
- the shooting direction (for example, "inside” which indicates that the sensing object faces the tracked object, or “outside” which indicates that the sensing object faces outward) is determined by the selection in the shooting direction selection field on the sensing object detailed setting screen.
- When the shooting direction is "inside," an image is captured from the position of the sensing object 11 toward the tracked object itself.
- In that case, the specific part to be captured (the whole body or the face) is further set according to the selected photographed part, and is captured.
- The sensing range is then a range that includes at least a part of the tracked object itself.
- When the shooting direction is "outside," the sensing range is the field of view as seen from the tracked object.
- B of Figure 14 shows a case where the x coordinate of the relative shooting position is a positive value.
- In this case, the sensing object 11 is placed in front of the tracked object 12, so when the shooting direction is "inside," the front of the tracked object 12 is shot, and when the shooting direction is "outside," the line-of-sight direction of the tracked object 12 is shot.
- C of Figure 14 shows a case where the x coordinate of the relative shooting position is a negative value.
- In this case, the sensing object 11 is placed behind the tracked object 12, so when the shooting direction is "inside," the back of the tracked object 12 is shot, and when the shooting direction is "outside," the view that would be seen if the tracked object's eyes were at the back of its head is shot.
- A UI may also be added for selecting, for example, 90 degrees to the left or right, or behind (180 degrees), with respect to the face direction or gaze direction, in addition to the viewpoint direction.
- In that case, the system automatically sets the position of the sensing object to a (relative) position at the origin on the x-axis (or slightly to the positive side, offset from the eye position).
- Further settings may be made so that when 180 degrees is selected, the x value becomes negative, and when 90 degrees or 270 degrees is selected, the position is placed on the y-axis.
- The position of the sensing object may also be automatically set to a (relative) position where x is greater than or equal to a threshold value, to account for the need to move far enough away from the tracked object for it to be sensed.
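- A hedged sketch of how such angle presets could be translated into a relative position in the tracked object's frame; the preset angles follow the text above, while the default distance and the minimum forward offset are assumptions for illustration.

```python
def relative_position_for_angle(angle_deg, distance=20.0, min_x_offset=5.0):
    """Map a UI preset angle (0, 90, 180, 270 degrees relative to the face/gaze
    direction) to a relative position in the tracked object's frame
    (x: forward, y: side). A minimum forward offset keeps the sensing object
    far enough from the tracked object for it to be sensed."""
    presets = {
        0:   (max(distance, min_x_offset), 0.0, 0.0),   # viewpoint direction, offset from the eye position
        90:  (0.0,  distance, 0.0),                      # 90 degrees to one side (on the y-axis)
        180: (-distance, 0.0, 0.0),                      # behind: x becomes negative
        270: (0.0, -distance, 0.0),                      # 90 degrees to the other side
    }
    if angle_deg not in presets:
        raise ValueError("supported presets: 0, 90, 180, 270 degrees")
    return presets[angle_deg]

print(relative_position_for_angle(180))  # -> (-20.0, 0.0, 0.0)
```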
- FIG. 15 is a diagram showing an example of a sensing range when a sensing object has a camera function.
- the parameter setting area 182 in FIG. 13 shows an example in which the image size for the tracking target object (avatar) with ID "5yeg5t" set in FIG. 4 is set to (height: 1024, width: 256), the viewing angle is set to 45 degrees up and down and left and right, the shooting direction is set to the subject itself, and the shooting area is set to the whole body.
- the sensing range (capture range) targeted by the sensing object is "the entire object to be tracked.”
- For the sensing range in FIG. 15, the registrant can set the shooting directions (yaw, roll, pitch) by selecting the sensing range in the position/direction setting area 151, as described above with reference to FIG. 7.
- FIG. 16 is a diagram showing another example of the sensing range when the sensing object has a camera function.
- the sensing range targeted by the sensing object is "a part of the tracked object (face part).”
- For the sensing range in FIG. 16, the registrant can likewise set the shooting directions yaw, roll, and pitch by selecting the sensing range in the position and direction setting area 151, as described above with reference to FIG. 7.
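- For a camera-type sensing object, the distance needed so that the sensing range (whole body or face part) fits inside the viewing angle follows from simple geometry. The calculation below is illustrative only; the 45-degree viewing angle matches the example above, and the subject heights are assumed.

```python
import math

def framing_distance(subject_height, vertical_fov_deg):
    """Distance at which a subject of the given height just fills the
    vertical field of view of a camera-type sensing object."""
    half_fov = math.radians(vertical_fov_deg) / 2.0
    return (subject_height / 2.0) / math.tan(half_fov)

# Assumed subject sizes for illustration: 1.70 (whole body) vs 0.25 (face part).
print(framing_distance(1.70, 45))  # whole-body framing, roughly 2.05
print(framing_distance(0.25, 45))  # face framing, roughly 0.30
```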
- FIG. 17 is a diagram showing an example of sensing object information in the case of the sensing object detail setting screen 181 in FIG. 13.
- The sensing object information in FIG. 17 is the information set for sensing objects among the object information in FIG. 6. For ease of explanation, it is described separately from the object information in FIG. 9, but the sensing object information for each ObjectID of a sensing object may be held as a single table combining the information in FIG. 9 and FIG. 17.
- both the information shown in FIG. 9 and FIG. 17 may be information that is set in advance within the system, rather than being set by the user as shown in FIG. 4 and FIG. 13.
- the sensing object information is configured to include ObjectID, ObjectType, TrackingTargetObjectID, and Setting1 to Setting5.
- the ObjectID and ObjectType are the same as in Figure 9.
- In FIG. 9, information on non-sensing objects and information on sensing objects was included without distinction, whereas FIG. 17 shows only information on sensing objects.
- Examples of object types for sensing objects include 2d_rgb_camera, 2d_gray_camera, 3d_rgb_camera, 3d_gray_camera, and microphone.
- TrackingTargetObjectID is the ObjectID (identifier) of the tracking target object (or the object of interest, if selected by the registrant via the object information registrant terminal 63) that is registered as the tracking target of each sensing object.
- Setting1 to Setting5 include, for example, image_size, relative_position, angle_of_fov, sensing_direction, sensing_part, etc.
- image_size is the image size.
- relative_position is the relative shooting (sensing) position.
- angle_of_fov is the field of view.
- sensing_direction is the sensing direction (shooting direction) of the sensing object, and can be selected from inside or outside.
- sensing_part is a setting for selecting the part to be photographed when sensing_direction is inside, that is, whether the sensing (shooting) range is the entire tracked object or a part of it. When sensing_direction is outside, sensing_part cannot be specified. Note that the definitions of the values of these settings are held by this system.
- For a sensing object whose ObjectType is microphone, Setting1 to Setting3 include, for example, radius and relative_position.
- radius is the radius of sound collection.
- relative_position is the relative sound collection (sensing) position.
- In the example of FIG. 17, the sensing object information is as follows:

| ObjectID | ObjectType | Setting1 | Setting2 | Setting3 | Setting4 | Setting5 |
|---|---|---|---|---|---|---|
| j211234 | 3d_rgb_camera | image_size: (256,256) | relative_position: (-5,40,10) | angle_of_fov: 60 | sensing_direction: inside | sensing_part: all |
| gfas5s | microphone | radius: 1000 | relative_position: (0,15,15) | | | |
| tw3tgf | 2d_gray_camera | image_size: (256,256) | relative_position: (0,0,0) | angle_of_fov: 40 | sensing_direction: inside | sensing_part: face |
| 234j8s | 2d_gray_camera | image_size: (256,256) | relative_position: (0,0,57) | angle_of_fov: 40 | sensing_direction: outside | none |
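- One possible in-memory representation of the sensing object information in FIG. 17 is sketched below; the class and field names mirror the keys above but are otherwise assumptions, including the rule that sensing_part is only meaningful when sensing_direction is inside and the TrackingTargetObjectID value used in the example.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class SensingObjectInfo:
    object_id: str
    object_type: str            # e.g. 2d_rgb_camera, 2d_gray_camera, 3d_rgb_camera, 3d_gray_camera, microphone
    tracking_target_object_id: str
    settings: Dict[str, Any] = field(default_factory=dict)

    def __post_init__(self):
        # sensing_part can only be specified when sensing_direction is "inside".
        if self.settings.get("sensing_direction") == "outside" and "sensing_part" in self.settings:
            raise ValueError("sensing_part cannot be specified when sensing_direction is outside")

# The first row of FIG. 17 (the TrackingTargetObjectID value is assumed for illustration).
cam = SensingObjectInfo(
    object_id="j211234",
    object_type="3d_rgb_camera",
    tracking_target_object_id="5yeg5t",
    settings={
        "image_size": (256, 256),
        "relative_position": (-5, 40, 10),
        "angle_of_fov": 60,
        "sensing_direction": "inside",
        "sensing_part": "all",
    },
)
print(cam.object_type, cam.settings["relative_position"])
```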
- Both the yaw, roll, and pitch case in the setting method of FIG. 6 and the inside/outside case in the setting method of FIG. 13 have been described as examples using algorithms based on information indicating the shooting direction with the sensing object as the origin (the direction as seen from the sensing object), but these are merely examples.
- the origin of the shooting direction may be the object to be tracked.
- FIG. 18 is a flowchart illustrating the processing of the virtual space sensing system 51.
- the registrant uses the object information registrant terminal 63 to access the registration UI 101 of the virtual space object analysis server 61.
- the registration UI generation unit 81 generates a registration UI 101 such as a GUI for the registrant to register an object of interest and a sensing object from among the objects in the virtual space 1, and outputs information about the registration UI 101 to the object information registrant terminal 63.
- the registrant operates the object information registrant terminal 63 to input the information required to register the target object and the sensing object into the registration UI 101 displayed on the monitor (not shown) of the object information registrant terminal 63.
- In step S112, the object information registration unit 73 generates object-of-interest information (tracking target object information) and sensing object information based on the registrant's UI operation information, and registers them in the information storage unit 72. At this time, the relevant object information in the virtual space construction information stored in the information storage unit 72 is referenced.
- The sensing necessity determination unit 91 periodically acquires the tracking target object information, including the object-of-interest information, and the sensing object information (ObjectType, KeyPointCoord, etc. in FIG. 9 for the ObjectID corresponding to TrackingTargetObjectID in FIG. 13 or FIG. 16).
- In step S113, the sensing necessity determination unit 91 waits until it determines that sensing is necessary based on KeyPointCoord, which is the position information of the sensing target object (object of interest) included in the acquired tracked object information.
- The sensing object information may also set, for example, sensing start and end times, the number of sensing operations, a predetermined sensing interval (for example, every 10 seconds), and the like as sensing conditions. If it is determined in step S113 that sensing is necessary, the process proceeds to step S114. Note that if sensing is performed every frame and there are no additional conditions such as those described above, step S113 is unnecessary.
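- The sensing conditions mentioned here (start and end times, a maximum number of sensing operations, a fixed interval) could be checked as in the sketch below; the condition keys and the "no restriction by default" behavior are assumptions.

```python
def sensing_needed(now, last_sensed_at, sensed_count, conditions):
    """Decide whether sensing is necessary at time `now` (seconds).
    `conditions` may contain start_time, end_time, max_count and interval;
    missing keys mean 'no restriction' (sense every frame)."""
    if "start_time" in conditions and now < conditions["start_time"]:
        return False
    if "end_time" in conditions and now > conditions["end_time"]:
        return False
    if "max_count" in conditions and sensed_count >= conditions["max_count"]:
        return False
    interval = conditions.get("interval")  # e.g. 10 -> sense every 10 seconds
    if interval is not None and last_sensed_at is not None and now - last_sensed_at < interval:
        return False
    return True

print(sensing_needed(now=25.0, last_sensed_at=20.0, sensed_count=2,
                     conditions={"interval": 10}))  # False: only 5 s have elapsed
```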
- In step S114, the sensing necessity determination unit 91 determines whether the tracked object (object of interest) has moved. If it is determined in step S114 that the tracked object (object of interest) has moved, it is assumed that this involves movement of the sensing object, and the process proceeds to step S115. At this time, the sensing necessity determination unit 91 outputs the tracked object information (object-of-interest information) and the sensing object information to the sensing object space position update unit 92.
- In step S115, the sensing object space position update unit 92 tracks the tracked object (object of interest) so as to maintain the positional relationship with it, and moves the sensing object to the appropriate position relative to the tracked object (object of interest).
- the sensing object space position update unit 92 updates the sensing object position information included in the sensing object information among the information supplied from the sensing necessity determination unit 91, and outputs the information supplied from the sensing necessity determination unit 91 to the sensing result information generation unit 93. Furthermore, the sensing object space position update unit 92 updates the sensing object position information with the latest position for the sensing object information stored in the information storage unit 72 and outputs it.
- In step S116, the sensing result information generation unit 93 sets a sensing range in the virtual space based on the supplied tracking target object information (object-of-interest information) and sensing object information, and senses the set sensing range.
- the sensing result information generation unit 93 generates the sensing results of the sensing range by rendering based on the virtual space construction information.
- the sensing results are output to and stored in the information storage unit 72.
- In step S117, the sensing necessity determination unit 91 determines whether or not to end sensing based on information such as the end of use of the virtual space sensing system 51. If it is determined in step S117 that sensing should not be ended, the process returns to step S114, and the subsequent processes are repeated.
- If it is determined in step S117 that sensing is to be ended, the processing of the virtual space sensing system 51 in FIG. 18 ends.
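- Putting steps S113 to S117 together, a minimal loop could look like the following sketch. It reuses the position helper sketched earlier in this section, treats the remaining checks and the rendering as injected callables, and stands in loosely for the units 91 to 93; it is not the disclosed implementation.

```python
def sensing_loop(tracked, sensing, storage, should_end, sensing_is_needed,
                 render_sensing_range, sensing_object_world_position):
    """Simplified stand-in for steps S113-S117 of FIG. 18.
    `tracked` and `sensing` are dicts holding the relevant object information;
    the remaining arguments are callables supplied by the surrounding system.
    Each loop iteration roughly corresponds to one frame."""
    last_pos = None
    while not should_end():                                    # S117: end of use?
        if not sensing_is_needed():                            # S113: wait until sensing is necessary
            continue
        if tracked["position"] != last_pos:                    # S114: has the tracked object moved?
            # S115: move the sensing object so that the positional relationship
            # with the tracked object is maintained.
            sensing["position"] = sensing_object_world_position(
                tracked["position"], tracked["yaw_deg"],
                sensing["settings"]["relative_position"])
            last_pos = list(tracked["position"])
        # S116: set the sensing range and generate the sensing result by rendering,
        # then store it (information storage unit role played here by a list).
        storage.append(render_sensing_range(tracked, sensing))
```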
- the process in FIG. 18 is just one example. For example, if registration of attention objects and sensing objects has been performed in advance, or if information previously registered in virtual space sensing system 51 is used without any settings by the registrant, then when an actual service using virtual space sensing system 51 is used, processing is performed without the registration-related processing in steps S111 and S112. Also, while the processing in FIG. 18 is processing performed in real time on the virtual space, it may also be performed on stored virtual space content, for example.
- the sensing result information (images and audio) acquired as described above is stored in the information storage unit 72, and the images can then be provided to users of the virtual space 1 or to those who have registered objects of interest, either individually or as part of a photo album, and the audio can be provided for personal enjoyment.
- the information analysis unit 75 can provide the virtual space user terminal 62 with analysis result information (level of excitement, estimated result information on the user's emotions) obtained by analyzing the sensing results. Furthermore, the analysis result information can be used to provide in-play advertisements in the virtual space 1 that are more favorable to users and advertisers.
- The above processing can be performed using information preregistered in the virtual space sensing system 51 without any settings by the registrant, but a registrant or user may not need sensing results based on all objects. Therefore, by allowing tracking targets (objects of interest) to be set specifically for an individual such as a registrant or user, it is possible to provide that person with the sensing results they need while reducing the load on the system.
- FIG. 19 is a diagram illustrating a configuration example of a virtual space sensing system according to a second embodiment of the present technology.
- As shown in FIG. 19, the virtual space sensing system 201 differs from the virtual space sensing system 51 in FIG. 3 in that an external system 211, an external website server 212, and an external device (including an information analysis unit 213) are added.
- Like the virtual space sensing system 51 in FIG. 3, the virtual space sensing system 201 in FIG. 19 is a system separate from the virtual space system, and can therefore provide sensing results or analysis results of the virtual space 1 to the outside, which conventional virtual space systems alone could not provide.
- the external system 211 is, for example, a system that generates and provides an In-Play advertisement.
- the external system 211 acquires analysis result information that analyzes the degree of excitement in the virtual space 1 stored in the information storage unit 72, and generates an In-Play advertisement to be displayed in the virtual space 1 based on the acquired analysis result information.
- the external system 211 outputs the generated In-Play advertisement to the virtual space construction unit 60.
- the external system 211 also generates a photo album made up of images that are the sensing result information stored in the information storage unit 72, and outputs it to the virtual space construction unit 60. That is, in FIG. 19, the information storage unit 72 also functions as an information providing unit that provides the sensing result information acquired from the virtual space 1, the analysis result information of the sensing result information, or information based on the analysis result information to the outside.
- the external website server 212 is, for example, a server for a website that introduces virtual spaces.
- the external website server 212 acquires analysis result information obtained by analyzing the degree of excitement in the virtual space 1 stored in the information storage unit 72, and reflects fixed-point observations of the excitement in the virtual space 1 based on the analysis result information on the website that introduces the virtual space 1 of the present virtual space sensing system 51.
- the external website server 212 may also reflect fixed-point observations using images, which are the sensing result information stored in the information storage unit 72, on the website that introduces the virtual space 1 of the present virtual space sensing system 51.
- The information analysis unit 213 configured in the external device acquires sensing result information, or analysis result information produced by the information analysis unit 75, from the information storage unit 72 via the information analysis unit 75.
- The information analysis unit 213 analyzes the sensing result information in the same manner as the information analysis unit 75. If the sensing result information is a facial image of an avatar, the information analysis unit 213 analyzes (estimates), for example, joy, anger, sadness, and happiness from the facial image. If the sensing result information is spatial audio, the information analysis unit 213 analyzes (estimates), for example, the degree of excitement from the spatial audio. The analysis results by the information analysis unit 213 are fed back to the information storage unit 72, for example, via the information analysis unit 75.
- For example, the information analysis unit 75 may extract feature points from the facial image and supply the resulting information to the information analysis unit 213, which then estimates emotions based on the extracted feature points; in this way, the analysis process can be shared according to each unit's area of expertise.
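- The division of labor described here, with the internal information analysis unit 75 extracting facial feature points and the external information analysis unit 213 estimating emotions from them, could be wired together as below; the function names, the placeholder logic, and the returned score format are assumptions.

```python
def extract_face_keypoints(face_image):
    """Role of the internal information analysis unit 75: turn an RGB face image
    into feature points. Here a trivial placeholder returning fixed points."""
    return [(0.30, 0.40), (0.70, 0.40), (0.50, 0.75)]  # e.g. eyes and mouth

def estimate_emotion(keypoints):
    """Role of the external information analysis unit 213: estimate emotions
    (joy, anger, sadness, happiness) from the feature points. Placeholder logic."""
    mouth_y = keypoints[-1][1]
    happy = int(round(min(max((mouth_y - 0.5) * 400, 0), 100)))
    return {"happy": happy}

# Feature extraction and emotion estimation can thus be shared between the
# internal and external analysis units according to their areas of expertise.
keypoints = extract_face_keypoints(face_image=None)
print(estimate_emotion(keypoints))  # -> {'happy': 100}
```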
- the information analysis unit 213 may additionally use, as external system information 214, the above-mentioned real-space user object information, such as facial direction information, gaze information, and gesture information in the real space obtained from the virtual space user terminal 62 using the virtual space.
- The virtual space sensing system 201 in FIG. 19 can provide even more convenient services to users by linking with external parties such as the external system 211, the external website server 212, and the information analysis unit 213 configured in the external device, and in return for providing sensing result information or analysis result information, it can obtain feedback that incorporates the technology in which the external company excels.
- usage rights for the information to be provided may be set.
- FIG. 20 is a diagram showing an example of a service provided by the present technology.
- In FIG. 20, an example is shown in which a user wears an HMD 231 as the virtual space user terminal 62 and uses the virtual space 1.
- Using the registration UI, the user registers in advance in the information storage unit 72 that his/her friends' avatars 251 and 252 are set as objects of interest and are to be sensed by respective sensing objects having camera functions. At that time, the faces of the avatars 251 and 252 themselves are set as the sensing range.
- Based on the above registration, the sensing result information generating unit 93 generates RGB images of the facial parts of the avatars 251 and 252 and stores them in the information storage unit 72. The information analysis unit 75 then detects feature points of the facial parts from the RGB images stored in the information storage unit 72 and estimates the emotions.
- the emotion estimation result information is stored in the information storage unit 72, so that the emotion estimation result information (degree of happiness) is acquired by the virtual space construction unit 60, and for example, speech bubbles 261 and 262 indicating the happiness degrees of avatars 251 and 252 are displayed, respectively, in the virtual space 1 of the HMD 231 that the user is viewing.
- Speech bubble 261 indicates that the happiness degree is 60%, as indicated by Happy:60.
- Speech bubble 262 indicates that the happiness degree is 90%, as indicated by Happy:90.
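- The speech-bubble labels could be produced directly from the emotion estimation result information; a trivial sketch, with the label format taken from the figures and the result format assumed:

```python
def speech_bubble_label(emotion_result):
    """Format an emotion estimation result as a speech-bubble label such as 'Happy:60'."""
    return f"Happy:{emotion_result['happy']}"

print(speech_bubble_label({"happy": 60}))  # speech bubble 261 for avatar 251
print(speech_bubble_label({"happy": 90}))  # speech bubble 262 for avatar 252
```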
- the user using the virtual space 1 can recognize the emotions of the friend's avatars 251 and 252 during the conversation.
- In addition, the user's avatar is registered in advance in the information storage unit 72 as an object of interest, and sensing is performed by each sensing object having a camera function.
- Thereby, RGB images of the facial parts of avatars 251 and 252 are generated and stored in the information storage unit 72, so that even if avatars 251 and 252 are facing backwards, for example, the RGB images of their facial parts can be displayed on a child screen W or the like.
- FIG. 21 is a diagram showing another example of a service provided by the present technology.
- In FIG. 21, an example is shown in which a user uses the virtual space user terminal 62 to access VR shopping 271 as part of the virtual space 1.
- The shooting direction is set, for example, in the case of the sensing object detail setting screen 141 in FIG. 6, so that the line-of-sight direction (field-of-view range) of the object of interest is specified by the shooting angles yaw, roll, and pitch, and in the case of the sensing object detail setting screen 181 in FIG. 13, it is set to "outside."
- The sensing object with a camera function registered for the object of interest is set to perform sensing at all times, and the fact that the product 281, which is the object of interest, has been picked up by the avatar 283 is detected by storing images, which are the sensing result information, in the information storage unit 72 and analyzing the stored images with the information analysis unit 75.
- Furthermore, the information analysis unit 75 analyzes the image to determine the range 291 on which the avatar 283 should be focused, and feeds this back (provides it) to the information storage unit 72 so that the sensing range for the avatar 283 by the sensing object can be corrected to the focus range 291.
- Since the sensing object information in the information storage unit 72 is updated by this feedback of the focus range for the avatar 283, the sensing unit 74 can sense the position of the focus range with high quality using the updated sensing object information. This makes it possible to sense an image from which the facial expression of the avatar 283 can be analyzed more accurately.
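- A sketch of how the analysis feedback could update the stored sensing object information so that subsequent sensing targets the focus range 291; the setting keys follow FIG. 17, while the record contents and the mapping from the focus range to a new relative position are assumptions for illustration.

```python
import math

def apply_focus_feedback(sensing_info, focus_center, focus_height, fov_deg=40):
    """Correct a sensing object's settings so that its sensing range matches the
    focus range determined by the analysis (e.g. range 291 for avatar 283).
    `sensing_info` is a dict shaped like a FIG. 17 record; `focus_center` is the
    focus range's center in the tracked object's frame; `focus_height` is its size."""
    settings = sensing_info["settings"]
    # Aim at the tracked object and narrow the sensed part to the focused region.
    settings["sensing_direction"] = "inside"
    settings["sensing_part"] = "face"
    # Place the camera just close enough that the focus range fills the field of view.
    distance = (focus_height / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    settings["relative_position"] = (distance, focus_center[1], focus_center[2])
    return sensing_info

# Hypothetical record for the sensing object tracking avatar 283.
record = {"object_id": "camera_for_avatar_283",
          "settings": {"sensing_direction": "inside", "sensing_part": "all",
                       "relative_position": (0, 0, 0), "angle_of_fov": 40}}
print(apply_focus_feedback(record, focus_center=(0, 0, 1.5), focus_height=0.25)["settings"])
```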
- For sensing objects with a microphone function, similar services are conceivable, for example collecting the spatial audio around objects of interest (e.g., avatars) and providing it.
- As described above, the sensing object spatial position is updated in response to changes in the tracking target object spatial position so that the positional relationship between the tracking target object spatial position, which is the spatial position of the tracked object in virtual space, and the sensing object spatial position, which is the spatial position of the sensing object for sensing the sensing range set for the tracked object in virtual space, is maintained, and sensing result information, which is the sensing result of the sensing range by the sensing object at the sensing object spatial position, is generated.
- the above-mentioned series of processes can be executed by hardware or software.
- the program constituting the software is installed from a program recording medium into a computer incorporated in dedicated hardware, or into a general-purpose personal computer, etc.
- FIG. 22 is a block diagram showing an example of the hardware configuration of a computer 900 that executes the above-mentioned series of processes by a program.
- In the computer 900, a CPU (Central Processing Unit) 901, a ROM (Read Only Memory), and a RAM (Random Access Memory) 903 are connected to one another by a bus 904.
- An input/output interface 910 is also connected to the bus 904. Connected to the input/output interface 910 are an input unit 911 consisting of a keyboard, mouse, etc., and an output unit 912 consisting of a display, speakers, etc. Also connected to the input/output interface 910 are a storage unit 913 consisting of a hard disk or non-volatile memory, a communication unit 914 consisting of a network interface, etc., and a drive 915 that drives a removable recording medium 921.
- the CPU 901 for example, loads a program stored in the storage unit 913 into the RAM 903 via the input/output interface 910 and the bus 904 and executes the program, thereby performing the series of processes described above.
- the programs executed by the CPU 901 are recorded on, for example, a removable recording medium 921, or are provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital broadcasting, and are installed in the storage unit 913.
- the program executed by the computer may be a program in which processing is performed chronologically in the order described in this specification, or a program in which processing is performed in parallel or at the required timing, such as when called.
- a system refers to a collection of multiple components (devices, modules (parts), etc.), regardless of whether all the components are in the same housing. Therefore, multiple devices housed in separate housings and connected via a network, and a single device in which multiple modules are housed in a single housing, are both systems.
- this technology can be configured as cloud computing, in which a single function is shared and processed collaboratively by multiple devices over a network.
- each step described in the above flowchart can be executed by a single device, or can be shared and executed by multiple devices.
- When a single step includes multiple processes, the processes included in that single step can be executed by a single device or shared and executed by multiple devices.
- the present technology can also be configured as follows.
- a sensing object spatial position update unit that updates the sensing object spatial position in response to a change in the tracking target object spatial position so that a positional relationship between a tracking target object spatial position, which is a spatial position of the tracking target object in a virtual space, and a sensing object spatial position, which is a spatial position of a sensing object for sensing a sensing range set for the tracking target object in the virtual space, is maintained;
- a sensing result information generating unit that generates sensing result information that is a sensing result of the sensing range by the sensing object at the sensing object space position.
- the information processing device further comprising an information analysis unit that analyzes the sensing result information.
- the information processing device further comprising an information providing unit that provides a result of the analysis by the information analysis unit or information based on the result of the analysis.
- the information providing unit provides the analysis result or information based on the analysis result to the virtual space or a virtual space user terminal that uses the virtual space.
- the information processing device, wherein the information analysis unit uses sensing information in a real space to analyze the sensing result information.
- the sensing result information generation unit generates the sensing result information by performing a rendering process using virtual space information for constructing the virtual space.
- the information processing device, wherein the sensing object has a camera function.
- the information processing device, wherein the sensing range includes at least a portion of the tracked object itself.
- the information processing device, wherein the sensing range is a visual field range seen from the tracked object.
- the information processing device, wherein the sensing object has a microphone function.
- the information processing device, wherein the sensing range is a periphery of the tracked object.
- the information processing device further comprising a registration unit configured to register at least one of an object of interest and the sensing object as the tracking target object.
- the information processing device, wherein the registration unit generates a UI for registering at least one of the object of interest and the sensing object.
- the registration unit registers the object of interest in accordance with a selection result of identification information of an object in the virtual space via the UI.
- the registration unit generates the UI including a display of two-dimensional information or three-dimensional information of a partial area of the virtual space.
- the registration unit generates the UI for registering the object of interest in accordance with a selection result of an object displayed on the display.
- the information processing device according to (15), wherein the registration unit generates the UI for registering the sensing range of the sensing object in accordance with a movement position of the sensing object displayed on the display.
- the information processing device further comprising an information providing unit that provides the sensing result or information based on the sensing result.
- the information processing device according to any one of (1) to (18), which is configured as a system separate from a virtual space system that provides the virtual space.
- An information processing device: updating the sensing object spatial position in response to a change in the tracking target object spatial position so that a positional relationship between the tracking target object spatial position, which is a spatial position of the tracking target object in a virtual space, and a sensing object spatial position, which is a spatial position of a sensing object for sensing a sensing range set for the tracking target object in the virtual space, is maintained; and generating sensing result information that is a sensing result of the sensing range by the sensing object at the sensing object spatial position.
- a sensing object spatial position update unit that updates the sensing object spatial position in response to a change in the tracking target object spatial position so that a positional relationship between a tracking target object spatial position, which is a spatial position of the tracking target object in a virtual space, and a sensing object spatial position, which is a spatial position of a sensing object for sensing a sensing range set for the tracking target object in the virtual space, is maintained;
- a program that causes a computer to function as a sensing result information generating unit that generates sensing result information that is a sensing result of the sensing range by the sensing object at the sensing object space position.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present technology relates to an information processing device, method, and program that make it possible to easily acquire information in a virtual space. The information processing device updates a sensing object space position in accordance with a change in a tracking target object space position so as to maintain a positional relationship between the tracking target object space position, which is a spatial position of a tracking target object in a virtual space, and the sensing object space position, which is a spatial position of a sensing object for sensing a sensing range set for the tracking target object in the virtual space, and generates sensing result information, which is the result of sensing the sensing range by the sensing object at the sensing object space position. The present technology can be applied to a virtual space sensing system.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2023-189835 | 2023-11-07 | ||
| JP2023189835 | 2023-11-07 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025100190A1 (fr) | 2025-05-15 |
Family
ID=95695770
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2024/037006 Pending WO2025100190A1 (fr) | 2023-11-07 | 2024-10-17 | Dispositif, procédé et programme de traitement d'informations |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025100190A1 (fr) |
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2002049937A (ja) * | 2000-08-04 | 2002-02-15 | Atr Ningen Joho Tsushin Kenkyusho:Kk | 自律キャラクタ再現装置、仮想空間表示装置およびコンピュータ読み取り可能な記録媒体 |
| JP2007313001A (ja) * | 2006-05-25 | 2007-12-06 | Namco Bandai Games Inc | プログラム、情報記憶媒体及びゲーム装置 |
| JP2018088946A (ja) * | 2016-11-30 | 2018-06-14 | 株式会社コロプラ | 情報処理方法および当該情報処理方法をコンピュータに実行させるためのプログラム |
| JP2020052775A (ja) * | 2018-09-27 | 2020-04-02 | 株式会社コロプラ | プログラム、仮想空間の提供方法および情報処理装置 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20210001223A1 (en) | Method and Apparatus for Displaying Virtual Pet, Terminal, and Storage Medium | |
| US11620798B2 (en) | Systems and methods for conveying virtual content in an augmented reality environment, for facilitating presentation of the virtual content based on biometric information match and user-performed activities | |
| CN117120959B (zh) | 具有触觉反馈响应和音频反馈响应的界面 | |
| CN109107166B (zh) | 虚拟宠物的繁育方法、装置、设备及存储介质 | |
| KR20240155971A (ko) | 실시간 3d 신체 모션 캡처로부터의 사이드-바이-사이드 캐릭터 애니메이션 | |
| CN110716645A (zh) | 一种增强现实数据呈现方法、装置、电子设备及存储介质 | |
| JP2023524119A (ja) | 顔イメージ生成方法、装置、電子機器及び可読記憶媒体 | |
| CN119135647A (zh) | 使用骨骼姿势系统的图像捕获设备实时捕获视频的系统和方法 | |
| CN103975365A (zh) | 用于俘获和移动真实世界对象的3d模型和真实比例元数据的方法和系统 | |
| KR102832466B1 (ko) | 실시간에서의 실제 크기 안경류 경험 | |
| TW202009682A (zh) | 基於擴增實境的互動方法及裝置 | |
| CN107479699A (zh) | 虚拟现实交互方法、装置及系统 | |
| JP2022545598A (ja) | 仮想対象の調整方法、装置、電子機器、コンピュータ記憶媒体及びプログラム | |
| US20220405996A1 (en) | Program, information processing apparatus, and information processing method | |
| JP6563580B1 (ja) | コミュニケーションシステム及びプログラム | |
| KR20250075728A (ko) | 2d 이미지들로부터의 3d 객체 모델 재구성 | |
| CN119631111A (zh) | 虚拟衣柜ar体验 | |
| WO2021039856A1 (fr) | Dispositif de traitement d'informations, procédé de commande d'affichage et programme de commande d'affichage | |
| JP2020062322A (ja) | 人形造形システム、情報処理方法及びプログラム | |
| JP6609078B1 (ja) | コンテンツ配信システム、コンテンツ配信方法、およびコンテンツ配信プログラム | |
| JP2019159647A (ja) | プログラム、ネットワークシステム及び画像判定方法 | |
| WO2025100190A1 (fr) | Dispositif, procédé et programme de traitement d'informations | |
| CN120584364A (zh) | 自适应缩放试穿体验 | |
| CN110866963A (zh) | 动态图像发布系统、动态图像发布方法以及记录介质 | |
| US12229893B2 (en) | Information interaction method, computer-readable storage medium and communication terminal |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24888488; Country of ref document: EP; Kind code of ref document: A1 |