WO2025180486A1 - Method and apparatus for identifying a tracking device, and device and medium - Google Patents
- Publication number
- WO2025180486A1 PCT/CN2025/079820 CN2025079820W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- tracking
- tracking device
- current
- devices
- relative positions
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/212—Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
Definitions
- the embodiments of the present application relate to the field of motion capture technology, and in particular to a method, apparatus, device, and medium for identifying a tracking device.
- a tracking device worn or mounted on a tracking target is used to collect motion data of the target, where the target can be a human body or an object.
- human motion capture technology has great application prospects in the fields of virtual reality (VR), human-computer interaction, etc.
- human motion capture solutions mainly use tracking devices worn on human limbs (such as inertial sensors or tracking devices with inertial sensors, etc.) to capture motions in order to collect human motion data in the real world.
- Before using the tracking device to collect human motion data, the tracking device needs to be calibrated to ensure the accuracy and precision of the motion data it collects.
- before using the tracking devices, the user needs to distinguish the wearing position of each tracking device and wear each tracking device on the corresponding part of the human body.
- the tracking device for the left leg can only be worn on the left leg
- the tracking device for the right leg can only be worn on the right leg
- the tracking device for the waist can only be worn on the waist.
- the present application provides a tracking device identification method, apparatus, device and medium, which can automatically identify the usage position of the tracking device on the tracking target based on the relative position between the current device and the tracking device.
- the user does not need to distinguish the usage position of the tracking device before using the tracking device, which is convenient for the user to use and brings a better user experience.
- an embodiment of the present application provides a method for identifying a tracking device, which is applied to a current device and includes: determining multiple first relative positions between multiple tracking devices and the current device; and identifying the tracking device based on the multiple first relative positions.
- an embodiment of the present application provides an identification device for a tracking device, which is configured in a current device and includes: a determination module for determining multiple first relative positions between multiple tracking devices and the current device; and an identification module for identifying the tracking device based on the multiple first relative positions.
- an embodiment of the present application provides an electronic device comprising: a processor and a memory, wherein the memory is used to store a computer program, and the processor is used to call and run the computer program stored in the memory to execute the method for identifying a tracking device described in the embodiment of the first aspect or its various implementations.
- an embodiment of the present application provides a computer-readable storage medium for storing a computer program, wherein the computer program enables a computer to execute the method for identifying a tracking device as described in the embodiment of the first aspect or its various implementations.
- an embodiment of the present application provides a computer program product comprising program instructions, which, when executed on an electronic device, enable the electronic device to execute the method for identifying a tracking device as described in the embodiment of the first aspect or its various implementations.
- FIG1 is a schematic diagram of an application scenario provided by an embodiment of the present application.
- FIG4 is a flow chart of a method for identifying a tracking device provided in Example 2 of the present application.
- FIG5 is a flow chart of a method for identifying a tracking device provided in Example 3 of the present application.
- FIG7 is a schematic block diagram of an electronic device provided in Example 5 of the present application.
- the method provided in the embodiment of the present application is applicable to motion capture scenarios, which include object motion capture scenarios and human motion capture scenarios.
- tracking devices can be installed at various locations in a room to mark the floor, walls, edges, corners, or furniture within the room.
- Another example is installing a tracking device on the ground to mark lines on the ground, such as various lines on a sports field or the sidelines of a road.
- Determining the location of a tracking device on a human body or object can also be understood as determining the identity (ID) of the tracking devices worn on various parts of the human body or object.
- ID: identity
- the user wears three tracking devices on the two ankles and waist. However, the user does not know the ID of the tracking device worn on the left ankle, the ID of the tracking device worn on the right ankle, and the ID of the tracking device worn on the waist. Therefore, the tracking devices need to be identified. Similarly, in the object tagging scenario, the tracking device needs to be identified.
- Embodiments of the present application provide a tracking device identification method, which is applied to an electronic device.
- the electronic device may be a mobile phone, a tablet computer, a head-mounted device, etc.
- the electronic device can automatically identify the tracking device, that is, automatically determine the location of each tracking device.
- XR: Extended Reality
- XR refers to all real and virtual environments and human-computer interactions generated by computer technology and wearable devices.
- XR includes various forms such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR).
- VR: Virtual Reality
- AR: Augmented Reality
- MR: Mixed Reality
- VR: a technology for creating and experiencing virtual worlds, which generates a virtual environment using multi-source information.
- the VR mentioned in this article includes at least visual perception, but can also include auditory perception, tactile perception, motion perception, and even taste and olfactory perception. It achieves the fusion of interactive three-dimensional dynamic visuals and the simulation of physical behavior in the virtual environment, immersing users in a simulated VR environment.
- This allows for a variety of virtual environment applications, including mapping, gaming, video, education, healthcare, simulation, collaborative training, sales, assisted manufacturing, maintenance, and repair.
- the virtual reality devices described in the embodiments of the present application may include but are not limited to the following types.
- PC-based virtual reality (PCVR) devices use the PC to perform calculations and output data related to virtual reality functions.
- the externally connected PCVR device uses the data output by the PC to achieve the virtual reality effect.
- Mobile VR devices support the configuration of mobile terminals (such as smartphones) in various ways (such as head-mounted displays with dedicated card slots). Through wired or wireless connection with the mobile terminal, the mobile terminal performs relevant calculations for VR functions and outputs data to the mobile VR device, such as viewing VR videos through the mobile terminal's app.
- Virtual field of view: the area in the virtual environment that the user can perceive through the lens of a virtual reality device.
- the field of view (FOV) of the virtual field of view is used to represent the perceived area.
- AR: a technology that calculates the camera's pose parameters in the real world (also called the 3D world) in real time while the camera is capturing images. Based on these pose parameters, virtual elements are added to the captured images. Virtual elements include, but are not limited to, images, videos, and 3D models. The goal of AR technology is to overlay the virtual world on the real world for interactive viewing on a screen.
- MR: a simulated setting that integrates computer-generated sensory input (e.g., virtual objects) with sensory input from a physical setting or its representation.
- the computer-generated sensory input can adapt to changes in sensory input from the physical setting.
- some electronic systems used to render MR settings can monitor the orientation and/or position relative to the physical setting to enable virtual objects to interact with real objects (i.e., physical elements from the physical setting or their representations). For example, the system can monitor motion so that virtual plants appear stationary relative to physical buildings.
- a virtual scene is a virtual scene displayed (or provided) when an application is running on an electronic device.
- the virtual scene can be a simulation of the real world, a semi-simulation and semi-fictitious virtual scene, or a purely fictitious virtual scene.
- the virtual scene can be any of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene.
- the embodiments of the present application do not limit the dimensions of the virtual scene.
- a virtual scene can include the sky, land, ocean, etc., and the land can include environmental elements such as deserts and cities. Users can control virtual objects to move in the virtual scene.
- Virtual objects are objects that interact in virtual scenes and are controlled by users or robot programs (for example, robot programs based on artificial intelligence). They can be still, move, and perform various behaviors in the virtual scene, such as various characters in games.
- the head-mounted device 100 may be an HMD, such as a head-mounted display in an all-in-one VR machine, etc. This embodiment does not impose any restrictions on this.
- the head-mounted device 100 is provided with a camera to collect information about the surrounding environment. Based on the collected data, the head-mounted device uses a computer-vision simultaneous localization and mapping (SLAM) algorithm to perform tracking and positioning.
- SLAM: simultaneous localization and mapping
- the number of cameras can be at least one, and FIG1 illustrates the number of cameras as four.
- the type of camera can be a fisheye camera, a standard camera, or other types of cameras, and this application does not impose any restrictions on this.
- the tracking device 200 may include an inertial sensor, or any device having an inertial sensor.
- tracking device 200 may be an optical tracker having a plurality of light-emitting units for positioning, distributed at different locations within the optical tracker.
- the head-mounted device 100 captures light spot images from the light-emitting units through its camera, and determines the optical tracker's posture based on the number and coordinates of the captured light spots.
- the optical tracker also includes an inertial sensor.
- tracking device 200 includes a camera. Tracking device 200 uses images captured by the camera to determine its own position in the world coordinate system. It then determines the relative positions of head-mounted device 100 and itself based on their positions in the world coordinate system. In this implementation, tracking device 200 can employ the same self-tracking method as head-mounted device 100.
- the tracking device 200 can be worn on different parts of the human body, such as the limbs, torso, shoulders, waist and other parts of the human body.
- the limbs of the human body include: upper limbs and lower limbs.
- the tracking device 200 is worn on the lower limbs of the human body.
- the lower limbs of the human body include: thighs and calves.
- four tracking devices 200 can be worn on the thighs and calves (or ankles) of the human body respectively, so as to use the tracking devices 200 to collect the thigh motion data and calf motion data of the human body (collectively referred to as lower limb motion data), and send the lower limb motion data to the head-mounted device 100 to achieve the purpose of tracking the lower limb motion of the human body.
- the limb data collected by the tracking device 200 worn on the human limbs can be 3-degree-of-freedom (3-DoF) data or 6-DoF data.
- the peripheral device 300 may be, but is not limited to, a handle, a glove, a bracelet, a wristband, a finger ring, and other wearable devices.
- the peripheral device 300 is provided with an inertial sensor, and the inertial sensor is capable of providing 6-DOF data including the position and posture of the upper limbs of the human body.
- the embodiments of the present application can achieve full-body tracking of the human body by wearing tracking devices 200 on the human body's limbs, or by wearing peripheral devices 300 on the human body's upper limbs and tracking devices on the human body's lower limbs, etc.
- head-mounted device 100, tracking device 200 and peripheral device 300 shown in Figures 1 and 2 are only illustrative and do not constitute specific limitations on the present application.
- FIG3 is a flow chart of a method for identifying a tracking device provided in Example 1 of the present application.
- the execution subject of this embodiment is the current device, which is an electronic device such as a mobile phone, tablet computer, head-mounted device, etc.
- the method may include the following steps:
- a plurality of first relative positions between a plurality of tracking devices and a current device are determined.
- the multiple tracking devices are worn or installed on different parts of the tracking target, which can be a human body or an object.
- the tracking target varies in different application scenarios.
- in a human motion capture scenario, the tracking target is the human body, while in an object tracking scenario, the tracking target can be furniture installed in a room, a robot moving in space, etc.
- in an object marking scenario, the tracking target can be the marked floor, a wall, furniture in the house, a playground, a road, etc.
- the multiple tracking devices are worn or installed on corresponding parts of the tracking target, and communication connections can be established between the current device and the multiple tracking devices.
- the multiple tracking devices may have the same hardware and appearance. If the multiple tracking devices have the same appearance, the user cannot distinguish them by appearance when wearing or installing them. In the embodiments of the present application, the user does not need to distinguish the use location (wearing location or installation location) of a tracking device when wearing or installing it.
- assuming the current device is connected to M tracking devices, the current device determines the first relative positions between the M tracking devices and itself; that is, a maximum of M first relative positions can be obtained.
- the first relative position is the relative position of the tracking device relative to the current device.
- a current device is provided with a camera, and each tracking device includes multiple light-emitting units. Accordingly, the current device determines a first relative position between each tracking device and the current device based on one or more light spot images of each tracking device captured by the camera.
- the light spot image is an image captured by the camera based on light emitted by the tracking device's light-emitting units.
- the light-emitting unit on the tracking device can be a light-emitting diode (LED) or other light-emitting device.
- the light emitted by the light-emitting unit can be visible light or invisible light, as long as it can be captured by the camera.
- the light-emitting units can be distributed according to a preset deployment method inside the housing of the tracking device body or in an accessory rigidly connected to the tracking device body, such as a base.
- the preset deployment method can be flexibly set according to the shape of the tracking device body or the accessory rigidly connected to the tracking device body, and no restrictions are imposed on it here.
- the camera collects one or more light spot images containing light spots, and the current device determines a first relative position of the tracking device with respect to the current device according to the one or more light spot images of the tracking device.
- Perspective-n-Point (PnP) algorithms include the P3P algorithm, the P5P algorithm, and other algorithms.
- the P3P algorithm calculates the relative pose of the tracking device and the current device based on three points
- the P5P algorithm calculates the relative pose of the tracking device and the current device based on five points. Therefore, in the embodiment of the present application, the number of feature points extracted from the spot image of the tracking device is at least three (that is, the spot image includes at least three spots), and correspondingly, at least three 3D points need to be determined based on the above-mentioned at least three 2D feature points. Based on the at least three 2D feature points and their corresponding 3D points, the relative pose of the tracking device and the current device is calculated.
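- As a rough illustration of the PnP step above, the following sketch recovers the pose of a tracking device in the camera frame from the 2D light-spot centroids and the known 3D layout of its light-emitting units using OpenCV's solvePnP; the point coordinates and camera intrinsics are placeholder assumptions, not values from this application.

```python
import numpy as np
import cv2

# Known 3D layout of the light-emitting units in the tracking device's body
# frame (placeholder values; the real deployment layout is device-specific).
object_points = np.array([
    [0.000, 0.000, 0.000],
    [0.030, 0.000, 0.000],
    [0.000, 0.030, 0.000],
    [0.030, 0.030, 0.005],
    [0.015, 0.000, 0.010],
    [0.000, 0.015, 0.010],
], dtype=np.float64)

# 2D centroids of the matching light spots extracted from one spot image.
# These placeholders correspond to a device roughly 1 m in front of the camera.
image_points = np.array([
    [320.0, 240.0],
    [335.0, 240.0],
    [320.0, 255.0],
    [334.9, 254.9],
    [327.4, 240.0],
    [320.0, 247.4],
], dtype=np.float64)

# Assumed pinhole intrinsics of the current device's camera.
camera_matrix = np.array([[500.0,   0.0, 320.0],
                          [  0.0, 500.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)  # assume the spot image is already undistorted

# Solve the PnP problem: pose of the tracking device in the camera frame,
# i.e. the "first relative position" (and orientation) described above.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
if ok:
    rotation, _ = cv2.Rodrigues(rvec)
    print("relative position (m):", tvec.ravel())       # roughly [0, 0, 1]
    print("relative orientation (rotation matrix):\n", rotation)
```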
- the current device needs to identify which tracking device the captured light spot image belongs to, that is, it needs to match the captured light spot image with the ID of the tracking device.
- multiple tracking devices have different light spot patterns, which can be used to distinguish between different tracking devices.
- the different light spot patterns used by the tracking devices result in different light spot images captured by the camera.
- the light spot patterns in the light spot images captured by the camera can be used to determine the ID of the tracking device corresponding to the light spot image.
- the tracking device corresponding to ID1 uses light spot pattern 1
- the tracking device corresponding to ID2 uses light spot pattern 2
- the tracking device corresponding to ID3 uses light spot pattern 3.
- Different light spot patterns may use different numbers of light spots, and the different numbers of light spots are achieved by lighting different light-emitting units.
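- As a minimal sketch of pattern-based identification, the mapping below ties an assumed number of lit units (the spot pattern) to a device ID; the counts and IDs are hypothetical.

```python
# Hypothetical mapping from the number of lit units (the "spot pattern")
# to the tracking device ID; counts and IDs are illustrative only.
SPOT_COUNT_TO_ID = {4: "ID1", 5: "ID2", 6: "ID3"}

def identify_device_by_spot_count(num_spots_detected):
    """Return the tracking device ID whose pattern matches the detected spot count."""
    return SPOT_COUNT_TO_ID.get(num_spots_detected)

print(identify_device_by_spot_count(5))  # -> 'ID2'
```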
- the light spot patterns of multiple tracking devices may be the same, but the lighting timings of the multiple tracking devices may be different.
- the ID of the tracking device corresponding to the light spot image may be determined based on the lighting timings.
- a camera is used to capture a sequence of light spot images of multiple tracking devices, wherein the multiple tracking devices sequentially light up the light-emitting units on the corresponding tracking devices according to a predefined lighting sequence; based on the lighting sequence of the multiple tracking devices and the sequence of light spot images captured by the camera, the tracking device corresponding to each light spot image in the light spot image sequence is determined; based on the tracking device corresponding to each light spot image, one or more light spot images of each tracking device are determined.
- Determining the tracking device corresponding to each spot image in the spot image sequence refers to determining the ID of the tracking device corresponding to each spot image. All spot images with the same ID can then be used as available images for the tracking device corresponding to that ID; alternatively, one or a portion of the spot images with the same ID can be selected as the available images. The available images are used to determine the first relative position between the tracking device corresponding to the ID and the current device.
- the predefined lighting sequence is used to control the lighting order of multiple tracking devices connected to the current device.
- the lighting sequence can be determined according to the number of connected tracking devices after the current device establishes a communication connection with the tracking device.
- the current device sends a lighting instruction to the tracking device in real time according to the lighting sequence to control the lighting state of the connected tracking device.
- the current device sends the lighting sequence and IDs of the multiple connected tracking devices to each tracking device respectively, and each tracking device controls its own lighting state according to the lighting sequence and IDs of the multiple tracking devices.
- Each tracking device can obtain the lighting sequence and IDs of the multiple tracking devices. Taking the current device connected to three tracking devices as an example, assuming that the identifications of the three tracking devices are tracking device 1, tracking device 2 and tracking device 3, the lighting sequence of the three tracking devices can be: tracking device 1 → tracking device 2 → tracking device 3; the three tracking devices then obtain the IDs and lighting sequence of the three tracking devices.
- each tracking device can determine its lighting frequency according to the acquisition frequency of the camera, and light up its light-emitting units at the corresponding time according to the lighting frequency and lighting timing.
- the camera's acquisition frequency is 30Hz
- if the current device is connected to two tracking devices, the lighting frequency of each tracking device is 15Hz, that is, each tracking device lights up 15 times in 1 second, and the two tracking devices light up alternately.
- the camera captures 30 frames of images in 1 second, of which 15 images are collected from tracking device 1 and 15 images are collected from tracking device 2.
- each tracking device's on-time should be less than or equal to the camera's acquisition time. For example, if the camera's acquisition frequency is 30 Hz, the camera's acquisition time is 33 milliseconds, and the tracking device's on-time should be less than or equal to this 33 milliseconds.
- the light-emitting unit of tracking device 1 will light up for the first 33 milliseconds
- the light-emitting unit of tracking device 2 will light up for the second 33 milliseconds
- the light-emitting unit of tracking device 1 will light up for the third 33 milliseconds
- the light-emitting unit of tracking device 2 will light up for the fourth 33 milliseconds.
- the two tracking devices will light up alternately in this order.
- if the current device is connected to three tracking devices, each lights up at a 10Hz frequency and the three tracking devices light up alternately. The camera captures 20 light spot images per second: 10 images of tracking device 1 and 10 images of tracking device 2. Tracking device 3 is worn around the waist, so the camera cannot capture the light spot images of tracking device 3. Alternatively, the current device controls the camera not to capture images of tracking device 3 based on the lighting sequence.
- the current device not only sends the lighting sequence and ID of the connected tracking devices to each tracking device, but also sends the lighting duration and starting lighting time of each tracking device to each tracking device.
- the lighting duration of each tracking device is 1 second or 2 seconds, and each tracking device controls the lighting of the light unit according to the starting lighting time, lighting duration, and lighting sequence.
- if the lighting sequence for three tracking devices is Tracking Device 1 → Tracking Device 2 → Tracking Device 3 and the lighting duration is 1 second, then starting from the starting lighting time, Tracking Device 1 lights up during the first second, Tracking Device 2 during the second second, and Tracking Device 3 during the third second.
- the lighting duration, lighting sequence, and starting lighting time can all be flexibly adjusted according to actual needs.
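- To make the timing-based scheme above concrete, the sketch below derives the per-device lighting frequency and maximum on-time from the camera's acquisition frequency and assigns each captured frame to the device assumed to be lit; the strict round-robin assumption and device names are illustrative.

```python
from typing import List

def lighting_schedule(camera_fps: float, lighting_sequence: List[str]):
    """Derive the per-device lighting frequency and maximum on-time from the
    camera acquisition frequency, assuming strict round-robin lighting."""
    per_device_hz = camera_fps / len(lighting_sequence)   # e.g. 30 Hz / 2 = 15 Hz
    max_on_time_ms = 1000.0 / camera_fps                  # e.g. ~33 ms per frame
    return per_device_hz, max_on_time_ms

def assign_frames_to_devices(num_frames: int, lighting_sequence: List[str]) -> List[str]:
    """For each captured frame index, return the ID of the device assumed to be lit."""
    n = len(lighting_sequence)
    return [lighting_sequence[i % n] for i in range(num_frames)]

sequence = ["tracking_device_1", "tracking_device_2"]     # predefined lighting sequence
print(lighting_schedule(30.0, sequence))                   # (15.0, 33.33...)
print(assign_frames_to_devices(4, sequence))
# ['tracking_device_1', 'tracking_device_2', 'tracking_device_1', 'tracking_device_2']
```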
- the current device may capture an environment image that includes an image of the tracking device, and determine the first relative position between the tracking device and the current device based on the environment image. For example, the current device may extract multiple feature points of the tracking device from the image of the tracking device included in the environment image, calculate the coordinates of the feature points using a triangulation method, and determine the first relative position between the tracking device and the current device based on the coordinates of the feature points.
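- For the environment-image variant just described, the following sketch triangulates one feature point of the tracking device from two camera views on the current device; the intrinsics, baseline, and pixel coordinates are placeholder assumptions.

```python
import numpy as np
import cv2

# Assumed shared intrinsics for a stereo pair on the current device.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Left camera at the device origin; right camera offset by an assumed 6.4 cm baseline.
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-0.064], [0.0], [0.0]])])

# Pixel coordinates of the same tracking-device feature point in both views
# (placeholders; the ~32 px disparity corresponds to roughly 1 m of depth here).
pt_left = np.array([[330.0], [242.0]])
pt_right = np.array([[298.0], [242.0]])

point_h = cv2.triangulatePoints(P_left, P_right, pt_left, pt_right)
point_3d = (point_h[:3] / point_h[3]).ravel()
print("feature point relative to the current device (m):", point_3d)
```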
- a tracking device is identified according to a plurality of first relative positions.
- identifying the tracking device may refer to identifying or determining the tracking object corresponding to each tracking device. Because the number of tracking devices is the same as the number of tracking objects, the use position of the tracking device is associated with the positional relationship of the tracking object. When the positional relationship of each tracking object is known, the tracking object corresponding to each tracking device can be identified based on multiple first relative positions. For example, in an object motion capture scene or a human motion capture scene, identifying the tracking device may refer to determining at which part of the object or human body each tracking device is located. The user may be prompted to place the current device and each tracking device in a preset reasonable position. When the current device and each tracking device are in a preset positional relationship, the positional relationship of each tracking object is known.
- the position of each tracking device in the current tracking mode is identified based on the multiple first relative positions and information about the tracking object tracked in the current tracking mode.
- Each tracking mode contains different or partially different information about the objects being tracked. This information includes at least the positional relationships between the objects, and may also include the object categories and number of objects being tracked. Tracking modes can be divided based on one or more of these: object categories, number of objects being tracked, and positional relationships.
- the tracking object category includes human or object.
- the tracking object category is human
- the tracking object can be multiple different parts of the human body, such as the lower limbs of the human body, or the lower limbs and waist of the human body.
- the tracking object category is object
- the tracking object can be multiple different parts of the object, such as multiple different parts, multiple corners, vertices, or edges of the object.
- Tracking Mode 1: the head-mounted device is connected to two tracking devices, and the two tracking devices are worn on the lower limbs of the human body (such as the ankles).
- Tracking Mode 2: the head-mounted device is connected to three tracking devices, and the three tracking devices are worn on the lower limbs (such as the ankles) and the waist of the human body respectively.
- Tracking Mode 3: a handheld device (such as a mobile phone) is connected to four tracking devices, and the four tracking devices are respectively installed at the four vertices of the object.
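- One possible in-memory representation of the tracking-object information carried by each tracking mode (tracking target category, number of tracking devices, and usage positions) is sketched below; the field names and mode contents are assumptions for illustration.

```python
# Assumed representation of the tracking-object information per tracking mode.
TRACKING_MODES = {
    "tracking_mode_1": {
        "target": "human", "num_trackers": 2,
        "positions": ["left ankle", "right ankle"],
    },
    "tracking_mode_2": {
        "target": "human", "num_trackers": 3,
        "positions": ["left ankle", "right ankle", "waist"],
    },
    "tracking_mode_3": {
        "target": "object", "num_trackers": 4,
        "positions": ["upper-left vertex", "lower-left vertex",
                      "upper-right vertex", "lower-right vertex"],
    },
}

current_mode = TRACKING_MODES["tracking_mode_2"]   # e.g. selected by the user
assert current_mode["num_trackers"] == 3           # matches the connected devices
```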
- before executing the method of this embodiment, the user first selects a tracking mode to be used.
- the tracking mode selected by the user is the current tracking mode.
- the current tracking mode may also be automatically identified by the current device.
- information about the tracking object being tracked in the current tracking mode can be determined, such as the number of tracking devices connected to the current device, the location of the tracking device in use, and the tracking target.
- the tracking target is a human body or an object.
- the number of tracking devices is the same as the number of tracking objects.
- the location of the tracking device in use is associated with the positional relationship of the tracking object. Based on the multiple first relative positions and the number of tracking devices in the current tracking mode, the location of the tracking device in use, and the tracking target, the position of each tracking device in the current tracking mode is identified.
- in Tracking Mode 1 and Tracking Mode 2, the tracking device is worn on the human body. Accordingly, the position of the tracking device in the current tracking mode is the position of the tracking device on the human body in the current tracking mode.
- the position of the tracking device on the human body refers to the part of the human body on which the tracking device is worn.
- in Tracking Mode 3, where the object is, for example, a picture frame, the positions of the four tracking devices on the picture frame can be identified.
- the method of this embodiment can identify whether the tracking device corresponding to ID 1 is located at the upper left, lower left, upper right, or lower right of the picture frame, ultimately obtaining the IDs of the upper-left, lower-left, upper-right, and lower-right tracking devices.
- when the tracking device is worn on a human body, optionally, the multiple first relative positions between the multiple tracking devices and the current device are determined while the user performs a preset calibration action.
- This preset calibration action, also known as a preset calibration pose, is performed by the user and held for a specified period of time to ensure a specific relative position between the device and the wearer's body. It should be understood that the applicable calibration action may vary depending on the position of the device relative to the body. These actions include, but are not limited to, the T-pose and the N-pose.
- the T-pose can be understood as a T-shaped posture with arms stretched out horizontally to the sides, toes pointing forward, and legs together.
- the N-pose can be understood as similar to the standing position, with toes pointing forward and the body standing at attention and looking straight ahead.
- the time for the calibration action to be maintained can be flexibly set according to actual computing requirements, such as 1 second, 2 seconds, or 3 seconds, etc. This application does not impose any restrictions on this.
- the T-pose can be modified to obtain the calibration pose.
- the calibration pose is to stretch both arms forward horizontally, point both toes forward, and keep both legs together.
- the calibration pose is to stretch both arms forward horizontally, point both toes forward, and spread both legs apart.
- the calibration pose is to stretch both arms horizontally to the sides, point both toes forward, and spread both legs apart.
- the position of the tracking device on the human body in the current tracking mode is determined based on the relative positions of the various parts of the human body under the calibration action.
- the relative positional relationship between the various parts of the human body under the calibration action is known and fixed. For example, when the calibration action is N-pose, the left leg is located at the lower left position of the head, and the right leg is located at the lower right position of the head. Therefore, based on the relative positions of the various parts of the human body under the calibration action and the relative positional relationship between the detected tracking devices and the current device, the position of the tracking device on the human body in the current tracking mode can be determined.
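- A simplified sketch of this matching step is shown below: each detected first relative position, expressed in the current device's frame, is assigned to the left or right leg by its lateral offset under the calibration action; the axis convention and numeric values are assumptions.

```python
import numpy as np

def identify_leg_trackers(relative_positions: dict) -> dict:
    """Assign each tracker ID to the left or right leg from its first relative
    position in the current device's frame (assumed: x to the right, y up, z forward).
    A sketch for an N-pose calibration; the axis convention is an assumption."""
    assignments = {}
    # Sort by the lateral (x) offset: the left-most tracker is taken as the left leg.
    ordered = sorted(relative_positions.items(), key=lambda kv: kv[1][0])
    for (tracker_id, _), label in zip(ordered, ["left leg", "right leg"]):
        assignments[label] = tracker_id
    return assignments

# Example: two first relative positions measured while the user holds the N-pose.
positions = {
    "tracker_A": np.array([-0.12, -1.35, 0.25]),   # left of and below the headset
    "tracker_B": np.array([ 0.11, -1.34, 0.26]),   # right of and below the headset
}
print(identify_leg_trackers(positions))
# {'left leg': 'tracker_A', 'right leg': 'tracker_B'}
```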
- N first relative positions between N tracking devices among the M tracking devices connected to the current device and the current device are determined, where M and N are both integers, N is greater than or equal to 2, and M is greater than N; accordingly, M tracking devices are identified based on the N first relative positions.
- the current device is connected to three tracking devices, which are worn on the left leg, right leg, and waist of the human body respectively.
- the current device cannot determine the first relative position between the tracking device worn on the waist and the current device through the spot image.
- the current device determines the first relative position between the two tracking devices worn on the legs and the current device. Based on the first relative position between the two tracking devices worn on the legs and the current device, the two tracking devices worn on the legs are identified, and the wearing position of the remaining tracking device is the waist, so that the three tracking devices can be identified.
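- The N-less-than-M case described above reduces to a set difference over device IDs, as the following sketch illustrates with hypothetical IDs.

```python
# Sketch of the N < M case: two leg trackers were identified optically,
# and the single remaining connected device is assigned to the waist.
connected_ids = {"tracker_A", "tracker_B", "tracker_C"}            # M = 3 connected devices
identified = {"left leg": "tracker_A", "right leg": "tracker_B"}   # N = 2 seen by the camera

remaining = connected_ids - set(identified.values())
if len(remaining) == 1:
    identified["waist"] = remaining.pop()
print(identified)
# {'left leg': 'tracker_A', 'right leg': 'tracker_B', 'waist': 'tracker_C'}
```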
- the tracking device can be identified based on multiple first relative postures or multiple first relative positions.
- the first relative posture refers to the pose of the tracking device relative to the current device; the first relative posture includes both the position and the orientation of the tracking device relative to the current device.
- the current device can obtain the first relative posture, and obtain the first relative position from the first relative posture.
- an operation instruction is executed based on the information of the identified tracking device.
- the information of the identified tracking device may be the tracking device's identification information, which refers to the correspondence between the tracking device ID and the tracking device's location.
- the tracking device ID for the user's left leg is ID1
- the tracking device ID for the user's right leg is ID2.
- the tracking device corresponding to ID1 is worn on the user's left leg
- the tracking device corresponding to ID2 is worn on the user's right leg.
- the operation instruction may be a display instruction, and the ID and location information of the identified tracking device may be displayed through the display instruction.
- the ID and location information of each identified tracking device may be displayed in a list.
- the ID and location information of each identified tracking device may be displayed in conjunction with a human body model, that is, the ID and location information of each identified tracking device may be displayed at a display position corresponding to the location information on the human body model.
- the identified tracking device information may also include tracking information from the tracking device.
- This tracking information may include IMU data from the tracking device.
- User actions may be identified based on the tracking information from the tracking device.
- the display of virtual objects or interaction with virtual objects may be controlled based on the identified user actions.
- the IMU data from the left-leg tracking device can be used to identify whether the left leg has stepped or lifted. If such a motion is detected, a virtual object, such as a prompt message, can be displayed in the virtual scene, or a virtual object in the virtual scene can be controlled to play a corresponding special effect.
- the human body posture can be determined based on the tracking information of the tracking device, and the avatar can be driven or sports games can be played based on the human body posture.
- the current device determines multiple first relative positions between multiple tracking devices and the current device, and identifies the tracking device based on the multiple first relative positions.
- This method can automatically identify the tracking device's usage position on the tracking target based on the relative positions between the current device and the tracking device. The user does not need to determine the tracking device's usage position before using the tracking device, which is convenient for the user and provides a better user experience.
- Example 2 of the present application is illustrated using the human motion capture scenario as an example.
- the method of this embodiment can be applied to the calibration process of the tracking device.
- the method of this embodiment can be executed by a head-mounted device, which can be a VR device, an AR device, or an MR device.
- Figure 4 is a flow chart of a method for identifying a tracking device according to a second embodiment of the present application. As shown in Figure 4 , the method may include the following steps.
- first relative positions between a plurality of tracking devices and a head-mounted device are determined, wherein the tracking devices are worn on the limbs of a human body.
- When multiple tracking devices are used to capture human motion, the multiple tracking devices are respectively worn on different limbs of the human body.
- the optical characteristics of the multiple tracking devices are the same, that is, the hardware appearance and/or the spatial position of the light-emitting elements are the same.
- if the appearance of the multiple tracking devices is the same, the user cannot distinguish the tracking devices by appearance when wearing them.
- if the spatial positions of the light-emitting elements of the multiple tracking devices are the same, the hardware products are easy to produce and manufacture, but the tracking devices are not easy to distinguish from one another. In this embodiment, when the user wears the tracking device, there is no need to distinguish the wearing position of the tracking device.
- Before executing S201, the user first wears the head-mounted device and the tracking devices on different parts of the human body, such as wearing the head-mounted device on the head, wearing a tracking device on an ankle, or holding a peripheral device in a hand.
- the head-mounted device is worn on the human head, and the tracking device is worn on the human upper limbs and the human lower limbs.
- the head-mounted device is worn on the human head, and the tracking device is worn on the human upper limbs, waist, and human lower limbs, etc.
- the tracking modes of the head-mounted device are divided into the following modes according to the number of tracking devices connected to the head-mounted device.
- Human body tracking mode 1: the head-mounted device is connected to two tracking devices, which are worn on the lower limbs (e.g., the ankles) of the human body.
- Human body tracking mode 2: the head-mounted device is connected to three tracking devices, which are respectively worn on the lower limbs (such as the ankles) and the waist of the human body.
- Human body tracking mode 3: the head-mounted device is connected to four tracking devices, which are worn on the lower limbs (such as the ankles) and the upper limbs (such as the wrists) of the human body.
- Human body tracking mode 4: the head-mounted device is connected to five tracking devices, which are worn on the lower limbs (such as the ankles), the upper limbs (such as the wrists), and the waist of the human body respectively.
- the lower limbs of the human body include the left lower limb and the right lower limb
- the upper limbs of the human body include the left upper limb and the right upper limb
- the head-mounted device can also be connected to an external device (such as left and right handles) to achieve full-body motion capture of the human body.
- the user can activate the headset, tracking device, and peripheral devices to establish communication connections between the headset and each tracking device, as well as between the headset and the peripheral devices, laying the foundation for subsequent data communication.
- Such communication connections include but are not limited to short-range wireless communications such as Bluetooth connections and Wi-Fi.
- the head-mounted device can guide the user to perform a preset calibration action and maintain it for a certain period of time to ensure that there is a certain relative posture between the user's face and the human body wearing position of the tracking device through the calibration action.
- the calibration action includes but is not limited to: T-pose, N-pose, other actions that can ensure that the user's face, chest and lower limbs (i.e., legs) are facing the same direction, or other actions that ensure that the user's face and the limbs wearing the tracking device have a certain relative posture.
- N-pose can be used as the calibration pose.
- In Body Tracking Mode 3 and Body Tracking Mode 4, the user wears a tracking device on each of the left and right ankles and wrists.
- the T-pose can be used as the calibration pose.
- the user may wear tracking devices only on the left and right wrists.
- the user may wear a tracking device on the left and right wrists and a tracking device on the waist to capture the motion of the upper body.
- the pose transformation relationship between the limb coordinate system and the head-mounted coordinate system is a known quantity, and this known quantity is typically a constant.
- the pose transformation relationship between the limb coordinate system and the head-mounted coordinate system is a unit matrix.
- the head-mounted device uses a PnP algorithm to directly calculate the first relative position between the tracking device and the head-mounted device based on the spot image of the tracking device captured by the camera.
- the positioning device may obtain a first position of the head-mounted device in a world coordinate system and a second position of each tracking device in the world coordinate system. For each tracking device, a first relative position between the tracking device and the head-mounted device is determined based on the first position and the second position.
- the positioning device may be a device other than the head-mounted device and the tracking device.
- the coordinate systems of the first position and the second position may not be the world coordinate system, but may use other coordinate systems. This embodiment is not limited to this, as long as the first position and the second position are in the same coordinate system.
- While the person maintains the calibration posture, the positioning device obtains a first pose of the head-mounted device in the world coordinate system. While the person maintains the calibration posture and rotates their head, the positioning device uses a camera to capture spot images of the multiple tracking devices and determines a second pose of each tracking device in the world coordinate system based on its spot images. Based on the first and second poses, the positioning device determines the first relative position between each tracking device and the head-mounted device.
- the first posture includes a first position and a first orientation
- the second posture includes a second position and a second orientation.
- the positioning device can determine a first relative position between the tracking device and the head-mounted device based on the first position and the second position.
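- A small sketch of this computation is given below: once both poses are expressed in the same (e.g., world) coordinate system, the first relative position falls out of a single homogeneous-transform composition; the matrix convention and numbers are assumptions.

```python
import numpy as np

def make_pose(rotation: np.ndarray, position: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous pose (frame -> world) from rotation and position."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = position
    return T

# First pose: head-mounted device in the world frame (placeholder values).
T_world_head = make_pose(np.eye(3), np.array([0.0, 1.6, 0.0]))
# Second pose: one tracking device in the world frame (placeholder values).
T_world_tracker = make_pose(np.eye(3), np.array([0.12, 0.25, 0.05]))

# Relative pose of the tracking device expressed in the head-mounted device's frame.
T_head_tracker = np.linalg.inv(T_world_head) @ T_world_tracker
first_relative_position = T_head_tracker[:3, 3]
print("first relative position (m):", first_relative_position)  # [0.12, -1.35, 0.05]
```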
- the head-mounted device is equipped with a camera that can collect real-time data about the surrounding environment.
- the head-mounted device can use a computer vision SLAM algorithm to determine the head-mounted device's position information in real time based on the surrounding environment data collected by the camera. Therefore, in this embodiment, when determining the first position of the head-mounted device in the world coordinate system, the computer vision SLAM algorithm can be used based on the surrounding environment data collected by the camera.
- the specific determination process is conventional in the art and will not be described in detail here.
- the user may need to turn their head so that the headset can capture the tracking devices' spot images. For example, when the headset is in Body Tracking Mode 1 or Body Tracking Mode 2, if the user lowers their head, the headset's lower binocular cameras can capture the spot images of the tracking devices on both lower limbs.
- the human body wearing position of each tracking device is identified based on the multiple first relative positions.
- the human body wearing position of each tracking device is the human body wearing position of the tracking device in the current tracking mode.
- the head-mounted device identifies the human body wearing position of each tracking device in the current tracking mode based on multiple first relative positions and information of the tracking object tracked in the current tracking mode.
- the relative positions of various parts of the human body during the calibration action are related to the human body wearing positions of various tracking devices in the current tracking mode.
- the relative positions of the various parts of the human body during the calibration action may be the second relative positions of the various parts of the human body and the human head, that is, the second relative positions of the various parts of the human body and the head-mounted device.
- the predefined second relative position relationship between the various parts of the human body and the human head during the calibration action may include: the left foot is located at the lower left of the human head, and the right foot is located at the lower right of the human head.
- the second relative position relationship may include: the left foot is located at the lower left of the human head, the right foot is located at the lower right of the human head, the left hand is located on the left side of the human head, and the right hand is located on the right side of the human head.
- the head-mounted device can match the first relative position of each tracking device with the predefined second relative position of each human body part and the human head during the calibration action. If the first relative position corresponding to the first tracking device matches the second relative position of the first limb and the human head, the wearing position of the first tracking device is determined to be the first limb.
- the first tracking device is any one of the multiple tracking devices
- the first limb is any one of the limbs of the human body.
- the first relative position of the first tracking device and the second relative position of the first limb and the human head match each other if the first relative position and the second relative position are the same or correspond to each other. For example, if the first relative position of the first tracking device is that the first tracking device is located to the lower right of the head-mounted device, then if the first limb is located to the lower right of the head, then the first relative position and the second relative position are determined to match.
- the total number M of tracking devices connected to the head-mounted device is equal to the first number N of tracking devices corresponding to the light spot image captured by the camera, such as body tracking mode 1 and body tracking mode 3.
- the total number M of tracking devices connected to the head-mounted device is greater than the first number N of tracking devices corresponding to the light spot image captured by the camera, such as body tracking mode 2 and body tracking mode 4.
- M is greater than N by 1, which means that there is one more tracking device worn at the waist.
- in this case, the wearing position of the remaining connected tracking device (i.e., the first tracking device, which is the device other than those corresponding to the light spot images captured by the camera) is determined to be the waist.
- the headset can obtain the IDs of all connected tracking devices, as well as the ID of the tracking device captured by the camera. Excluding the ID of the tracking device captured by the camera from the IDs of all connected tracking devices, the ID of the remaining tracking device (i.e., the first tracking device) is the ID of the tracking device worn at the waist, indicating that the first tracking device is worn at the waist.
- the first tracking device is worn at the front waist or the back waist.
- the head-mounted device defaults to the first tracking device being worn at the back waist.
- the head-mounted device determines whether the first tracking device is worn at the front waist or the back waist based on the user's first input information.
- after the head-mounted device determines that the first tracking device is worn at the waist using the above method, it displays waist wearing position options to the user.
- the waist wearing position options include front waist and back waist.
- the user selects front waist or back waist based on the actual wearing position.
- the head-mounted display determines whether the first tracking device is worn at the front waist or back waist based on the user's selection information (i.e., the first input information).
- if M is much larger than N, other information needs to be obtained, and the tracking devices are identified based on the other information and the first relative positions of the N tracking devices.
- a preset calibration process is performed to determine the first relative positions between multiple tracking devices and a head-mounted device.
- the tracking devices are worn on a human limb, and the wear position of each tracking device on the human body is identified based on the multiple first relative positions.
- This method can automatically identify the wear position of the tracking device based on the relative positions between the tracking device and the head-mounted device. This eliminates the need for the user to determine the wear position of the tracking device before wearing the tracking device, facilitating user use and providing a better user experience.
- FIG5 is a flow chart of the method for identifying a tracking device provided in the third embodiment of the present application. As shown in FIG5 , the method may include the following steps:
- the camera collects light spot images of multiple tracking devices.
- the calibration action is T-pose or N-pose.
- the user performs N-pose for 1 second, that is, the left and right toes are pointed forward, the body stands upright and looks straight ahead, and maintains this position for 1 second.
- the camera can capture light spot images of multiple tracking devices while the human body maintains the calibration posture and turns the head.
- the user needs to lower their head so that the camera can capture the light spot image of the tracking device worn on their lower limbs.
- the headset needs to identify which tracking device the captured light spot image belongs to, that is, it needs to match the captured light spot image with the tracking device ID.
- a head-mounted device uses a camera to capture a sequence of light spot images from multiple tracking devices, each of which is equipped with a light-emitting unit.
- the multiple tracking devices sequentially illuminate the light-emitting units on their corresponding tracking devices according to a predefined illumination sequence. Based on the illumination sequence of the multiple tracking devices and the sequence of light spot images captured by the camera, the head-mounted device determines the ID of the tracking device corresponding to each light spot image in the sequence of light spot images.
- a first relative posture between each tracking device and the head-mounted device is determined based on one or more light spot images of each tracking device.
- the head-mounted device uses the PNP algorithm to determine the first relative posture of each tracking device relative to the head-mounted device based on the light spot image of each tracking device. This will not be described in detail here.
- a first relative position between each tracking device and the head-mounted device is obtained from the first relative posture between each tracking device and the head-mounted device.
- the first relative posture includes a first relative position and a first relative orientation
- the head-mounted device obtains the first relative position from the first relative posture.
- the human body wearing position of each tracking device is determined based on the multiple first relative positions.
- multiple tracking devices are controlled to light up the light-emitting units on each tracking device in sequence according to a prescribed lighting sequence.
- the camera of the head-mounted device is used to capture the light spot images of the light-emitting units of each tracking device.
- the ID of the tracking device corresponding to each light spot image in the captured light spot image sequence is determined according to the lighting sequence.
- the second posture of the tracking device in the world coordinate system is determined based on the light spot image of each tracking device.
- the first relative position between each tracking device and the head-mounted device is determined based on the first posture of the head-mounted device and the second posture of each tracking device.
- the wearing position of the tracking device is automatically identified optically, and the user does not need to distinguish the wearing position of the tracking device before wearing the tracking device, which is convenient for the user to use and brings the user a better experience.
- the tracking device identification method described in the second and third embodiments can be applied to the relative posture calibration process between a human body part and a tracking device.
- the tracking device identification method described above can be considered as calibration of the tracking device's wearing position.
- the tracking device's wearing position can be calibrated using the tracking device identification method described in the above embodiments.
- Calibration of the relative posture between a human body part and a tracking device includes determining a relative posture relationship between the tracking device and the part of the human body where the tracking device is worn, and determining the posture of that part based on this relative posture relationship and the real-time posture of the tracking device in a world coordinate system.
- the relative posture relationship between the tracking device at the ankle and the ankle can be determined, and the real-time posture of the ankle can be determined based on the relative posture relationship and the real-time posture of the tracking device at the ankle.
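- To illustrate this calibration, the sketch below estimates the fixed rotation between the ankle and its tracking device during the calibration pose and then reuses it to recover the ankle's real-time orientation from the tracking device's measurement; the rotation values and frame conventions are assumptions.

```python
import numpy as np

# During the calibration pose, both orientations are known in the world frame:
# the ankle's assumed orientation (e.g. from the body model under N-pose) and
# the tracking device's measured orientation (placeholder values below).
R_world_ankle_calib = np.eye(3)
R_world_tracker_calib = np.array([[0.0, -1.0, 0.0],
                                  [1.0,  0.0, 0.0],
                                  [0.0,  0.0, 1.0]])   # tracker rotated 90° about z

# Fixed relative orientation of the ankle with respect to the tracking device.
R_tracker_ankle = R_world_tracker_calib.T @ R_world_ankle_calib

# At run time, only the tracking device's orientation is measured; the ankle's
# orientation is recovered through the calibrated relative orientation.
R_world_tracker_now = R_world_tracker_calib              # placeholder measurement
R_world_ankle_now = R_world_tracker_now @ R_tracker_ankle
print(np.allclose(R_world_ankle_now, R_world_ankle_calib))  # True
```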
- multiple antennas are provided at different positions of the current device.
- the current device determines the target antenna corresponding to the target tracking device from multiple antennas according to the position of the target tracking device that currently needs to communicate in the current tracking mode.
- the target antenna is used to communicate with the target tracking device; for example, tracking devices at different positions are assigned to different antennas. Assigning different antennas to tracking devices at different positions avoids interference, and selecting a suitable target antenna reduces the packet loss rate between the current device and the tracking device, thereby improving the communication reliability between the current device and the tracking device.
- the wireless controller of the current device is controlled to connect to the target antenna, which is then used to communicate with the target tracking device. While the wireless controller is connected to the target antenna, it is disconnected from the other antennas.
- the wireless controller controls wireless communications between the current device and each tracking device.
- the current device selects an antenna to communicate with the target tracking device by switching the connection between the wireless controller and each antenna.
- the target antenna is at the shortest distance from the target tracking device.
- distance affects communication quality. Selecting the antenna with the shortest distance as the target antenna improves communication reliability between the current device and the target tracking device.
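A simple distance-based selection of the target antenna might look like the following sketch; the antenna identifiers and coordinates are placeholders, and the embodiment is not limited to this criterion.

```python
import numpy as np

def select_target_antenna(antenna_positions, tracker_position):
    """antenna_positions: dict of antenna_id -> 3-vector in the current-device frame;
    tracker_position: 3-vector of the target tracking device in the same frame.
    Returns the ID of the antenna closest to the target tracking device."""
    tracker_position = np.asarray(tracker_position, dtype=float)
    return min(antenna_positions,
               key=lambda antenna_id: np.linalg.norm(
                   np.asarray(antenna_positions[antenna_id], dtype=float)
                   - tracker_position))

# Example: with a left and a right antenna, a tracker on the user's left picks "left".
# select_target_antenna({"left": (-0.08, 0, 0), "right": (0.08, 0, 0)}, (-0.3, 0.9, 0.2))
```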
- a headset typically uses Bluetooth to communicate with a tracking device.
- a Bluetooth antenna can be installed on each side of the headset. The left antenna communicates with the tracking device on the left side, while the right antenna communicates with the tracking device on the right side. This reduces packet loss between the headset and the tracking device, improving communication reliability.
- the headset determines the currently used target antenna from the two Bluetooth antennas based on the wearer's position.
- This target antenna is located on the same side of the headset as the target tracking device, and is used to communicate with the target tracking device.
- the target tracking device does not refer to a specific tracking device, but rather refers to the tracking device that the headset currently needs to communicate with.
- the target antenna and the target tracking device being located on the same side of the headset means that the target antenna and the target tracking device are both located on the left or right side of the headset.
- a wireless controller of the head-mounted device is connected to the target antenna via a connection port.
- the wireless controller is disconnected from the other antenna.
- the wireless controller controls wireless communication between the headset and each tracking device.
- the wireless controller and the two Bluetooth antennas can be connected and communicate with each other via a connection port.
- This connection port can be a general-purpose input/output (GPIO) port.
- the wireless controller uses the GPIO to control whether the left antenna or the right antenna is used. When the GPIO between the wireless controller and the left antenna is connected, the left antenna is used. When the GPIO between the wireless controller and the right antenna is connected, the right antenna is used.
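The GPIO-based switching could be sketched as below; the pin numbers and the set_gpio_level() helper are hypothetical stand-ins for the headset's actual wireless-controller interface, which is not specified here.

```python
LEFT_ANTENNA_GPIO = 17    # assumed pin wiring, for illustration only
RIGHT_ANTENNA_GPIO = 27   # assumed pin wiring, for illustration only

def set_gpio_level(pin, high):
    # Placeholder: a real headset would drive the wireless controller's GPIO here;
    # this stub only records the intended level so the sketch stays runnable.
    print(f"GPIO {pin} -> {'high' if high else 'low'}")

def use_antenna(side):
    """Connect the wireless controller to one Bluetooth antenna and disconnect the other."""
    if side == "left":
        set_gpio_level(LEFT_ANTENNA_GPIO, high=True)
        set_gpio_level(RIGHT_ANTENNA_GPIO, high=False)
    elif side == "right":
        set_gpio_level(RIGHT_ANTENNA_GPIO, high=True)
        set_gpio_level(LEFT_ANTENNA_GPIO, high=False)
    else:
        raise ValueError("side must be 'left' or 'right'")
```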
- the setting position of the Bluetooth antenna is not limited to the left and right sides of the head-mounted device, but can also be the front and back sides of the head-mounted device, or other positions.
- the technical solution disclosed in the embodiments of the present application has at least the following beneficial effects: by determining multiple first relative positions between multiple tracking devices and the current device, the tracking devices can be identified based on those first relative positions. In this way, the usage position of a tracking device on the tracked target can be identified automatically from the relative position between the current device and the tracking device, so users do not need to distinguish the usage position of a tracking device before using it, which facilitates use and provides a better user experience.
- Figure 6 is a schematic block diagram of an identification device for a tracking device provided in Embodiment 4 of the present application.
- the identification device 100 for the tracking device includes: a determination module 11 and an identification module 12.
- the determination module 11 determines multiple first relative positions between multiple tracking devices and the current device.
- the identification module 12 is configured to identify the tracking device based on the multiple first relative positions.
- the determining module 11 is specifically configured to: identify the position of each tracking device in the current tracking mode according to the multiple first relative positions and information about the tracking object tracked in the current tracking mode.
- the tracking device is worn on a human body, and the position of the tracking device in the current tracking mode is the position of the tracking device worn on a human body in the current tracking mode.
- the determining module 11 is specifically configured to determine a plurality of first relative positions between the plurality of tracking devices and the current device under a preset calibration action.
- the human body wearing position of the tracking device in the current tracking mode is determined based on the relative positions of the various parts of the human body during the calibration action.
- the determination module 11 is specifically used to: determine N first relative positions between N tracking devices among the M tracking devices connected to the current device and the current device, where M and N are both integers, N is greater than or equal to 2, and M is greater than N; the identification module 12 is specifically used to: identify the M tracking devices based on the N first relative positions.
- the current device includes a camera
- the determination module 11 is specifically configured to determine the first relative position between each tracking device and the current device based on one or more light spot images of each tracking device captured by the camera.
- the tracking device includes multiple light-emitting units
- the apparatus further includes a matching module, which is used to: collect a sequence of light spot images of the multiple tracking devices through the camera, wherein the multiple tracking devices sequentially light up the light-emitting units on the corresponding tracking devices according to a predefined lighting sequence; determine the tracking device corresponding to each light spot image in the light spot image sequence according to the lighting sequence of the multiple tracking devices and the light spot image sequence collected by the camera; and determine one or more light spot images of each tracking device according to the tracking device corresponding to each light spot image.
- the current device is provided with multiple antennas at different positions of the device, and the apparatus further includes a communication module for: determining a target antenna corresponding to the target tracking device from the multiple antennas according to the position of the target tracking device that the current device currently needs to communicate with in the current tracking mode; and using the target antenna to communicate with the target tracking device.
- the communication module is specifically used to: control the wireless controller of the current device to connect to the target antenna, and use the target antenna to communicate with the target tracking device, wherein, when the wireless controller is connected to the target antenna, the wireless controller disconnects from other antennas.
- the target antenna is the antenna with the shortest distance to the target tracking device.
- the current device is a head-mounted device.
- the apparatus further includes a control module configured to execute an operation instruction based on the identified information of the tracking device.
- the identification device of the tracking device provided in the embodiment of the present application is used to execute the identification method of the tracking device described in the above method embodiment.
- the device embodiments correspond to the aforementioned method embodiments; for similar descriptions, refer to the method embodiments. To avoid repetition, no further details are given here.
- the device 100 shown in Figure 6 can execute any of the method embodiments from Embodiment 1 to Embodiment 3, and the operations and/or functions of each module in the device 100 are respectively intended to implement the corresponding processes in each method. For the sake of brevity, no further details are given here.
- the functional modules can be implemented in the form of hardware, by instructions in the form of software, or by a combination of hardware and software modules.
- the steps of the method embodiments of the first aspect of the present application can be completed by integrated logic circuits in hardware and/or by software instructions in the processor; the steps of the method of the first aspect disclosed in the embodiments of the present application can be directly embodied as being executed by a hardware decoding processor, or by a combination of hardware and software modules in the decoding processor.
- the software module can be located in a mature storage medium in the art such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, a register, etc.
- the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps in the above-mentioned first aspect method embodiment in conjunction with its hardware.
- Figure 7 is a schematic block diagram of an electronic device provided in accordance with an embodiment of the present application.
- the electronic device 200 may include a memory 21 and a processor 22.
- the memory 21 is configured to store a computer program and transmit the program code to the processor 22.
- the processor 22 may retrieve and execute the computer program from the memory 21 to implement the method in accordance with an embodiment of the present application.
- the processor 22 may be configured to execute the above method embodiments according to instructions in the computer program.
- the processor 22 may include but is not limited to: a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
- the memory 21 includes, but is not limited to, volatile memory and/or non-volatile memory.
- the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
- the volatile memory may be a random access memory (RAM), which is used as an external cache.
- by way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate synchronous dynamic RAM (DDR SDRAM), enhanced synchronous dynamic RAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct rambus RAM (DR RAM).
- the computer program may be divided into one or more modules, which are stored in the memory 21 and executed by the processor 22 to implement the method provided by the present application.
- the one or more modules may be a series of computer program instruction segments capable of implementing specific functions, and the instruction segments are used to describe the execution process of the computer program in the electronic device.
- the electronic device 200 may further include a transceiver 23, which may be connected to the processor 22 or the memory 21.
- the processor 22 may control the transceiver 23 to communicate with other devices.
- the transceiver 23 may send information or data to other devices, or receive information or data sent by other devices.
- the transceiver 23 may include a transmitter and a receiver.
- the transceiver 23 may further include one or more antennas.
- the electronic device 200 may further include a camera module, a wireless fidelity (Wi-Fi) module, a positioning module, a Bluetooth module, a display, a controller, etc., which will not be described in detail here.
- the bus system includes not only a data bus but also a power bus, a control bus, and a status signal bus.
- the present application also provides a computer storage medium having a computer program stored thereon; when the computer program is executed by a computer, the computer is enabled to perform the method of the above method embodiments.
- An embodiment of the present application further provides a computer program product comprising program instructions, which, when executed on an electronic device, enables the electronic device to execute the method of the above method embodiment.
- the computer program product includes one or more computer instructions.
- the computer can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
- the computer instructions can be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
- the computer instructions can be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave).
- the computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media.
- the available medium can be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital video disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), etc.
- the modules and algorithm steps of each example described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of this application.
- the disclosed systems, devices and methods can be implemented in other ways.
- the device embodiments described above are merely schematic.
- the division of the modules is merely a logical function division.
- in addition, the mutual coupling, direct coupling, or communication connections shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connections between devices or modules may be electrical, mechanical, or in other forms.
- Modules described as separate components may or may not be physically separate, and components displayed as modules may or may not be physical modules, i.e., they may be located in one place or distributed across multiple network elements. Some or all of the modules may be selected based on actual needs to achieve the purpose of the present embodiment.
- the functional modules in the various embodiments of the present application may be integrated into a processing module, or each module may exist physically separately, or two or more modules may be integrated into a single module.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Heart & Thoracic Surgery (AREA)
- Cardiology (AREA)
- Biophysics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
Abstract
The present application relates to a method and apparatus for identifying a tracking device, and to a device and a medium. The method comprises: determining a plurality of first relative positions between a plurality of tracking devices and a current device; and identifying the tracking devices on the basis of the plurality of first relative positions. By means of the method, the usage position of a tracking device on a tracked target can be identified automatically on the basis of the relative position between the current device and the tracking device, so that the user does not need to distinguish the usage position of the tracking device before using it, which facilitates use by the user and provides a better user experience.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410232779.3A CN120560492A (zh) | 2024-02-29 | 2024-02-29 | 追踪设备的识别方法、装置、设备和介质 |
| CN202410232779.3 | 2024-02-29 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025180486A1 true WO2025180486A1 (fr) | 2025-09-04 |
Family
ID=96822033
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2025/079820 Pending WO2025180486A1 (fr) | 2024-02-29 | 2025-02-28 | Procédé et appareil d'identification de dispositif de suivi, et dispositif et support |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN120560492A (fr) |
| WO (1) | WO2025180486A1 (fr) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106019265A (zh) * | 2016-05-27 | 2016-10-12 | 北京小鸟看看科技有限公司 | 一种多目标定位方法和系统 |
| CN106768361A (zh) * | 2016-12-19 | 2017-05-31 | 北京小鸟看看科技有限公司 | 与vr头戴设备配套的手柄的位置追踪方法和系统 |
| CN115082520A (zh) * | 2022-06-14 | 2022-09-20 | 歌尔股份有限公司 | 定位追踪方法、装置、终端设备及计算机可读存储介质 |
| CN117314955A (zh) * | 2022-06-21 | 2023-12-29 | 北京字跳网络技术有限公司 | 光学追踪器的位姿确定方法、装置、设备及介质 |
- 2024-02-29 CN CN202410232779.3A patent/CN120560492A/zh active Pending
- 2025-02-28 WO PCT/CN2025/079820 patent/WO2025180486A1/fr active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| CN120560492A (zh) | 2025-08-29 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11262841B2 (en) | Wireless wrist computing and control device and method for 3D imaging, mapping, networking and interfacing | |
| US10678324B2 (en) | Systems and methods for augmented reality | |
| CA3011377C (fr) | Systemes et procedes pour realite augmentee | |
| JP2020173670A (ja) | 複数のマーカを備えたデバイス | |
| US11256090B2 (en) | Systems and methods for augmented reality | |
| WO2025180486A1 (fr) | Procédé et appareil d'identification de dispositif de suivi, et dispositif et support | |
| EP4541438A1 (fr) | Procédé, appareil, dispositif, support et programme d'affichage d'un personnage virtuel | |
| WO2025044573A1 (fr) | Procédé et appareil d'étalonnage pour dispositif de capture de mouvement, dispositif et support | |
| JP7634069B2 (ja) | 複数のマーカを備えたデバイス | |
| CN120891926A (zh) | 交互方法、装置、设备和存储介质 | |
| CN118533144A (zh) | 光学追踪器的位姿确定方法、装置、设备、介质和程序 | |
| CN116055709A (zh) | 同步多ar显示装置沙盘信息显示系统及方法 | |
| CN118280062A (zh) | 扩展现实设备的安全提示方法和装置 | |
| CN118092634A (zh) | 人机交互方法、装置、设备及介质 | |
| HK40009363A (en) | Wireless wrist computing and control device and method for 3d imaging, mapping, networking and interfacing |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 25761120 Country of ref document: EP Kind code of ref document: A1 |