
WO2020259481A1 - Positioning method and apparatus, electronic device, and readable storage medium - Google Patents


Info

Publication number
WO2020259481A1
Authority
WO
WIPO (PCT)
Prior art keywords
frames
electronic device
images
type
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2020/097660
Other languages
English (en)
Chinese (zh)
Inventor
杨宇尘
金珂
陈岩
方攀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of WO2020259481A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 - Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning

Definitions

  • This application relates to the field of image processing technology, and in particular to a positioning method and apparatus, an electronic device, and a readable storage medium.
  • SLAM: Simultaneous Localization and Mapping (real-time positioning and map construction).
  • The relocation method in the related art usually first finds, in a database, an image similar to the picture currently captured by the electronic device (i.e., the current frame) to serve as a reference frame, and then matches the two-dimensional (2D) features in the current frame against the 2D features in the reference frame to recover the relative pose transformation between the current frame and the reference frame.
  • However, the existing 2D-2D feature-matching relocation algorithm cannot determine the absolute scale of the environment; that is, the 2D-2D feature-matching relocation algorithm can determine that the distance between two frames is one unit, but cannot tell whether that unit refers to 1 cm or 1 m, which leads to inaccurate pose information after relocation.
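  • This ambiguity can be made precise with the epipolar constraint: for matched normalized image points $x$ and $x'$ and a relative pose $(R, t)$, the essential matrix $E = [t]_{\times} R$ satisfies the constraint below, and scaling the translation by any $\lambda > 0$ leaves it unchanged, so 2D-2D matching can only observe the direction of $t$, never its magnitude:

$$x'^{\top} E\, x = 0, \qquad [\lambda t]_{\times} R = \lambda\, [t]_{\times} R \;\Rightarrow\; x'^{\top} [\lambda t]_{\times} R\, x = 0 \quad \text{for all } \lambda > 0$$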
  • the embodiments of the present application provide a positioning method and device, electronic equipment, and readable storage medium.
  • In a first aspect, a positioning method is provided, which is applied to an electronic device and includes:
  • the scale information corresponding to the at least two frames of images to be positioned is determined, the scale information being used to characterize the size relationship between the coordinate system in which the electronic device collects the images to be positioned and the target coordinate system in which the electronic device collects the corresponding reference images;
  • In a second aspect, a positioning apparatus is provided, which includes:
  • a first-type pose change information determining unit, configured to obtain the first-type pose change information between the poses of the electronic device when at least two frames of images to be positioned are collected, wherein the pose is used to characterize the position and/or posture of the electronic device;
  • a second-type pose change information determining unit, configured to determine the second-type pose change information between the pose of the electronic device when the at least two frames of images to be positioned are collected and the pose of the electronic device indicated by the corresponding reference image;
  • a scale information determining unit, configured to determine, based on the first-type pose change information and the second-type pose change information, the scale information corresponding to the at least two frames of images to be positioned, the scale information being used to characterize the size relationship between the coordinate system in which the electronic device collects the images to be positioned and the target coordinate system in which the electronic device collects the corresponding reference images;
  • a positioning processing unit, configured to determine, based on the second-type pose change information and the scale information, the target poses of the electronic device in the target coordinate system when the at least two frames of images to be positioned are collected.
  • In a third aspect, an electronic device is provided, including a processor and a memory configured to store a computer program runnable on the processor, wherein the processor is configured to execute the steps of the positioning method described in the first aspect when running the computer program.
  • In a fourth aspect, an embodiment of the present application also provides a computer-readable storage medium having a computer program stored thereon; when the computer program is executed by a processor, the steps of the positioning method described in the first aspect are implemented.
  • In the embodiments of the present application, the first-type pose change information between the poses of the electronic device when at least two frames of images to be positioned are collected is acquired; the second-type pose change information between the pose of the electronic device when the at least two frames of images to be positioned are collected and the pose of the electronic device indicated by the corresponding reference image is determined; based on the first-type pose change information and the second-type pose change information, the scale information corresponding to the at least two frames of images to be positioned is determined; and, based on at least the second-type pose change information and the scale information, the target poses of the electronic device in the target coordinate system when the at least two frames of images to be positioned are collected are determined.
  • That is, the scaled first-type pose change information between the poses of the electronic device for the at least two currently collected images to be positioned is used to restore the size relationship between the coordinate system in which the images to be positioned are collected and the coordinate system in which the reference images were collected, so that scaled target poses can be obtained.
  • FIG. 1 is a schematic flowchart of a positioning method provided by an embodiment of this application.
  • FIG. 2 is a schematic flowchart of another positioning method provided by an embodiment of the application.
  • FIG. 3 is a schematic flowchart of another positioning method provided by an embodiment of this application.
  • FIG. 4 is a schematic diagram of a posture relationship of an electronic device according to an embodiment of the application.
  • FIG. 5 is a schematic diagram of the structural composition of a positioning device provided by an embodiment of the application.
  • FIG. 6 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the application.
  • In practical applications, electronic devices need to relocate in certain scenarios. For example, when the electronic device moves too fast, the overlapping area between two consecutively collected frames is too small for feature matching, resulting in tracking loss. As another example, when the electronic device loads an offline map, it needs to restore the current pose of the electronic device based on the existing information.
  • In one related technology, the way for electronic devices to relocate is to first find, in the offline map database, an image similar to the picture currently captured by the electronic device (i.e., the current frame) to serve as a reference frame, and then match the 2D features in the current frame with the 3D point cloud associated with the reference frame.
  • The 3D point cloud used for feature matching includes one or more descriptors and the 3D position information corresponding to the reference frame.
  • By solving the Perspective-n-Point (PnP) problem on the resulting 2D-3D matches, the coordinates of the current frame in the point-cloud coordinate system of the offline map are obtained.
  • However, this technology requires an offline map database that provides 3D point cloud information, and the point cloud information needs to include at least 3D position information and descriptor information in order to perform feature matching with the current frame and apply the PnP algorithm.
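  • As a concrete illustration of this related-art step, here is a minimal OpenCV sketch of recovering a camera pose from 2D-3D matches with RANSAC-based PnP. The intrinsics matrix `K` and the assumption of zero lens distortion are illustrative assumptions, not details given in this description:

```python
import cv2
import numpy as np

def pnp_pose(points_3d, points_2d, K):
    """points_3d: Nx3 map points; points_2d: Nx2 matched pixels; K: 3x3 intrinsics.
    Returns (R, t) mapping offline-map coordinates into the camera frame."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float64), points_2d.astype(np.float64), K, None)
    if not ok:
        raise RuntimeError("PnP failed: too few or inconsistent matches")
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
    return R, tvec.ravel()
```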
  • These capabilities are typically provided through a software development kit (SDK); for example, Apple mobile phones can use ARKit, and some Android phones can use ARCore.
  • The related technology can also perform 2D-2D feature matching between the current frame and the reference frame, then calculate the fundamental matrix or homography matrix between the two frames, and from it recover the relative pose of the electronic device between the two frames; however, this pose is scaleless, leading to inaccurate pose information after relocation.
  • FIG. 1 is a schematic flowchart of the positioning method provided in an embodiment of the application. As shown in FIG. 1, the positioning method includes the following steps:
  • Step 101: Acquire first-type pose change information between the poses of the electronic device when at least two frames of images to be positioned are collected.
  • the pose is used to characterize the position and/or posture of the electronic device.
  • the execution subject of step 101 may be a processor of an electronic device; the electronic device may be a drone, a robot, a virtual reality device, a smart terminal, and the like.
  • the image acquisition device may be activated to acquire images of the surrounding environment, where the image acquisition device may be a monocular camera, a binocular camera, a scanner, etc.
  • At least two frames of images to be positioned are collected from the surrounding environment, so that the specific location of the electronic device on the offline map can be determined from the at least two frames of images to be positioned.
  • Among the at least two collected frames of images to be positioned, at least part of the region in every two adjacent frames is the same.
  • The first-type pose change information between the poses of the electronic device when the at least two frames of images to be positioned are collected can be obtained from the movement of the electronic device during the collection of the at least two frames of images to be positioned.
  • For example, according to the rotation angle and the displacement generated during its movement while acquiring the first frame of image to be positioned and the second frame of image to be positioned, the electronic device can obtain the pose change information of the electronic device between collecting these two frames.
  • The first-type pose change information may include multiple pieces of pose change information, for example, the pose change information between the poses of the electronic device when every two adjacent frames (or every N frames) of images to be positioned are collected, or the pose change information between the pose of the electronic device when each frame of image to be positioned is collected and the pose when a target image to be positioned is collected (the target image to be positioned being one or more of the at least two frames of images to be positioned); the embodiments of this application are not limited in this respect.
  • Step 102: Determine the second-type pose change information between the pose of the electronic device when the at least two frames of images to be positioned are collected and the pose of the electronic device indicated by the corresponding reference image.
  • The reference image refers to an image, obtained from the preset map information, that matches the at least two frames of images to be positioned.
  • the preset map information includes a plurality of image information and the pose information of the electronic device corresponding to the plurality of image information; the preset map information may refer to an offline map.
  • In some embodiments, determining the second-type pose change information includes: obtaining the second-type pose change information based on the at least two frames of images to be positioned and the reference images respectively corresponding to the at least two frames of images to be positioned.
  • Specifically, the electronic device searches the preset map information for the image that best matches each of the at least two frames of images to be positioned, and uses it as the reference image corresponding to that image to be positioned. Further, the electronic device matches the image features in each frame of image to be positioned with the image features of the corresponding reference image; after obtaining the matching pairs, it calculates the fundamental matrix or homography matrix between the image to be positioned and its corresponding reference image, thereby obtaining the second-type pose change information between the pose of the electronic device corresponding to the image to be positioned and the pose of the electronic device corresponding to the reference image.
  • The second-type pose change information includes multiple pieces of pose change information, that is, the pose change information between the pose of the electronic device for each frame of image to be positioned and the pose of the electronic device for the corresponding reference image.
  • the reference image corresponding to each frame of the to-be-positioned image in the at least two frames of the to-be-positioned image may be the same image or different images.
  • Step 103: Based on the first-type pose change information and the second-type pose change information, determine the scale information corresponding to the at least two frames of images to be positioned.
  • the scale information is used to characterize the size relationship between the coordinate system when the electronic device collects the image to be positioned and the target coordinate system when the electronic device collects the corresponding reference image.
  • Specifically, based on the first-type pose change information, the electronic device may determine the position of each image to be positioned in the coordinate system in which the electronic device currently collects the images to be positioned (that is, the coordinate system of the images currently to be positioned), and, based on the second-type pose change information, determine the relative position of each image to be positioned in the target coordinate system.
  • the reference image is an image obtained from an offline map
  • the target coordinate system of the reference image may refer to the coordinate system of the offline map.
  • In this way, the position of each image to be positioned in the coordinate system in which the electronic device currently collects the images to be positioned can be merged with the relative position of each image to be positioned in the target coordinate system.
  • Because the coordinate system in which the images to be positioned are collected has a scale, this merging can restore the size relationship between the coordinate system in which the electronic device currently collects the images to be positioned and the target coordinate system in which the electronic device collected the corresponding reference images, that is, the scale information.
  • Step 104: Based on the second-type pose change information and the scale information, determine the target poses of the electronic device in the target coordinate system when the at least two frames of images to be positioned are collected.
  • In step 102, the scaleless second-type pose change information between the pose of the electronic device when each frame of image to be positioned is collected and the pose when the corresponding reference image was collected is determined; in step 103, the size relationship between the coordinate system of the images to be positioned and the coordinate system of the reference images is determined. In this way, the electronic device can determine the scaled target pose of each frame of image to be positioned relative to the coordinate system of the reference image, that is, the target pose in the target coordinate system.
  • In summary, the positioning method provided in the embodiments of the present application uses the scaled first-type pose change information between the poses of the electronic device for at least two currently collected frames of images to be positioned to restore the size relationship between the coordinate system in which the images to be positioned are collected and the coordinate system in which the reference images were collected, thereby obtaining a scaled target pose of the electronic device for each image to be positioned, which improves positioning accuracy. In addition, the embodiments provided in this application only require the pose information and image information of the reference images, without computing 3D point cloud information for them, which greatly simplifies map building, offers higher compatibility, and makes the map occupy less memory.
  • FIG. 2 is a schematic flowchart of the positioning method provided by an embodiment of the application. As shown in FIG. 2, the positioning method includes the following steps:
  • Step 201: Collect at least two frames of images to be positioned.
  • the electronic device can collect N frames of images to be positioned; N is an integer greater than or equal to 2.
  • In some embodiments, the principle for collecting the N frames of images to be positioned is that at least part of the area in every two adjacent collected frames is the same, but the image areas of two adjacent frames are not completely the same. This ensures that there is a certain distance between the frames collected by the electronic device, but that the distance is not too far. If the distance is too close, the noise of SLAM has a greater impact, making the pose change between two adjacent frames inaccurate; if the distance is too far, there is a large parallax between the multiple images to be positioned and the same reference frame corresponding to them, which results in too few matching points, so that the relative pose cannot be recovered.
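  • A minimal sketch of this frame-spacing rule in Python follows; the 20% lost-fraction criterion follows the second-frame selection rule given later in this description, and the feature-ID bookkeeping is an assumed placeholder rather than part of the application:

```python
def accept_as_next_frame(ids_first_frame, ids_current_frame, lost_fraction=0.2):
    """Accept the current frame as the next image to be positioned once enough
    of the features tracked in the first frame are no longer observed (frames
    far enough apart for a reliable pose change), while some overlap remains
    (frames close enough to match against the same reference image)."""
    first = set(ids_first_frame)
    still_seen = first & set(ids_current_frame)
    lost = 1.0 - len(still_seen) / max(len(first), 1)
    return lost >= lost_fraction and len(still_seen) > 0
```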
  • Step 202: Obtain motion parameters of the electronic device in the process of collecting the at least two frames of images to be positioned.
  • Step 203: Determine, based on the motion parameters, the first-type pose change information between the poses of the electronic device when the at least two frames of images to be positioned are collected.
  • Specifically, the first-type pose change information between the pose of the electronic device when each frame of image to be positioned is acquired and the pose when the target image to be positioned is acquired can be calculated based on the motion parameters of the electronic device.
  • For example, taking the first frame as the target image to be positioned, the change information between the pose of the electronic device when each frame of image to be positioned is collected and the pose when the first frame is collected gives the first-type pose change information $\{P_{1,2}, P_{1,3}, \ldots, P_{1,N}\}$.
  • the first type of pose change information here is determined by the motion parameters of the electronic device, where each element includes the rotation angle and displacement; it should be noted that the displacement here is a displacement with a scale.
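  • The following is a small sketch of how such scaled first-type pose changes $\{P_{1,2}, \ldots, P_{1,N}\}$ could be accumulated from per-step motion parameters using 4x4 homogeneous transforms; the per-step rotations and translations are assumed inputs (for example, from SLAM tracking), not a specific interface of the application:

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def accumulate_first_type(steps):
    """Chain per-step scaled motions (R_k, t_k) between consecutive frames
    into the pose changes P_{1,i} relative to the first frame, i = 2..N."""
    poses, P = [], np.eye(4)
    for R, t in steps:
        P = P @ make_pose(R, t)  # compose consecutive relative motions
        poses.append(P.copy())
    return poses  # [P_{1,2}, P_{1,3}, ..., P_{1,N}]
```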
  • Step 204: Acquire, from the preset map information of the environment, at least one reference image corresponding to the at least two frames of images to be positioned.
  • In some embodiments, acquiring the at least one reference image corresponding to the at least two frames of images to be positioned from the preset map information of the environment includes the following.
  • The relevant information of at least one feature point in the i-th frame of image to be positioned is extracted and clustered to obtain target feature information, and an image matching the target feature information is searched for in the preset map information of the environment and used as the reference image corresponding to the i-th frame of image to be positioned.
  • The clustering process is used to group the related information of similar feature points into the same set; here i is an integer greater than or equal to 1 and less than or equal to N.
  • the feature point may be a FAST corner point in the image to be located.
  • multiple feature points in the image to be located can be extracted, where the number of feature points extracted is dynamic and can be determined according to the scene of the current image. Specifically, more feature points are extracted in scenes with complex textures, and fewer feature points are extracted in scenes with weak textures, such as white walls.
  • The number of feature points usually extracted is between 500 and 1500.
  • In some embodiments, after obtaining the relevant information of the feature points, the electronic device further extracts the BRIEF feature from the pixels within a 31x31 range around each feature point, and describes each feature point by its BRIEF feature.
  • The BRIEF feature is a 256-bit binary code; that is, there are $2^{256}$ possible BRIEF features.
  • Clustering the similar BRIEF features among the BRIEF features of the at least one feature point may be done by calculating the bag-of-words model feature of each BRIEF feature.
  • The bag-of-words model over BRIEF features is an offline-trained tree structure.
  • the bag-of-words model can map similar features to the same node in the tree structure according to the actual feature points in the real images in the training set.
  • According to the target feature of the i-th frame of image to be positioned, the electronic device can find, in the preset map information corresponding to the current environment, the image that best matches that target feature, and use it as the reference image corresponding to the i-th frame of image to be positioned.
  • the electronic device can respectively find the N frames of reference images that best match the N frames of images to be positioned.
  • In some embodiments, when it is detected that the k-th frame of image to be positioned is sufficiently similar to the i-th frame of image to be positioned, as measured against a preset similarity threshold, the reference image corresponding to the i-th frame of image to be positioned is taken as the reference image corresponding to the k-th frame; k is an integer greater than or equal to 1 and less than or equal to N, and k is not equal to i.
  • That is, the reference image corresponding to the i-th frame of image to be positioned may be reused as the reference image corresponding to the k-th frame of image to be positioned. In this way, the time spent on image matching can be saved and the efficiency of positioning improved.
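  • A minimal sketch of this retrieval idea follows, using a flat scikit-learn KMeans vocabulary and a normalized word histogram in place of the offline-trained tree structure (a simplifying assumption; the description itself specifies a tree-structured bag-of-words model over BRIEF features):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors, n_words=64, seed=0):
    """Cluster training descriptors into visual words (flat stand-in for the tree)."""
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(all_descriptors)

def bow_vector(vocab, descriptors):
    """L2-normalized histogram of visual-word occurrences for one image."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def find_reference(vocab, query_descriptors, map_descriptors_per_image):
    """Return the index of the map image most similar to the query image."""
    q = bow_vector(vocab, query_descriptors)
    scores = [q @ bow_vector(vocab, d) for d in map_descriptors_per_image]
    return int(np.argmax(scores))  # best-matching reference image
```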
  • Step 205: Obtain the second-type pose change information based on the at least two frames of images to be positioned and the reference images respectively corresponding to the at least two frames of images to be positioned.
  • In some embodiments, obtaining the second-type pose change information includes the following.
  • In step 204, the bag-of-words model feature of the i-th frame of image to be positioned has already been calculated; that is, the BRIEF feature of each feature point of the i-th frame of image to be positioned corresponds to a node of the bag-of-words model tree structure.
  • Each BRIEF feature has a unique bag-of-words model node number.
  • The electronic device can obtain the bag-of-words model of the reference image corresponding to the i-th frame of image to be positioned, match the BRIEF features of the i-th frame of image to be positioned and of its corresponding reference image within the same bag-of-words node number, and calculate the Hamming distance between two BRIEF features; the smaller the Hamming distance, the more similar the two BRIEF features. For each BRIEF feature of the i-th frame of image to be positioned, the features with the smallest and the second-smallest Hamming distance are obtained from the reference image.
  • If the Hamming distance between a BRIEF feature in the i-th frame of image to be positioned and a BRIEF feature in the reference image is less than a preset distance threshold (for example, 50), and the smallest Hamming distance is less than 0.9 times the second-smallest Hamming distance, the pair of BRIEF features is considered a successfully matched feature set. Being below the preset distance threshold indicates that the Hamming distance between the features is small enough that the two BRIEF features are sufficiently similar; the smallest Hamming distance being less than 0.9 times the second-smallest indicates that the match is highly distinctive, with no other similar match.
  • Matching under these constraints has a higher accuracy rate than general brute-force matching, and since the bag-of-words model features of the i-th frame of image to be positioned were already calculated when searching for the reference image, they do not need to be recomputed.
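  • A minimal sketch of this matching rule, with the thresholds named above (50 and 0.9), operating on 256-bit BRIEF descriptors stored as 32 uint8 bytes; for brevity it searches all reference descriptors by brute force, whereas the described method restricts candidates to the same bag-of-words node:

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two 256-bit descriptors (arrays of 32 uint8)."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def match_features(desc_query, desc_ref, max_dist=50, ratio=0.9):
    """Keep (query, reference) pairs whose best distance is under the threshold
    and beats the second-best distance by the ratio test described above.
    Assumes desc_ref contains at least two descriptors."""
    matches = []
    for qi, dq in enumerate(desc_query):
        dists = sorted((hamming(dq, dr), ri) for ri, dr in enumerate(desc_ref))
        (best, ri), (second, _) = dists[0], dists[1]
        if best < max_dist and best < ratio * second:
            matches.append((qi, ri))  # distinctive and sufficiently similar
    return matches
```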
  • After the matching features are obtained, the Random Sample Consensus (RANSAC) algorithm and the eight-point method are used to calculate the fundamental matrix and the homography matrix between the i-th frame of image to be positioned and the corresponding reference image; by comparing the number of inliers, the matrix suited to the current scene is selected from the two, and the second-type pose change information between the electronic device when the i-th frame of image to be positioned was collected and the electronic device when the reference image was collected is recovered.
  • In this way, the second-type pose change information between the N frames of images to be positioned and their corresponding reference images can be obtained: $\{P_{1,\mathrm{map}(1)}, P_{2,\mathrm{map}(2)}, \ldots, P_{N,\mathrm{map}(N)}\}$, where $\mathrm{map}(i)$ denotes the ID of the reference image corresponding to the i-th frame of image to be positioned, and $i$ is an integer greater than or equal to 1 and less than or equal to N.
  • The second-type pose change information here includes the rotation and the scaleless displacement of the electronic device between collecting each image to be positioned and collecting its corresponding reference image; the displacement has a direction, and its modulus is normalized to 1.
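  • A minimal OpenCV sketch of this step is shown below, assuming matched pixel coordinates and a known intrinsics matrix `K` (assumptions; the description does not name a library). OpenCV's RANSAC-based fundamental-matrix estimation plays the role of the RANSAC plus eight-point computation, the model with more inliers is kept, and the recovered translation is unit-norm, matching the scaleless displacement described above:

```python
import cv2
import numpy as np

def relative_pose_2d2d(pts1, pts2, K):
    """pts1, pts2: Nx2 arrays of matched pixel coordinates; K: 3x3 intrinsics.
    Returns (R, t) with ||t|| = 1, i.e. a scaleless second-type pose change."""
    F, inl_f = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
    H, inl_h = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)

    if inl_f is not None and (inl_h is None or inl_f.sum() >= inl_h.sum()):
        E = K.T @ F @ K                        # essential matrix from F
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
        return R, t.ravel() / np.linalg.norm(t)
    # Planar or low-parallax scene: decompose the homography instead.
    _, Rs, ts, _ = cv2.decomposeHomographyMat(H, K)
    t = ts[0].ravel()                          # first of up to four candidates;
    return Rs[0], t / np.linalg.norm(t)        # candidate disambiguation omitted
```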
  • Step 206: Based on the first-type pose change information, determine the first-type positions of the electronic device, when the at least two frames of images to be positioned are collected, in the coordinate system in which the images to be positioned are collected.
  • In the foregoing steps, the first-type pose change information between the pose of the electronic device when each frame of image to be positioned is collected and the pose when the first frame of image to be positioned is collected has been obtained: $\{P_{1,2}, P_{1,3}, \ldots, P_{1,N}\}$.
  • The position of the electronic device's pose when acquiring each frame of image to be positioned, expressed in the coordinate system of the first frame, can then be obtained by formula (2-1), giving the first-type positions:

$$t_i = P_{1,i}\, t_1, \qquad t_1 = (0, 0, 0, 1)^{\top} \qquad (2\text{-}1)$$

  • Here $t_i$ is a coordinate in a homogeneous coordinate system, for example $t_i = (x_i, y_i, z_i, 1)^{\top}$, and $i$ is an integer greater than or equal to 1 and less than or equal to N. Within $P_{1,i}$, $R_{1,i}$ refers to the rotation between the pose of the electronic device when the first frame of image to be positioned is collected and the pose when the i-th frame is collected, and $t_{1,i}$ refers to the displacement between those two poses; the last element of each homogeneous coordinate is 1. It should be noted that $t_{1,i}$ is a displacement with scale information.
  • Step 207: Based on the second-type pose change information, determine the second-type positions of the electronic device, when the at least two frames of images to be positioned are collected, in the target coordinate system in which the reference images were collected.
  • In the foregoing steps, the second-type pose change information between each frame of image to be positioned and its reference image has been obtained: $\{P_{1,\mathrm{map}(1)}, P_{2,\mathrm{map}(2)}, \ldots, P_{N,\mathrm{map}(N)}\}$. Specifically, $R_{i,\mathrm{map}(i)}$ refers to the rotation between the pose of the electronic device when the i-th frame of image to be positioned is collected and the pose of the electronic device represented by the corresponding reference image; $t_{i,\mathrm{map}(i)}$ refers to the displacement between those two poses, with its modulus normalized to 1; and $s_i$ refers to the scale information of the i-th frame of image to be positioned.
  • The second-type positions of the electronic device in the target coordinate system when the at least two frames of images to be positioned are collected can then be obtained by formula (2-2).
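  • A form of formula (2-2) consistent with the definitions above, under the assumed conventions that the pose $(R_{\mathrm{map}(i)}, t_{\mathrm{map}(i)})$ of each reference image in the target coordinate system is known from the preset map information and that $t_{i,\mathrm{map}(i)}$ is expressed in the reference image's frame, is:

$$t_i^{\mathrm{target}} = t_{\mathrm{map}(i)} + s_i\, R_{\mathrm{map}(i)}\, t_{i,\mathrm{map}(i)}$$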
  • Step 208: Based on the first-type positions and the second-type positions, determine the scale information corresponding to the at least two frames of images to be positioned.
  • In some embodiments, determining the scale information corresponding to the at least two frames of images to be positioned includes: establishing a target optimization equation based on the first-type positions and the second-type positions, and determining, based on the target optimization equation, the scale information corresponding to the at least two frames of images to be positioned.
  • Step 209: Based on the second-type pose change information and the scale information, determine the target poses of the electronic device in the target coordinate system when the at least two frames of images to be positioned are collected.
  • In this way, the scaled relative pose of each frame of image to be positioned with respect to its corresponding reference image is solved, giving the target pose of the electronic device in the offline map coordinate system when each frame of image to be positioned was collected.
  • In the embodiments of the present application, at least two frames of images to be positioned collected in SLAM are used to calculate the scaleless relative poses of the electronic device with respect to the matching reference images in the offline map, and the scaled relative pose between the electronic devices corresponding to the at least two images to be positioned is then used to recover the scaled poses with respect to the reference images, determining the target pose of each frame of image to be positioned in the offline map coordinate system.
  • This method requires no point cloud information, only the pose and image information of the electronic device when the reference images were collected, so it makes mapping simpler, offers higher compatibility, and yields a smaller map size.
  • In an exemplary embodiment, FIG. 3 is a schematic flowchart of the positioning method provided by an embodiment of the application. As shown in FIG. 3, the positioning method includes the following steps:
  • Step 301: Collect the first frame of image to be positioned, and obtain a reference image corresponding to the first frame of image to be positioned from the offline map.
  • At least one feature point, such as a FAST corner point, is extracted from the first frame of image to be positioned, and then the BRIEF feature is extracted around each feature point.
  • SLAM extracts 150 feature points in order to track the pose and real-time mapping of the electronic device.
  • In addition, extra feature points are also extracted; their number is dynamic, with more feature points in scenes with complex textures and fewer in generally weak-texture scenes such as white walls. Generally, the number of extracted feature points is between 500 and 1500.
  • the BRIEF feature is extracted from the pixels within the range of 31x31 around each feature point, and this feature point is described by the BRIEF feature.
  • Step 302: Obtain the second frame of image to be positioned, and determine the first-type pose change information between the pose of the electronic device when the first frame of image to be positioned is collected and the pose when the second frame of image to be positioned is collected.
  • The method for selecting the second frame of image to be positioned here is: if 20% of the tracked feature points in a captured frame are not observed in the first frame of image to be positioned, that frame is taken as the second frame of image to be positioned.
  • the purpose of selecting the second frame to be positioned is to ensure a certain distance from the first frame to be positioned, but the distance is not too far. If the distance is too close, the noise of SLAM may have a greater impact, resulting in inaccurate information about the first type of pose change between two frames. When the two frames are far apart, the noise of SLAM is relatively small relative to this distance, and the pose transformation is more reliable.
  • Since the second frame of image to be positioned selected under the above principles has a certain degree of similarity with the first frame of image to be positioned, the reference image of the first frame of image to be positioned can be used as the reference image of the second frame.
  • The reason for selecting two frames of images to be positioned is that relocation based on more frames obviously requires waiting for more frames, involves a more complicated calculation, and makes the user spend more time on relocation, resulting in a poor experience.
  • Moreover, subsequent frames may have a large parallax with the first frame of image to be positioned, and hence also with the reference image, which results in too few matching points, so that the relative pose cannot be recovered.
  • Step 303: Based on the first frame of image to be positioned, the second frame of image to be positioned, and the reference image, respectively obtain the second-type pose change information between the pose of the electronic device when the first frame of image to be positioned is collected and the pose when the reference image was collected, and between the pose when the second frame of image to be positioned is collected and the pose when the reference image was collected.
  • Here, the bag-of-words model is used as a constraint for matching. The specific principle is that the bag-of-words model of the BRIEF descriptor is an offline-trained tree structure.
  • the bag-of-words model can put similar features into the nodes of the same bag-of-words model based on the actual feature points in the real images in the training set.
  • each BRIEF feature has a unique bag-of-words model node number, and one bag-of-words model node number corresponds to multiple BRIEF features.
  • For two BRIEF features within the same bag-of-words node, the Hamming distance is calculated: the smaller the Hamming distance, the more similar the two BRIEF features. For each feature of the first frame of image to be positioned, the reference-image features with the smallest and the second-smallest Hamming distance are obtained.
  • If the Hamming distance between a BRIEF feature in the first frame of image to be positioned and a BRIEF feature in the reference image is less than a preset distance threshold (for example, 50), and the smallest Hamming distance is less than 0.9 times the second-smallest Hamming distance, the pair of BRIEF features is considered a successful match. Being below the preset distance threshold indicates that the Hamming distance between the features is small enough that the two BRIEF features are sufficiently similar; the smallest Hamming distance being less than 0.9 times the second-smallest indicates that the match is highly distinctive, with no other similar match.
  • RANSAC and the eight-point method are then used to calculate the fundamental matrix and the homography matrix between the first frame of image to be positioned and the corresponding reference image; by comparing the number of inliers, the matrix suited to the current scene is selected, and the second-type pose change information between the first frame of image to be positioned and the reference image is recovered.
  • Similarly, the second-type pose change information between the pose of the electronic device for the second frame of image to be positioned and the pose of the electronic device for the reference image is obtained.
  • Step 304: Based on the first-type pose change information and the second-type pose change information, obtain the scale information of the first frame of image to be positioned and the second frame of image to be positioned.
  • The geometric relationship between the pose of the electronic device when the first frame of image to be positioned is collected, the pose when the second frame of image to be positioned is collected, and the pose when the reference image was collected can be as shown in FIG. 4.
  • Electronic device A represents the pose of the electronic device when the reference image was collected; electronic devices B and C represent the poses of the electronic device when the first and second frames of images to be positioned were collected by SLAM during relocation, respectively.
  • In step 303, the scaleless second-type pose change information $P_{AB}$ and $P_{AC}$ between electronic device A and electronic devices B and C, respectively, has been calculated, where:
  • $R_{AB}$ refers to the rotation from electronic device A to electronic device B, and $R_{AC}$ to the rotation from electronic device A to electronic device C;
  • $t_{AB}$ refers to the displacement from electronic device A to electronic device B, and $t_{AC}$ to the displacement from electronic device A to electronic device C, both with modulus normalized to 1;
  • $s_1$ and $s_2$ refer to the scale information of the displacements $t_{AB}$ and $t_{AC}$, respectively.
  • The position coordinates, in the target coordinate system of the reference image (i.e., the offline map coordinate system), of the electronic device's pose when the first frame of image to be positioned was collected can be obtained by formula (3-1), and those for the second frame by formula (3-2). Taking the pose of electronic device A as the reference and writing the scaleless directions in that frame, these can be expressed as:

$$t_B = s_1\, t_{AB} \qquad (3\text{-}1)$$

$$t_C = s_2\, t_{AC} \qquad (3\text{-}2)$$

  • Here $t_B$ is the position coordinate of the electronic device when the first frame of image to be positioned was collected, and $t_C$ is the position coordinate when the second frame was collected. The $t_C$ in formula (3-2) can also be obtained from the scaled first-type pose change information of the electronic device between collecting the first and second frames of images to be positioned.
  • Combining these relations yields formula (3-5), which can be processed to solve for the least-squares solution of $s_1$ and $s_2$, as sketched below.
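  • A minimal numerical sketch of this least-squares solve, under the assumption that the unit-norm directions $t_{AB}$ and $t_{AC}$ and the scaled SLAM displacement from B to C are all expressed in A's (offline map) coordinate frame: the positions then satisfy $s_2\, t_{AC} - s_1\, t_{AB} = t_{BC}$, which is linear in $(s_1, s_2)$:

```python
import numpy as np

def solve_scales(t_ab, t_ac, t_bc):
    """Least-squares solution of  s2 * t_ac - s1 * t_ab = t_bc  for (s1, s2).

    t_ab, t_ac -- unit-norm scaleless displacement directions, shape (3,)
    t_bc       -- scaled SLAM displacement from B to C, in the same frame
    """
    A = np.column_stack((-t_ab, t_ac))        # 3x2 design matrix
    s, *_ = np.linalg.lstsq(A, t_bc, rcond=None)
    return s                                   # array([s1, s2])

# Self-check with synthetic geometry: place B and C relative to A.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p_b, p_c = rng.normal(size=3), rng.normal(size=3)
    s1, s2 = solve_scales(p_b / np.linalg.norm(p_b),
                          p_c / np.linalg.norm(p_c),
                          p_c - p_b)
    # Recovered scales match the true distances ||p_b|| and ||p_c||.
    assert np.allclose([s1, s2], [np.linalg.norm(p_b), np.linalg.norm(p_c)])
```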
  • Step 305: Based on the second-type pose change information and the scale information, obtain the target pose of the electronic device in the offline map coordinate system when the first frame of image to be positioned was collected, and the target pose of the electronic device in the offline map coordinate system when the second frame of image to be positioned was collected.
  • The positioning method provided by this embodiment of the present application uses the two current SLAM frames of images to be positioned and the corresponding reference image in the offline map to calculate scaleless relative poses, and then uses the scaled relative pose between the two frames of images to be positioned to recover the scaled poses between the two frames and the reference image.
  • This solution does not require 3D point cloud information, only the pose and image information of the reference image, so it makes map creation easier, offers higher compatibility, and yields a smaller map size.
  • Based on the foregoing embodiments, an embodiment of the present application also provides a positioning apparatus, which can be applied to the electronic device described above. Referring to the schematic diagram of the structural composition of the positioning apparatus shown in FIG. 5, the positioning apparatus includes:
  • the first-type pose change information determining unit 51, configured to acquire the first-type pose change information between the poses of the electronic device when at least two frames of images to be positioned are collected, wherein the pose is used to characterize the position and/or posture of the electronic device;
  • the second-type pose change information determining unit 52, configured to determine the second-type pose change information between the pose of the electronic device when the at least two frames of images to be positioned are collected and the pose of the electronic device indicated by the corresponding reference image;
  • the scale information determining unit 53, configured to determine, based on the first-type pose change information and the second-type pose change information, the scale information corresponding to the at least two frames of images to be positioned, the scale information being used to characterize the size relationship between the coordinate system in which the electronic device collects the images to be positioned and the target coordinate system in which the electronic device collects the corresponding reference images;
  • the positioning processing unit 54, configured to determine, based on the second-type pose change information and the scale information, the target poses of the electronic device in the target coordinate system when the at least two frames of images to be positioned are collected.
  • In some embodiments, the first-type pose change information determining unit 51 is configured to acquire the motion parameters of the electronic device in the process of collecting the at least two frames of images to be positioned, and to determine, based on the motion parameters, the first-type pose change information between the poses of the electronic device when the at least two frames of images to be positioned are collected.
  • In some embodiments, the second-type pose change information determining unit 52 is configured to acquire, from the preset map information of the environment, at least one reference image corresponding to the at least two frames of images to be positioned, and to obtain the second-type pose change information based on the at least two frames of images to be positioned and their respectively corresponding reference images.
  • In some embodiments, the scale information determining unit 53 is configured to: based on the first-type pose change information, determine the first-type positions of the electronic device, when the at least two frames of images to be positioned are collected, in the coordinate system in which the images to be positioned are collected; based on the second-type pose change information, determine the second-type positions of the electronic device, when the at least two frames of images to be positioned are collected, in the target coordinate system in which the reference images were collected; and, based on the first-type positions and the second-type positions, determine the scale information corresponding to the at least two frames of images to be positioned.
  • In some embodiments, the scale information determining unit 53 is further configured to establish a target optimization equation based on the first-type positions and the second-type positions, and to determine, based on the target optimization equation, the scale information corresponding to the at least two frames of images to be positioned.
  • In some embodiments, the second-type pose change information determining unit 52 is further configured to extract the relevant information of at least one feature point in the i-th frame of image to be positioned and cluster the relevant information of the at least one feature point to obtain target feature information, the clustering process being used to group the related information of similar feature points into the same set, where i is an integer greater than or equal to 1 and less than or equal to N, and N is the total number of frames of the at least two frames of images to be positioned; and, based on the target feature information, to search the preset map information of the environment for an image matching the target feature information, obtaining the reference image corresponding to the i-th frame of image to be positioned.
  • In some embodiments, the second-type pose change information determining unit 52 is further configured to: when it is detected that the k-th frame of image to be positioned is sufficiently similar to the i-th frame of image to be positioned, as measured against a preset similarity threshold, take the reference image corresponding to the i-th frame of image to be positioned as the reference image corresponding to the k-th frame of image to be positioned.
  • Here k is an integer greater than or equal to 1 and less than or equal to N, and k is not equal to i.
  • In some embodiments, the second-type pose change information determining unit 52 is further configured to match the target feature information corresponding to the i-th frame of image to be positioned with the feature information of the reference image to obtain at least one successfully matched feature set, and to determine, based on the at least one matched feature set, the second-type pose change information between the pose of the electronic device when the i-th frame of image to be positioned is collected and the pose of the electronic device indicated by the corresponding reference image.
  • In some embodiments, among the at least two frames of images to be positioned, at least part of the area of every two adjacent frames is the same, and other parts of the area are different.
  • an embodiment of the present application also provides an electronic device.
  • As shown in FIG. 6, the electronic device 60 includes a processor 61 and a memory 62 configured to store a computer program runnable on the processor; when the processor 61 runs the computer program, it executes the method steps of the foregoing embodiments.
  • the communication bus 63 is used to implement connection and communication between these components.
  • the communication bus 63 also includes a power bus, a control bus, and a status signal bus.
  • various buses are marked as the communication bus 63 in FIG. 6.
  • the terminal is usually a mobile terminal with a front dual camera or a rear dual camera function, and the mobile terminal may be implemented in various forms.
  • the mobile terminal described in an exemplary embodiment of the present application may include a mobile phone, a tablet computer, a palmtop computer, a personal digital assistant (Personal Digital Assistant, PDA), etc.
  • An exemplary embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the positioning method provided in the foregoing embodiments are implemented.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • The division of the units is only a logical functional division; there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • The coupling, direct coupling, or communication connection between the components shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • The units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of an exemplary embodiment of the present application.
  • the functional units in the embodiments of the present application can all be integrated into one processing unit, or each unit can be individually used as a unit, or two or more units can be integrated into one unit;
  • the unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • the foregoing program can be stored in a computer readable storage medium.
  • When the program is executed, the steps of the foregoing method embodiments are performed; the foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disc.
  • Alternatively, if the above-mentioned integrated unit of this application is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of an exemplary embodiment of the present application, in essence or in the part contributing over the related technologies, can be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions for causing a terminal to execute all or part of the method described in each embodiment of the present application.
  • the aforementioned storage media include: removable storage devices, ROMs, magnetic disks or optical discs and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a positioning method, an electronic device, and a computer-readable storage medium. The method comprises: acquiring first-type pose change information between poses of an electronic device when at least two frames of an image to be positioned are collected (101); determining second-type pose change information between the pose of the electronic device when the at least two frames of the image to be positioned are collected and the pose of the electronic device indicated by a corresponding reference image (102); determining, on the basis of the first-type pose change information and the second-type pose change information, scale information respectively corresponding to the at least two frames of the image to be positioned (103), the scale information being used to represent a size relationship between a coordinate system in which the electronic device collects an image to be positioned and a target coordinate system in which the electronic device collects a corresponding reference image; and determining, on the basis of the second-type pose change information and the scale information, the target poses respectively corresponding to the electronic device in the target coordinate system when the at least two frames of the image to be positioned are collected (104).
PCT/CN2020/097660 2019-06-27 2020-06-23 Positioning method and apparatus, electronic device and readable storage medium Ceased WO2020259481A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910569873.7A CN110310333B (zh) 2019-06-27 2019-06-27 Positioning method, electronic device, and readable storage medium
CN201910569873.7 2019-06-27

Publications (1)

Publication Number Publication Date
WO2020259481A1 true WO2020259481A1 (fr) 2020-12-30

Family

ID=68076901

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/097660 Ceased WO2020259481A1 (fr) Positioning method and apparatus, electronic device and readable storage medium

Country Status (2)

Country Link
CN (1) CN110310333B (fr)
WO (1) WO2020259481A1 (fr)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112712561A (zh) * 2021-01-05 2021-04-27 北京三快在线科技有限公司 Mapping method and apparatus, storage medium, and electronic device
CN112880675A (zh) * 2021-01-22 2021-06-01 京东数科海益信息科技有限公司 Pose smoothing method and apparatus for visual positioning, terminal, and mobile robot
CN112926578A (zh) * 2021-02-23 2021-06-08 北京三快在线科技有限公司 Picture positioning method and apparatus, electronic device, and computer-readable storage medium
CN113223184A (zh) * 2021-05-26 2021-08-06 北京奇艺世纪科技有限公司 Image processing method and apparatus, electronic device, and storage medium
CN114119750A (zh) * 2021-11-25 2022-03-01 上海库灵科技有限公司 Method and apparatus for recognizing the three-dimensional spatial pose of objects for robotic-arm grasping
CN114485607A (zh) * 2021-12-02 2022-05-13 陕西欧卡电子智能科技有限公司 Method for determining a motion trajectory, operating equipment, apparatus, and storage medium
CN114763992A (zh) * 2021-01-14 2022-07-19 未岚大陆(北京)科技有限公司 Map building method, positioning method, apparatus, self-moving device, and medium
CN114898084A (zh) * 2022-04-18 2022-08-12 荣耀终端有限公司 Visual positioning method, device, and storage medium
CN115170652A (zh) * 2021-04-06 2022-10-11 阿里巴巴新加坡控股有限公司 Global relocalization method and apparatus, electronic device, and computer storage medium
CN115424353A (zh) * 2022-09-07 2022-12-02 杭银消费金融股份有限公司 AI-model-based method and system for identifying business-user features
US20230063099A1 (en) * 2021-08-24 2023-03-02 Beijing Baidu Netcom Science Technology Co., Ltd. Method and apparatus for correcting positioning information, and storage medium
WO2023103915A1 (fr) * 2021-12-08 2023-06-15 中兴通讯股份有限公司 Target recognition method, electronic device, and storage medium

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110310333B (zh) * 2019-06-27 2021-08-31 Oppo广东移动通信有限公司 Positioning method, electronic device, and readable storage medium
CN110866953B (zh) * 2019-10-31 2023-12-29 Oppo广东移动通信有限公司 Map construction method and apparatus, and positioning method and apparatus
CN111148218B (zh) * 2019-12-20 2022-03-25 联想(北京)有限公司 Information processing method, device, and computer-readable storage medium
CN111145339B (zh) * 2019-12-25 2023-06-02 Oppo广东移动通信有限公司 Image processing method and apparatus, device, and storage medium
CN111339228B (zh) * 2020-02-18 2023-08-11 Oppo广东移动通信有限公司 Map updating method, device, cloud server, and storage medium
CN111862213B (zh) * 2020-07-29 2024-12-06 Oppo广东移动通信有限公司 Positioning method and apparatus, electronic device, and computer-readable storage medium
CN111950642B (zh) * 2020-08-17 2024-06-21 联想(北京)有限公司 Relocalization method and electronic device
CN112802097B (zh) * 2020-12-30 2024-07-12 深圳市慧鲤科技有限公司 Positioning method and apparatus, electronic device, and storage medium
CN113326394A (zh) * 2021-06-30 2021-08-31 合肥高维数据技术有限公司 Vector-graphic watermark embedding and tracing method and system
CN119478036A (zh) * 2023-08-11 2025-02-18 北京字跳网络技术有限公司 Pose detection method and apparatus, electronic device, and storage medium

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8358311B1 (en) * 2007-10-23 2013-01-22 Pixar Interpolation between model poses using inverse kinematics
CN101619985B (zh) * 2009-08-06 2011-05-04 上海交通大学 Autonomous navigation method for service robots based on a deformable topological map
CN104374395A (zh) * 2014-03-31 2015-02-25 南京邮电大学 Graph-based visual SLAM method
CN106092104B (zh) * 2016-08-26 2019-03-15 深圳微服机器人科技有限公司 Relocalization method and apparatus for an indoor robot
CN106846497B (zh) * 2017-03-07 2020-07-10 百度在线网络技术(北京)有限公司 Method and apparatus, applied to a terminal, for presenting a three-dimensional map
CN108109175A (zh) * 2017-12-20 2018-06-01 北京搜狐新媒体信息技术有限公司 Method and apparatus for tracking image feature points
CN108364319B (zh) * 2018-02-12 2022-02-01 腾讯科技(深圳)有限公司 Scale determination method and apparatus, storage medium, and device
CN108460779B (зh) * 2018-02-12 2021-09-24 浙江大学 Image-based visual positioning method for mobile robots in dynamic environments
CN108615248B (zh) * 2018-04-27 2022-04-05 腾讯科技(深圳)有限公司 Relocalization method, apparatus, device, and storage medium for camera pose tracking
CN109087359B (zh) * 2018-08-30 2020-12-08 杭州易现先进科技有限公司 Pose determination method, pose determination apparatus, medium, and computing device
CN109671120A (zh) * 2018-11-08 2019-04-23 南京华捷艾米软件科技有限公司 Monocular SLAM initialization method and system based on a wheel encoder
CN109544615B (zh) * 2018-11-23 2021-08-24 深圳市腾讯信息技术有限公司 Image-based relocalization method, apparatus, terminal, and storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529538A (zh) * 2016-11-24 2017-03-22 腾讯科技(深圳)有限公司 Positioning method and apparatus for an aircraft
CN108629843A (zh) * 2017-03-24 2018-10-09 成都理想境界科技有限公司 Method and device for implementing augmented reality
US20190102891A1 (en) * 2017-09-29 2019-04-04 Nanjing Avatarmind Robot Technology Co., Ltd. Method and system for displaying target image based on robot
CN108648235A (zh) * 2018-04-27 2018-10-12 腾讯科技(深圳)有限公司 Relocalization method, apparatus, and storage medium for camera pose tracking
CN108805917A (zh) * 2018-05-25 2018-11-13 网易(杭州)网络有限公司 Spatial positioning method, medium, apparatus, and computing device
CN109472828A (zh) * 2018-10-26 2019-03-15 达闼科技(北京)有限公司 Positioning method and apparatus, electronic device, and computer-readable storage medium
CN109540148A (zh) * 2018-12-04 2019-03-29 广州小鹏汽车科技有限公司 SLAM-map-based positioning method and system
CN109887032A (zh) * 2019-02-22 2019-06-14 广州小鹏汽车科技有限公司 Vehicle positioning method and system based on monocular visual SLAM
CN110310333A (zh) * 2019-06-27 2019-10-08 Oppo广东移动通信有限公司 Positioning method, electronic device, and readable storage medium

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112712561A (zh) * 2021-01-05 2021-04-27 北京三快在线科技有限公司 Mapping method and apparatus, storage medium, and electronic device
CN114763992A (zh) * 2021-01-14 2022-07-19 未岚大陆(北京)科技有限公司 Map building method, positioning method, apparatus, self-moving device, and medium
CN112880675A (zh) * 2021-01-22 2021-06-01 京东数科海益信息科技有限公司 Pose smoothing method and apparatus for visual positioning, terminal, and mobile robot
CN112880675B (zh) * 2021-01-22 2023-04-07 京东科技信息技术有限公司 Pose smoothing method and apparatus for visual positioning, terminal, and mobile robot
CN112926578A (zh) * 2021-02-23 2021-06-08 北京三快在线科技有限公司 Picture positioning method and apparatus, electronic device, and computer-readable storage medium
CN115170652A (zh) * 2021-04-06 2022-10-11 阿里巴巴新加坡控股有限公司 Global relocalization method and apparatus, electronic device, and computer storage medium
CN113223184A (зh) * 2021-05-26 2021-08-06 北京奇艺世纪科技有限公司 Image processing method and apparatus, electronic device, and storage medium
CN113223184B (zh) * 2021-05-26 2023-09-05 北京奇艺世纪科技有限公司 Image processing method and apparatus, electronic device, and storage medium
US20230063099A1 (en) * 2021-08-24 2023-03-02 Beijing Baidu Netcom Science Technology Co., Ltd. Method and apparatus for correcting positioning information, and storage medium
CN114119750A (zh) * 2021-11-25 2022-03-01 上海库灵科技有限公司 Method and apparatus for recognizing the three-dimensional spatial pose of an object for robotic-arm grasping
CN114485607A (zh) * 2021-12-02 2022-05-13 陕西欧卡电子智能科技有限公司 Motion trajectory determination method, operating equipment, apparatus, and storage medium
CN114485607B (zh) * 2021-12-02 2023-11-10 陕西欧卡电子智能科技有限公司 Motion trajectory determination method, operating equipment, apparatus, and storage medium
WO2023103915A1 (fr) * 2021-12-08 2023-06-15 中兴通讯股份有限公司 Target recognition method, electronic device, and storage medium
CN114898084A (зh) * 2022-04-18 2022-08-12 荣耀终端有限公司 Visual positioning method, device, and storage medium
CN114898084B (zh) * 2022-04-18 2023-08-25 荣耀终端有限公司 Visual positioning method, device, and storage medium
CN115424353A (zh) * 2022-09-07 2022-12-02 杭银消费金融股份有限公司 AI-model-based method and system for identifying business-user features
CN115424353B (zh) * 2022-09-07 2023-05-05 杭银消费金融股份有限公司 AI-model-based method and system for identifying business-user features

Also Published As

Publication number Publication date
CN110310333B (zh) 2021-08-31
CN110310333A (zh) 2019-10-08

Similar Documents

Publication Publication Date Title
WO2020259481A1 (fr) Positioning method and apparatus, electronic device, and readable storage medium
JP7430243B2 (ja) Visual positioning method and related apparatus
CN107742311B (zh) Visual positioning method and apparatus
JP6348574B2 (ja) Monocular visual SLAM using general and panoramic camera movements
CN103858148B (zh) Plane mapping and tracking method, apparatus, and device for mobile devices
JP6430064B2 (ja) Method and system for aligning data
Arth et al. Wide area localization on mobile phones
US8798357B2 (en) Image-based localization
CN107329962B (zh) Image retrieval database generation method, and augmented reality method and apparatus
WO2016009811A1 (fr) Method for calibrating one or more cameras
CN111724438B (zh) Data processing method and apparatus
CN113298871B (zh) Map generation method, positioning method and system thereof, and computer-readable storage medium
CN112489119A (zh) Monocular visual positioning method with enhanced reliability
CN110765882A (zh) Video tag determination method, apparatus, server, and storage medium
JP5536124B2 (ja) Image processing system and image processing method
CN114882106A (zh) Pose determination method and apparatus, device, and medium
Geng et al. SANet: A novel segmented attention mechanism and multi-level information fusion network for 6D object pose estimation
Sui et al. An accurate indoor localization approach using cellphone camera
CN115131691A (zh) Object matching method and apparatus, electronic device, and computer-readable storage medium
Huang et al. Image-based localization for indoor environment using mobile phone
CN108426566B (zh) Multi-camera-based mobile robot positioning method
CN113570535B (zh) Visual positioning method and related apparatus and device
CN114419103A (zh) Skeleton detection and tracking method and apparatus, and electronic device
Zhang Sparse visual localization in GPS-denied indoor environments
CN113587916A (zh) Real-time sparse visual odometry and navigation method and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20831344

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20831344

Country of ref document: EP

Kind code of ref document: A1
