
WO2022002149A1 - Initial positioning method, visual navigation device and storage system - Google Patents

Initial positioning method, visual navigation device and storage system

Info

Publication number
WO2022002149A1
WO2022002149A1 (application PCT/CN2021/103651; CN2021103651W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
navigation device
visual navigation
online
ground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2021/103651
Other languages
English (en)
Chinese (zh)
Inventor
王力
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikrobot Technology Co Ltd
Original Assignee
Hangzhou Hikrobot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikrobot Technology Co Ltd filed Critical Hangzhou Hikrobot Technology Co Ltd
Publication of WO2022002149A1
Anticipated expiration: Critical
Current legal status: Ceased

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C 11/02: Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G01C 11/04: Interpretation of pictures
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/20: Instruments for performing navigational calculations
    • G01C 21/206: Instruments for performing navigational calculations specially adapted for indoor navigation

Definitions

  • The present application relates to the technical field of visual navigation, and in particular to an initial positioning method, a visual navigation device, and a storage system.
  • A robot based on visual navigation needs to determine its current position after each startup; this initialization process can be referred to as going online.
  • Specifically, after the robot is started, it shoots an image of the ground and matches the captured image against the images in an image sample library, where the image sample library stores ground images with known positioning information.
  • Because the ground texture differs from place to place, feature matching can be performed on feature information such as the texture features of the image, and the current positioning information can then be determined from the positioning information of the successfully matched image sample, completing the initial positioning of the robot.
  • The above process, from starting the robot to completing initial positioning, may be referred to as the robot's going-online process; accordingly, when the initial positioning of the robot is completed, the robot is said to have gone online.
  • The embodiments of the present application provide an initial positioning method, a visual navigation device, and a storage system, so as to reduce the number of online sample images and, in turn, the number of image matching operations during initial positioning, thereby addressing the relatively low efficiency of initial positioning in the related art.
  • In a first aspect, an embodiment of the present application provides an initial positioning method, and the method includes:
  • acquiring the ground image collected by the camera after the first visual navigation device is started in an online area of a target site, to obtain an initial positioning image, where the target site includes multiple areas divided into online areas and offline areas,
  • and the first visual navigation device is placed in advance in any one of the online areas among the multiple areas; calculating the feature matching degree between the initial positioning image and each online sample image, where an online sample image is a ground sample image collected in advance in an online area and used as a matching sample during initial positioning; determining the positioning information of the online sample image with the highest feature matching degree with the initial positioning image; and determining the positioning result of the initial positioning of the visual navigation device according to that positioning information (a minimal sketch of this flow is given below).
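  • For illustration only (not part of the claimed method), the following minimal Python/OpenCV sketch mirrors the steps above; the sample-record fields ('image', 'pose', 'online') and the ORB-based matching degree are assumptions of the sketch, not requirements of this disclosure.

      import cv2

      def feature_matching_degree(img_a, img_b):
          # Count cross-checked ORB matches between two grayscale ground images.
          orb = cv2.ORB_create()
          _, des_a = orb.detectAndCompute(img_a, None)
          _, des_b = orb.detectAndCompute(img_b, None)
          if des_a is None or des_b is None:
              return 0
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          return len(matcher.match(des_a, des_b))

      def initial_positioning(init_img, samples):
          # samples: iterable of dicts {'image', 'pose', 'online'} (hypothetical schema).
          online = [s for s in samples if s["online"]]  # match only online-area samples
          # Pick the online sample with the highest feature matching degree,
          # then return its recorded pose as the initial positioning result.
          best = max(online, key=lambda s: feature_matching_degree(init_img, s["image"]))
          return best["pose"]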
  • In a possible implementation, before calculating the feature matching degree between the initial positioning image and each online sample image, the method further includes:
  • acquiring images of the ground shot by the camera at different positions in the online areas of the target site to obtain multiple ground sample images, where each ground sample image is associated with the positioning information obtained by the positioning sensor during shooting; calculating the feature matching degree between the multiple ground sample images; determining repeated images among the multiple ground sample images according to the calculated feature matching degrees, where a repeated image is one of any two ground sample images whose calculated feature matching degree exceeds a preset threshold; and using the ground sample images shot in the online areas, other than the repeated images, as the online sample images.
  • In a possible implementation, using the ground sample images shot in the online areas, other than the repeated images, as the online sample images includes:
  • marking a first attribute mark on the ground sample images shot in the online areas other than the repeated images, where the first attribute mark is used to indicate that the corresponding ground sample image is an online sample image.
  • In a possible implementation, before calculating the feature matching degree between the initial positioning image and each online sample image, the method further includes:
  • searching all ground sample images for those marked with the first attribute mark, to obtain the online sample images.
  • In a possible implementation, determining the repeated images among the multiple ground sample images includes: of any two ground sample images whose calculated feature matching degree exceeds the preset threshold, using the image with the lower calculated image quality parameter as the repeated image.
  • In a possible implementation, the camera and the positioning sensor are configured on a second visual navigation device, and acquiring images of the ground shot by the camera at different positions in the online areas of the target site to obtain the multiple ground sample images includes:
  • controlling the second visual navigation device to move to different positions in all the online areas in the target site; shooting an image of the ground each time it moves to a position, and recording the positioning information obtained by the positioning sensor during shooting,
  • where the positioning information is based on the pre-configured positioning information of the positioning origin of the second visual navigation device.
  • In a second aspect, an embodiment of the present application provides an initial positioning device, the device comprising:
  • an initial image acquisition module, used to acquire the ground image collected by the camera after the first visual navigation device is started in an online area of the target site, to obtain an initial positioning image, where the target site includes multiple areas divided into online areas and offline areas, and the first visual navigation device is placed in advance in any one of the online areas among the multiple areas;
  • a first calculation module, used to calculate the feature matching degree between the initial positioning image and each online sample image, where an online sample image is a ground sample image collected in advance in an online area and used as a matching sample during initial positioning;
  • a positioning information determination module, configured to determine the positioning information of the online sample image with the highest feature matching degree with the initial positioning image; and
  • an initial positioning determination module, configured to determine the positioning result of the initial positioning of the visual navigation device according to that positioning information.
  • In a third aspect, an embodiment of the present application provides a visual navigation device, which includes: a communication module; a moving mechanism for driving the visual navigation device to move to different positions; a camera, configured to face the ground, for shooting ground images; a positioning sensor for obtaining positioning information of the visual navigation device; one or more processors; a memory; and one or more computer programs, where the one or more computer programs are stored in the memory and include instructions that, when executed by the visual navigation device, cause the visual navigation device to perform the steps of any of the initial positioning methods provided in the first aspect above.
  • In a fourth aspect, an embodiment of the present application provides a storage system, which includes: a target site including multiple areas divided into online areas and offline areas; and a plurality of the above visual navigation devices.
  • In a possible implementation, the storage system further includes a server for selecting a visual navigation device among the multiple visual navigation devices, planning a travel path for the selected visual navigation device, and sending the travel path to the corresponding visual navigation device, so that the corresponding visual navigation device follows the path after completing initial positioning.
  • In a further aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program that, when run on a device, causes the device to execute the steps of any of the initial positioning methods provided in the first aspect above.
  • In a further aspect, an embodiment of the present application provides a computer program product containing instructions that, when run on a computer, cause the computer to execute the steps of any of the initial positioning methods provided in the first aspect above.
  • In the embodiments of the present application, the target site is divided into online areas and offline areas,
  • and the visual navigation device is placed in an online area before being started. After the visual navigation device is started, the ground image shot by the camera is used as the initial positioning image, so that the initial positioning image is matched only against the online sample images of the online areas. In this way, matching against sample images from all areas is avoided, and the number of matching samples and the number of image matching operations are reduced, thereby improving the efficiency of initial positioning and, in turn, the efficiency with which the visual navigation device goes online.
  • FIG. 1 is a schematic flowchart of an optional embodiment of the initial positioning method provided by an embodiment of the present application;
  • FIG. 2 is a schematic diagram of an optional embodiment of the visual navigation device provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of an optional embodiment of the storage system provided by an embodiment of the present application;
  • FIG. 4 is a schematic structural diagram of an optional embodiment of the initial positioning device provided by an embodiment of the present application.
  • The embodiment of the present application provides an initial positioning method, and the execution subject of the method may be a visual navigation device.
  • Herein, the visual navigation device that executes the initial positioning method is called the first visual navigation device.
  • As shown in FIG. 1, the initial positioning method provided by the embodiment of the present application includes the following steps, with the first visual navigation device as the execution subject.
  • Step 101 Acquire a ground image collected by a camera after the first visual navigation device is activated in the online area of the target site, to obtain an initial positioning image.
  • the target site may be an indoor or outdoor site.
  • The target site includes multiple areas, which are divided into online areas and offline areas; the number of online areas can be one or more, and the first visual navigation device is placed in advance in any one of the online areas among the multiple areas. Visual navigation here may be implemented by V-SLAM (visual simultaneous localization and mapping).
  • The first visual navigation device can be configured with a moving mechanism and regarded as a movable robot that moves according to a set program; alternatively, it can be moved by manual pushing or carried by other handling equipment.
  • the camera can be controlled to capture the current ground image through an instruction.
  • the lens of the camera can be configured to be vertically facing the ground by default, so that the captured ground images are more convenient for image processing.
  • the ground image carries ground texture information.
  • the outdoor ground may have textures or markers such as sand, asphalt roads, and landmarks
  • the indoor ground may have textures such as floor tiles, floors, and other markers.
  • The ground image may be stored in a designated storage medium; that is, in the above step 101, the ground image shot by the camera may be acquired from the designated storage medium.
  • Going online refers to an initialization process after the first visual navigation device is started.
  • During this process, the first visual navigation device needs to determine its own location, that is, to perform initial positioning.
  • Specifically, after the first visual navigation device is started, it shoots a ground image, matches it against the sample images, compares their similarity, and determines its current position and orientation according to the positioning information of the most similar sample image.
  • the above-mentioned initial positioning image may be a ground image first collected by a camera after the first visual navigation device is activated in the online area of the target site.
  • the online area and the offline area are pre-specified, and the online area can be one or more, and the offline area can also be one or more.
  • The first visual navigation device may be placed in advance in any one of the online areas among the multiple areas.
  • In one approach, physical area boundaries are marked across the multiple areas of the target site, and each area is marked in the target site as an online area or a non-online area, for example by marking a symbol on the ground.
  • The staff can then move the first visual navigation device into one of the online areas to go online, according to the area boundaries and markers in the target site.
  • Alternatively, the target site may be left unmarked: a map of the target site can be drawn in advance and divided into multiple areas, with each area marked on the map as an online area or not. To go online, an online area is chosen on the map, its actual location range in the target site is estimated according to the map scale, and the first visual navigation device is moved into that online area (a toy point-in-area check is sketched below).
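  • As a toy illustration of the map-based placement (assuming, for the sketch only, that online areas are modeled as axis-aligned rectangles in map coordinates):

      def in_online_area(position, online_areas):
          # position: (x, y) estimated from the map and its scale.
          # online_areas: rectangles (x_min, y_min, x_max, y_max) marked online on the map.
          x, y = position
          return any(x0 <= x <= x1 and y0 <= y <= y1
                     for (x0, y0, x1, y1) in online_areas)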
  • A ground image refers to an image of the ground shot facing the ground.
  • The initial positioning image serves as the image to be compared during initial positioning: it is compared against the online sample images.
  • Step 102 Calculate the feature matching degree between the initial positioning image and each online sample image.
  • An online sample image is a ground sample image collected in advance in an online area and used as a matching sample during initial positioning.
  • the specified image feature extraction algorithm can be used to extract the image features of the initial positioning image and each online sample image respectively.
  • the image feature extraction algorithm is a method of image processing, which can extract the features in the image.
  • The image features extracted in this embodiment of the present application may be image feature points, such as corners, edges, and blobs in the image; the corresponding image feature extraction algorithm may be the Harris algorithm, which extracts corners from the image, or the SIFT (Scale-Invariant Feature Transform) algorithm, among others.
  • In other words, the above image feature extraction algorithms find points with certain characteristics in the image according to its gray-value properties, for example points where the local gray value changes abruptly (see the sketch below).
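  • For illustration, a minimal OpenCV sketch of the two feature point extractors named above; the file name is a placeholder and the threshold is an arbitrary sketch value:

      import cv2
      import numpy as np

      img = cv2.imread("ground.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

      # Harris response is large where the local gray value changes abruptly
      # in two directions (i.e., at corners).
      harris = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
      corners = np.argwhere(harris > 0.01 * harris.max())

      # SIFT keypoints and descriptors (scale- and rotation-invariant).
      sift = cv2.SIFT_create()
      keypoints, descriptors = sift.detectAndCompute(img, None)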
  • Alternatively, the specified image feature extracted in the embodiment of the present application may be the texture feature of the image.
  • A texture feature is a global feature that describes the surface properties of the scene corresponding to an image or image region. Texture features are not pixel-based and require statistical computation over regions containing multiple pixels. Commonly used extraction algorithms for texture features include statistical methods (such as the gray level co-occurrence matrix or the image autocorrelation function), geometric methods, and random field model methods (such as the Markov random field model, the Gibbs random field model, the fractal model, and the autoregressive model), all of which yield parameters for describing texture features (a co-occurrence-matrix example is sketched below).
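  • As one concrete instance of the statistical methods mentioned above, a sketch using scikit-image's gray level co-occurrence matrix (the random patch merely stands in for a real ground image):

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      patch = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in ground patch

      # Co-occurrence matrix at distance 1 over four directions.
      glcm = graycomatrix(patch, distances=[1],
                          angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                          levels=256, symmetric=True, normed=True)

      # Scalar parameters describing the texture of the patch.
      texture = {p: graycoprops(glcm, p).mean()
                 for p in ("contrast", "homogeneity", "energy", "correlation")}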
  • On this basis, the similarity between the image features of the initial positioning image and those of each online sample image can be computed; that is, the feature matching degree between the initial positioning image and each online sample image can be calculated.
  • The feature matching degree of two images indicates how well the features of the two images match.
  • Different types of image features use different parameters as the feature matching degree.
  • For image feature points, the feature matching degree can be the number of feature points that are successfully matched.
  • For texture features, whether the difference between the parameters describing the textures exceeds a preset threshold is used to decide whether the two images have the same texture: if the parameter difference does not exceed the preset threshold, the images can be considered to have the same texture features; otherwise, they do not (both variants are sketched below).
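  • Both variants of the feature matching degree can be sketched as follows: SIFT with Lowe's ratio test for point features, and a Euclidean distance between texture parameters (an assumption of the sketch) for texture features:

      import cv2
      import numpy as np

      def matched_point_count(img_a, img_b, ratio=0.75):
          # Point-feature matching degree: SIFT matches surviving the ratio test.
          sift = cv2.SIFT_create()
          _, des_a = sift.detectAndCompute(img_a, None)
          _, des_b = sift.detectAndCompute(img_b, None)
          if des_a is None or des_b is None:
              return 0
          pairs = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
          return sum(1 for p in pairs
                     if len(p) == 2 and p[0].distance < ratio * p[1].distance)

      def same_texture(params_a, params_b, threshold=0.1):
          # Texture matching: same texture if the parameter difference
          # stays within the preset threshold.
          diff = np.linalg.norm(np.asarray(params_a) - np.asarray(params_b))
          return diff <= threshold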
  • As described above, online sample images are ground sample images collected in advance in the online areas and used as matching samples during initial positioning.
  • That is, the online sample images are collected in advance in each online area of the target site and can be used for initial positioning when the first visual navigation device goes online.
  • Offline sample images are not used as matching samples for initial positioning; after the first visual navigation device has gone online, both online and offline sample images can be used as matching samples.
  • The online sample images may be stored in a preset storage device, for example in a server corresponding to the first visual navigation device; in that case, when executing step 102, the first visual navigation device can retrieve each online sample image from the preset storage device.
  • Alternatively, the first visual navigation device can download and store each online sample image in its own storage space, so that when executing step 102 it can obtain each online sample image from its own storage space.
  • Step 103 Determine the positioning information of the online sample image with the highest feature matching degree with the initial positioning image.
  • Specifically, after step 102, the online sample image with the highest feature matching degree with the initial positioning image can be identified, and the positioning information of that online sample image can then be obtained.
  • each online sample image is selected from the collected ground sample images. Since each ground sample image corresponds to the positioning information obtained through the positioning sensor during shooting, each online sample image filtered from multiple ground sample images can correspond to the positioning information obtained through the positioning sensor during shooting.
  • the above-mentioned positioning sensor may be a sensor such as GPS that can obtain absolute positioning information, or may be a sensor such as a gyroscope, an acceleration sensor, etc., which is used to obtain the current relative displacement.
  • In one implementation, the camera and the positioning sensor are configured on the second visual navigation device.
  • The second visual navigation device can collect the ground sample images through the camera and determine the positioning information at the time of collection according to the positioning sensor.
  • In this way, multiple ground sample images can be obtained, and a subset can then be extracted, according to a preset acquisition method for online sample images, from those ground sample images that were shot in the online areas.
  • The second visual navigation device can be understood as a visual navigation device used for acquiring online sample images; the second visual navigation device and the first visual navigation device can be the same visual navigation device or different visual navigation devices.
  • Step 104 Determine the positioning result of the initial positioning of the visual navigation device according to the positioning information of the online sample image with the highest feature matching degree.
  • the positioning result of the initial positioning of the first visual navigation device can be determined according to the positioning information.
  • Since the position and orientation at which the second visual navigation device collected the online sample image through its camera may differ from the position and orientation at which the first visual navigation device collected the initial positioning image, a pre-calibrated image processing method can be used to compute the offset between the online sample image and the initial positioning image. Based on that offset and on the position and orientation at which the second visual navigation device collected the online sample image, the position and orientation at which the first visual navigation device collected the initial positioning image can be determined, yielding the positioning result of the initial positioning of the first visual navigation device (a simplified sketch follows).
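  • A deliberately simplified sketch of this last step: the pre-calibrated image processing method is replaced here by a RANSAC similarity-transform estimate, and the pixel-to-site-frame conversion is collapsed into a single assumed meters_per_pixel factor, so this is an approximation rather than the disclosed calibration procedure.

      import cv2
      import numpy as np

      def initial_pose(sample_img, init_img, sample_pose, meters_per_pixel):
          # sample_pose: (x, y, theta) recorded when the sample image was shot.
          sift = cv2.SIFT_create()
          kp_s, des_s = sift.detectAndCompute(sample_img, None)
          kp_i, des_i = sift.detectAndCompute(init_img, None)
          matches = cv2.BFMatcher().match(des_s, des_i)
          src = np.float32([kp_s[m.queryIdx].pt for m in matches])
          dst = np.float32([kp_i[m.trainIdx].pt for m in matches])
          # Robust 2D similarity transform (rotation, uniform scale, translation)
          # between the two ground images.
          M, _ = cv2.estimateAffinePartial2D(src, dst)
          dtheta = np.arctan2(M[1, 0], M[0, 0])  # in-plane rotation offset
          dx, dy = M[0, 2] * meters_per_pixel, M[1, 2] * meters_per_pixel
          x, y, theta = sample_pose
          return x + dx, y + dy, theta + dtheta  # offset composed with the sample's pose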
  • As can be seen, in the embodiments of the present application the target site is divided into online areas and offline areas,
  • and the visual navigation device is placed in an online area before being started. After the visual navigation device is started, the ground image shot by the camera is used as the initial positioning image, so that the initial positioning image is matched only against the online sample images of the online areas. In this way, matching against sample images from all areas is avoided, and the number of matching samples and the number of image matching operations are reduced, thereby improving the efficiency of initial positioning and, in turn, the efficiency with which the visual navigation device goes online.
  • The following exemplifies the method for acquiring online sample images. The visual navigation device that executes this acquisition method may be referred to as the second visual navigation device, which is configured with a camera and a positioning sensor.
  • the second visual navigation device and the above-mentioned first visual navigation device may be the same visual navigation device, or may be different visual navigation devices.
  • the acquisition method of the online sample image may include the following steps A-C:
  • Step A Obtain images of the ground captured by the camera of the second visual navigation device at different positions in the online area, and obtain multiple ground sample images.
  • each ground sample image is associated with the positioning information obtained by the positioning sensor during shooting.
  • Before acquiring the online sample images, the target site may first be determined, and the areas in the target site may be divided so as to determine the online areas and offline areas in the target site.
  • The second visual navigation device can then move within the target site, so that each time it moves to a certain position it can be determined whether that position is within an online area.
  • If so, the camera of the second visual navigation device can be controlled to collect an image of the ground at that position, thereby obtaining multiple ground sample images, each of which is an image of the ground at a certain position within an online area of the target site.
  • Before the second visual navigation device moves within the target site, it can be placed at a certain position in the target site, and the positioning information of that position can be input to it, so as to complete the going-online of the second visual navigation device.
  • Since the positioning sensor is configured on the second visual navigation device, when the device moves to each position in the online areas, the configured positioning sensor can determine the positioning information of the position where the second visual navigation device is located. This positioning information is the positioning information of the second visual navigation device at the moment its camera collects the ground image at that position, and may include the position and orientation of the second visual navigation device.
  • each ground sample image is associated with the positioning information obtained by the positioning sensor when the ground sample image is photographed.
  • In one implementation, step A, i.e., acquiring images of the ground shot by the camera of the second visual navigation device at different positions in the online areas to obtain multiple ground sample images, may include the following step A1, whose execution subject is the second visual navigation device.
  • Step A1 Control the second visual navigation device to move to different positions in all the online areas in the target site; shoot a ground image each time it moves to a position, and record the positioning information obtained by the positioning sensor during shooting, the positioning information being based on the pre-configured positioning information of the positioning origin of the second visual navigation device.
  • A positioning origin can be pre-configured for the second visual navigation device. Since online sample images covering all the online areas in the target site are to be acquired, the positioning information of the second visual navigation device at different positions within all those online areas must be determined.
  • That positioning information may be position information relative to the positioning origin.
  • For example, a unified coordinate system can be established for all online areas in the target site, with the configured positioning origin of the second visual navigation device as the origin of the coordinate system.
  • The positioning information at different positions in the areas can then be position coordinates in this coordinate system. Since each coordinate in the coordinate system is determined by the relative positional relationship between the corresponding point and the coordinate origin, every coordinate is based on the origin of the coordinate system; therefore, the positioning information of the second visual navigation device at different positions in all online areas is based on the pre-configured positioning information of its positioning origin.
  • In this way, the positioning information of the second visual navigation device relative to the configured positioning origin can be determined.
  • Specifically, the positional relationship between the current position of the second visual navigation device and its configured positioning origin may be determined, and this relationship is then used to determine the positioning information of the second visual navigation device relative to the positioning origin.
  • On this basis, the second visual navigation device can be controlled to move within all the online areas in the target site, that is, to move to different positions within all those online areas.
  • Each time it moves to a position, the camera on the second visual navigation device is controlled to collect the ground image at that position as a ground sample image; while shooting, the positioning sensor on the device determines the positioning information of the device relative to the configured positioning origin, and this positioning information is recorded.
  • controlling the second visual navigation device to move to different positions in all the online areas in the target site may include the following step A11:
  • Step A11 Control the second visual navigation device to start from its pre-configured positioning origin and move to different positions in all the online areas in the target site.
  • Specifically, the second visual navigation device can first be placed at the position corresponding to the positioning origin in the target site, and the positioning information of that position can be input to it, completing the going-online of the second visual navigation device. The device is then controlled to move from the positioning origin to different positions in all the online areas; each time it moves to a position, the camera on the device shoots the ground image at that position as a ground sample image, the device obtains its positioning information relative to the positioning origin through its positioning sensor, and that positioning information is recorded.
  • The positioning information may be calculated from the movement information of the second visual navigation device.
  • For example, a positioning sensor that determines relative displacement, such as a gyroscope or an acceleration sensor, may be used.
  • the relative displacement may include a displacement direction and a displacement distance.
  • Each time the device moves to a position within the online areas of the target site, the superposition of the relative displacements yields the positional relationship between the current position of the second visual navigation device and the positioning origin; from this relationship, the positioning information of the current position relative to the positioning origin can be determined (see the dead-reckoning sketch below).
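  • A minimal dead-reckoning sketch of this superposition (the (distance, heading change) increment format is an assumption of the sketch):

      import numpy as np

      def accumulate_pose(origin_pose, relative_moves):
          # origin_pose: (x, y, theta) of the pre-configured positioning origin.
          # relative_moves: sequence of (distance, dtheta) relative displacements.
          x, y, theta = origin_pose
          for distance, dtheta in relative_moves:
              theta += dtheta                # update displacement direction
              x += distance * np.cos(theta)  # project displacement distance onto
              y += distance * np.sin(theta)  #   the site coordinate axes
          return x, y, theta

  • For example, starting from the origin pose (0, 0, 0), the increments (1.0, 0.0) followed by (1.0, π/2) yield the pose (1.0, 1.0, π/2).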
  • The positioning information associated with each ground sample image can be stored as an attribute of that ground sample image; the associated positioning information can then be obtained by directly looking up the attribute information of the corresponding ground sample image.
  • In step B, the feature matching degree between the multiple ground sample images is calculated, and repeated images are then determined among the multiple ground sample images according to the calculated feature matching degrees.
  • Here, a repeated image is one of any two ground sample images whose feature matching degree exceeds a preset threshold.
  • Specifically, the feature matching degree of any two ground sample images can be calculated; that is, it can be calculated over all the ground sample images shot in all the online areas of the target site.
  • The two ground sample images used for a calculation may be two images shot in the same online area, or two images shot in different online areas.
  • The method of calculating the feature matching degree of any two ground sample images is the same as the method of calculating the feature matching degree between the initial positioning image and each online sample image in step 102 above, and is not repeated here.
  • Then, repeated images can be determined among the multiple ground sample images according to the calculated feature matching degrees.
  • Specifically, any two ground sample images whose calculated feature matching degree exceeds the preset threshold can be identified, giving a first image and a second image: either of the two images can be taken as the first image and the other as the second image. The image quality parameters of the first and second images are then calculated and compared, and the image with the lower image quality parameter is taken as the repeated image.
  • Alternatively, after any two ground sample images whose feature matching degree exceeds the preset threshold have been identified, it can further be determined whether the two images were shot in the same online area.
  • If so, either of the two ground images may be taken as the first image and the other as the second image; the image quality parameters of the first and second images are then calculated and compared,
  • and the image with the lower image quality parameter is treated as the repeated image, which is deleted when the online sample images are subsequently determined.
  • If the two ground sample images were collected in different online areas, both images can be deleted directly. This avoids the misjudgment that could otherwise occur during subsequent initial positioning when the feature matching degree of two images shot in different online areas exceeds the preset threshold and only one of them is deleted.
  • In other words, after the feature matching degree of any two ground sample images has been calculated, any two ground sample images that were shot in the same online area and whose calculated feature matching degree exceeds the preset threshold can be identified, thereby obtaining a first image and a second image: either of the two images can be taken as the first image and the other as the second image. The image quality parameters of the first and second images are then calculated and compared, and the image with the lower image quality parameter is used as the repeated image.
  • the image quality parameter may be a parameter used for evaluating image quality, such as the sharpness of the image.
  • Since the ground sample images used to obtain the online sample images then have higher image quality parameters, the quality of the obtained online sample images is higher, and when the first visual navigation device performs initial positioning, the accuracy of the resulting positioning can be improved (a sketch of this selection follows).
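  • For illustration, the removal rules above can be sketched as follows, reusing the feature_matching_degree() helper from the earlier sketch and using the variance of the Laplacian as a stand-in image quality (sharpness) parameter:

      import cv2

      def sharpness(img):
          # Variance of the Laplacian: a common proxy for image sharpness.
          return cv2.Laplacian(img, cv2.CV_64F).var()

      def find_repeated_images(samples, threshold):
          # samples: list of dicts {'image', 'area'} (hypothetical schema).
          drop = set()
          for i in range(len(samples)):
              for j in range(i + 1, len(samples)):
                  if feature_matching_degree(samples[i]["image"],
                                             samples[j]["image"]) <= threshold:
                      continue
                  if samples[i]["area"] == samples[j]["area"]:
                      # Same online area: drop the image with lower quality.
                      si = sharpness(samples[i]["image"])
                      sj = sharpness(samples[j]["image"])
                      drop.add(i if si < sj else j)
                  else:
                      # Different online areas: drop both to avoid later misjudgment.
                      drop.update((i, j))
          return drop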
  • In step C, the ground sample images shot in the online areas, other than the repeated images, are used as the online sample images.
  • That is, among the multiple ground sample images, those other than the repeated images can be determined as the online sample images.
  • Since the multiple ground sample images are ground images shot by the camera at different positions in all the online areas of the target site, once the repeated images have been determined, the ground sample images shot in all the online areas, excluding the repeated images, can be used as the online sample images.
  • In one implementation, step C may include the following step C1:
  • Step C1 Mark the first attribute mark on the ground sample images shot in the online areas, except for the repeated images.
  • The first attribute mark is used to indicate that the corresponding ground sample image is an online sample image.
  • The offline sample images may be marked with a second attribute mark, or not marked at all, which is not limited in this embodiment of the present application.
  • In this way, before step 102 of calculating the feature matching degree between the initial positioning image and each online sample image, all ground sample images can be searched for those marked with the first attribute mark, yielding the online sample images, which are then used for matching.
  • In this case, multiple ground sample images at different positions in the target site can be collected in advance; these include ground images at positions within the online areas as well as ground images at positions within the offline areas.
  • Steps A and B above can then be used to determine the ground sample images shot in all the online areas excluding the repeated images, and a first attribute mark is marked on those ground sample images.
  • After the first visual navigation device obtains the initial positioning image, it can first search all ground sample images for those marked with the first attribute mark; the found ground sample images marked with the first attribute mark are the online sample images.
  • Accordingly, calculating the feature matching degree between the initial positioning image and each online sample image amounts to calculating the matching degree between the initial positioning image and each found ground sample image marked with the first attribute mark (a minimal lookup sketch follows).
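  • A minimal sketch of this lookup; the GroundSample record is a hypothetical schema, not a structure defined by this disclosure:

      from dataclasses import dataclass
      from typing import Tuple

      @dataclass
      class GroundSample:
          image_path: str
          pose: Tuple[float, float, float]  # (x, y, theta) from the positioning sensor
          online_sample: bool               # True = carries the first attribute mark

      def lookup_online_samples(all_samples):
          # Before step 102, keep only samples carrying the first attribute mark.
          return [s for s in all_samples if s.online_sample]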
  • the embodiment of the present application also provides an initial positioning device. Please refer to FIG. 4 , which is a schematic structural diagram of the initial positioning device provided by the embodiment of the present application. As shown in FIG. 4 , the device includes:
  • the initial image acquisition module 410, configured to acquire the ground image collected by the camera after the first visual navigation device is started in an online area of the target site, to obtain an initial positioning image, where the target site includes multiple areas divided into online areas and offline areas, and the first visual navigation device is placed in advance in any one of the online areas among the multiple areas;
  • the first calculation module 420, configured to calculate the feature matching degree between the initial positioning image and each online sample image, where an online sample image is a ground sample image collected in advance in an online area and used as a matching sample during initial positioning;
  • the positioning information determination module 430, configured to determine the positioning information of the online sample image with the highest feature matching degree with the initial positioning image; and
  • the initial positioning determination module 440, configured to determine the positioning result of the initial positioning of the visual navigation device according to that positioning information.
  • With this device, the target site is divided into online areas and offline areas,
  • and the visual navigation device is placed in an online area before being started. After the visual navigation device is started, the ground image shot by the camera is used as the initial positioning image, so that the initial positioning image is matched only against the online sample images of the online areas. In this way, matching against sample images from all areas is avoided, and the number of matching samples and the number of image matching operations are reduced, thereby improving the efficiency of initial positioning and, in turn, the efficiency with which the visual navigation device goes online.
  • the device further includes:
  • a sample image acquisition module, used to acquire, before the feature matching degree between the initial positioning image and each online sample image is calculated, images of the ground shot by the camera of the second visual navigation device at different positions in the online areas, obtaining multiple ground sample images, where each ground sample image is associated with the positioning information obtained by the positioning sensor during shooting;
  • a second calculation module, used to calculate the feature matching degree between the multiple ground sample images;
  • a repeated image determination module, configured to determine repeated images among the multiple ground sample images according to the calculated feature matching degrees, where a repeated image is one of any two ground sample images whose calculated feature matching degree exceeds a preset threshold; and
  • a sample image determination module, configured to use the ground sample images shot in the online areas, except for the repeated images, as the online sample images.
  • the sample image determination module is specifically configured to:
  • marking a first attribute mark on the ground sample images shot in the online areas other than the repeated images, where the first attribute mark is used to indicate that the corresponding ground sample image is an online sample image.
  • the device further includes:
  • a sample image search module, used to search all ground sample images for those marked with the first attribute mark, before the feature matching degree between the initial positioning image and each online sample image is calculated, to obtain the online sample images.
  • the repeated image determination module is specifically used for:
  • of any two ground sample images whose calculated feature matching degree exceeds the preset threshold, using the image with the lower calculated image quality parameter as the repeated image.
  • the camera and the positioning sensor are configured on a second visual navigation device, and the sample image acquisition module is specifically used to:
  • control the second visual navigation device to move to different positions in all the online areas in the target site; shoot an image of the ground each time it moves to a position, and record the positioning information obtained by the positioning sensor during shooting,
  • where the positioning information is based on the pre-configured positioning information of the positioning origin of the second visual navigation device.
  • the embodiment of the present application also provides a visual navigation device.
  • FIG. 2 is a schematic diagram of the visual navigation device provided by the embodiment of the present application.
  • the visual navigation device includes:
  • a camera 203 configured to face the ground, for capturing images of the ground
  • the positioning sensor 204 is used to obtain the positioning information of the visual navigation device
  • one or more processors 205; and
  • a memory 206; these components can be connected to and communicate with one another through a communication bus 208.
  • The visual navigation device also includes one or more computer programs 207 stored in the memory 206, the one or more computer programs 207 including instructions that, when executed by the visual navigation device, cause the visual navigation device to execute the initial positioning method provided by the foregoing embodiments of the present application and any implementation thereof.
  • An embodiment of the present application further provides a storage system, as shown in FIG. 3 , including: a target site 300 , and a plurality of visual navigation devices 401 to 404 as shown in FIG. 2 .
  • The target site 300 includes a plurality of areas 301 to 309, divided into online areas and offline areas; the online areas include area 301, area 303, and area 304 (FIG. 3 is only an example and may actually include more or fewer online areas), and the remaining areas are non-online areas.
  • The storage system further includes a server 501 that communicates with the multiple visual navigation devices; optionally, wireless communication may be used.
  • The server 501 is configured to select a visual navigation device among the plurality of visual navigation devices, plan a travel path for the selected visual navigation device, and send the travel path to the corresponding visual navigation device, so that the corresponding visual navigation device follows the path after completing initial positioning (a toy dispatch sketch follows).
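  • A toy sketch of this server-side dispatch; plan_path() and send() are assumed hooks, and the idle-device selection is an arbitrary choice for the sketch:

      def dispatch(tasks, devices, plan_path, send):
          # For each task, pick an idle device, plan its travel path, and send
          # the path so that the device follows it after initial positioning.
          for task in tasks:
              device = next(d for d in devices if d.get("idle", True))
              path = plan_path(device, task)
              send(device, path)
              device["idle"] = False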
  • Embodiments of the present application further provide a computer-readable storage medium storing a computer program that, when run on a device, causes the device to execute the above-mentioned initial positioning method and any implementation thereof.
  • Embodiments of the present application also provide a computer program product containing instructions that, when run on a computer, cause the computer to execute the above-mentioned initial positioning method and any implementation thereof.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Navigation (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed are an initial positioning method and apparatus, a visual navigation device, a storage system, a computer-readable storage medium, and a computer program product. The method comprises: acquiring ground images captured by a first visual navigation device by means of a camera after the visual navigation device is started in an online area of a target site, so as to obtain an initial positioning image, the target site comprising a plurality of areas, the plurality of areas being divided into online areas and non-online areas, and the first visual navigation device being placed in advance in any online area among the plurality of areas (101); calculating a feature matching degree between the initial positioning image and each online sample image (102); determining positioning information of the online sample image having the highest feature matching degree with the initial positioning image (103); and determining, according to the positioning information of the online sample image having the highest feature matching degree, a positioning result of the initial positioning performed by the visual navigation device (104). The initial positioning method improves the efficiency of initial positioning, which in turn improves the efficiency of going online.
PCT/CN2021/103651 2020-06-30 2021-06-30 Initial positioning method, visual navigation device and storage system Ceased WO2022002149A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010621642.9A CN111623783A (zh) 2020-06-30 2020-06-30 Initial positioning method, visual navigation device, and storage system
CN202010621642.9 2020-06-30

Publications (1)

Publication Number Publication Date
WO2022002149A1 (fr)

Family

ID=72259457

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/103651 2020-06-30 2021-06-30 Ceased WO2022002149A1 (fr) Initial positioning method, visual navigation device and storage system

Country Status (2)

Country Link
CN (1) CN111623783A (fr)
WO (1) WO2022002149A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111623783A (zh) * 2020-06-30 2020-09-04 杭州海康机器人技术有限公司 Initial positioning method, visual navigation device, and storage system
CN113077475B (zh) * 2021-03-17 2023-09-08 杭州海康机器人股份有限公司 Visual positioning method, apparatus and system, mobile robot, and storage medium
CN115237115A (zh) * 2021-10-15 2022-10-25 达闼科技(北京)有限公司 Control method and apparatus for robot map scanning, server, device, and storage medium
CN115457123A (zh) * 2022-08-09 2022-12-09 浙江华睿科技股份有限公司 Visual navigation positioning method and apparatus, terminal, and computer-readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105989599A (zh) * 2015-02-15 2016-10-05 西安酷派软件科技有限公司 Image processing method, apparatus, and terminal
CN106291517A (zh) * 2016-08-12 2017-01-04 苏州大学 Indoor cloud robot angle positioning method based on position and visual information optimization
US20180297207A1 (en) * 2017-04-14 2018-10-18 TwoAntz, Inc. Visual positioning and navigation device and method thereof
CN108692720A (zh) * 2018-04-09 2018-10-23 京东方科技集团股份有限公司 Positioning method, positioning server, and positioning system
CN109073390A (zh) * 2018-07-23 2018-12-21 深圳前海达闼云端智能科技有限公司 Positioning method and apparatus, electronic device, and readable storage medium
CN110657812A (zh) * 2018-06-29 2020-01-07 比亚迪股份有限公司 Vehicle positioning method and apparatus, and vehicle
CN111623783A (zh) * 2020-06-30 2020-09-04 杭州海康机器人技术有限公司 Initial positioning method, visual navigation device, and storage system

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1438138A (zh) * 2003-03-12 2003-08-27 吉林大学 Visual guidance method for automated guided vehicles and automated guided electric vehicle thereof
KR20090077547A (ko) * 2008-01-11 2009-07-15 삼성전자주식회사 Method and apparatus for path planning of a mobile robot
CN105258702B (zh) * 2015-10-06 2019-05-07 深圳力子机器人有限公司 Global positioning method for mobile robots based on SLAM navigation
CN106127180A (zh) * 2016-06-30 2016-11-16 广东电网有限责任公司电力科学研究院 Robot-assisted positioning method and apparatus
CN108638062B (zh) * 2018-05-09 2021-08-13 科沃斯商用机器人有限公司 Robot positioning method and apparatus, positioning device, and storage medium
CN110553648A (zh) * 2018-06-01 2019-12-10 北京嘀嘀无限科技发展有限公司 Method and system for indoor navigation
CN109035291B (zh) * 2018-08-03 2020-11-20 重庆电子工程职业学院 Robot positioning method and apparatus
CN111322993B (zh) * 2018-12-13 2022-03-04 杭州海康机器人技术有限公司 Visual positioning method and apparatus
CN110207707B (zh) * 2019-05-30 2022-04-12 四川长虹电器股份有限公司 Particle-filter-based rapid initial positioning method and robot device
CN110231039A (zh) * 2019-06-27 2019-09-13 维沃移动通信有限公司 Positioning information correction method and terminal device
CN110645986B (zh) * 2019-09-27 2023-07-14 Oppo广东移动通信有限公司 Positioning method and apparatus, terminal, and storage medium
CN110906924A (zh) * 2019-12-17 2020-03-24 杭州光珀智能科技有限公司 Positioning initialization method and apparatus, positioning method and apparatus, and mobile device
CN111006673A (zh) * 2020-01-03 2020-04-14 中仿智能科技(上海)股份有限公司 Landmark- and scene-matching-based visual navigation system for a simulated aircraft
CN111288996A (zh) * 2020-03-19 2020-06-16 西北工业大学 Indoor navigation method and system based on live-video navigation technology


Also Published As

Publication number Publication date
CN111623783A (zh) 2020-09-04

Similar Documents

Publication Publication Date Title
CN111462200A (zh) Cross-video pedestrian positioning and tracking method, system, and device
WO2022002149A1 (fr) Initial positioning method, visual navigation device and storage system
CN110555901B (zh) Positioning and mapping method, apparatus, device, and storage medium for dynamic and static scenes
JP6144826B2 (ja) Interactive and automatic 3D object scanning method for database creation
JP6095018B2 (ja) Detection and tracking of moving objects
US9420265B2 (en) Tracking poses of 3D camera using points and planes
JP6430064B2 (ja) Method and system for aligning data
JP6464934B2 (ja) Camera pose estimation apparatus, camera pose estimation method, and camera pose estimation program
CN109186606B (zh) Robot mapping and navigation method based on SLAM and image information
CN111652929A (zh) Visual-feature recognition and positioning method and system
US20110164832A1 (en) Image-based localization feature point registration apparatus, method and computer-readable medium
CN108519102B (zh) Binocular visual odometry calculation method based on secondary projection
CN106845354B (zh) Part view library construction method, and part positioning and grasping method and apparatus
JP2010033447A (ja) Image processing apparatus and image processing method
WO2020015501A1 (fr) Map construction method, apparatus, storage medium, and electronic device
CN110827353A (zh) Robot positioning method assisted by a monocular camera
CN111583342A (zh) Rapid target positioning method and apparatus based on binocular vision
CN105339981B (zh) Method for registering data using a set of primitives
JP2006113832A (ja) Stereo image processing apparatus and program
CN112686962A (zh) Indoor visual positioning method, apparatus, and electronic device
Liang et al. Robust Alignment of UGV Perspectives with BIM for Inspection in Indoor Environments
CN119310080A (zh) Image analysis system and method for controlling shooting of sample images
CN119313728A (zh) Visual positioning method, apparatus, device, and medium
JP2017162024A (ja) Stereo matching processing method, processing program, and processing apparatus
Armenakis et al. Feasibility study for pose estimation of small UAS in known 3D environment using geometric hashing

Legal Events

121 Ep: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 21833382; country of ref document: EP; kind code of ref document: A1)

NENP: non-entry into the national phase (ref country code: DE)

122 Ep: PCT application non-entry in European phase (ref document number: 21833382; country of ref document: EP; kind code of ref document: A1)