
WO2019245320A1 - Mobile robot device for correcting position by fusing an image sensor and a plurality of geomagnetic sensors, and control method - Google Patents


Info

Publication number
WO2019245320A1
Authority
WO
WIPO (PCT)
Prior art keywords
node
mobile robot
robot device
sensing data
bundle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2019/007499
Other languages
English (en)
Korean (ko)
Inventor
홍순혁
명현
김형진
송승원
현지음
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Korea Advanced Institute of Science and Technology KAIST
Original Assignee
Samsung Electronics Co Ltd
Korea Advanced Institute of Science and Technology KAIST
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020190024283A external-priority patent/KR102601141B1/ko
Application filed by Samsung Electronics Co Ltd, Korea Advanced Institute of Science and Technology KAIST filed Critical Samsung Electronics Co Ltd
Priority to US17/055,415 priority Critical patent/US12001218B2/en
Publication of WO2019245320A1 publication Critical patent/WO2019245320A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0259Control of position or course in two dimensions specially adapted to land vehicles using magnetic or electromagnetic means

Definitions

  • The present disclosure relates to a mobile robot device, and a control method thereof, that fuses an image sensor and a plurality of geomagnetic sensors to correct its position and produce a map. More particularly, it relates to an apparatus and a control method that analyze and simultaneously match the data acquired through each sensor to estimate and correct the current position of a moving mobile robot.
  • Robots have improved work efficiency by performing, in various industries, tasks that are difficult for humans to reach.
  • Conventionally, robot-arm types fixed at a specific position and performing repetitive tasks have been used, but research on, and demand for, mobile robot devices that are relatively free to move and can perform various tasks are gradually increasing.
  • SLAM (Simultaneous Localization And Mapping)
  • the SLAM technology has recently been developed in various fields, and has been spotlighted as an essential technology for efficiently driving a cleaning robot or an autonomous vehicle.
  • Graph-based SLAM, which expresses the positions and odometry of the robot as nodes and edges (also called constraints), is widely used as one of the above SLAM methods.
  • Position recognition through an existing geomagnetic sensor alone has the disadvantage that matching is inaccurate in environments where the magnetic field distortion is not large enough; even where it is usable, it only provides a similarity between positions, so another auxiliary means is required.
  • The present disclosure has been made to solve the above-described problems, and an object of the present disclosure is to fuse a plurality of geomagnetic sensors with an image sensor having different characteristics, thereby enabling a mobile robot device to use graph SLAM technology more efficiently. The present disclosure provides such a mobile robot device and a control method thereof.
  • A mobile robot device for achieving the above object includes a traveling unit; an image sensor; a plurality of geomagnetic sensors; a memory storing at least one instruction; and a processor configured to execute the at least one instruction, wherein the processor acquires a plurality of image data through the image sensor while the mobile robot apparatus moves through the driving unit, and acquires sensing data through the plurality of geomagnetic sensors.
  • A graph structure for estimating the position of the mobile robot device may be generated, and if position recognition of the mobile robot device fails, the graph structure may be corrected.
  • The processor may control to extract the feature points from the plurality of image data using an ORB (Oriented FAST and Rotated BRIEF) algorithm.
  • The processor may be configured to accumulate the feature points to form a submap, perform a random sample consensus (RANSAC) algorithm on the submap, and match the submap to the nearest node to obtain the key node.
  • The processor acquires a sensing data group by grouping the obtained sensing data, compares and matches the magnetic field values of the sensing data group with the magnetic field values of previously stored sensing data groups, and, based on the matched sensing data group, may control to obtain a node bundle having a pair of graph structures.
  • The processor may determine whether the newly obtained node bundle exists between the previously stored node bundles; if so, it may extract only the newly acquired node bundle existing between the existing acquired node bundles and select only that extracted node bundle as the node sequence.
  • The processor may update the existing acquired node bundle having the smaller distance difference from the newly acquired node bundle, search for the node bundle whose distance difference from the extracted newly acquired node bundle is smallest among the updated existing acquired node bundles, and control to find the position of the node sequence using the node bundle having the smallest distance difference.
  • The processor may control to update the existing obtained node bundle having the smaller distance difference using a Gaussian process.
  • The processor may detect, through a wheel sensor while the mobile robot device moves, location information different from the location information represented by the graph structure generated by the mobile robot device; if position recognition fails, the device may be rotated and controlled to acquire new image data and sensing data through the image sensor and the plurality of geomagnetic sensors.
  • The mobile robot device may obtain a new key node and node sequence based on the newly acquired image data and sensing data, and may control to compare them with the previously generated graph structure to determine whether they match.
  • The processor may control to correct the graph structure based on the new key node and node sequence matching the previously generated graph structure.
  • A control method of the mobile robot device for achieving the above object includes: obtaining a plurality of image data through the image sensor while the mobile robot device is moving, and obtaining sensing data through a plurality of geomagnetic sensors; extracting feature points from the plurality of image data and acquiring key nodes based on the feature points; acquiring a node sequence based on the sensing data; generating a graph structure estimating a position of the mobile robot device based on the obtained key nodes and node sequence; and correcting the graph structure when the mobile robot device fails to recognize its position.
  • The feature points may be extracted from the plurality of image data using an ORB (Oriented FAST and Rotated BRIEF) algorithm.
  • The acquiring of the key nodes may include: accumulating the feature points to construct a submap; and performing the random sample consensus (RANSAC) algorithm on the submap and matching it to the nearest node to obtain the key node.
  • the acquiring of the node sequence may include acquiring a sensing data group by grouping the acquired sensing data; Comparing and matching magnetic field values of the sensing data group with magnetic field values of the previously stored sensing data group; And acquiring a node bundle having a pair of graph structures based on the matched sensing data group.
  • the acquiring of the node sequence may include determining whether the newly obtained node bundle exists between the previously stored node bundles; If it is determined that the newly acquired node bundle exists between the previously stored node bundles, extracting only the newly acquired node bundle existing between the existing acquired node bundles; And selecting only the extracted newly obtained node bundle as the node sequence.
  • The locating of the node sequence may include: updating the existing acquired node bundle having the smaller distance difference from the newly acquired node bundle; searching for the node bundle having the smallest distance difference between the newly acquired node bundle and the updated existing acquired node bundles; and locating the node sequence using the node bundle having the smallest distance difference.
  • In the locating of the node sequence, the existing acquired node bundle having the smaller distance difference may be updated using a Gaussian process.
  • Position recognition fails when position information different from the position information represented by the graph structure generated by the mobile robot apparatus is detected through a wheel sensor; in this case, the mobile robot device rotates and acquires new image data and sensing data through the image sensor and the plurality of geomagnetic sensors.
  • The correcting of the graph structure may include: acquiring a new key node and node sequence based on the image data and sensing data newly acquired by the mobile robot device; and comparing the acquired new key node and node sequence with the previously generated graph structure to determine whether they match.
  • The correcting of the graph structure may include correcting the graph structure based on the new key node and node sequence matching the previously generated graph structure.
  • As described above, the mobile robot device can generate a graph structure based on location information by matching and fusing the data obtained from the image sensor and the plurality of geomagnetic sensors, respectively, and can correct the graph if the mobile robot device fails to recognize its position.
  • FIG. 1 is a block diagram briefly illustrating a configuration of a mobile robot apparatus according to an embodiment of the present disclosure
  • FIG. 2 is a block diagram illustrating in detail the configuration of a mobile robot apparatus according to an embodiment of the present disclosure
  • FIG. 3 is a block diagram illustrating a configuration of generating a graph by acquiring a sequence of key nodes and nodes according to an embodiment of the present disclosure
  • FIG. 4 is a block diagram illustrating a graph SLAM algorithm using an image sensor, according to an embodiment of the present disclosure
  • FIG. 5 is a flowchart illustrating a method of obtaining a key node from an image sensor according to an embodiment of the present disclosure
  • FIG. 6A is a diagram for describing a method of extracting feature points from image data, according to an embodiment of the present disclosure
  • FIG. 6B is a diagram for describing a submap in which feature points extracted from image data are accumulated, according to an embodiment of the present disclosure
  • FIG. 7 is a flowchart illustrating a method of obtaining a node bundle from sensing data according to an embodiment of the present disclosure
  • FIG. 8 is a diagram for describing a method of obtaining a node sequence by matching a node bundle according to an embodiment of the present disclosure
  • FIG. 9A is a diagram for describing a distance value between an existing node bundle and a newly obtained node bundle, according to an embodiment of the present disclosure.
  • FIG. 9B is a view for explaining a table for determining whether a distance value between a previously acquired node bundle and a newly acquired node bundle can be measured according to an embodiment of the present disclosure
  • FIG. 10 is a diagram for describing a method of determining a location of a newly obtained node bundle using an existing node bundle, according to an embodiment of the present disclosure
  • FIG. 11 is a diagram for describing a method of generating a graph structure using a key node and a node sequence, according to an embodiment of the present disclosure
  • FIG. 12A is a diagram for describing a method of matching a submap and a target node when the mobile robot apparatus fails to recognize a position according to an embodiment of the present disclosure
  • FIG. 12B is a diagram for describing a method of matching a feature point of a submap and a target node when the mobile robot apparatus fails to recognize a position according to an embodiment of the present disclosure
  • FIG. 13A is a diagram for describing a method of matching a target node when a mobile robot apparatus fails to recognize a position, according to an exemplary embodiment.
  • FIG. 13B is a diagram for describing a method of correcting a graph by matching a target node when the mobile robot apparatus fails to recognize a position according to an embodiment of the present disclosure
  • FIG. 14 is a flowchart illustrating a method of matching with a target node when the mobile robot apparatus fails to recognize a position according to an embodiment of the present disclosure
  • FIG. 15A is a diagram for describing a method of performing an experiment when a mobile robot apparatus fails to recognize a position, according to an exemplary embodiment.
  • FIG. 15B is a view for explaining a result of an experiment when a mobile robot apparatus fails to recognize a position according to one embodiment of the present disclosure
  • FIG. 16 is a flowchart illustrating a method of controlling a mobile robot device according to an embodiment of the present disclosure.
  • Each step should be understood to be non-limiting with respect to order unless a preceding step must be performed logically and temporally prior to a later step. That is, except for such exceptional cases, even if a process described as a later step is performed before a process described as a preceding step, the nature of the disclosure is not affected, and the scope of rights should be defined regardless of the order of the steps.
  • In this specification, "A or B" is defined to mean not only selectively indicating any one of A and B, but also including both A and B.
  • The term "comprising" in this specification encompasses further including other components in addition to the listed elements.
  • FIG. 1 is a block diagram schematically illustrating a configuration of a mobile robot apparatus 10 according to an embodiment of the present disclosure.
  • the mobile robot device 10 includes a processor 150, an image sensor 110, a plurality of geomagnetic sensors 120, a memory 130, and a driving unit 140.
  • The image sensor 110 may obtain 3D image data including a color image and a depth image, and a distance value corresponding to each pixel of the obtained 3D image data may be measured.
  • the image sensor 110 may include a stereo camera, a time of flight camera, an infrared camera, and the like, and may obtain depth data values of each pixel.
  • The image sensor 110 may operate in various situations, such as when the mobile robot apparatus 10 moves, rotates, or stops.
  • the plurality of geomagnetic sensors 120 may be an IMU (Inertial Measurement Unit) sensor incorporating a three-axis geomagnetic sensor. Magnetic field sensing data may be obtained through the plurality of geomagnetic sensors 120. Sensing data obtained through the plurality of geomagnetic sensors 120 may be stored in the memory 130.
  • the memory 130 may store instructions or data related to at least one other component of the mobile robot device 10.
  • the memory 130 may be implemented as a nonvolatile memory, a volatile memory, a flash-memory, a hard disk drive (HDD), or a solid state drive (SSD).
  • the memory 130 may be accessed by the processor 150 and read / write / modify / delete / update of data by the processor 150 may be performed.
  • In the present disclosure, the term memory refers to the memory 130, a ROM (not shown) or RAM (not shown) in the processor 150, or a memory card (not shown) mounted on the mobile robot device 10 (e.g., a micro SD card or a memory stick).
  • the memory 130 may store programs and data for configuring various screens to be displayed in the display area of the display.
  • the memory 130 may store the acquired 3D image data or the sensing data, and may store at least one instruction.
  • The memory 130 may include various program instructions, but some of them may be omitted, modified, or added depending on the type and characteristics of the mobile robot apparatus 10.
  • The driving unit 140 is a device that helps the mobile robot device 10 move, and may be configured as wheels, or as a device that can move in a non-standard moving form such as quadruped walking. In addition, the driving unit 140 may move forward and backward as well as turn left and right, and may also be configured to rotate. Therefore, the driving unit 140 may be configured in various ways according to the type and characteristics of the mobile robot device 10.
  • the processor 150 may control the overall operation of the mobile robot apparatus 10 using various instructions and modules stored in the memory 130.
  • the processor 150 acquires a plurality of image data through the image sensor 110 and senses through the plurality of geomagnetic sensors 120. Data can be obtained.
  • the processor 150 may extract feature points from the plurality of image data and obtain key nodes based on the feature points.
  • The processor 150 may extract the feature points from the plurality of image data using an ORB (Oriented FAST and Rotated BRIEF) algorithm.
  • the processor 150 may accumulate the extracted feature points to form a submap, perform a random sample consensus (RANSAC) algorithm on the submap, and match the nearest node to obtain a key node.
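The submap-to-node matching via RANSAC can be illustrated with a minimal sketch that estimates a 2D translation between matched point sets while rejecting outlier correspondences (the point format, iteration count, and inlier tolerance are assumptions; the patent does not specify them):

```python
import random

def ransac_translation(src, dst, iters=200, tol=0.5, seed=0):
    """Estimate a 2D translation mapping src points onto dst points,
    ignoring outlier correspondences. Points are (x, y) tuples paired
    by index (an assumed, simplified correspondence format)."""
    rng = random.Random(seed)
    best_t, best_inliers = (0.0, 0.0), -1
    pairs = list(zip(src, dst))
    for _ in range(iters):
        # Minimal sample for a pure translation: one correspondence
        (sx, sy), (dx, dy) = rng.choice(pairs)
        t = (dx - sx, dy - sy)
        # Count correspondences consistent with this candidate translation
        inliers = sum(
            1 for (px, py), (qx, qy) in pairs
            if abs(px + t[0] - qx) <= tol and abs(py + t[1] - qy) <= tol
        )
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers
```

For example, three points shifted by (2, 1) plus one outlier pair recover the translation (2, 1) with three inliers, because the outlier candidate explains only itself.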
  • The processor 150 may obtain a node sequence based on the sensing data. Specifically, a graph structure for estimating the position of the mobile robot apparatus 10 is generated based on the key nodes and the node sequence, and when position recognition of the mobile robot apparatus 10 fails, the graph structure may be corrected.
  • The processor 150 acquires a sensing data group by grouping the obtained sensing data, compares and matches the magnetic field values of the obtained sensing data group with the magnetic field values of a previously stored sensing data group, and, based on the matched sensing data group, may obtain a node bundle having a pair of graph structures.
  • The processor 150 determines whether the newly obtained node bundle exists between the previously stored node bundles; if so, only the newly acquired node bundle existing between the existing obtained node bundles may be extracted, and only that node bundle may be selected as the node sequence.
  • The processor 150 may update the existing acquired node bundle having the smaller distance difference from the newly acquired node bundle, search for the node bundle whose distance difference from the extracted newly acquired node bundle is smallest among the updated existing node bundles, and locate the node sequence using the node bundle having the smallest distance difference.
  • the processor 150 may update a previously obtained node bundle having a smaller distance difference using a Gaussian process.
  • The processor 150 may sense, through the wheel sensor 160 while the mobile robot device 10 moves, location information different from the location information represented by the graph structure generated by the mobile robot device 10. If position recognition fails, the mobile robot apparatus 10 may rotate and acquire new image data and sensing data through the image sensor 110 and the plurality of geomagnetic sensors 120.
  • the processor 150 acquires a new key node and node sequence based on the image data and the sensing data newly acquired by the mobile robot device 10, and based on the obtained new key node and node sequence. It may be determined whether or not a match is made by comparing the graph structure generated previously.
  • the processor 150 may correct the graph structure based on the new key node and node sequence matching the previously generated graph structure.
  • The mobile robot apparatus 10 includes a processor 150, an image sensor 110, a plurality of geomagnetic sensors 120, a wheel sensor 160, an input unit 170, a memory 130, a driving unit 140, a communication unit 180, a display unit 190, and a function unit 195.
  • FIG. 2 comprehensively shows various components for the case where the mobile robot device 10 is a device having various functions, such as a content sharing function, a communication function, and a display function. Therefore, depending on the embodiment, some of the components shown in FIG. 2 may be omitted or changed, and other components may be added.
  • the number of the image sensors 110 may be plural and may be disposed in the central portion or the outer portion of the mobile robot apparatus 10 to photograph the surrounding environment. However, this is only an embodiment, and the number of image sensors 110 may be added, modified, or changed depending on the type and characteristics of the mobile robot apparatus 10.
  • the plurality of geomagnetic sensors 120 may measure magnetic field distortion of an indoor environment, and extract magnetic field characteristics and the like using a sequence. The plurality of geomagnetic sensors 120 may also operate regardless of the operating situation of the mobile robot device 10.
  • the plurality of geomagnetic sensors 120 may be symmetrically disposed to the left and right of the driving unit 140 of the mobile robot apparatus 10, but this is merely an example and may be disposed at various positions. According to the type and characteristics of the mobile robot device 10, the number of the plurality of geomagnetic sensors 120 may be added, modified or changed.
  • The wheel sensor 160 may detect a situation in which the wheels of the mobile robot apparatus 10 are lifted off the floor and lose contact, that is, a kidnapping situation. In addition to the kidnapping situation, the wheel sensor 160 may detect a position recognition failure situation, in which location information different from the graph structure indicating the location information acquired by the mobile robot device 10 is detected.
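The failure check described above can be sketched as a simple discrepancy test between the graph's pose estimate and the wheel-odometry pose (the 2D pose format and the threshold value are assumptions for illustration):

```python
def position_recognition_failed(graph_pose, wheel_pose, threshold=0.5):
    """Flag a localization failure (e.g. after a kidnapping situation)
    when the wheel-odometry pose disagrees with the graph's pose
    estimate by more than a threshold. Poses are (x, y) tuples."""
    dx = graph_pose[0] - wheel_pose[0]
    dy = graph_pose[1] - wheel_pose[1]
    # Euclidean discrepancy between the two position estimates
    return (dx * dx + dy * dy) ** 0.5 > threshold
```

A small discrepancy (normal wheel slip) passes, while a large jump, as would follow from the robot being picked up and moved, triggers the correction path.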
  • the wheel sensor 160 may be disposed on the driving unit 140 abutting the floor, but this is only an example and may be disposed anywhere touching the floor.
  • the input unit 170 may receive a user command for controlling the overall operation of the mobile robot device 10.
  • the input unit 170 may be implemented as a touch screen.
  • the input unit 170 may be implemented as another input device (for example, a mouse, a keyboard, or a microphone) according to the type and characteristics of the mobile robot apparatus 10. Can be.
  • The driving unit 140 may allow the mobile robot apparatus 10 to move freely, including forward and backward movement and rotation.
  • the wheel sensor 160 may be disposed on the driving unit 140.
  • the driving unit 140 may be variously modified, such as quadruped walking.
  • the communication unit 180 is a component capable of communicating with various types of external devices according to various types of communication methods.
  • the communication unit 180 may include various communication chips such as a Wi-Fi chip, a Bluetooth chip, a wireless communication chip, and the like.
  • The Wi-Fi chip and the Bluetooth chip communicate using the Wi-Fi method and the Bluetooth method, respectively.
  • The wireless communication chip refers to a chip that performs communication according to various communication standards such as IEEE, Zigbee, 3rd Generation (3G), 3rd Generation Partnership Project (3GPP), Long Term Evolution (LTE), and the like.
  • the display 190 may output image data under the control of the processor 150.
  • Content received through the communication unit 180 may be displayed; when a graph structure is generated based on the data acquired by the mobile robot device 10, the corresponding graph may be displayed so that the movement path of the mobile robot device 10 can be estimated. This is only an example, and various functions may be displayed according to the type and characteristics of the mobile robot apparatus 10.
  • the function unit 195 may perform various functions according to the type and characteristics of the mobile robot device 10. For example, if the mobile robot apparatus 10 is a robot cleaner, it may adsorb dust in the surroundings through the function unit 195 while moving. If the mobile robot device 10 is a robot that moves and plays various multimedia contents, the function unit 195 may be hardware that plays multimedia contents.
  • The processor 150 may be one or more of a digital signal processor (DSP), a central processing unit (CPU), a micro controller unit (MCU), a micro processing unit (MPU), a controller, an application processor (AP), a communication processor (CP), or an ARM processor, or may be defined by the corresponding term.
  • the processor 150 may be implemented in a System on Chip (SoC), a large scale integration (LSI) in which a processing algorithm is embedded, or may be implemented in the form of a field programmable gate array (FPGA).
  • the processor 150 may perform various functions by executing computer executable instructions stored in the memory 130.
  • The processor 150 may include at least one of a separate AI dedicated processor, a graphics processing unit (GPU), a neural processing unit (NPU), and a visual processing unit (VPU) to perform an artificial intelligence function.
  • FIG. 3 is a block diagram illustrating a configuration of generating a graph by acquiring a sequence of key nodes and nodes according to an embodiment of the present disclosure.
  • the processor 150 may include a key node acquirer 310, a node sequence acquirer 320, and a graph generator 330 illustrated in FIG. 3.
  • the key node acquisition unit 310 may obtain a key node by extracting and matching feature points by using image data acquired by the image sensor 110 stored in the memory 130.
  • the image data is three-dimensional data including color data and depth data, and feature points may be extracted by using an ORB algorithm.
  • Feature points may also be extracted from the three-dimensional image data using other algorithms (e.g., Speeded-Up Robust Features (SURF) and Scale-Invariant Feature Transform (SIFT)).
  • the key node acquisition unit 310 may overcome an environment in which the feature points are insufficient by accumulating the extracted feature points to form a submap.
  • The submap is formed by accumulating three-dimensional feature points in the key node acquisition unit 310, and may be based on the reliability of the odometry of the robot.
  • The key node acquisition unit 310 may obtain key nodes using the three-dimensional image data acquired from the image sensor 110 in a new environment.
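The submap accumulation step can be sketched as shifting each frame's feature points by that frame's odometry pose into a common map frame (2D translation only; a simplification of the 3D case described above, with an assumed `(pose, points)` frame format):

```python
def accumulate_submap(frames):
    """Accumulate per-frame feature points into one submap by shifting
    each frame's points by its odometry pose. Each frame is a tuple
    ((ox, oy), [(px, py), ...]) — a hypothetical, simplified format."""
    submap = []
    for (ox, oy), points in frames:
        for (px, py) in points:
            # Transform the point from the frame's local coordinates
            # into the common submap frame using the odometry pose
            submap.append((px + ox, py + oy))
    return submap
```

Accumulating over several frames this way yields a denser point set, which is what lets the matching step work even in environments where any single frame has too few feature points.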
  • the node sequence acquirer 320 may acquire a node sequence based on sensing data acquired through the plurality of geomagnetic sensors 120 stored in the memory 130.
  • the sensing data includes data in which magnetic field distortion of an indoor environment is reflected.
  • Because the sensing data measures magnetic field distortion, the magnetic field value may be derived using a sequence of data rather than the value itself at a specific instant.
  • the node sequence obtainer 320 may group sensing data, match magnetic field values of the grouped sensing data, and generate a node bundle with the matched sensing data group.
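The grouping-and-matching step can be sketched as follows, assuming each sensing data group reduces to a list of scalar magnetic field values and matching uses a summed squared difference score (the patent specifies neither the data layout nor the scoring function):

```python
def match_sensing_group(query, stored_groups):
    """Match a group of magnetic-field readings against previously
    stored groups by summed squared difference over the sequence;
    returns the index of the best-matching group and its score."""
    def sq_dist(a, b):
        # Compare whole sequences, not single instantaneous values
        return sum((x - y) ** 2 for x, y in zip(a, b))
    scores = [sq_dist(query, g) for g in stored_groups]
    best = min(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]
```

Matching whole sequences rather than single readings reflects the point above: an individual magnetic reading is ambiguous, but the distortion profile along a short trajectory segment is far more distinctive.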
  • the node bundle may be matched again to obtain a node sequence in the form of a loop closing.
  • The loop-closing form refers to a closed loop, meaning that the nodes are connected in the form of a sequence.
  • the node sequence obtainer 320 may apply a Gaussian process in the process of obtaining a node sequence by performing node bundle matching.
  • The Gaussian process is a type of supervised learning over a set of random variables, where the variables in the set follow a joint Gaussian distribution. Specifically, this will be described with reference to FIG. 10.
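A one-dimensional Gaussian-process regression sketch is shown below, as might be used to interpolate node-bundle positions along a trajectory; the RBF kernel, length scale, and noise level are assumptions, since the patent does not specify them:

```python
import math

def gp_predict(xs, ys, xq, length=1.0, noise=1e-6):
    """1D Gaussian-process regression with an RBF kernel: computes the
    posterior mean at query point xq from training pairs (xs, ys)."""
    k = lambda a, b: math.exp(-0.5 * ((a - b) / length) ** 2)
    n = len(xs)
    # Kernel matrix K + noise * I
    K = [[k(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    # Solve K * alpha = ys by Gaussian elimination with partial pivoting
    A = [row[:] + [y] for row, y in zip(K, ys)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    alpha = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = A[r][n] - sum(A[r][c] * alpha[c] for c in range(r + 1, n))
        alpha[r] = s / A[r][r]
    # Posterior mean: k(xq, xs) dot alpha
    return sum(k(xq, xs[i]) * alpha[i] for i in range(n))
```

With a small noise term, the posterior mean reproduces the training targets at the training inputs and interpolates smoothly between them, which is the property exploited when updating node-bundle positions.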
  • The node sequence acquisition unit 320 may compare a newly obtained node bundle query value with previously obtained node bundles to extract the query having the smallest error score. The node bundle having the smallest distance difference can then be selected as a node sequence in a loop-closing form.
  • The graph generator 330 may generate a graph structure indicating the position information of the mobile robot apparatus 10 based on the key node and node sequence acquired by the key node acquirer 310 and the node sequence acquirer 320.
  • the graph structure generated by the graph generator 330 may take the form of a loop closing and may be composed of key nodes and node sequences. Such a graph structure allows 3D image data matching and sensing data matching to be performed simultaneously.
  • in a kidnapping situation, in which the wheel sensor 160 detects that the mobile robot device 10 is currently being held, the graph generator 330 can correct the graph based on the data obtained from the image sensor 110 and the plurality of geomagnetic sensors 120.
  • the graph structure generated by the graph generator 330 may be displayed on the display 190; when a preset user command is received through the input unit 170 (e.g., a touch screen, a microphone, a keyboard, etc.), the simulation process can be shown on the display 190.
  • FIG. 4 is a block diagram illustrating a graph SLAM (Simultaneous Localization and Mapping) algorithm using the image sensor 110 according to an embodiment of the present disclosure.
  • FIG. 4 shows the process 400 of obtaining key nodes based on the 3D image data acquired by the image sensor 110, generating a graph structure based on the obtained key nodes, estimating the position information of the mobile robot apparatus 10, and building a map.
  • first, 3D image data including depth data is acquired through the image sensor 110, which is a component of the mobile robot apparatus 10 (410).
  • the continuous or discontinuous three-dimensional image data is acquired through the image sensor 110, and the obtained three-dimensional image data is stored in the memory 130.
  • the mobile robot device 10 may perform a process of visual odometry 420.
  • the visual odometry 420 process refers to a process of determining the position and direction of the mobile robot device 10 by analyzing the image data acquired by the mobile robot device 10.
  • the mobile robot device 10 of the present disclosure uses visual odometry 420, which can be used effectively even for mobile robot devices with non-standard locomotion, such as legged walking robots; however, this is only one embodiment.
  • in the visual odometry 420 process, the mobile robot apparatus 10 first performs image correction on the obtained three-dimensional image data.
  • for example, the mobile robot apparatus 10 may correct distortions, such as lens distortion, that can occur while the image sensor 110 acquires 3D image data.
  • the mobile robot apparatus 10 may extract a feature point of the three-dimensional image data after the image correction is completed, and build an optical flow field with the extracted feature point.
  • the mobile robotic device 10 may determine the position and direction of the mobile robotic device 10 through the optical flow vector of the optical flow field.
  • in one embodiment, the mobile robot apparatus 10 may extract feature points from the three-dimensional image data using an ORB (Oriented FAST and Rotated BRIEF) algorithm, which combines a FAST-9 corner detector with rotation-aware BRIEF descriptors.
  • the mobile robot device 10 accumulates and matches the feature points extracted from the obtained 3D image data; this process is referred to as node management 430.
  • specifically, the mobile robot apparatus 10 may match submaps, each accumulated by collecting the feature points of the nodes of the mobile robot apparatus 10.
  • Equation (1) above explains that the processor 150 may construct a submap by accumulating the feature points extracted from the 3D image data.
  • M_j on the left side denotes the j-th submap of accumulated feature points.
  • k and l are the index numbers of the first and last nodes included in the j-th submap.
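The accumulation described by Equation (1), collecting the feature points of nodes k through l into the j-th submap, can be sketched as follows (the data layout and the name `build_submap` are illustrative assumptions, not the patent's code):

```python
import numpy as np

def build_submap(nodes, k, l):
    """Accumulate the 3-D feature points of nodes k..l (inclusive) into one
    submap, mirroring Equation (1): the submap M_j is the union of the
    feature points of its member nodes."""
    return np.vstack([nodes[i] for i in range(k, l + 1)])

# Hypothetical nodes, each holding five 3-D feature points
nodes = [np.random.rand(5, 3) for _ in range(10)]
submap = build_submap(nodes, 2, 4)  # nodes 2, 3, 4 -> 15 accumulated points
```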
  • the mobile robot apparatus 10 may match the submap formed by accumulating the feature points in two stages.
  • in the first stage, the mobile robot apparatus 10 may search for the submap closest to the target node by matching several submaps against the target node. Because it is difficult to estimate the position of the mobile robot device 10 by matching individual feature points in an environment where feature points are scarce, each submap (that is, its accumulated feature points) is treated as a single node and rough matching is performed against the target node.
  • the mobile robot device 10 may utilize a RANSAC algorithm based on rigid transformation when performing rough matching.
  • a rigid body transformation is a transformation that changes only the position and orientation of an object while preserving its shape and size.
  • RANSAC (Random Sample Consensus) is an algorithm that fits a regression model to the data belonging to the underlying distribution (inliers) while remaining robust to values outside it (outliers).
  • first, the mobile robot device 10 selects an arbitrary number of the accumulated feature points, assumes the selected feature points to be inliers (data belonging to the underlying distribution), and fits a regression model to them.
  • next, the mobile robot apparatus 10 may determine whether each of the remaining feature points falls within a preset tolerance of the model; feature points within the tolerance are added to the inlier set and the regression model is reconstructed.
  • the mobile robot apparatus 10 may then measure the error between the reconstructed regression model and the inlier set, and decide whether to repeat the above process depending on whether the error exceeds a preset tolerance.
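The three RANSAC steps above (sample a minimal set, collect inliers within a tolerance, refit on the inlier set) can be sketched for a simple 2-D line model; the line model and all parameter values are illustrative stand-ins for the patent's rigid-transformation setting:

```python
import numpy as np

def ransac_line(points, n_iters=100, tol=0.1, seed=0):
    """Minimal RANSAC for a 2-D line y = a*x + b: sample a minimal set,
    fit, collect inliers within a preset tolerance, keep the model with
    the largest inlier set, then refit on those inliers."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue  # degenerate sample, cannot fit a slope
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = residuals < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (a, b)
    # Reconstruct the regression model on the final inlier set
    x, y = points[best_inliers, 0], points[best_inliers, 1]
    a, b = np.polyfit(x, y, 1)
    return a, b, best_inliers

# Points on y = 2x + 1 plus two gross outliers
x = np.linspace(0.0, 1.0, 20)
pts = np.column_stack([x, 2 * x + 1])
pts = np.vstack([pts, [[0.5, 10.0], [0.2, -5.0]]])
a, b, inliers = ransac_line(pts)
```

The outliers never enter the final fit, so the recovered slope and intercept match the underlying line, which is the property that lets submap matching tolerate spurious feature correspondences.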
  • in the second stage, the mobile robot apparatus 10 may acquire the key node closest to the target node by matching the target node against the nodes whose feature points were accumulated in the submap found in the first stage.
  • that is, in the first stage the mobile robot device 10 treats each RANSAC-processed submap as a single node and matches it against the target node to find the nearest submap, and in the second stage it matches the individual nodes of that submap against the target node.
  • the mobile robot device 10 may generate depth-based constraints 450 and feature-based constraints 460 for each of the key nodes.
  • the mobile robot device 10 may generate a constraint by connecting the regions where the depth data of a key node matches the depth data of the target node; such a constraint is a depth-based constraint 450.
  • likewise, the mobile robot device 10 may generate a constraint by connecting a key node's features to the matching features of the target node; such a constraint is a feature-based constraint 460.
  • the mobile robot apparatus 10 generates depth-based constraints 450 and feature-based constraints 460 based on the key nodes to form a loop closing shape and optimize the graph structure.
  • optimizing the graph structure means that the mobile robot device 10 generates the key-node-based graph so that it represents the position information of the mobile robot device 10 as accurately as possible, minimizing the error.
  • Bag-of-Words (BoW) scene matching 440 is a matching method that the mobile robot apparatus 10 may use in situations such as kidnapping.
  • the mobile robot apparatus 10 extracts representative feature points from the feature points extracted from previously acquired image data and generates a codebook consisting of these representative feature points. Then, when the mobile robot device 10 is placed back on the floor after a kidnapping situation, it can check whether the current position matches a previously visited position using the codebook.
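A minimal sketch of codebook-based scene matching, using nearest-centroid quantization over a hypothetical 2-D descriptor space (a real system would use ORB descriptors and a learned codebook):

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantize each descriptor to its nearest codebook word and
    return the normalized word-frequency histogram."""
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Hypothetical codebook of 3 representative feature points (visual words)
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])

# Descriptors observed at two visits to (supposedly) the same place
scene_a = np.array([[0.1, 0.0], [0.9, 1.1], [1.9, 0.1]])
scene_b = np.array([[0.0, 0.1], [1.1, 0.9], [2.1, -0.1]])

h_a = bow_histogram(scene_a, codebook)
h_b = bow_histogram(scene_b, codebook)
# Histogram-intersection-style similarity: 1.0 means identical word histograms
similarity = 1.0 - 0.5 * np.abs(h_a - h_b).sum()
```

Even though the raw descriptors differ slightly between the two visits, quantizing them to the same codebook words yields identical histograms, which is how the device can recognize a previously visited place after being kidnapped.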
  • FIG. 5 is a flowchart illustrating a method of obtaining a key node from the image sensor 110 according to an embodiment of the present disclosure.
  • the image sensor 110 such as an RGB-D sensor may acquire 3D image data.
  • the mobile robot apparatus 10 may store the obtained 3D image data in the memory 130.
  • the mobile robot apparatus 10 may extract a feature point by applying an ORB algorithm, in one embodiment, from the plurality of image data stored in the memory 130 (S520).
  • the mobile robot apparatus 10 may accumulate the extracted feature points to form a submap (S530), apply a RANSAC algorithm, and perform matching (S540).
  • the matching method is divided into two steps as described above.
  • the mobile robot device 10 may view the submap that has undergone the RANSAC algorithm as one node and perform rough matching with the target node.
  • the mobile robot apparatus 10 may search for the nearest node by matching the nodes of the matched submap and the target node (S550). Thereafter, the mobile robot apparatus 10 may acquire a node that is closest to the target node, that is, a key node (S560).
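The two-stage search of S530 to S560, rough matching against whole submaps followed by matching the nodes inside the chosen submap, can be sketched as follows; the centroid-distance criterion and the data layout are simplifying assumptions, not the patent's actual matching metric:

```python
import numpy as np

def find_key_node(submaps, target):
    """Two-stage matching sketch: (1) rough-match the target against each
    submap treated as a single unit (here, via its centroid) to pick the
    nearest submap, then (2) pick the node inside that submap closest to
    the target. `submaps` is a list of lists of node positions."""
    t = np.mean(target, axis=0)
    # Stage 1: rough matching, one distance per submap
    centroids = [np.mean(s, axis=0) for s in submaps]
    j = int(np.argmin([np.linalg.norm(c - t) for c in centroids]))
    # Stage 2: match the individual nodes of the chosen submap
    dists = [np.linalg.norm(np.asarray(n) - t) for n in submaps[j]]
    return j, int(np.argmin(dists))

submaps = [
    [np.array([0.0, 0.0]), np.array([1.0, 0.0])],     # submap 0, far away
    [np.array([10.0, 10.0]), np.array([11.0, 10.0])], # submap 1, nearby
]
target = np.array([[10.2, 10.1]])  # observed target-node position
submap_idx, key_node_idx = find_key_node(submaps, target)
```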
  • 6A and 6B are diagrams for describing a method of extracting feature points from image data, according to an exemplary embodiment.
  • FIG. 6A illustrates the mobile robot device 10 extracting feature points from three-dimensional image data.
  • for example, the mobile robot apparatus 10 may extract the vicinity of the leg 610 of the whiteboard as a feature point, since it is consistently brighter than the pixels near the corridor floor.
  • similarly, the mobile robot apparatus 10 may extract the portion 620 where white tape is attached, which is brighter than the nearby pixels of the corridor floor, as a feature point.
  • FIG. 6B shows a submap formed by accumulating the feature points extracted from the 3D image data.
  • the mobile robot apparatus 10 may apply the RANSAC algorithm to the submap formed by accumulating the feature points extracted from the image data and match the target node.
  • the mobile robot apparatus 10 may obtain a key node by matching the feature points constituting the matched submap and the target node.
  • FIG. 7 is a flowchart illustrating a method of obtaining a node bundle from sensing data according to an embodiment of the present disclosure.
  • the sensing data may be acquired by the plurality of geomagnetic sensors 120.
  • the sensing data is obtained by measuring the magnetic field distortion of an indoor environment and must therefore be handled in sequence form; the mobile robot apparatus 10 may group the sensing data into sequences (S720).
  • the mobile robot apparatus 10 may compare and match the magnetic field values of a previously stored sensing data group and a newly acquired sensing data group (S740). By comparing and matching the magnetic field values of the sensing data groups, it is possible to construct a loop closing whose components are node sequences.
  • Equation (2) above calculates a matching cost when the sensing data group is expressed as a vector.
  • for example, the mobile robot apparatus 10 may obtain a matching cost by calculating the Euclidean distance between the previously stored sensing data group and the newly acquired sensing data group.
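A minimal sketch of this matching cost, assuming the sensing-data groups are plain vectors of magnetic-field values (all values below are made up):

```python
import numpy as np

def matching_cost(stored_group, new_group):
    """Matching cost between two sensing-data groups expressed as vectors,
    computed as their Euclidean distance, in the spirit of Equation (2)."""
    return np.linalg.norm(np.asarray(stored_group) - np.asarray(new_group))

stored = [47.2, 48.1, 46.5, 45.9]  # previously stored magnetic-field values
new    = [47.0, 48.3, 46.4, 46.1]  # newly acquired values
cost = matching_cost(stored, new)   # small cost -> the groups match well
```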
  • the mobile robot device 10 may generate a node bundle with the matched sensing data group (S760). That is, the mobile robot device 10 may generate a node bundle having a pair of graph structures as matched sensing data groups.
  • FIGS. 8 to 10 are diagrams for describing a method of obtaining a node sequence by matching node bundles according to an embodiment of the present disclosure.
  • FIG. 8 shows the process in which the mobile robot device 10 finds a matching point by comparing the distance difference between a previously acquired node bundle 800 (the target) and a newly acquired node bundle 810 (the query).
  • the mobile robot device 10 may calculate an error score by sliding the query 810 along the target 800.
  • the mobile robot device 10 may repeat the process 820 of obtaining the distance difference from the target 800 while successively shifting the position of the query 810. The table 890 shows that the difference between the target 800 and the query 810 is smallest in case (c), compared to cases (b) 840 and (d) 880.
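The sliding comparison of FIG. 8 can be sketched as a search for the offset with the smallest error score; the signals below are illustrative:

```python
import numpy as np

def best_offset(target, query):
    """Slide the query along the target and return the offset with the
    smallest error score (sum of squared differences), as in FIG. 8."""
    scores = []
    for off in range(len(target) - len(query) + 1):
        window = target[off:off + len(query)]
        scores.append(((window - query) ** 2).sum())
    scores = np.array(scores)
    return int(scores.argmin()), scores

target = np.array([1.0, 2.0, 4.0, 8.0, 4.0, 2.0, 1.0])
query = np.array([4.0, 8.0, 4.0])  # matches the target at offset 2
offset, scores = best_offset(target, query)
```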
  • in FIG. 9A, a node bundle consisting of the pair of graph structures target_L 910 and target_R 930, and a node bundle consisting of the pair of graph structures query_L 920 and query_R 940, are illustrated.
  • FIG. 9B shows a table indicating whether the target 800 lies between the queries 810, that is, whether the mobile robot device 10 is within a range in which physical values between each node bundle can be measured.
  • the mobile robot apparatus 10 may calculate the distance differences to the target_L 910 and the target_R 930 based on the query_L 920 and the query_R 940. For example, the mobile robot apparatus 10 may calculate the distance between the query_L 920 and the target_L 910 as LL, and the distance between the query_L 920 and the target_R 930 as LR.
  • the mobile robot device 10 may measure the distance difference between the query bundle and the graph structures constituting the target bundle only when the query bundle lies to the left or to the right of them.
  • FIG. 10 shows the process of extracting only the queries lying between the targets 1020, finding the locations of the extracted queries 1010 via a Gaussian process, and updating to a target 1030 with a smaller error score.
  • the mobile robot device 10 may extract only a query existing between the targets 1020 and select the extracted query as a node sequence.
  • in addition, the mobile robot apparatus 10 may use a Gaussian process to generate a target 1030 whose distance difference from the query is smaller than that of the previous target 1040, in order to determine the position of the node sequence.
  • the mobile robot apparatus 10 repeats the Gaussian process until the distance to the query selected as the node sequence nearly disappears, that is, until the distance difference is smallest, and can finally determine the position of the node sequence.
  • FIG. 11 is a diagram for describing a method of generating a graph structure using a key node and a node sequence, according to an exemplary embodiment.
  • the mobile robot device 10 may simultaneously configure a graph structure 1110 having a key node 1100 obtained based on 3D image data and a node sequence obtained based on sensing data, as components.
  • the mobile robot apparatus 10 may create the constraints 1120 using the key node 1100 and the node sequence, and perform graph SLAM by generating a graph structure in the form of a loop closing.
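As a toy stand-in for this loop-closing optimization, a 1-D pose graph can be optimized by linear least squares: odometry constraints relate neighboring poses, a loop-closing constraint ties the ends of the trajectory together, and the optimizer spreads the accumulated error over the loop. Everything here is a simplified illustration, not the patent's optimizer:

```python
import numpy as np

def optimize_poses(n, constraints, anchor=0.0):
    """Least-squares optimization of n 1-D poses given relative constraints
    (i, j, measured offset x_j - x_i). Pose 0 is anchored to `anchor` to fix
    the gauge freedom. A toy analogue of graph SLAM over key nodes and
    node sequences."""
    rows, rhs = [], []
    a0 = np.zeros(n)
    a0[0] = 1.0
    rows.append(a0); rhs.append(anchor)       # anchor the first pose
    for i, j, d in constraints:
        a = np.zeros(n)
        a[i], a[j] = -1.0, 1.0                # residual: (x_j - x_i) - d
        rows.append(a); rhs.append(d)
    A, b = np.array(rows), np.array(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Odometry says each step moves +1.0, but a loop-closing constraint says
# pose 4 coincides with pose 0; the optimizer distributes the error.
constraints = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 4, 1.0), (0, 4, 0.0)]
poses = optimize_poses(5, constraints)
```

With five equally weighted constraints around the cycle, the 4.0 units of accumulated odometry error are split evenly, so each step shrinks from 1.0 to 0.2 and the loop nearly closes.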
  • FIGS. 12 and 13 are diagrams for describing a method of correcting the graph when the mobile robot apparatus 10 fails to recognize its position, according to an embodiment of the present disclosure, and FIG. 14 is a flowchart illustrating a method of correcting the graph in that situation.
  • the wheel sensor 160 may detect a case in which the mobile robot device 10 fails to recognize its position, such as a kidnapping situation in which the mobile robot device 10 is lifted, and the mobile robot device 10 may then obtain the key node and node sequence again using the image sensor 110 and the plurality of geomagnetic sensors 120.
  • the mobile robot apparatus 10 may perform rough matching on the relocation node 1220 in a situation such as kidnapping, based on submaps in which feature points have been accumulated in the same manner.
  • the mobile robot apparatus 10 may match a submap 1200 including a node relatively closer to the relocation node 1220 among various submaps.
  • the mobile robot apparatus 10 may identify that the submap 1210 including the relatively distant node does not match the relocation node 1220.
  • the mobile robot apparatus 10 may match the submap 1200 including the node closer to the relocation node 1220 again to search for the best match.
  • FIG. 13 illustrates in detail the loop-closed graph structure in which the mobile robot device 10 finds the best-matching pair, that is, the best-matching node, in the kidnapping 1300 situation.
  • when the mobile robot device 10 finds the best-matching node 1310, it places a constraint 1320 on the best-matching node 1310 and performs the existing graph SLAM.
  • FIG. 14 is a flowchart for describing a method of correcting a graph when the mobile robot apparatus 10 fails to recognize a position, according to an exemplary embodiment.
  • the wheel sensor 160 may detect a kidnapping situation, such as the mobile robot device 10 being lifted (S1410). At this time, the mobile robot device 10 may stop generating odometry constraints. The mobile robot apparatus 10 may then rotate and simultaneously acquire three-dimensional image data and sensing data using the image sensor 110 and the plurality of geomagnetic sensors 120 (S1430).
  • the mobile robot apparatus 10 may newly acquire a key node and a node sequence based on the acquired new 3D image data and sensing data, and attempt to match the relocation node (S1440).
  • the mobile robot device 10 may determine the next step according to whether the matching succeeds (S1450). If the matching does not succeed, the mobile robot device 10 attempts matching again (S1440); if it succeeds, the graph may be corrected based on the matched key node and node sequence (S1460).
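The retry loop of S1430 to S1460 can be sketched as plain control flow; `acquire`, `match`, and `correct_graph` are hypothetical stand-ins for the device's actual routines:

```python
def relocalize(acquire, match, correct_graph, max_attempts=10):
    """Control-flow sketch of S1430-S1460: repeatedly acquire new data and
    attempt matching against the relocation node; on success, correct the
    graph. All three callables are hypothetical stand-ins."""
    for _ in range(max_attempts):
        key_node, node_seq = acquire()        # S1430: new image + sensing data
        result = match(key_node, node_seq)    # S1440: attempt matching
        if result is not None:                # S1450: did matching succeed?
            correct_graph(result)             # S1460: correct the graph
            return True
    return False                              # gave up after max_attempts
```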
  • FIG. 15 is a diagram for describing an experimental method and its results for the case in which the mobile robot apparatus 10 fails to recognize its position, according to an exemplary embodiment.
  • FIG. 15A illustrates an experimental method for observing how the mobile robot apparatus 10 operates when an experimenter randomly creates a kidnapping situation 1510 for the mobile robot apparatus 10 in a rectangular test site 13 m long and 8.5 m wide.
  • FIG. 15B illustrates the results obtained when the experiment planned in FIG. 15A was actually performed.
  • the dotted line 1520 indicates the trajectory of the graph SLAM generated by the mobile robot device 10 in the kidnapping situation.
  • the solid line 1530 is the odometry trajectory of the mobile robot device 10; even though the kidnapping 1510 occurs, it can be seen that the result matches the graph SLAM trajectory 1520.
  • FIG. 16 is a flowchart illustrating a control method of the mobile robot apparatus 10 according to an exemplary embodiment.
  • the mobile robot apparatus 10 may acquire a plurality of image data through the image sensor 110 and acquire sensing data through the plurality of geomagnetic sensors 120 (S1610).
  • the mobile robot apparatus 10 may extract a feature point from a plurality of image data, obtain a key node, and obtain a node sequence based on the sensing data (S1620).
  • the mobile robot apparatus 10 may generate a graph structure indicating the position information of the mobile robot apparatus 10 using the obtained key node and node sequence (S1630).
  • when the mobile robot device 10 fails to recognize its location (for example, in a situation such as kidnapping) (S1640), the mobile robot device 10 acquires a new key node and node sequence and can correct the graph structure indicating the position information of the mobile robot device 10 (S1650).
  • the term “part” or “module” as used in the present disclosure includes a unit composed of hardware, software, or firmware, and may be used interchangeably with terms such as logic, logic block, component, or circuit.
  • the "unit” or “module” may be an integrally formed part or a minimum unit or part of performing one or more functions.
  • for example, the module may be configured as an application-specific integrated circuit (ASIC).
  • the device may be a device capable of calling a stored command from a storage medium and operating according to the called command, and may include an electronic device according to the disclosed embodiments.
  • the processor may perform a function corresponding to the instruction directly or by using other components under the control of the processor.
  • the instructions can include code generated or executed by a compiler or interpreter.
  • the device-readable storage medium may be provided in the form of a non-transitory storage medium.
  • 'non-transitory' means that the storage medium does not include a signal and is tangible; it does not distinguish whether data is stored semi-permanently or temporarily on the storage medium.
  • a method may be provided included in a computer program product.
  • the computer program product may be traded between the seller and the buyer as a product.
  • the computer program product may be distributed online in the form of a device-readable storage medium (e.g., a compact disc read-only memory (CD-ROM)) or through an application store (e.g., Play Store™).
  • at least a portion of the computer program product may be stored at least temporarily on a storage medium such as a server of a manufacturer, a server of an application store, or a relay server, or may be temporarily created.
  • each component (e.g., a module or a program) may be composed of a single entity or multiple entities, and some of the above-described components (e.g., modules or programs) may be omitted or combined into one entity.
  • operations performed by a module, program, or other component may be executed sequentially, in parallel, repeatedly, or heuristically; at least some of the operations may be executed in a different order or omitted, or another operation may be added.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Mechanical Engineering (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Robotics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A mobile robot device and a control method therefor are provided. The mobile robot device comprises: a driving unit; an image sensor; a plurality of geomagnetic sensors; a memory for storing at least one instruction; and a processor for executing the at least one instruction, wherein the processor may acquire a plurality of image data through the image sensor while the mobile robot device travels by means of the driving unit, acquire sensing data through the plurality of geomagnetic sensors, extract a feature point from the plurality of image data and obtain key nodes on the basis of the feature point, obtain a node sequence on the basis of the sensing data, generate a graph structure that estimates a position of the mobile robot device on the basis of the key nodes and the node sequence, and correct the graph structure when position recognition of the mobile robot device fails.
PCT/KR2019/007499 2018-06-22 2019-06-21 Dispositif de robot mobile destiné à corriger une position par fusion d'un capteur d'image et d'une pluralité de capteurs géomagnétiques, et procédé de commande Ceased WO2019245320A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/055,415 US12001218B2 (en) 2018-06-22 2019-06-21 Mobile robot device for correcting position by fusing image sensor and plurality of geomagnetic sensors, and control method

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201862688700P 2018-06-22 2018-06-22
US62/688,700 2018-06-22
KR10-2019-0024283 2019-02-28
KR1020190024283A KR102601141B1 (ko) 2018-06-22 2019-02-28 이미지 센서와 복수의 지자기 센서를 융합하여 위치 보정하는 이동 로봇 장치 및 제어 방법

Publications (1)

Publication Number Publication Date
WO2019245320A1 true WO2019245320A1 (fr) 2019-12-26

Family

ID=68983379

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/007499 Ceased WO2019245320A1 (fr) 2018-06-22 2019-06-21 Dispositif de robot mobile destiné à corriger une position par fusion d'un capteur d'image et d'une pluralité de capteurs géomagnétiques, et procédé de commande

Country Status (1)

Country Link
WO (1) WO2019245320A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111275697A (zh) * 2020-02-10 2020-06-12 西安交通大学 一种基于orb特征匹配和lk光流法的电池丝印质量检测方法
CN114358166A (zh) * 2021-12-29 2022-04-15 青岛星科瑞升信息科技有限公司 一种基于自适应k均值聚类的多目标定位方法
CN115993807A (zh) * 2023-03-23 2023-04-21 日照鲁光电子科技有限公司 一种碳化硅的生产监测优化控制方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100801261B1 (ko) * 2006-11-03 2008-02-04 주식회사 에스씨티 지자기 센서가 구비된 로봇의 제어방법
CN103674015A (zh) * 2013-12-13 2014-03-26 国家电网公司 一种无轨化定位导航方法及装置
US20160377688A1 (en) * 2015-06-05 2016-12-29 Irobot Corporation Magnetic field localization and navigation
KR20160150504A (ko) * 2015-06-22 2016-12-30 한국과학기술원 실내 자기장을 이용한 이동 로봇의 위치 인식 방법 및 장치
JP2017045447A (ja) * 2015-08-28 2017-03-02 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America 地図生成方法、自己位置推定方法、ロボットシステム、およびロボット

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100801261B1 (ko) * 2006-11-03 2008-02-04 주식회사 에스씨티 지자기 센서가 구비된 로봇의 제어방법
CN103674015A (zh) * 2013-12-13 2014-03-26 国家电网公司 一种无轨化定位导航方法及装置
US20160377688A1 (en) * 2015-06-05 2016-12-29 Irobot Corporation Magnetic field localization and navigation
KR20160150504A (ko) * 2015-06-22 2016-12-30 한국과학기술원 실내 자기장을 이용한 이동 로봇의 위치 인식 방법 및 장치
JP2017045447A (ja) * 2015-08-28 2017-03-02 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America 地図生成方法、自己位置推定方法、ロボットシステム、およびロボット

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111275697A (zh) * 2020-02-10 2020-06-12 西安交通大学 一种基于orb特征匹配和lk光流法的电池丝印质量检测方法
CN111275697B (zh) * 2020-02-10 2022-04-22 西安交通大学 一种基于orb特征匹配和lk光流法的电池丝印质量检测方法
CN114358166A (zh) * 2021-12-29 2022-04-15 青岛星科瑞升信息科技有限公司 一种基于自适应k均值聚类的多目标定位方法
CN114358166B (zh) * 2021-12-29 2023-11-07 青岛星科瑞升信息科技有限公司 一种基于自适应k均值聚类的多目标定位方法
CN115993807A (zh) * 2023-03-23 2023-04-21 日照鲁光电子科技有限公司 一种碳化硅的生产监测优化控制方法及系统
CN115993807B (zh) * 2023-03-23 2023-06-09 日照鲁光电子科技有限公司 一种碳化硅的生产监测优化控制方法及系统

Similar Documents

Publication Publication Date Title
US20190355173A1 (en) Leveraging crowdsourced data for localization and mapping within an environment
CN110411441B (zh) 用于多模态映射和定位的系统和方法
WO2020046038A1 (fr) Robot et procédé de commande associé
WO2019031714A1 (fr) Procédé et appareil de reconnaissance d'objet
WO2011013862A1 (fr) Procédé de commande pour la localisation et la navigation de robot mobile et robot mobile utilisant un tel procédé
WO2017091008A1 (fr) Robot mobile et procédé de commande pour ce dernier
WO2019059505A1 (fr) Procédé et appareil de reconnaissance d'objet
WO2018074903A1 (fr) Procédé de commande de robot mobile
WO2016074169A1 (fr) Procédé de détection de cible, dispositif détecteur, et robot
WO2015194864A1 (fr) Dispositif de mise à jour de carte de robot mobile et procédé associé
CN113614784A (zh) 利用稀疏rgb-d slam和交互感知对对象进行检测、跟踪和三维建模
WO2019245320A1 (fr) Dispositif de robot mobile destiné à corriger une position par fusion d'un capteur d'image et d'une pluralité de capteurs géomagnétiques, et procédé de commande
WO2024058618A1 (fr) Approche probabiliste pour unifier des représentations pour une cartographie robotique
CN114283198B (zh) 一种基于rgbd传感器的去除动态目标的slam方法
CN115311353B (zh) 一种多传感器多手柄控制器图优化紧耦合追踪方法及系统
WO2020075954A1 (fr) Système et procédé de positionnement utilisant une combinaison de résultats de reconnaissance d'emplacement basée sur un capteur multimodal
WO2018207969A1 (fr) Procédé de détection et de classification d'objet
WO2018124500A1 (fr) Procédé et dispositif électronique pour fournir un résultat de reconnaissance d'objet
WO2024155137A1 (fr) Procédé et dispositif permettant d'effectuer une localisation visuelle
WO2020256517A2 (fr) Procédé et système de traitement de mappage de phase automatique basés sur des informations d'image omnidirectionnelle
WO2019163576A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
WO2024090942A1 (fr) Procédé et dispositif électronique pour l'entraînement de modèle de réseau neuronal par augmentation d'images représentant des objets capturés par de multiples caméras
KR102618069B1 (ko) 지상조사 로봇의 점군 자료와 시각정보를 이용한 실내건물 재난정보 분석 방법 및 장치
WO2020080734A1 (fr) Procédé de reconnaissance faciale et dispositif de reconnaissance faciale
JP2018036901A (ja) 画像処理装置、画像処理方法および画像処理プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19823087

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19823087

Country of ref document: EP

Kind code of ref document: A1