WO2017038012A1 - Mapping method, localization method, robot system, and robot - Google Patents
- Publication number
- WO2017038012A1 (PCT/JP2016/003634)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- map information
- map
- mobile robot
- location
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0268—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
- G05D1/0274—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0238—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
- G05D1/024—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0276—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
- G05D1/028—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using a RF signal
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
Definitions
- The present disclosure relates to a technique for estimating the position of a mobile robot and creating an environment map, and more particularly to a mobile robot system that cooperates with environment sensors installed in the environment.
- SLAM (Simultaneous Localization and Mapping)
- the mobile robot in the SLAM technology includes an internal sensor that is a sensor for knowing the internal state of the mobile robot, and an external sensor that is a sensor for detecting the external state.
- the current position / orientation is first estimated from the information of the internal sensor.
- A prediction is made of (i) the position, (ii) the posture, (iii) the position uncertainty, and (iv) the posture uncertainty.
- the observation information obtained by the external sensor and the information predicted from the information of the internal sensor are compared.
- The weights of the internal sensor information and the external sensor information are determined from the likelihood of each piece of information calculated by the comparison.
- Estimation of the position and of the map information is then performed using the weighted data of the internal sensor and the external sensor.
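- As a minimal illustration of this weighting step (an inverse-variance, Kalman-style fusion; an assumption for illustration, not the patent's concrete implementation):

```python
def fuse(pred_pos, pred_var, obs_pos, obs_var):
    """Fuse a 1-D position predicted from the internal sensor (pred_pos with
    variance pred_var) with an observation from the external sensor (obs_pos
    with variance obs_var). The weight k given to the observation grows as
    the prediction becomes less certain."""
    k = pred_var / (pred_var + obs_var)
    pos = pred_pos + k * (obs_pos - pred_pos)
    var = (1.0 - k) * pred_var  # fused uncertainty shrinks
    return pos, var

# A noisy odometry prediction (variance 0.5) pulled toward a more reliable
# external observation (variance 0.1):
print(fuse(pred_pos=2.0, pred_var=0.5, obs_pos=2.4, obs_var=0.1))
```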
- Patent Document 1 discloses a robot system that evaluates the reliability of a plurality of estimators that perform self-position estimation based on outputs from a plurality of different sensors, and integrates the self-position estimation information from the plurality of estimators according to the reliability.
- Patent Document 2 discloses a mobile robot that moves while avoiding a collision with a person or an obstacle without interfering with a human action in cooperation with an environmental sensor.
- a plurality of environmental sensors are arranged so as to detect the positions of all persons existing in the environment.
- Patent Document 1: JP 2012-248032 A. Patent Document 2: Japanese Patent No. 5617562.
- A map generation method is provided for a mobile robot that performs map generation using at least one environmental sensor node. The method includes: obtaining pre-created first map information including information around the mobile robot; obtaining, by an external sensor mounted on the mobile robot, second map information including information around the mobile robot; receiving, from the environmental sensor node, third map information including information around the mobile robot; (i) when information on a location whose temporal change amount is equal to or greater than a predetermined threshold exists in the third map information, performing a removal process that removes the location information in the first map information and the location information in the second map information corresponding to the location where the temporal change amount in the third map information is equal to or greater than the predetermined threshold, or a variance increasing process that increases the variance of each of those pieces of location information; (ii) matching the first map information and the second map information after the removal process or the variance increasing process has been performed, and generating map information based on the matching result; and (iii) updating the first map information to the map information generated based on the matching result.
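- Read procedurally, this is a three-step update loop. The following minimal sketch (illustrative Python with hypothetical names; maps as dicts from location to measurement, the matcher left abstract) shows the control flow only, not the patent's concrete data structures:

```python
def update_map(first_map, second_map, change_amounts, threshold, match):
    """Sketch of the claimed loop. `change_amounts` maps a location to the
    temporal change amount reported by the environmental sensor node (third
    map information); `match` is any matcher that aligns and merges two maps.
    All names are illustrative assumptions, not taken from the patent."""
    changed = {loc for loc, delta in change_amounts.items() if delta >= threshold}
    # (i) removal process: drop information for the changed locations
    first = {loc: v for loc, v in first_map.items() if loc not in changed}
    second = {loc: v for loc, v in second_map.items() if loc not in changed}
    # (ii) match the processed maps and generate new map information
    new_map = match(first, second)
    # (iii) the generated map information replaces the pre-created first map
    return new_map
```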
- FIG. 1 is a block diagram illustrating an example of the configuration of the robot system according to the first embodiment.
- FIG. 2 is a schematic diagram illustrating an example of an environment in which an environment sensor node and a mobile robot exist.
- FIG. 3A is a flowchart illustrating an example of processing of the robot system.
- FIG. 3B is a flowchart illustrating an example of processing of the moving object movement / environment change information integration unit.
- FIG. 4A is a diagram illustrating an example of an environment including a mobile robot at time T.
- FIG. 4B is an illustrative view showing an example of an environment including a mobile robot at time T+α.
- FIG. 4C is a diagram illustrating an example of sensor data acquired by the mobile robot at time T.
- FIG. 4D is a diagram illustrating an example of sensor data acquired by the mobile robot at time T+α.
- FIG. 5A is an illustrative view showing one example of data matching at the time of map generation / self-position estimation processing.
- FIG. 5B is a diagram illustrating an example of matching results.
- FIG. 5C is a diagram illustrating an example of matching evaluation values.
- FIG. 6A is a diagram illustrating an example of data removal processing.
- FIG. 6B is a diagram illustrating an example of a matching result of data after the removal process.
- FIG. 6C is a diagram illustrating an example of matching evaluation values of data after the removal process.
- FIG. 7A is a diagram illustrating an example of the variance increasing process for data.
- FIG. 7B is a diagram illustrating an example of the matching result of the data after the variance increasing process.
- FIG. 7C is a diagram illustrating an example of the matching evaluation values of the data after the variance increasing process.
- FIG. 8 is a diagram showing the map data at time T.
- FIG. 9 is a diagram for explaining an example of the variance increasing process.
- FIG. 10 is a diagram illustrating an example of the result of the variance increasing process.
- FIG. 11 is a flowchart illustrating an example of processing of the robot system according to the second embodiment.
- In conventional methods such as Patent Document 1 and Patent Document 2, performing map generation and self-position estimation with high accuracy and robustness in a space with large environmental changes was not studied. For this reason, the increase in the required amount of calculation could not be suppressed.
- the present disclosure provides a self-position estimation method, a robot system, and a mobile robot that perform map generation and self-position estimation with high accuracy and robustness even in a space with large environmental changes.
- A map generation method according to the present disclosure is a map generation method for a mobile robot that performs map generation using at least one environment sensor node. Pre-created first map information including information on the periphery of the mobile robot is acquired, second map information including information around the mobile robot is acquired by an external sensor mounted on the mobile robot, and third map information including information around the mobile robot is received from the environment sensor node. (i) When the third map information includes information on a place where the temporal change amount is equal to or greater than a predetermined threshold, a removal process is performed to remove the location information in the first map information and the location information in the second map information corresponding to the place where the temporal change amount in the third map information is equal to or greater than the predetermined threshold, or a variance increasing process is performed to increase the variance of each of those pieces of location information. (ii) The first map information and the second map information after the removal process or the variance increasing process are matched, and map information is generated based on the matching result. (iii) The first map information is updated to the map information generated based on the matching result.
- When the third map information includes information on a place where the temporal change amount between a first timing and a second timing is equal to or greater than the predetermined threshold, and the time difference between the first timing and the second timing is greater than a first time, the first map information and the second map information may be matched without performing the removal process and the variance increasing process, and map information may be generated based on the matching result.
- the variance increasing process may be a process of increasing the uncertainty of the information of each of the corresponding location of the first map information and the corresponding location of the second map information.
- the third map information may include information on the existence probability of an object around the environmental sensor node, and the temporal change amount may be the change amount of the existence probability.
- The first map information, the second map information, and the third map information are coordinate information in a two-dimensional coordinate system or coordinate information in a three-dimensional coordinate system, and before performing the matching, a coordinate conversion process may be performed that converts the coordinate systems of the first map information, the second map information, and the third map information into a common coordinate system.
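- A generic sketch of such a coordinate conversion for the two-dimensional case (the rigid-transform parameterization is an assumption; the patent does not fix one):

```python
import math

def to_common_frame(points, tx, ty, theta):
    """Convert 2-D points from a sensor-local frame into a common map frame
    by rotating by theta and then translating by (tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

# Example: a point 1 m ahead of a sensor located at (2, 3) and facing +90 deg
# lands at approximately (2, 4) in the common frame.
print(to_common_frame([(1.0, 0.0)], tx=2.0, ty=3.0, theta=math.pi / 2))
```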
- A self-position estimation method according to the present disclosure is a self-position estimation method for a mobile robot that performs self-position estimation using at least one environment sensor node. Pre-created first map information including information around the mobile robot is acquired, second map information including information around the mobile robot is acquired by an external sensor mounted on the mobile robot, and third map information including information around the mobile robot is received from the environment sensor node. (i) When the third map information includes information on a location whose temporal change amount is equal to or greater than a predetermined threshold, a removal process is performed to remove the location information in the first map information and the location information in the second map information corresponding to the location where the temporal change amount in the third map information is equal to or greater than the predetermined threshold, or a variance increasing process is performed to increase the variance of each of those pieces of location information. (ii) The first map information and the second map information after the removal process or the variance increasing process are matched, and map information is generated based on the matching result. (iii) The first map information is updated to the map information generated based on the matching result. (iv) Based on the updated first map information and the detection result of an internal sensor that detects at least one of the position and orientation of the mobile robot, the self-position of the mobile robot on the updated first map information is estimated.
- a moving route may be calculated based on the updated first map information and the estimated self-location, and the mobile robot may be moved along the moving route.
- A robot system according to the present disclosure includes at least one environment sensor node and a mobile robot. The environment sensor node acquires third map information including information around the mobile robot. The mobile robot includes a database in which pre-created first map information including information around the mobile robot is recorded, an external sensor that acquires second map information including information around the mobile robot, a communication unit that communicates with the environment sensor node to acquire the third map information, and an information processing unit. The information processing unit (i) when information on a location whose temporal change amount is equal to or greater than a predetermined threshold exists in the third map information, performs a removal process that removes the location information in the first map information and in the second map information corresponding to the locations where the temporal change amount in the third map information is equal to or greater than the predetermined threshold, or a variance increasing process that increases the variance of each of those pieces of location information, (ii) matches the first map information and the second map information after the removal process or the variance increasing process has been performed and generates map information based on the matching result, and (iii) updates the previously recorded first map information to the map information generated based on the matching result.
- A mobile robot according to the present disclosure includes a database in which pre-created first map information including information around the mobile robot is recorded, an external sensor that acquires second map information including information around the mobile robot, a communication unit that acquires third map information including information around the mobile robot by communicating with at least one environment sensor node that is external to the mobile robot and acquires the third map information, and an information processing unit. The information processing unit (i) when the third map information includes information on a place where the temporal change amount is equal to or greater than a predetermined threshold, performs a removal process that removes the location information in the first map information and the location information in the second map information corresponding to that place, or a variance increasing process that increases the variance of each of those pieces of location information, (ii) matches the first map information and the second map information after the removal process or the variance increasing process has been performed and generates map information based on the matching result, and (iii) updates the previously recorded first map information to the map information generated based on the matching result.
- FIG. 1 is a block diagram showing an example of the configuration of the robot system according to the first embodiment.
- the robot system includes a mobile robot 100 and an environment sensor node 120.
- The mobile robot 100 includes an external sensor 101, an internal sensor 102, an information processing unit (a moving object movement / environment change information integration unit 103, a map generation / self-position estimation unit 104, a map information database 105, and a route planning unit 106), a control unit 107, an actuator 108, and a communication unit 109.
- the mobile robot 100 includes, for example, two or four wheels or two or more legs as means for moving the mobile robot 100. These wheels or legs are operated by the power from the actuator 108, whereby the mobile robot 100 can be moved.
- the above-described leg of the mobile robot 100 preferably includes one or more joints.
- the arm of the mobile robot 100 described above preferably includes one or more joints.
- the mobile robot 100 includes one or more CPUs, one or more main memories, and one or more auxiliary memories (not shown).
- The above-described CPU of the mobile robot 100 performs the various arithmetic processes of the moving object movement / environment change information integration unit 103, the map generation / self-position estimation unit 104, the route planning unit 106, the control unit 107, and the like described below.
- The main memory of the mobile robot 100 described above is a storage device that can be directly accessed by the CPU described above, and can be configured by a memory such as a DRAM, SRAM, flash memory, ReRAM, FeRAM, MRAM, STT-RAM, or PCRAM.
- the auxiliary storage of the mobile robot 100 described above is a storage device that performs long-term storage, copying, or backup of the contents of the main storage described above, and may be configured by an HDD, a flash memory, an optical disk, a magnetic disk, or the like.
- the external sensor 101 is a sensor that detects information about the surrounding environment of the mobile robot 100, and detects a two-dimensional shape or a three-dimensional shape, color, and material of the surrounding environment.
- the external sensor 101 may be configured by sensors such as LRF (Laser Range Finder), LIDAR (Laser Imaging Detection and Ranging), a camera, a depth camera, a stereo camera, a sonar, a RADAR, or all combinations thereof.
- The internal sensor 102 is a sensor that detects the position and posture of the mobile robot 100.
- The internal sensor 102 can detect at least one of the absolute values of the three directional components (for example, the X-axis, Y-axis, and Z-axis components of an XYZ orthogonal coordinate system) of the moving distance, speed, acceleration, angular velocity, and posture of the mobile robot 100. It may also be able to detect the absolute values of the velocity, acceleration, angular velocity, posture, and torque of each joint of the legs and arms of the mobile robot 100.
- the internal sensor 102 can be configured by sensors such as an acceleration sensor, an angular velocity sensor, an encoder, a geomagnetic sensor, an atmospheric pressure / water pressure sensor, a torque sensor, or a combination thereof.
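- For illustration only (the patent leaves the concrete sensor model open), encoder- or IMU-derived velocities are typically dead-reckoned into a pose prediction like this:

```python
import math

def integrate_odometry(x, y, heading, v, omega, dt):
    """Dead reckoning from internal-sensor readings: linear velocity v [m/s]
    and angular velocity omega [rad/s] integrated over a time step dt [s]
    to predict the next pose. A hypothetical minimal model."""
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    heading += omega * dt
    return x, y, heading
```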
- The moving object movement / environment change information integration unit 103 acquires the information (second map information) detected by the external sensor 101 mounted on the mobile robot 100, and acquires the moving object movement / environment change information (third map information) extracted by the environment sensor node 120 via the communication unit 109. Using the moving object movement / environment change information extracted by the environment sensor node 120, it performs a process (removal process) that removes information about places that dynamically change in time series from the information detected by the external sensor 101, or a process (variance increasing process) that reduces the reliability of the information about those places. The specific contents of the removal process and the variance increasing process will be described later.
- The map generation / self-position estimation unit 104 acquires at least one of the absolute values of the three directional components of the moving distance, speed, acceleration, angular velocity, and posture of the mobile robot 100 obtained from the internal sensor 102. It also acquires the information about the surrounding environment of the mobile robot 100 from the external sensor 101 as processed (removed or variance-increased) by the moving object movement / environment change information integration unit 103. The map generation / self-position estimation unit 104 then simultaneously estimates the self-position of the mobile robot 100 and generates map information from the acquired information by a filtering process. Note that self-position estimation and map information generation need not be performed simultaneously.
- The filtering process can be implemented with a Kalman filter, an extended Kalman filter, an unscented Kalman filter, a particle filter, or a Rao-Blackwellized particle filter.
- the map generation / self-position estimation unit 104 stores the generated map in the map information database 105. If a map of the corresponding environment already exists, the map information is updated using the map information stored in the map information database 105 and stored in the map information database 105.
- the map information database 105 stores the map information (first map information) generated / updated by the map generation / self-position estimation unit 104.
- the stored map information is used by the map generation / self-position estimating unit 104 and the route planning unit 106, respectively.
- the map information database 105 is arranged in the main memory or secondary memory of the mobile robot 100.
- The route planning unit 106 plans a route to travel from the current self-location, using the map information stored in the map information database 105 and the information on the self-location of the mobile robot 100 estimated by the map generation / self-position estimation unit 104.
- the planned route is a route that minimizes the cost of the route.
- the cost of the route can be expressed by the total travel time, the total travel distance, the total energy used for the travel, the sum of the congestion degree of the route, or a combination of all of them.
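- A minimal sketch of such a combined cost; the weighted-sum form, weight names, and segment fields are assumptions for illustration, since the patent only lists the quantities that may be combined:

```python
def route_cost(route, w_time=1.0, w_dist=1.0, w_energy=0.0, w_congestion=0.0):
    """Combined cost of a route given as a list of segments, each a dict with
    "time", "length", "energy", and "congestion" entries. The route planner
    would search, e.g. with Dijkstra's algorithm or A*, for the route that
    minimizes this value."""
    return sum(w_time * seg["time"] + w_dist * seg["length"]
               + w_energy * seg["energy"] + w_congestion * seg["congestion"]
               for seg in route)
```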
- the control unit 107 performs control for executing the moving operation of the mobile robot 100 on the route planned by the route planning unit 106 in an environment where the mobile robot actually operates.
- the actuator 108 drives, for example, a wheel or the like based on a control command from the control unit 107, and thereby actually moves the mobile robot 100.
- the communication unit 109 has a function of performing one-to-many and many-to-many communication by wire or wireless.
- the communication unit 109 inquires of the environment sensor node 120 whether there is moving object movement / environment change information at the specified location.
- the environmental sensor node 120 includes an environmental sensor 121, a moving body movement / environment change information extraction unit 122, an environmental sensor information time series database 123, a moving body movement / environment change information database 124, and a communication unit 125.
- the environmental sensor 121 is a sensor that detects information about the surrounding environment of the environmental sensor node 120.
- the information on the surrounding environment includes information on the two-dimensional shape or the three-dimensional shape and color / material of the surrounding environment.
- FIG. 2 is a schematic diagram showing an example of an environment in which the environment sensor node 120, a plurality of environment sensor nodes 312 to 319 similar to the environment sensor node 120, and the mobile robot 100 exist.
- the environmental sensor nodes 120 and 312 to 319 include environmental sensors 121 and 322 to 329, respectively.
- the detection ranges of the environmental sensor nodes 120 and 312 are 331 and 332, respectively.
- Although the detection ranges of the environment sensor nodes 313 to 319 are not shown for simplification of the drawing, the environment sensor nodes and their environment sensors are installed so that they can detect the entire space in which the mobile robot can move.
- The environment sensors 121 and 322 to 329 of the environment sensor nodes 120 and 312 to 319 may be configured by sensors such as an LRF (Laser Range Finder), LIDAR (Laser Imaging Detection and Ranging), a camera, a depth camera, a stereo camera, a sonar, a RADAR, a pyroelectric infrared sensor, an infrared ToF (Time of Flight) sensor, or a combination thereof.
- the mobile robot 100 includes an external sensor 101 and moves in the movement direction 303.
- the detection range of the external sensor 101 is 304 at the timing shown in FIG.
- FIG. 2 shows the current position 341 of the person / moving object, the past position 342 of the person / moving object, and the current moving direction 343 of the person / moving object.
- the time-series environmental change due to the movement of the person / moving body from the position 342 to the position 341 is extracted by the environmental sensor nodes 120, 312, 313, and 314 as moving body moving / environment change information, as will be described in detail later.
- the information on the surrounding environment acquired by the environment sensor 121 includes information on coordinates (coordinate information).
- the moving object / environment change information extraction unit 122 stores the time-series information of the surrounding environment detected by the environment sensor 121 in the environment sensor information time-series database 123 in the form of a combination of detection information and detection timing.
- the detection timing is represented by time.
- Based on a plurality of combinations of detection timing and detection information stored along the time series in the environment sensor information time-series database 123, the moving object movement / environment change information extraction unit 122 also calculates the change location of the surrounding environment, the magnitude of the change, and the time taken for the change.
- The moving object movement / environment change information extraction unit 122 saves the moving object movement / environment change information, which includes the change location of the surrounding environment, the change shape, the magnitude of the change, and the time taken for the change, to the moving object movement / environment change information database 124. Note that the moving object movement / environment change information need not include all of the change location, change shape, change magnitude, and time taken for the change; it may include only some of these pieces of information, or it may include other information.
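- A minimal sketch of this extraction, assuming (an assumption for illustration) that the time-series database holds (detection time, existence-probability grid) pairs, and reporting the cells whose probability changed by more than a threshold between consecutive detections:

```python
def extract_changes(snapshots, threshold):
    """Sketch of moving object movement / environment change extraction.
    `snapshots` is a chronological list of (time, occupancy) pairs, where
    `occupancy` maps a grid cell to an existence probability. Returns
    (cell, change magnitude, time taken for the change) tuples."""
    changes = []
    for (t0, occ0), (t1, occ1) in zip(snapshots, snapshots[1:]):
        for cell in occ0.keys() & occ1.keys():
            delta = abs(occ1[cell] - occ0[cell])
            if delta > threshold:
                changes.append((cell, delta, t1 - t0))
    return changes
```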
- the environmental sensor information time-series database 123 stores the information of the time-series surrounding environment detected by the environmental sensor 121 acquired by the moving body movement / environment change information extraction unit 122 as a set of detection information and detection timing. The stored information is used by the moving object movement / environment change information extraction unit 122 to extract moving object movement / environment change information.
- the moving object / environment change information database 124 stores the moving object / environment change information calculated by the moving object / environment change information extraction unit 122.
- the stored moving object / environment change information is transmitted to the mobile robot 100 by the communication unit 125 that has received an inquiry from the communication unit 109 of the mobile robot 100.
- Upon receiving an inquiry from the communication unit 109 of the mobile robot 100 as to whether moving object movement / environment change information exists, the communication unit 125 searches the moving object movement / environment change information database 124 and transmits the moving object movement / environment change information to the mobile robot 100 by wireless or wired communication.
- FIG. 3A is a flowchart illustrating an example of processing of the robot system.
- T201 represents the processing flow of the mobile robot 100
- T221A and T221B represent the processing flow of the environment sensor node 120.
- In step S201, the internal sensor 102 acquires information. The information to be acquired is at least one of the absolute values of the three directional components of the moving distance, speed, acceleration, angular velocity, and posture of the mobile robot 100.
- In step S202, the external sensor 101 acquires information about the surrounding environment of the mobile robot 100.
- the information on the surrounding environment at the timing shown in FIG. 2 is information detected by the external sensor 101 for the detection range 304.
- the information to be acquired includes information on the two-dimensional or three-dimensional shape, color, and material of the surrounding environment.
- In step S203, the mobile robot 100 inquires of the environment sensor nodes 120 and 312 to 319, via the communication unit 109, whether moving object movement / environment change information exists at a specified location of arbitrary two-dimensional or three-dimensional size and within a specified time zone, and waits a predetermined time for a response to the inquiry.
- the location and time may be designated in advance by a user who manages the mobile robot 100 as an arbitrary (fixed) size location or an arbitrary length of time.
- the mobile robot may change and specify the size of the place and the length of time according to the state.
- the inquiry from the mobile robot 100 may be made to all other mobile robots and other environmental sensor nodes that can communicate.
- the mobile robot 100 can receive responses and moving object / environment change information from all mobile robots and environmental sensor nodes. Inquiries from the mobile robot 100 may be performed simultaneously on a plurality of environmental sensor nodes (a plurality of other mobile robots) or sequentially.
- In step S204, the communication unit 109 determines whether there is a response from the environment sensor nodes 120 and 312 to 319 to the inquiry in step S203. If there are one or more responses, the process proceeds to step S205; if there is no response, the process proceeds to step S208.
- In step S205, the communication unit 109 receives moving object movement / environment change information from the environment sensor nodes 120 and 312 to 319, all other environment sensor nodes, and all other mobile robots.
- moving body movement / environment change information may be received only from a specific environmental sensor node among the environmental sensor nodes 120 and 312 to 319.
- Information regarding the location specified in step S203 may be acquired only from the environmental sensor node having the environmental change information.
- In step S206, the moving object movement / environment change information integration unit 103 acquires from the map information database 105 the map information (first map information) that is used in the next step S207 and that includes location information corresponding to the locations where the moving object movement / environment change information received in step S205 exists.
- FIG. 3B is a flowchart showing an example of the process of the robot system, and more specifically, a flowchart showing an example of the process of step S207.
- The specific contents of step S207, that is, the details of steps S20701 to S20706, will now be described.
- In step S207, the moving object movement / environment change information integration unit 103 performs processing to integrate one or more pieces of moving object movement / environment change information extracted by the environment sensor nodes 120 and 312 to 319 with the information acquired by the external sensor 101 in step S202 and the map information acquired in step S206.
- In step S20701, the moving object movement / environment change information integration unit 103 performs coordinate conversion, converting the coordinate system of the information on the surrounding environment acquired by the external sensor 101 in step S202 and the coordinate system of the moving object movement / environment change information transmitted from the environment sensor nodes 120 and 312 to 319 into the coordinate system of the map information acquired in step S206, so that all the coordinate systems become a common coordinate system.
- In step S20702, the moving object movement / environment change information integration unit 103 determines whether there is a place where the magnitude of the temporal change amount between different timings in the shape of the moving object movement / environment change information is greater than a threshold Th_ch.
- The “temporal change amount” relates, for example, to the existence probability of each point, which is the probability that an object exists at each point in two-dimensional or three-dimensional space. Specifically, for each point, the absolute value of the difference between the existence probability at a first timing and the existence probability at a second timing that is temporally later than the first timing is obtained, and the sum of the absolute values of the differences over all points (the Sum of Absolute Differences) is used. That is, it is the amount by which the existence probability has changed in the time between the first timing and the second timing. The first timing and the second timing may be expressed as times of day, or as elapsed time after the robot system starts operating.
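- Written out as code, the definition above is a sum of absolute differences over existence-probability grids (treating points missing from a grid as probability 0 is an added assumption):

```python
def temporal_change_amount(p_first, p_second):
    """Sum of absolute differences (SAD) between the existence probability of
    each point at a first timing (p_first) and at a later second timing
    (p_second). Grids are dicts mapping a point to its existence probability;
    points absent from one grid are treated as probability 0 (an assumption)."""
    points = p_first.keys() | p_second.keys()
    return sum(abs(p_second.get(pt, 0.0) - p_first.get(pt, 0.0))
               for pt in points)

# The result is compared against the threshold Th_ch to decide whether a
# place has changed between the two timings.
```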
- The temporal change amount related to the shape may instead be, for example, an SSD (Sum of Squared Differences), SATD (Sum of Absolute Transformed Differences), MAD (Mean Absolute Difference), or MSD (Mean Squared Difference), or a difference between image feature vectors such as SIFT, SURF, and HoG. It may also be, for example, a difference between BoF (Bag of Features) feature vectors generated using these feature vectors.
- the setting of the specific value of the threshold Th_ch differs depending on the standard of change and the situation. As an example, a change amount when an object of 100 mm ⁇ 100 mm ⁇ 100 mm moves 20 mm in a space of 10 m ⁇ 10 m ⁇ 10 m may be set as the threshold Th_ch.
- the setting of the specific value of the threshold Th_ch is not limited to the above example.
- step S20703 If there is a place where the amount of temporal change between different timings is larger than the threshold Th_ch, the process proceeds to step S20703. Otherwise, the process proceeds to step S20707.
- In step S20703, the moving object movement / environment change information integration unit 103 determines whether the time difference between the timings at which the temporal change amount related to the shape of the moving object movement / environment change information is greater than the threshold Th_ch is greater than a threshold Th_tu (a first time). If it is greater than the threshold Th_tu, the process proceeds to step S20707. This is because, when the time difference is larger than the predetermined threshold Th_tu, the moving object movement / environment change information represents not a temporary environmental change due to a disturbance but a semi-permanent or permanent environmental change due to the movement of a static object or the like, and the changed part of the environment is treated as a new map.
- For example, a layout change such as moving furniture in a room corresponds to this, and a change with a large time difference such as a layout change is treated as a new map. If the time difference is less than or equal to the threshold Th_tu, the process proceeds to step S20704.
- In step S20704, the moving object movement / environment change information integration unit 103 determines whether the time difference between the timings at which the temporal change amount related to the shape of the moving object movement / environment change information is larger than the threshold Th_ch is smaller than a threshold Th_tb (a second time). The threshold Th_tb is smaller than the threshold Th_tu.
- If the time difference is smaller than the threshold Th_tb, then in step S20705 the moving object movement / environment change information integration unit 103 performs the removal process: it removes the location information in the surrounding environment information (second map information) acquired by the external sensor 101 of the mobile robot 100 and the location information in the map information (first map information) acquired in step S206 that correspond to places where the temporal change amount in the moving object movement / environment change information (third map information) is larger than the threshold Th_ch. Details of the removal process will be described later.
- Otherwise, in step S20706 the moving object movement / environment change information integration unit 103 performs the variance increasing process: it increases the variance of the location information in the second map information and the location information in the first map information that correspond to places where the temporal change amount in the third map information is larger than the threshold Th_ch. For example, the increase in variance is calculated in the form a × V, by multiplying the variance V by a constant a greater than 1.
- The “variance of information” here refers to an index that represents the variability and uncertainty of the information.
- When the variance is small, there is a high probability that the detection target exists around the expected value in the space of the surrounding environment. When the variance is large, the probability that the detection target exists around the expected value is low, and the distribution of the probability that the detection target can exist spreads widely in the space.
- The variance increasing process therefore means increasing the variability and uncertainty of the information, that is, reducing the reliability of the information. Details of the variance increasing process will be described later.
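- A minimal sketch of the variance increasing process using the a × V form mentioned above (the region predicate and the constant a = 4.0 are illustrative assumptions, not values from the patent):

```python
def increase_variance(points, in_changed_region, a=4.0):
    """Variance increasing process: for measured points inside the changed
    region, scale the variance V of the point's existence-probability
    distribution to a * V with a > 1, lowering reliability instead of
    deleting data. `points` is a list of (x, y, variance) tuples and
    `in_changed_region` is a hypothetical predicate on coordinates."""
    assert a > 1.0
    return [(x, y, a * v if in_changed_region(x, y) else v)
            for x, y, v in points]
```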
- In step S20704, it is determined whether the time difference between timings is smaller than the threshold Th_tb, but the determination is not limited thereto. For example, it may instead be determined whether the magnitude of the temporal change amount related to the shape of the moving object movement / environment change information is larger than a new threshold different from the threshold Th_ch: if it is larger (Yes), the process proceeds to step S20705; otherwise (No), the process proceeds to step S20706.
- The new threshold is higher than the threshold Th_ch. That is, in the case of Yes in step S20704, the amount of change is larger than in the case of No, so the process of removing the moving object movement / environment change information from the external sensor information and map information is effective.
- In step S20707, it is determined whether, among the received moving object movement / environment change information relating to one or more places, there is information that the moving object movement / environment change information integration unit 103 has not yet processed. If unprocessed moving object movement / environment change information exists, the process returns to step S20701, and processing continues until none remains. If there is no unprocessed moving object movement / environment change information, the process proceeds to step S208.
- In step S208, map information is generated and the self-position of the mobile robot 100 is estimated (including the matching process). If there was a response from the environment sensor node 120 in step S204, the map generation / self-position estimation unit 104 generates map information and estimates the self-position of the mobile robot 100 using the information acquired by the external sensor 101 into which the moving object movement / environment change information was integrated in step S207, and the information acquired by the internal sensor 102 in step S201.
- The map information is collated with the sensor data acquired by the external sensor (matching process) to search for the transformation with the highest similarity, and the moving distance and moving direction of the mobile robot 100, that is, the current self-position, is estimated from the transformation with the highest similarity.
- If there was no response in step S204, the map generation / self-position estimation unit 104 generates map information and estimates the self-position of the mobile robot 100 using the information acquired by the external sensor 101 in step S202 and the information acquired by the internal sensor 102 in step S201. The generation of the map information and the estimation of the self-position are performed simultaneously by the filtering process in the map generation / self-position estimation unit 104.
- In step S209, the route planning unit 106 plans a route to move from the current self-location, using the map information generated in step S208 and stored in the map information database 105.
- the route planning method is as described above.
- In step S210, the control unit 107 generates a control command for performing the moving operation of the mobile robot 100 along the route planned in step S209.
- In step S211, the actuator 108 actually moves the mobile robot 100 in response to the control command generated in step S210.
- Next, steps S221 to S227, which are processes in the environment sensor node 120, will be described.
- In step S221 of the processing flow T221A, the communication unit 125 of the environment sensor node 120 receives an inquiry from the mobile robot 100.
- In step S223, whether moving object movement / environment change information exists that meets the conditions of the specified location and specified time zone, which are the contents of the inquiry from the mobile robot 100, is checked using the moving object movement / environment change information database 124. If moving object movement / environment change information matching the conditions exists, the process proceeds to step S225; otherwise, the process returns to step S221.
- In step S225, the communication unit 125 responds to the mobile robot 100's inquiry as to whether moving object movement / environment change information exists in the location and time zone specified in step S203 (that is, it sends information on whether moving object movement / environment change information exists at the specified location).
- In step S226, the communication unit 125 transmits the moving object movement / environment change information to the mobile robot 100 that made the inquiry received in step S221. After the transmission, the process returns to step S221.
- In step S222 of the processing flow T221B, the moving object movement / environment change information extraction unit 122 acquires the information around the environment sensor node 120 detected by the environment sensor 121 and saves it in the environment sensor information time-series database 123 in chronological order as sets of detection time and detection information.
- In step S224, the moving object movement / environment change information extraction unit 122 determines, from a plurality of sets of detection times and detection information about the surroundings of the environment sensor node 120 stored along the time series in the environment sensor information time-series database 123, whether the environment around the environment sensor node 120 has changed.
- In step S227, the moving object movement / environment change information extraction unit 122 calculates, based on a plurality of sets of detection times and detection information stored along the time series in the environment sensor information time-series database 123, the change location, change shape, change magnitude, and time taken for the change in the surrounding environment, and saves them in the moving object movement / environment change information database 124 as moving object movement / environment change information.
- the extracted moving body movement / environment change information is transmitted from the environment sensor node through the communication path 353 by the communication unit 351 mounted on the environment sensor node and the communication unit 352 mounted on the mobile robot 100.
- The moving object movement / environment change information sent to the mobile robot is used by the moving object movement / environment change information integration unit to remove, from the information detected by the external sensor 101 of the mobile robot 100 (second map information), the location information corresponding to places where environmental changes caused by persons or moving objects exist. This makes it possible to remove dynamic environmental changes, which are disturbances, from the information detected by the external sensor 101 of the mobile robot 100.
- FIG. 4A shows that the mobile robot 100 exists in the environment 400 at an arbitrary time T, for example.
- FIG. 4C shows information (sensor data) 430 acquired by the external sensor 101 of the mobile robot 100 at time T.
- FIG. 4B shows that the mobile robot 100 exists in the environment 400 at time T+α, when an arbitrary time interval α has elapsed from time T.
- FIG. 4D shows the information (sensor data) 431 acquired by the external sensor 101 of the mobile robot 100 at time T+α, when an arbitrary time interval α has elapsed from time T.
- the environment 400 includes the mobile robot 100, the static object A present at the position 401, the moving object H present at the position 402, and the environment sensor node 120.
- the mobile robot 100 is moving in the traveling direction 405.
- the moving body H is moving from the position 402 in the traveling direction 404.
- the environment 400 includes the mobile robot 100, the static object A present at the position 401, the moving object H present at the position 403, and the environment sensor node 120.
- the position of the mobile robot 100 has changed. Further, the position of the moving body H also changes from the position 402 to the position 403.
- the external sensor 101 mounted on the mobile robot 100 acquires information around the mobile robot 100 in the environment 400.
- FIG. 4C shows sensor data 430 (thick solid line) acquired by the external sensor 101 with respect to the environment 400 at time T.
- the sensor data 430 includes the static object A present at the position 401, the moving object H present at the position 402, the shape of each wall W, and the distance from the external sensor 101 in the environment 400.
- FIG. 4D shows the sensor data 431 (thick dashed line) acquired by the external sensor 101 for the environment 400 at time T+α, when an arbitrary time interval α has elapsed from time T.
- the sensor data 431 includes, in the environment 400, the static object A present at the position 401, the moving object H present at the position 403, the shape of each wall W, and the distance from the external sensor 101.
- The sensor data 430 and the sensor data 431 may include data formats such as two-dimensional distance data, three-dimensional distance data, two-dimensional shape data, three-dimensional shape data, and image feature points, or any combination thereof. That is, the information acquired by the external sensor 101 includes coordinate information.
- The environment sensor node 120 extracts, as moving object movement / environment change information, the environmental change caused by the moving object H moving from the position 402 to the position 403 between time T and time T+α.
- FIGS. 5A to 5C show a matching process between the map data at time T and the sensor data at time T+α, and an example of the result.
- the sensor data matching process is executed by the map generation / self-position estimation unit 104 in step S208 described above.
- In the matching process, the map information (map data) and the sensor data acquired by the external sensor are collated to search for a transformation with a high degree of similarity, and the moving distance and moving direction of the mobile robot 100 are estimated from the transformation with the highest degree of similarity.
- Here, the matching process between the map data 501 at time T (that is, the map information in the map information database 105), created based on the sensor data 430, and the sensor data 431 acquired by the external sensor 101 of the mobile robot 100 at time T+α will be described.
- The matching process is a process of comparing a plurality of pieces of data and searching for a transformation between them that yields a low degree of difference, that is, a high degree of similarity.
- the gradient of the dissimilarity is calculated, and a conversion that minimizes the dissimilarity is searched for.
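- As a sketch of this search, restricted, as in the figures below, to a translation of the sensor data along the X axis (the summed nearest-point squared distance used as the dissimilarity is an illustrative choice, not the patent's formula; a real implementation would also search rotations and could use gradient descent on the evaluation value):

```python
def best_x_shift(map_pts, sensor_pts, candidate_shifts):
    """Evaluate a dissimilarity for each candidate X shift of the sensor data
    relative to the map data and return the shift with the lowest evaluation
    value. map_pts and sensor_pts are lists of (x, y) points."""
    def dissimilarity(dx):
        # Sum over sensor points of the squared distance to the nearest map point.
        return sum(min((mx - (sx + dx)) ** 2 + (my - sy) ** 2
                       for mx, my in map_pts)
                   for sx, sy in sensor_pts)
    return min(candidate_shifts, key=dissimilarity)
```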
- The matching result of the map data 501 and the sensor data 431 by the matching process can be, for example, the result example 503 or the result example 504 shown in FIG. 5B.
- Result example 504 shows a correspondence between the map data 501 and the sensor data 431 that is very close to the correct answer (ground truth).
- In result example 504, the data portion based on the static object A, which does not move from the position 401, and on the shape of the walls (left side portion) corresponds to the correct answer with high accuracy.
- However, correspondence is poor in the portion (right side portion) where the shape of the sensor data changed because the moving object H moved from the position 402 to the position 403, and as a result the similarity of the entire sensor data is reduced.
- FIG. 5C shows an evaluation value example 505 in matching.
- an evaluation value example 505 an example is shown in which the horizontal axis represents the distance in the X-axis direction between two pieces of data 501 and 431 to be matched, and the vertical axis represents the evaluation value.
- the evaluation value represents the degree of dissimilarity between data, and becomes a lower value as the data shapes match.
- the evaluation value example 505 shows an example in which the sensor data 431 is translated only in the X-axis direction with respect to the map data 501.
- Translation refers to a movement that shifts the data in one direction without rotation.
- the evaluation value example 505 in matching shows an evaluation value 507 of matching calculation for the result example 504.
- The change in the evaluation value is gentle, and when the result is obtained using a method such as the gradient method, a large number of repetitions of the matching operation are required before the result converges.
- Result example 503 shown in FIG. 5B shows an example in which the correspondence between the map data 501 and the sensor data 431 has a low degree of agreement with the correct answer (ground truth).
- the evaluation value example 505 in matching shows the evaluation value 506 of the matching operation for the result example 503.
- The evaluation value 506 is a local minimum in the evaluation value example 505. Therefore, when a result is obtained using a technique such as the gradient method, the computation may converge to the evaluation value 506, and the result example 503, which has a low degree of agreement with the correct answer, may be erroneously determined to be the optimum result.
- FIG. 6A is a diagram illustrating an example of data removal processing.
- FIG. 6A shows the map data 601 at time T and the sensor data 602 at time T+α after the first process, that is, the process of removing the data of the place (region C) where moving object movement / environment change information exists due to the movement of the moving object H, has been performed.
- The process shown in FIG. 6A is the same process as step S20705; the region C shown in FIG. 6A corresponds to a place where the temporal change amount is larger than the threshold Th_ch.
- the matching evaluation value example shows an example in which the sensor data 602 is translated only in the X-axis direction with respect to the map data 601. That is, in the graph of FIG. 6C, the horizontal axis indicates the distance in the X-axis direction between the two data 601 and 602, and the vertical axis indicates the evaluation value (difference between the two data 601 and 602).
- In this way, the information on the area where the moving object exists in each of the two data sets to be matched (the area C shown in FIG. 6A) is removed before the matching process is executed. As a result, the accuracy of the matching process can be improved and the amount of calculation can be reduced.
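- A minimal sketch of this removal step, with region C represented by a hypothetical predicate on coordinates:

```python
def remove_changed(points, in_region_c):
    """Removal process: drop the measured points, in either data set, that
    fall inside a region (such as region C) where the environment sensor node
    reports a temporal change amount above Th_ch; matching then runs only on
    the static remainder."""
    return [(x, y) for x, y in points if not in_region_c(x, y)]

# map_static    = remove_changed(map_data_601, in_region_c)
# sensor_static = remove_changed(sensor_data_602, in_region_c)
# ...then the matching process runs on map_static and sensor_static.
```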
- FIG. 7A is a diagram illustrating an example of a data distribution increase process.
- FIG. 7A shows the map data 603 at time T and the sensor data 604 at time T + α after the process of increasing the variance of the data at the place where moving body / environment change information exists because of the movement of the moving body H (region C) has been applied.
- FIG. 8 shows map data 501 at time T.
- the map data 501 includes actual measurement points (measurement coordinates) actually measured by the external sensor 101 of the mobile robot 100.
- The variance of the information in the map data 501 can be regarded as the variance of the measurement points, and represents the uncertainty of the measurement points (the reliability of the coordinate information).
- Specifically, the variance of a measurement point can be defined by the width of the existence probability distribution that peaks at that measurement point. An increase in the variance of the information therefore means an increase in the width of the existence probability distribution peaking at the measurement point.
- FIG. 10 shows a state in which the variance increase has been applied to a part (right side) of the map data 501, that is, the width of the existence probability distribution of the measurement points there has been increased.
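- One way to picture this is the sketch below, which widens the existence probability peaks of the measurement points inside the changed region (hypothetical values; the Gaussian form and the constant a are assumptions consistent with the a × V scaling described later in this description):

import numpy as np

def existence_probability(x, points, sigmas):
    """Existence probability over positions x: a sum of Gaussian peaks, one per
    measurement point, whose width encodes the point's uncertainty."""
    p = np.zeros_like(x)
    for mu, s in zip(points, sigmas):
        p += np.exp(-0.5 * ((x - mu) / s) ** 2)
    return p

points = np.array([1.0, 2.0, 3.0, 5.0])  # measurement points of the map data
sigmas = np.full(points.shape, 0.05)     # narrow peaks: reliable coordinates

# Variance increase for the points inside the changed region C (here x in [4, 6]):
a = 10.0                                 # constant a > 1, as in a × V
in_c = (points >= 4.0) & (points <= 6.0)
sigmas[in_c] *= np.sqrt(a)               # scaling the variance by a scales sigma by sqrt(a)

x = np.linspace(0.0, 7.0, 701)
p = existence_probability(x, points, sigmas)  # the peak at x = 5 is now much wider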
- The process shown in FIG. 7A is the same process as step S20706. That is, the region C illustrated in FIG. 7A corresponds to a place where the temporal change amount is larger than the threshold Th_ch.
- FIGS. 7B and 7C show examples of the matching result and the matching evaluation values for the map data 603 and the sensor data 604.
- The matching evaluation value example shows the case where the sensor data 604 is translated only in the X-axis direction with respect to the map data 603.
- In the evaluation value example of FIG. 7C, the change in the evaluation value is steep. When the result is sought with a method such as gradient descent, the local minimum corresponding to the evaluation value 506 in the evaluation value example 505 of FIG. 5C is sufficiently higher than the point of the evaluation value 608, so a misjudgment is easily avoided and the data can be matched to the correct answer with high accuracy. Furthermore, because the change in the evaluation value around the evaluation value 608 is steeper than around the evaluation value 507 shown in FIG. 5C, the number of iterations of the matching operation is reduced compared with the evaluation value example 505 of FIG. 5C.
- In this way, the variance of the information of the area where the moving object exists (for example, the region C shown in FIG. 7A) is increased in each of the two pieces of data to be matched before the matching process is executed. As a result, the accuracy of the matching process can be improved and the amount of calculation can be reduced.
- In the present embodiment, the information removal process shown in FIG. 6A and the information variance increase process shown in FIG. 7A are used selectively according to the conditions shown in FIG. 3B. Specifically, when there is a place where the amount of temporal change between the first timing and the second timing is greater than the threshold Th_ch (that is, when moving body / environment change information exists) and the time difference between the first timing and the second timing is smaller than the threshold Th_tb (second time), the influence of the movement of the moving body or of the change in the environment is judged to be relatively large, and the removal process is executed in step S20705.
- Otherwise, when the time difference is equal to or greater than the threshold Th_tb, the variance increase process is executed in step S20706.
- Compared with the variance increase process, the removal process suppresses the influence of the movement of the moving body and of the change in the environment more strongly.
- By using the removal process and the variance increase process selectively according to the degree of that influence, the matching process between the data is realized with the optimal method.
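- Putting the branching of FIG. 3B together, a compact sketch of the selection (function and label names are illustrative, not from the patent):

def select_process(change_amount, time_diff, th_ch, th_tu, th_tb):
    """Mirror the branching of steps S20702-S20706: decide how a changed
    place is treated before matching, based on how large the change is
    and how long it took."""
    if change_amount <= th_ch:
        return "no_processing"        # S20702: no significant change here
    if time_diff > th_tu:
        return "treat_as_new_map"     # S20703: semi-permanent change (e.g. moved furniture)
    if time_diff < th_tb:
        return "removal"              # S20704 -> S20705: strong transient disturbance
    return "variance_increase"        # S20704 -> S20706: weaker influence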
- As described above, according to the first embodiment, map generation, and self-position estimation of a mobile robot that moves using the map, can be performed with high accuracy and robustness even in a space with large environmental changes.
- the robot system according to the first embodiment described above includes one mobile robot and at least one environmental sensor node.
- the robot system according to the second embodiment includes a plurality of mobile robots and at least one environmental sensor node.
- Each of the plurality of mobile robots has the same function as the environment sensor node in the first embodiment. That is, for one mobile robot that performs map generation and self-position estimation, the remaining mobile robots can function as environment sensor nodes.
- Just as the environment sensor node of the first embodiment ultimately acquires moving body movement / environment change information using its environment sensor, each mobile robot of the second embodiment is configured to ultimately acquire moving body movement / environment change information using its external sensor.
- For this purpose, each mobile robot includes components substantially the same as the moving body / environment change information extraction unit, the environment sensor information time-series database, and the moving body / environment change information database of the environment sensor node shown in FIG. 1.
- FIG. 11 is a flowchart showing an example of processing of a plurality of mobile robots in the robot system according to the second embodiment.
- the mobile robot according to the second embodiment is configured to operate according to a main process flow T701 and sub-process flows T702A and T702B.
- FIG. 11 shows the exchange of information between a robot that executes the main processing flow (corresponding to the mobile robot 100 of the first embodiment described above) and at least one other mobile robot that executes the sub-processing flows T702A and T702B (functioning like the environment sensor node 120 of the first embodiment described above).
- The difference between the robot system according to the second embodiment, which performs the processing flow shown in FIG. 11, and the robot system according to the first embodiment shown in FIGS. 3A and 3B is that the moving body movement / environment change information is extracted by a mobile robot that performs the processing flows T702A and T702B and plays the role of an environment sensor node, and is transmitted from that robot to the mobile robot that executes the main processing flow and performs map generation and self-position estimation. The mobile robot acting as a sensor node acquires information about its surroundings using its external sensor.
- Also in the second embodiment, map generation, and self-position estimation of a mobile robot that moves using the map, can be performed with high accuracy and robustness even in a space with large environmental changes.
- Furthermore, because a movable mobile robot acts as an environment sensor node, the area in which moving body movement / environment changes can be detected is expanded, and the amount of moving body movement / environment change information increases accordingly (compared with a robot system whose environment sensor nodes are fixed). As a result, the deterioration in accuracy of the estimates in the map generation / self-position estimation process and the increase in the amount of calculation caused by disturbances can be suppressed further.
- In this robot system, since any one of the plurality of mobile robots can play the role of the environment sensor node, dedicated environment sensor nodes themselves can be excluded from the robot system. That is, in this case the robot system includes no environment sensor node, but includes at least two mobile robots that can play the role of an environment sensor node.
- the technique described in the above aspect can be realized, for example, in the following types of robot systems.
- the types of robot systems in which the technology described in the above aspect is realized are not limited to these.
- The processing of the present disclosure has been described by way of the embodiments, but the subject and the device that perform each process are not particularly limited. Each process may be performed by a processor or the like (described below) embedded in a specific locally placed device, or by a cloud server or the like arranged at a location different from the local device. The processes described in the present disclosure may also be shared by coordinating information between a local device and a cloud server.
- the above apparatus is specifically a computer system including a microprocessor, ROM, RAM, a hard disk unit, a display unit, a keyboard, a mouse, and the like.
- a computer program is stored in the RAM or hard disk unit.
- Each device achieves its functions by the microprocessor operating according to the computer program.
- the computer program is configured by combining a plurality of instruction codes indicating instructions for the computer in order to achieve a predetermined function.
- a part or all of the constituent elements constituting the above-described apparatus may be configured by one system LSI (Large Scale Integration).
- the system LSI is an ultra-multifunctional LSI manufactured by integrating a plurality of components on a single chip, and is specifically a computer system including a microprocessor, ROM, RAM, and the like.
- a computer program is stored in the RAM.
- the system LSI achieves its functions by the microprocessor operating according to the computer program.
- a part or all of the constituent elements constituting the above-described device may be constituted by an IC card or a single module that can be attached to and detached from each device.
- the IC card or the module is a computer system including a microprocessor, a ROM, a RAM, and the like.
- the IC card or the module may include the super multifunctional LSI described above.
- the IC card or the module achieves its function by the microprocessor operating according to the computer program. This IC card or this module may have tamper resistance.
- the present disclosure may be the methods described above. It may also be a computer program that realizes these methods on a computer, or a digital signal composed of the computer program.
- the present disclosure may also be the computer program or the digital signal recorded on a computer-readable recording medium such as a flexible disk, a hard disk, a CD-ROM, an MO, a DVD, a DVD-ROM, a DVD-RAM, a BD (Blu-ray (registered trademark) Disc), or a semiconductor memory.
- the digital signal may be recorded on these recording media.
- the computer program or the digital signal may be transmitted via an electric communication line, a wireless or wired communication line, a network represented by the Internet, a data broadcast, or the like.
- the present disclosure may be a computer system including a microprocessor and a memory, and the memory may store the computer program, and the microprocessor may operate according to the computer program.
- the program or the digital signal may be recorded on the recording medium and transferred, or transferred via the network or the like, so that it can be executed by another independent computer system.
- the mobile robot and environmental sensor node can take various forms.
- the mobile robot 100 and the environmental sensor nodes 120 (312 to 319) shown in FIG. 2 are used indoors, but are not limited thereto.
- the mobile robot may be in the form of a vehicle that automatically travels on an outdoor road.
- in that case, the environment sensor node may be arranged along the road like a road sign, or provided on a building near the road.
- the present disclosure is applicable to a map generation method, a self-position estimation method, a robot system, and a robot.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Aviation & Aerospace Engineering (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Electromagnetism (AREA)
- Optics & Photonics (AREA)
- Human Computer Interaction (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
Description
The present disclosure relates to a technique for self-position estimation and environment map creation by a mobile robot, and more particularly to a system of environment sensors installed in an environment and a mobile robot that cooperates with the environment sensors.
In order for a mobile robot to move autonomously in an environment space where humans and other machines exist, the mobile robot itself must correctly recognize its own position and posture and the surrounding environment. Conventionally, SLAM (Simultaneous Localization And Mapping), a technique that estimates the robot's own position while simultaneously creating a map of the environment, is known as a method for performing such recognition.
A mobile robot using SLAM is equipped with internal sensors, which sense the internal state of the mobile robot, and external sensors, which sense the state of the outside world.
In SLAM, the current position and posture are first estimated from the information of the internal sensors. Next, from the map information of the surrounding environment already held, (i) the position, (ii) the posture, (iii) the uncertainty of the position, and (iv) the uncertainty of the posture of the mobile robot at the estimated position and posture are predicted. Next, the observation information obtained by the external sensors is compared with the information predicted from the internal sensor information. Next, the weights of the internal and external sensor information are determined from the likelihood of each piece of information calculated by the comparison. Finally, the current position and posture and the map information are updated using the weights of the internal and external sensor data.
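As a rough illustration of this predict, compare, weight, and update cycle, the following is a minimal 1-D sketch with made-up numbers (a simplified stand-in for intuition, not the method of this disclosure): the prediction from the internal sensor and the observation from the external sensor are weighted by their uncertainties.

def fuse(pred, pred_var, obs, obs_var):
    """1-D Kalman-style update: the gain k weights the external-sensor
    observation against the internal-sensor prediction according to
    their uncertainties (variances)."""
    k = pred_var / (pred_var + obs_var)
    est = pred + k * (obs - pred)          # updated position estimate
    est_var = (1.0 - k) * pred_var         # updated uncertainty
    return est, est_var

# Predict from the internal sensor (odometry), then correct with the external sensor.
x, v = 0.0, 0.01
x, v = x + 0.50, v + 0.02                  # prediction: moved 0.5 m, uncertainty grows
x, v = fuse(x, v, obs=0.47, obs_var=0.01)  # observation: external sensor saw 0.47 m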
Patent Literature 1 discloses a robot system that evaluates the reliability of a plurality of estimators that perform self-position estimation based on the outputs of a plurality of different sensors, and integrates the self-position estimation information from the estimators according to their reliability.
Patent Literature 2 discloses a mobile robot that cooperates with environment sensors and moves while avoiding collisions with people and obstacles without disturbing people's activities. A plurality of environment sensors are arranged so as to detect the positions of all people present in the environment.
According to one aspect of the present disclosure, there is provided
a map generation method for a mobile robot that performs map generation using at least one environment sensor node, the method comprising:
obtaining first map information created in advance and including information around the mobile robot;
obtaining second map information including information around the mobile robot with an external sensor mounted on the mobile robot;
receiving third map information including information around the mobile robot from the environment sensor node;
(i) when the third map information contains information on a place whose temporal change amount is equal to or greater than a predetermined threshold,
performing a removal process that removes the place information in the first map information and the place information in the second map information corresponding to the place where the temporal change amount in the third map information is equal to or greater than the predetermined threshold, or
performing a variance increase process that increases the variance of each of the place information in the first map information and the place information in the second map information corresponding to that place;
(ii) matching the first map information and the second map information after the removal process or the variance increase process has been performed, and generating map information based on the matching result; and
(iii) updating the first map information to the map information generated based on the matching result.
These general or specific aspects may be realized by a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a CD-ROM, or by any combination of a system, a method, an integrated circuit, a computer program, and a recording medium.
According to the present disclosure, map generation and self-position estimation can be performed with high accuracy and robustness even in a space with large environmental changes.
(Knowledge that became the basis of the present invention)
In SLAM, if the environment changes at a place within the detection range of the mobile robot's external sensor during map generation or self-position estimation, errors occur in the map generation and self-position estimation, and correct estimation becomes impossible. In spaces where the environment changes greatly because of people and other moving objects, such as offices and homes, and in spaces where many people come and go, such as stations and shopping malls, large disturbances, that is, large environmental changes, occur frequently. This makes map generation by SLAM difficult, and as a result it also becomes difficult to plan routes that avoid people and obstacles using SLAM.
Furthermore, in spaces with large environmental changes as described above, the convergence of the least-squares estimates in the processing part that compares the information obtained by the external sensor with the predicted information deteriorates during map generation and self-position estimation, and the amount of calculation increases.
Conventional methods such as those of Patent Literature 1 and Patent Literature 2 did not consider performing map generation and self-position estimation with high accuracy and robustness even in spaces with large environmental changes, and therefore could not suppress the resulting increase in the amount of calculation.
The present disclosure provides a self-position estimation method, a robot system, and a mobile robot that perform map generation and self-position estimation with high accuracy and robustness even in a space with large environmental changes.
The self-position estimation method according to the present disclosure is a map generation method for a mobile robot that performs map generation using at least one environment sensor node. First map information created in advance and including information around the mobile robot is obtained; second map information including information around the mobile robot is obtained by an external sensor mounted on the mobile robot; and third map information including information around the mobile robot is received from the environment sensor node. (i) When the third map information contains information on a place whose temporal change amount is equal to or greater than a predetermined threshold, a removal process is performed that removes the place information in the first map information and the place information in the second map information corresponding to the place where the temporal change amount in the third map information is equal to or greater than the predetermined threshold, or a variance increase process is performed that increases the variance of each of the place information in the first map information and the place information in the second map information corresponding to that place. (ii) The first map information and the second map information after the removal process or the variance increase process has been performed are matched, and map information is generated based on the matching result. (iii) The first map information is updated to the map information generated based on the matching result.
Further, when the third map information contains information on a place where the temporal change amount between a first timing and a second timing is equal to or greater than the predetermined threshold, the removal process may be performed if the time difference between the first timing and the second timing is within a range between a first time and a second time that is longer than the first time, and the variance increase process may be performed if the time difference between the first timing and the second timing is equal to or greater than the second time.
Further, when the third map information contains information on a place where the temporal change amount between the first timing and the second timing is equal to or greater than the predetermined threshold, if the time difference between the first timing and the second timing is equal to or less than the first time, the first map information and the second map information may be matched without performing the removal process or the variance increase process, and map information may be generated based on the matching result.
Further, when the third map information contains no information on a place whose temporal change amount is equal to or greater than the predetermined threshold, the first map information and the second map information may be matched, and map information may be generated based on the matching result.
Further, the variance increase process may be a process that increases the uncertainty of the information of the corresponding place in the first map information and of the corresponding place in the second map information.
Further, the third map information may include information on the existence probability of objects around the environment sensor node, and the temporal change amount may be the change amount of that existence probability.
Further, the first map information, the second map information, and the third map information may be coordinate information in a two-dimensional coordinate system or coordinate information in a three-dimensional coordinate system, and before the matching is performed, a coordinate transformation process may be performed that transforms the coordinate systems of the first map information, the second map information, and the third map information into a common coordinate system.
A self-position estimation method according to one aspect of the present disclosure is a self-position estimation method for a mobile robot that performs self-position estimation using at least one sensor node. First map information created in advance and including information around the mobile robot is obtained; second map information including information around the mobile robot is obtained by an external sensor mounted on the mobile robot; and third map information including information around the mobile robot is received from an environment sensor node. (i) When the third map information contains information on a place whose temporal change amount is equal to or greater than a predetermined threshold, a removal process is performed that removes the place information in the first map information and the place information in the second map information corresponding to the place where the temporal change amount in the third map information is equal to or greater than the predetermined threshold, or a variance increase process is performed that increases the variance of each of the place information in the first map information and the place information in the second map information corresponding to that place. (ii) The first map information and the second map information after the removal process or the variance increase process has been performed are matched, and map information is generated based on the matching result. (iii) The first map information is updated to the map information generated based on the matching result. (iv) Based on the updated first map information and the detection result of an internal sensor that detects at least one of the position and the posture of the mobile robot, the self-position of the mobile robot on the updated first map information is estimated.
Further, a movement route may be calculated based on the updated first map information and the estimated self-position, and the mobile robot may be moved along the movement route.
A robot system according to one aspect of the present disclosure is a robot system including at least one environment sensor node and a mobile robot. The environment sensor node acquires third map information including information around the mobile robot. The mobile robot includes: a database that stores first map information created in advance and including information around the mobile robot; an external sensor that acquires second map information including information around the mobile robot; a communication unit that communicates with the environment sensor node to acquire the third map information; and an information processing unit that (i) when the third map information contains information on a place whose temporal change amount is equal to or greater than a predetermined threshold, performs a removal process that removes the place information in the first map information and the place information in the second map information corresponding to the place where the temporal change amount in the third map information is equal to or greater than the predetermined threshold, or performs a variance increase process that increases the variance of each of the place information in the first map information and the place information in the second map information corresponding to that place, (ii) matches the first map information and the second map information after the removal process or the variance increase process has been performed and generates map information based on the matching result, and (iii) updates the previously recorded first map information to the map information generated based on the matching result.
A mobile robot according to one aspect of the present disclosure includes: a database that stores first map information created in advance and including information around the mobile robot; an external sensor that acquires second map information including information around the mobile robot; a communication unit that communicates with at least one environment sensor node, located outside the mobile robot, that acquires third map information including information around the mobile robot, in order to acquire the third map information; and an information processing unit that (i) when the third map information contains information on a place whose temporal change amount is equal to or greater than a predetermined threshold, performs a removal process that removes the place information in the first map information and the place information in the second map information corresponding to the place where the temporal change amount in the third map information is equal to or greater than the predetermined threshold, or performs a variance increase process that increases the variance of each of the place information in the first map information and the place information in the second map information corresponding to that place, (ii) matches the first map information and the second map information after the removal process or the variance increase process has been performed and generates map information based on the matching result, and (iii) updates the previously recorded first map information to the map information generated based on the matching result.
These general or specific aspects may be realized by a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a CD-ROM, or by any combination of a system, a method, an integrated circuit, a computer program, and a recording medium.
Each of the embodiments described below shows one example of the present disclosure. The numerical values, shapes, components, steps, order of steps, and the like shown in the following embodiments are examples and are not intended to limit the present disclosure. Among the components in the following embodiments, components that are not recited in the independent claims representing the broadest concept are described as optional components. In all the embodiments, the respective contents can also be combined.
Hereinafter, embodiments will be described with reference to the drawings.
(Embodiment 1)
FIG. 1 is a block diagram showing an example of the configuration of the robot system according to the first embodiment. The robot system includes a mobile robot 100 and an environment sensor node 120.
The mobile robot 100 includes an external sensor 101, an internal sensor 102, an information processing unit (a moving body / environment change information integration unit 103, a map generation / self-position estimation unit 104, a map information database 105, and a route planning unit 106), a control unit 107, an actuator 108, and a communication unit 109.
The mobile robot 100 also includes, as means for moving the mobile robot 100, for example, two or four wheels, or two or more legs. The wheels or legs operate with power from the actuator 108, which makes it possible to move the mobile robot 100. Each leg of the mobile robot 100 desirably has one or more joints. Furthermore, the means for moving the mobile robot 100 desirably includes one or more arms for mounting a sensor or for grasping an arbitrary object, and each arm desirably has one or more joints.
The mobile robot 100 also includes one or more CPUs, one or more main memories, and one or more auxiliary memories (not shown).
The CPU of the mobile robot 100 performs the various kinds of arithmetic processing carried out by the moving body / environment change information integration unit 103, the map generation / self-position estimation unit 104, the route planning unit 106, the control unit 107, and so on, which are described below.
The main memory of the mobile robot 100 is a storage device that the CPU can access directly, and may be configured with memories such as DRAM, SRAM, flash memory, ReRAM, FeRAM, MRAM, STT-RAM, and PCRAM.
The auxiliary storage of the mobile robot 100 is a storage device for long-term storage, copying, or backup of the contents of the main memory, and may be configured with an HDD, flash memory, an optical disk, a magnetic disk, or the like.
The external sensor 101 is a sensor that detects information about the surrounding environment of the mobile robot 100; it detects the two-dimensional or three-dimensional shape of the surrounding environment as well as its color and material. The external sensor 101 may be configured with sensors such as an LRF (Laser Range Finder), LIDAR (Laser Imaging Detection and Ranging), a camera, a depth camera, a stereo camera, sonar, and RADAR, or any combination of them. The information about the surrounding environment acquired by the external sensor 101 includes information about coordinates (coordinate information).
The internal sensor 102 is a sensor that detects the position and posture of the mobile robot 100. The internal sensor 102 can detect at least one of the absolute values of the three directional components (for example, the X-axis, Y-axis, and Z-axis components of an X-Y-Z orthogonal coordinate system) of the movement distance, velocity, acceleration, angular velocity, and posture of the mobile robot 100. It may also be able to detect the absolute values of the velocity, acceleration, angular velocity, posture, and torque of each joint of the legs and arms of the mobile robot 100. The internal sensor 102 may be configured with sensors such as an acceleration sensor, an angular velocity sensor, an encoder, a geomagnetic sensor, an atmospheric / water pressure sensor, and a torque sensor, or a combination of them.
The moving body / environment change information integration unit 103 acquires the information detected by the external sensor 101 mounted on the mobile robot 100 (second map information). It also acquires, via the communication unit 109, the moving body movement / environment change information (third map information) extracted by the environment sensor node 120. Using the moving body movement / environment change information extracted by the environment sensor node 120, it performs a process that removes, from the information detected by the external sensor 101, the information about places that change dynamically over time (removal process), or a process that lowers the reliability of the information about those places (variance increase process). The specific contents of the removal process and the variance increase process are described later.
The map generation / self-position estimation unit 104 acquires at least one of the absolute values of the three directional components of the movement distance, velocity, acceleration, angular velocity, and posture of the mobile robot 100 obtained from the internal sensor 102. The map generation / self-position estimation unit 104 also acquires the information about the surrounding environment of the mobile robot 100 obtained by the external sensor 101 and processed by the moving body / environment change information integration unit 103 (that is, after the removal process or the variance increase process). From the acquired information, the map generation / self-position estimation unit 104 estimates the self-position of the mobile robot 100 and generates map information simultaneously through filtering processing. The self-position estimation and the map information generation do not have to be performed simultaneously. The filtering processing may be configured with methods such as the Kalman filter, the extended Kalman filter, the unscented Kalman filter, the particle filter, and the Rao-Blackwellized particle filter. The map generation / self-position estimation unit 104 stores the generated map in the map information database 105. When a map of the environment already exists, it updates the map information using the map information stored in the map information database 105 and stores the result in the map information database 105.
The map information database 105 stores the map information generated and updated by the map generation / self-position estimation unit 104 (first map information). The stored map information is used by the map generation / self-position estimation unit 104 and by the route planning unit 106. The map information database 105 is placed in the main memory or the secondary storage of the mobile robot 100.
The route planning unit 106 uses the map information stored in the map information database 105 and the information on the self-position of the mobile robot 100 estimated by the map generation / self-position estimation unit 104 to plan a route for moving on the map from the current self-position. The planned route is a route that minimizes the route cost. The route cost may be expressed by the total travel time, the total travel distance, the total energy used for the travel, the sum of the congestion levels along the route, or any combination of them.
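As one illustration of cost-minimizing route planning, the following sketch assumes a uniform per-step cost on an occupancy grid and uses a Dijkstra-style search (illustrative only; as noted above, a real route cost could combine time, distance, energy, and congestion):

import heapq

def plan_route(grid, start, goal):
    """Dijkstra search on a 4-connected occupancy grid (0 = free, 1 = occupied)
    with cost 1 per move; returns (total cost, path) or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1, (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],   # 1 = occupied cell
        [0, 0, 0]]
print(plan_route(grid, (0, 0), (2, 0)))  # routes around the occupied row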
The control unit 107 performs control for executing the movement of the mobile robot 100 along the route planned by the route planning unit 106 in the environment in which the mobile robot actually operates.
The actuator 108 drives, for example, the wheels based on the control commands from the control unit 107, thereby actually moving the mobile robot 100.
The communication unit 109 has a function of performing one-to-many and many-to-many communication, wired or wireless. The communication unit 109 inquires of the environment sensor node 120 whether moving body movement / environment change information exists for a specified place.
The environment sensor node 120 includes an environment sensor 121, a moving body movement / environment change information extraction unit 122, an environment sensor information time-series database 123, a moving body movement / environment change information database 124, and a communication unit 125.
The environment sensor 121 is a sensor that detects information about the surrounding environment of the environment sensor node 120. The information about the surrounding environment includes information on the two-dimensional or three-dimensional shape of the surrounding environment as well as on its color and material.
FIG. 2 is a schematic diagram showing an example of an environment in which the environment sensor node 120, a plurality of environment sensor nodes 312 to 319 similar to the environment sensor node 120, and the mobile robot 100 exist.
Although the reference numerals differ between the configurations shown in FIG. 1 and FIG. 2, configurations with the same name are the same.
The environment sensor nodes 120 and 312 to 319 include environment sensors 121 and 322 to 329, respectively. The detection ranges of the environment sensor nodes 120 and 312 are 331 and 332, respectively. The detection ranges of the environment sensor nodes 313 to 319 are not illustrated in order to simplify the figure, but the environment sensor nodes and environment sensors are installed so that the entire space in which the mobile robot can move can be detected by the environment sensor nodes.
The environment sensors 121 and 322 to 329 of the environment sensor nodes 120 and 312 to 319 may be configured with sensors such as an LRF (Laser Range Finder), LIDAR (Laser Imaging Detection and Ranging), a camera, a depth camera, a stereo camera, sonar, RADAR, a pyroelectric infrared sensor, and an infrared ToF (Time of Flight) sensor, or any combination of them.
As shown in FIG. 2, the mobile robot 100 includes the external sensor 101 and is moving in the movement direction 303.
At the timing shown in FIG. 2, the detection range of the external sensor 101 is 304.
FIG. 2 also shows the current position 341 of a person / moving body, the past position 342 of the person / moving body, and the current movement direction 343 of the person / moving body. The time-series environmental change caused by the person / moving body moving from the position 342 to the position 341 is extracted as moving body movement / environment change information by the environment sensor nodes 120, 312, 313, and 314, as described later in detail.
The information about the surrounding environment acquired by the environment sensor 121 includes information about coordinates (coordinate information).
The moving body movement / environment change information extraction unit 122 stores the time-series information about the surrounding environment detected by the environment sensor 121 in the environment sensor information time-series database 123 in the form of pairs of detection information and detection timing. The detection timing is expressed, for example, as a time. Based on the plurality of pairs of detection timing and detection information stored in time-series order in the environment sensor information time-series database 123, the moving body movement / environment change information extraction unit 122 also calculates the place of a change in the surrounding environment, the shape of the change, the magnitude of the change, and the time the change took. It then stores the place of the change, the shape of the change, the magnitude of the change, and the time the change took in the moving body movement / environment change information database 124 as moving body movement / environment change information. The moving body movement / environment change information does not have to include all of the place, shape, magnitude, and duration of the change in the surrounding environment; it may include only some of these pieces of information, or it may include other information.
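A minimal sketch of this extraction step, assuming for illustration that the node stores occupancy-grid snapshots in its time-series database (the grid form, field names, and per-cell threshold are assumptions, not from this disclosure):

import numpy as np

def extract_change(grid_t1, grid_t2, t1, t2, cell_threshold=0.3):
    """Compare two time-stamped occupancy snapshots and summarize the change:
    where it happened, how large it was, and how long it took."""
    diff = np.abs(grid_t2 - grid_t1)                  # per-cell change in existence probability
    return {
        "place": np.argwhere(diff > cell_threshold),  # changed cell coordinates
        "magnitude": float(diff.sum()),               # overall size of the change
        "duration": t2 - t1,                          # time the change took
    }

g1 = np.zeros((4, 4)); g1[1, 1] = 1.0   # object present at (1, 1) at time t1
g2 = np.zeros((4, 4)); g2[1, 2] = 1.0   # object found at (1, 2) at time t2
info = extract_change(g1, g2, t1=0.0, t2=0.5)
# info["place"] -> [[1 1], [1 2]], info["magnitude"] -> 2.0, info["duration"] -> 0.5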
The environment sensor information time-series database 123 stores the time-series information about the surrounding environment detected by the environment sensor 121 and acquired by the moving body movement / environment change information extraction unit 122, in pairs of detection information and detection timing. The stored information is used by the moving body movement / environment change information extraction unit 122 to extract moving body movement / environment change information.
The moving body movement / environment change information database 124 stores the moving body movement / environment change information calculated by the moving body movement / environment change information extraction unit 122. The stored moving body movement / environment change information is transmitted to the mobile robot 100 by the communication unit 125 upon receiving an inquiry from the communication unit 109 of the mobile robot 100.
Upon receiving an inquiry from the communication unit 109 of the mobile robot 100 as to whether moving body movement / environment change information exists, the communication unit 125 searches the moving body movement / environment change information database 124 and transmits the moving body movement / environment change information to the mobile robot 100 by wireless or wired communication.
Next, the processing flow of the robot system will be described with reference to FIGS. 2, 3A, and 3B. FIG. 3A is a flowchart showing an example of the processing of the robot system. T201 represents the processing flow of the mobile robot 100, and T221A and T221B represent the processing flows of the environment sensor node 120.
First, in step S201 in FIG. 3A, the internal sensor 102 acquires information. The information acquired here consists of the absolute values of the three directional components of the movement distance, velocity, acceleration, angular velocity, and posture of the mobile robot 100.
Next, in step S202, the external sensor 101 acquires information about the surrounding environment of the mobile robot 100. The information about the surrounding environment at the timing shown in FIG. 2 is the information detected by the external sensor 101 for the detection range 304. The acquired information includes information on the two-dimensional or three-dimensional shape of the surrounding environment as well as on its color and material.
Next, in step S203, the mobile robot 100 inquires, via the communication unit 109, of the environment sensor nodes 120 and 312 to 319 whether moving body movement / environment change information exists within a specified two-dimensional or three-dimensional place of arbitrary size during a specified time period, and waits a predetermined time for the reply to the inquiry. The place and time may be specified in advance by the user who manages the mobile robot 100 as a place of arbitrary (fixed) size and a time period of arbitrary length. The mobile robot may also change and specify the size of the place and the length of the time period according to its state.
The inquiry from the mobile robot 100 may also be made to all other mobile robots and other environment sensor nodes with which communication is possible. The mobile robot 100 can receive replies and moving body movement / environment change information from all mobile robots and environment sensor nodes. The inquiries from the mobile robot 100 may be made to a plurality of environment sensor nodes (or a plurality of other mobile robots) simultaneously or in turn.
Next, in step S204, the communication unit 109 determines whether there is a reply from the environment sensor nodes 120 and 312 to 319 to the inquiry made in step S203. If there is at least one reply, the process proceeds to step S205; if there is no reply, the process proceeds to step S208.
Next, in step S205, the communication unit 109 receives moving body movement / environment change information from the environment sensor nodes 120 and 312 to 319, all other environment sensor nodes, and all other mobile robots. The moving body movement / environment change information may instead be received only from a specific environment sensor node among the environment sensor nodes 120 and 312 to 319. Regarding the place specified in step S203, the information may be acquired only from the environment sensor nodes that hold environment change information.
In step S206, the moving body / environment change information integration unit 103 acquires from the map information database 105 the map information (first map information) that includes the information of the place corresponding to the place where the moving body movement / environment change information received in step S205 exists, for use in the next step S207.
FIG. 3B is a flowchart showing an example of the processing of the robot system, specifically an example of the processing of step S207. The specific contents of step S207, that is, its details from step S20701 to step S20706, will be described with reference to FIG. 3B.
In step S207, the moving body / environment change information integration unit 103 performs a process that integrates one or more pieces of moving body movement / environment change information extracted by the environment sensor nodes 120 and 312 to 319 into the information acquired by the external sensor 101 in step S202 and the map information acquired in step S206.
First, in step S20701, the moving body / environment change information integration unit 103 performs a coordinate transformation that converts the coordinate system of the information about the surrounding environment acquired by the external sensor 101 in step S202 and the coordinate system of the moving body movement / environment change information transmitted from the environment sensor nodes 120 and 312 to 319 into the coordinate system of the map information acquired in step S206, so that all of them share a common coordinate system.
Next, in step S20702, the moving body / environment change information integration unit 103 determines whether there is a place where the magnitude of the temporal change between different timings in the shape of the moving body movement / environment change information is larger than a threshold Th_ch.
In the present embodiment, the "temporal change amount" relates, for example, to the existence probability of each point, that is, the probability that an object exists at each point in a two-dimensional or three-dimensional space. Specifically, for each point, the absolute value of the difference between the existence probability at a first timing and the existence probability at a second timing that is later than the first timing is obtained, and the absolute values of these differences are summed over all points (sum of absolute differences). In other words, it is the amount by which the existence probabilities have changed during the time between the first timing and the second timing. The first timing and the second timing may be times of day, or elapsed times since the robot system started operating.
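Written as a formula, with $P_t(x)$ denoting the existence probability of a point $x$ at timing $t$ (notation introduced here for convenience), the temporal change amount between a first timing $t_1$ and a later second timing $t_2$ is:

$$\Delta_{t_1,t_2} \;=\; \sum_{x}\left|P_{t_2}(x)-P_{t_1}(x)\right|$$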
The temporal change amount related to shape may also be, for example, the per-point SSD (Sum of Squared Differences), SATD (Sum of Absolute Transformed Differences), MAD (Mean Absolute Difference), or MSD (Mean Squared Difference), or the difference of image feature vectors such as SIFT, SURF, and HoG. It may also be, for example, the difference of BoF (Bag of Features) feature vectors generated from such feature vectors.
The specific value set for the threshold Th_ch depends on the change measure used and on the situation. As one example, in a 10 m × 10 m × 10 m space, the amount of change produced when a 100 mm × 100 mm × 100 mm object moves 20 mm could be set as the threshold Th_ch. However, the setting of the specific value of the threshold Th_ch is not limited to this example.
If there is a place where the magnitude of the temporal change between different timings is larger than the threshold Th_ch, the process proceeds to step S20703; otherwise, it proceeds to step S20707.
Next, in step S20703, the moving body / environment change information integration unit 103 determines whether the time difference between the timings for which the temporal change in the shape of the moving body movement / environment change information exceeds the threshold Th_ch is larger than a threshold Th_tu (first time). If it is larger than the threshold Th_tu, the process proceeds to step S20707. This means that, when the time difference is larger than the predetermined threshold Th_tu, the moving body movement / environment change information is regarded not as a temporary environmental change caused by a disturbance but as a semi-permanent or permanent change of the environment, for example one caused by the movement of a stationary object, and the changed part of the environment is treated as a new map. For example, a layout change caused by moving furniture indoors falls into this category, and a change with a large time difference, like a layout change, is treated as a new map. If the time difference is smaller than or equal to the threshold Th_tu, the process proceeds to step S20704.
Next, in step S20704, the moving body / environment change information integration unit 103 determines whether the time difference between the timings for which the temporal change in the shape of the moving body movement / environment change information exceeds the threshold Th_ch is smaller than a threshold Th_tb (second time).
If it is smaller than the threshold Th_tb, then in step S20705 the moving body / environment change information integration unit 103 executes the removal process, which removes the place information in the information about the surrounding environment acquired by the external sensor 101 of the mobile robot 100 (second map information) and the place information in the map information acquired in step S206 (first map information) corresponding to the place where the temporal change in the moving body movement / environment change information (third map information) is larger than the threshold Th_ch. The details of this information removal process are described later.
On the other hand, if it is larger than or equal to the threshold Th_tb, then in step S20706 the moving body / environment change information integration unit 103 executes the variance increase process, which increases the variance of each of the place information in the information about the surrounding environment acquired by the external sensor 101 of the mobile robot 100 (second map information) and the place information in the map information acquired in step S206 (first map information) corresponding to the place where the temporal change in the moving body movement / environment change information (third map information) is larger than the threshold Th_ch. The threshold Th_tb is a value larger than the threshold Th_tu. For example, the increase of the variance is calculated by multiplying the variance V by a constant a larger than 1 (a × V).
Here, the "variance of information" is an index that expresses the dispersion and uncertainty of the information. Regarding the information about the surrounding environment, a small variance means that there is a high probability that the detection target exists near the expected value in the space of the surrounding environment. A large variance, on the other hand, means that the probability that the detection target exists near the expected value is low, and the distribution of the probability of the detection target's presence spreads widely over the space.
The "variance increase process" for information means increasing the dispersion and uncertainty of the information, that is, lowering the reliability of the information. The details of this information variance increase process are described later.
In step S20704, it was determined whether the time difference between the timings for which the temporal change in the shape of the moving body movement / environment change information exceeds the threshold Th_ch is smaller than the threshold Th_tb, but the determination is not limited to this. For example, it may be determined whether the magnitude of the temporal change in the shape of the moving body movement / environment change information is larger than a new threshold different from the threshold Th_ch. That is, if the magnitude of the temporal change in the shape of the moving body movement / environment change information is larger than the new threshold (Yes), the process proceeds to step S20705; otherwise (No), it proceeds to step S20706. The new threshold is a value higher than the threshold Th_ch. In other words, since Yes in step S20704 then corresponds to a larger change than No, the process of removing the moving body movement / environment change information from the external sensor information and the map information is effective in that case.
Next, in step S20707, the moving body / environment change information integration unit 103 determines whether, among the received moving body movement / environment change information about one or more places, there remains moving body movement / environment change information that the integration unit has not yet processed. If unprocessed moving body movement / environment change information exists, the process returns to step S20701 and continues until no unprocessed moving body movement / environment change information remains. If no unprocessed moving body movement / environment change information exists, the process proceeds to step S208.
Returning to FIG. 3A, in step S208, map information is generated and the self-position of the mobile robot 100 is estimated (including matching processing). If there was a reply from the environment sensor node 120 in step S204, the map generation / self-position estimation unit 104 generates map information and estimates the self-position of the mobile robot 100 using the information acquired by the external sensor 101 into which the moving body movement / environment change information was integrated in step S207, together with the information acquired by the internal sensor 102 in step S201. In the self-position estimation, the map information and the sensor data acquired by the external sensor are compared (matching processing) to search for a transformation with high similarity, and the transformation with the highest similarity gives the movement distance and movement direction of the mobile robot 100, that is, its current self-position. If there was no reply from the environment sensor node 120 in step S204, the map generation / self-position estimation unit 104 generates map information and estimates the self-position of the mobile robot 100 using the information acquired by the external sensor 101 in step S202 and the information acquired by the internal sensor 102 in step S201. The map information generation and the self-position estimation are performed simultaneously by the filtering processing in the map generation / self-position estimation unit 104.
Next, in step S209, the route planning unit 106 plans a route for moving from the current self-position using the map information generated in step S208 and stored in the map information database 105. The route planning method is as described above.
Next, in step S210, the control unit 107 generates control commands for performing the movement of the mobile robot 100 along the route planned in step S209.
Next, in step S211, the actuator 108 receives the control commands generated in step S210 and actually moves the mobile robot 100.
From here, steps S221 to S227, which are the processes in the environment sensor node 120, will be described.
In step S221 of the processing flow T221A, the communication unit 125 of the environment sensor node 120 accepts an inquiry from the mobile robot 100.
In step S223, using the moving body movement / environment change information database 124, the environment sensor node checks whether moving body movement / environment change information exists that matches the conditions of the inquiry from the mobile robot 100, that is, the specified place and the specified time period. If moving body movement / environment change information matching the conditions exists, the process proceeds to step S225; if not, the process returns to step S221.
Next, in step S225, the communication unit 125 transmits to the mobile robot 100 the reply to the inquiry made in step S203 as to whether moving body movement / environment change information exists for the specified place and time period (information indicating whether moving body movement / environment change information for the specified place exists).
Next, in step S226, the communication unit 125 transmits the moving body movement / environment change information to the mobile robot 100 that made the inquiry accepted in step S221. After the transmission, the process returns to step S221.
In step S222 of the processing flow T221B, the moving body movement / environment change information extraction unit 122 acquires the information about the surroundings of the environment sensor node 120 detected by the environment sensor 121, and stores it in the environment sensor information time-series database 123 in time-series order in the form of pairs of detection time and detection information.
In step S224, the moving body movement / environment change information extraction unit 122 refers, in time-series order, to the plurality of pairs of detection time and detection information about the surroundings of the environment sensor node 120 stored in the environment sensor information time-series database 123, and determines whether the environment around the environment sensor node 120 has changed.
In step S227, based on the plurality of time-series pairs of detection time and detection information stored in the environment sensor information time-series database 123, the moving body movement / environment change information extraction unit 122 calculates the place of the change in the surrounding environment, the shape of the change, the magnitude of the change, and the time the change took. It stores the place of the change, the magnitude of the change, and the time the change took in the moving body movement / environment change information database 124 as moving body movement / environment change information.
As shown in FIG. 2, the extracted moving body movement / environment change information is sent from the environment sensor node to the mobile robot through the communication path 353 by the communication means 351 mounted on the environment sensor node and the communication means 352 mounted on the mobile robot 100. The moving body movement / environment change information sent to the mobile robot is used by the moving body / environment change information integration unit to remove the place information in the information detected by the external sensor 101 of the mobile robot 100 (second map information) corresponding to the place where the environment change information caused by the person / moving body exists. This makes it possible to remove dynamic environmental changes, which are disturbances, from the information detected by the external sensor 101 of the mobile robot 100.
FIG. 4A shows, for example, that the mobile robot 100 is present in an environment 400 at an arbitrary time T. FIG. 4C shows the information (sensor data) 430 acquired by the external sensor 101 of the mobile robot 100 at time T. FIG. 4B shows that the mobile robot 100 is present in the environment 400 at time T + α, after an arbitrary time interval α has elapsed from time T. FIG. 4D shows the information (sensor data) 431 acquired by the external sensor 101 of the mobile robot 100 at time T + α.
As shown in FIG. 4A, the environment 400 contains the mobile robot 100, a stationary object A at a position 401, a moving body H at a position 402, and the environment sensor node 120. The mobile robot 100 is moving in a travel direction 405. The moving body H is moving from the position 402 in a travel direction 404.
As shown in FIG. 4B, the environment 400 contains the mobile robot 100, the stationary object A at the position 401, the moving body H at a position 403, and the environment sensor node 120.
Comparing FIG. 4A and FIG. 4B, the position of the mobile robot 100 has changed, and the position of the moving body H has also changed from the position 402 to the position 403.
The external sensor 101 mounted on the mobile robot 100 acquires information about the surroundings of the mobile robot 100 in the environment 400.
FIG. 4C shows the sensor data 430 (thick solid line) acquired by the external sensor 101 for the environment 400 at time T. The sensor data 430 includes the shapes of, and the distances from the external sensor 101 to, the stationary object A at the position 401, the moving body H at the position 402, and the wall W in the environment 400.
FIG. 4D shows the sensor data 431 (thick dashed line) acquired by the external sensor 101 for the environment 400 at time T + α, after the arbitrary time interval α has elapsed from time T. The sensor data 431 includes the shapes of, and the distances from the external sensor 101 to, the stationary object A at the position 401, the moving body H at the position 403, and the wall W in the environment 400.
The sensor data 430 and the sensor data 431 may include data formats such as two-dimensional distance data, three-dimensional distance data, two-dimensional shape data, three-dimensional shape data, and image feature points, or any combination of them. That is, the information acquired by the external sensor 101 includes coordinate information. Using the environment sensor 121, the environment sensor node 120 extracts, as moving body movement / environment change information, the environmental change caused by the moving body H moving from the position 402 to the position 403 between time T and time T + α. FIGS. 5A to 5C show the matching process between the map data at time T and the sensor data at time T + α, and examples of its result.
センサデータのマッチング処理は、上述したステップS208にて、地図生成・自己位置推定部104によって実行される。マッチング処理は、地図情報(データ)と、外界センサが取得したセンサデータとを照らし合わせて、類似度の高い変換を探索し、最も類似度の高い変換をもって、移動ロボット100の、移動距離および移動方向、すなわち現在の自己位置を推定する。
The sensor data matching process is executed by the map generation / self-
一例として、センサデータ430をもとに作られた時刻Tにおける地図データ501(すなわち地図情報データベース105内の地図情報)と、時刻T+αにおいて移動ロボット100の外界センサ101によって取得されたセンサデータ431とのマッチング処理について説明する。
As an example, map data 501 at time T (that is, map information in the map information database 105) created based on the sensor data 430, sensor data 431 acquired by the
マッチング処理は、複数のデータを照らし合わせて、相違度が低い、すなわち類似度が高い複数のデータ間の変換を探索する処理である。マッチング処理は、相違度の勾配を計算し、相違度が極小となる変換を探索する。 The matching process is a process of comparing a plurality of data and searching for conversion between a plurality of data having a low degree of difference, that is, a high degree of similarity. In the matching process, the gradient of the dissimilarity is calculated, and a conversion that minimizes the dissimilarity is searched for.
しかし、探索する対象の形状が変形している場合または、データに外乱によるノイズが入っている場合に、あらかじめ定められた相違度の閾値を下回らず、探索が収束しないという問題が発生する。さらに、そのような場合に、本来の正解であるデータ間の対応とは違う誤った対応を最適な対応と判定し、変換の探索の精度が低下する問題が発生する。 However, when the shape of the object to be searched is deformed or noise due to disturbance is included in the data, there is a problem that the search does not converge because the threshold of difference is not below a predetermined threshold. Further, in such a case, an incorrect correspondence different from the correspondence between the data that is the original correct answer is determined as the optimum correspondence, and there arises a problem that the accuracy of the conversion search is lowered.
マッチング処理による、地図データ501とセンサデータ431のマッチング結果は、図5Bに示す結果例503、結果例504のようになり得る。 The matching result of the map data 501 and the sensor data 431 by the matching process can be as a result example 503 and a result example 504 shown in FIG. 5B.
Result example 504 is very close to the ground truth, that is, the correct correspondence between the map data 501 and the sensor data 431.
In result example 504, the part of the data formed by the stationary object A, which does not move from position 401, and by the shape of the wall (the left-hand part) corresponds with high accuracy, matching the ground truth well. On the other hand, the part whose shape changed because the moving object H moved from position 402 to position 403 (the right-hand part) does not correspond well, and as a result the similarity of the sensor data as a whole is reduced.
Therefore, depending on the search method and the threshold, the number of iterations of the matching computation required to converge to the correct result example 504 increases.
FIG. 5C shows an example 505 of evaluation values in matching. In the evaluation value example 505, the horizontal axis is the distance in the X-axis direction between the two data sets 501 and 431 being matched, and the vertical axis is the evaluation value. The evaluation value represents the degree of dissimilarity between the data, and becomes lower the more closely the data shapes coincide.
Here, for ease of understanding, the evaluation value example 505 shows the case in which the sensor data 431 is translated relative to the map data 501 only in the X-axis direction. In practice, the data are translated and rotated about each axis in order to search for the transformation. Translation refers to motion that shifts the data parallel to itself in one direction.
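Under the same simplification, the evaluation curve of FIG. 5C can be reproduced by sweeping an X offset. The following is a minimal sketch; the toy wall data and the dissimilarity measure (mean squared nearest-neighbor distance) are illustrative assumptions.

```python
import numpy as np

def evaluation_curve(map_pts, scan_pts, offsets):
    """Evaluation value (dissimilarity) for each candidate X translation."""
    values = []
    for dx in offsets:
        moved = scan_pts + np.array([dx, 0.0])
        d2 = ((moved[:, None, :] - map_pts[None, :, :]) ** 2).sum(axis=2)
        values.append(d2.min(axis=1).mean())
    return np.array(values)

# Toy data: a straight wall, and the same wall scanned 0.5 units further along X.
map_pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
scan_pts = map_pts + np.array([0.5, 0.0])
offsets = np.linspace(-2.0, 2.0, 81)
curve = evaluation_curve(map_pts, scan_pts, offsets)
best = offsets[curve.argmin()]  # about -0.5: the translation that undoes the shift
```

The global minimum of `curve` corresponds to the correct X translation; additional local minima are the kind of false optimum discussed below for evaluation value 506.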
As shown in FIG. 5C, the evaluation value example 505 includes the evaluation value 507 of the matching computation for result example 504. Around the evaluation value 507 the evaluation value changes only gently, so when the result is obtained with a technique such as the gradient method, a large number of iterations of the matching computation is needed before the computation converges to the result.
Result example 503 shown in FIG. 5B is an example in which the degree of agreement with the ground truth (the correct correspondence between the map data 501 and the sensor data 431) is low. As shown in FIG. 5C, the evaluation value example 505 includes the evaluation value 506 of the matching computation for result example 503. The evaluation value 506 is a local minimum of the evaluation value example 505. Therefore, when the result is obtained with a technique such as the gradient method, the computation can converge at the evaluation value 506, and result example 503, a correspondence with low agreement with the ground truth, is erroneously judged to be the optimal result. The problem can thus also arise that the search converges to result example 503 and a correspondence with low agreement with the ground truth is erroneously judged to be optimal.
As a result, the accuracy of the search for the transformation between the data decreases, and consequently the accuracy of map generation and the accuracy of self-position estimation of the mobile robot 100 based on that map decrease.
Thus, when the shape of the sensor data changes because of the movement of a moving object, the number of iterations of the matching computation increases, and a correspondence with low agreement with the ground truth may erroneously be judged optimal, so that the accuracy of map generation and of self-position estimation of the mobile robot 100 decreases. Two processes that address this problem are described below with reference to FIGS. 6A to 6C and FIGS. 7A to 7C.
FIG. 6A illustrates an example of the data removal process. FIG. 6A shows the map data 601 at time T and the sensor data 602 at time T+α after the first process, that is, the process of removing the data of the place (region C) where moving-object/environmental-change information exists because of the movement of the moving object H, has been applied to remove the data of the target place.
The process shown in FIG. 6A is the same as step S20705. That is, region C in FIG. 6A corresponds to a place where the amount of temporal change is larger than the threshold Th_ch.
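The following is a minimal sketch of this removal process, assuming region C is given as an axis-aligned box; the patent does not fix how the region where the temporal change exceeds Th_ch is represented.

```python
import numpy as np

def inside_region(points, region):
    """Boolean mask of the points inside region = (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = region
    return ((points[:, 0] >= xmin) & (points[:, 0] <= xmax) &
            (points[:, 1] >= ymin) & (points[:, 1] <= ymax))

def remove_region(points, region):
    """Drop all points inside region C from one matching input (step S20705)."""
    return points[~inside_region(points, region)]

# Applied to both matching inputs before matching, as in FIG. 6A:
# map_601 = remove_region(map_501, region_c)
# scan_602 = remove_region(scan_431, region_c)
```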
FIGS. 6B and 6C show an example matching result and an example of matching evaluation values for the map data 601 and the sensor data 602. Here, for ease of understanding, the matching evaluation value example shows the case in which the sensor data 602 is translated relative to the map data 601 only in the X-axis direction. That is, in the graph of FIG. 6C, the horizontal axis indicates the distance in the X-axis direction between the two data sets 601 and 602, and the vertical axis indicates the evaluation value (the degree of dissimilarity between the two data sets 601 and 602).
As shown in FIG. 6C, with the removal process of FIG. 6A the influence of the change in the shape of the sensor data caused by the movement of the moving object has been removed, so the evaluation value changes steeply. Therefore, when the result is obtained with a technique such as the gradient method, the evaluation value example of FIG. 6C, unlike the evaluation value example 505 of FIG. 5C, has no local minimum corresponding to the point of the evaluation value 506; only the global minimum corresponding to the point of the evaluation value 607 exists. With this evaluation value example, misjudgment is avoided and a highly accurate correspondence with the ground truth is possible. Furthermore, the evaluation value changes more steeply around the evaluation value 607 than around the evaluation value 507 of FIG. 5C, so the number of iterations of the matching computation is reduced compared with using the evaluation value example 505 of FIG. 5C.
As described above, in the matching process between data sets in map generation and self-position estimation, removing the information of the region where a moving object exists (region C in FIG. 6A) from each of the two data sets to be matched before the matching process is executed improves the accuracy of the matching process and reduces the amount of computation.
Next, the second process will be described. FIG. 7A illustrates an example of the data variance increase process. FIG. 7A shows the map data 603 at time T and the sensor data 604 at time T+α after the process of increasing the variance of the data of the place (region C) where moving-object/environmental-change information exists because of the movement of the moving object H has been applied.
Here, the increase in the variance of information is explained further with reference to FIGS. 8 to 10.
FIG. 8 shows the map data 501 at time T. As shown in FIG. 8, the map data 501 contains measurement points (measured coordinates) actually measured by the external sensor 101 of the mobile robot 100.
The variance of the information in the map data 501 can be regarded as the variance of the measurement points, and represents the uncertainty of each measurement point (the degree of confidence in its coordinate information). As shown in FIG. 9, the variance of a measurement point can be defined concretely as the width of the existence-probability distribution that peaks at the measurement point. Increasing the variance of the information therefore means widening the existence-probability distribution that peaks at the measurement point.
As an example, FIG. 10 shows a state in which the variance of the information has been increased in one part (the right-hand part) of the map data 501, that is, the width of the existence-probability distribution of those measurement points has been increased.
The process shown in FIG. 7A is the same as step S20706. That is, region C in FIG. 7A corresponds to a place where the amount of temporal change is larger than the threshold Th_ch.
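The following is a minimal sketch of this variance increase process, reusing inside_region() from the removal sketch above. Each map point carries a standard deviation sigma modelling the width of its existence-probability distribution; widening sigma inside region C down-weights those points in the matching cost. The specific weighting scheme is an illustrative assumption, not fixed by the patent text.

```python
import numpy as np

def inflate_variance(points, sigmas, region, factor=5.0):
    """Widen the existence-probability distributions of points inside region C (S20706)."""
    out = sigmas.copy()
    out[inside_region(points, region)] *= factor
    return out

def weighted_dissimilarity(map_pts, sigmas, scan_pts, tx, ty):
    """Nearest-neighbor squared distance, normalized by each map point's variance."""
    moved = scan_pts + np.array([tx, ty])
    d2 = ((moved[:, None, :] - map_pts[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    return (d2[np.arange(len(moved)), nearest] / sigmas[nearest] ** 2).mean()
```

Unlike removal, the points in region C still contribute to the evaluation value, but with a weaker pull, which mitigates rather than eliminates the influence of the moving object.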
FIGS. 7B and 7C show an example matching result and an example of matching evaluation values for the map data 603 and the sensor data 604. Here, for ease of understanding, the matching evaluation value example shows the case in which the sensor data 604 is translated relative to the map data 603 only in the X-axis direction.
As shown in FIG. 7C, the variance increase process of FIG. 7A mitigates the influence of the change in the shape of the sensor data caused by the movement of the moving object, so the evaluation value changes steeply. Therefore, when the result is obtained with a technique such as the gradient method, the value at the local minimum corresponding to the point of the evaluation value 506 in the evaluation value example 505 of FIG. 5C is sufficiently large compared with the point of the evaluation value 608, so the computation converges to the evaluation value 608 more readily than with the evaluation value example 505 of FIG. 5C. With this evaluation example, misjudgment is more easily avoided and a highly accurate correspondence with the ground truth is possible. Furthermore, the evaluation value changes more steeply around the evaluation value 608 than around the evaluation value 507 of FIG. 5C, so the number of iterations of the matching computation is reduced compared with using the evaluation value example 505 of FIG. 5C.
As described above, in the matching process between data sets in map generation and self-position estimation, increasing the variance of the information of the region where a moving object exists (for example, region C in FIG. 7A) in each of the two data sets to be matched before the matching process is executed improves the accuracy of the matching process and reduces the amount of computation.
The information removal process of FIG. 6A and the information variance increase process of FIG. 7A are used selectively according to the conditions, as shown in FIG. 3B. Specifically, when there is a place where the amount of temporal change between the first timing and the second timing is larger than the threshold Th_ch (that is, when a place with such moving-object movement/environmental-change information exists), and the time difference between the first timing and the second timing is smaller than the threshold Th_tb (the second time), that is, when the influence of the movement of the moving object or the change in the environment is judged to be relatively large, the removal process is executed in step S20705. On the other hand, when the time difference between the first timing and the second timing is larger than the threshold Th_tb, that is, when the influence of the movement of the moving object or the change in the environment is judged to be relatively small, the variance increase process is executed in step S20706.
The removal process suppresses the influence of the movement of moving objects and of environmental changes more strongly than the variance increase process. In this way, the present embodiment switches between the removal process and the variance increase process according to the magnitude of that influence, thereby realizing the matching process between data sets with the technique best suited to the situation.
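The following is a minimal sketch of this selection logic, combining the helpers sketched above (inside_region, remove_region, inflate_variance). The simple two-way branch on Th_tb follows the text here; the claims additionally introduce a lower bound (a first time) below which neither process is applied.

```python
def preprocess_for_matching(map_pts, sigmas, scan_pts,
                            change_amount, region_c, dt, th_ch, th_tb):
    """Choose removal (S20705) or variance increase (S20706) before matching."""
    if change_amount <= th_ch:
        return map_pts, sigmas, scan_pts  # no significant temporal change
    if dt < th_tb:
        # Influence of the moving object judged relatively large: remove region C.
        keep = ~inside_region(map_pts, region_c)
        return map_pts[keep], sigmas[keep], remove_region(scan_pts, region_c)
    # Influence judged relatively small: widen the distributions instead.
    return map_pts, inflate_variance(map_pts, sigmas, region_c), scan_pts
```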
According to the first embodiment, map generation, and self-position estimation of a mobile robot that moves using the generated map, can be performed with high accuracy and robustness even in a space subject to large environmental changes.
(Embodiment 2)
The robot system according to the first embodiment described above includes one mobile robot and at least one environmental sensor node. In contrast, the robot system according to the second embodiment includes a plurality of mobile robots and at least one environmental sensor node, and each of the plurality of mobile robots has the same functions as the environmental sensor node of the first embodiment. That is, for one mobile robot performing map generation and self-position estimation, the remaining mobile robots can function as environmental sensor nodes.
Specifically, just as the environmental sensor node of the first embodiment ultimately acquires moving-object movement/environmental-change information using its environmental sensor, each mobile robot of the second embodiment is configured to ultimately acquire moving-object movement/environmental-change information using its external sensor. Each mobile robot also includes components substantially identical to the moving-object movement/environmental-change information extraction unit, the environmental sensor information time-series database, and the moving-object movement/environmental-change information database of the environmental sensor node shown in FIG. 1.
FIG. 11 is a flowchart showing an example of the processing of the plurality of mobile robots in the robot system according to the second embodiment.
As shown in FIG. 11, the mobile robots according to the second embodiment are configured to operate according to a main processing flow T701 and sub processing flows T702A and T702B. FIG. 11 also shows the exchange of information between the robot that executes the main processing flow (corresponding to the mobile robot 100 of the first embodiment) and at least one other mobile robot that executes the sub processing flows T702A and T702B (a robot functioning as the environmental sensor node 120 of the first embodiment).
The robot system according to the second embodiment, which performs the processing flow shown in FIG. 11, differs from the robot system according to the first embodiment shown in FIGS. 3A and 3B in two respects: the moving-object movement/environmental-change information is extracted by a mobile robot that executes the processing flows T702A and T702B and plays the role of an environmental sensor node, and is transmitted from that robot to the mobile robot that executes the main processing flow and performs map generation and self-position estimation; and the mobile robot playing the role of an environmental sensor node acquires information about its surroundings using its external sensor.
According to the second embodiment, as in the first embodiment, map generation, and self-position estimation of a mobile robot that moves using the generated map, can be performed with high accuracy and robustness even in a space subject to large environmental changes. In addition, because a movable mobile robot plays the role of an environmental sensor node, the area in which moving-object movement and environmental changes can be detected expands, and the amount of moving-object movement/environmental-change information increases accordingly (compared with a robot system using fixed environmental sensor nodes). As a result, the degradation of the accuracy of the estimates in the map generation and self-position estimation processes, and the increase in the amount of computation caused by disturbances, are suppressed even further.
In the second embodiment, since any of the plurality of mobile robots can play the role of an environmental sensor node, it is also possible to omit the environmental sensor nodes themselves from the robot system. In that case, the robot system includes no environmental sensor node, but includes at least two mobile robots that can play the role of an environmental sensor node.
The techniques described in the above aspects can be realized, for example, in the following types of robot systems. However, the types of robot systems in which the techniques described in the above aspects can be realized are not limited to these.
The processing of the present disclosure has been described through the embodiments above, but the entity or apparatus that performs each process is not particularly limited. Each process may be performed by a processor or the like (described below) embedded in a specific locally placed apparatus, or by a cloud server or the like placed at a location different from the local apparatus. The processes described in the present disclosure may also be shared by exchanging information between a local apparatus and a cloud server. Embodiments of the present disclosure are described below.
(1) Specifically, each apparatus described above is a computer system composed of a microprocessor, a ROM, a RAM, a hard disk unit, a display unit, a keyboard, a mouse, and the like. A computer program is stored in the RAM or the hard disk unit. Each apparatus achieves its functions by the microprocessor operating according to the computer program. Here, the computer program is composed of a combination of a plurality of instruction codes indicating commands to the computer in order to achieve predetermined functions.
(2) Some or all of the components constituting each apparatus described above may be composed of a single system LSI (Large Scale Integration). A system LSI is a super-multifunctional LSI manufactured by integrating a plurality of components on one chip; specifically, it is a computer system including a microprocessor, a ROM, a RAM, and the like. A computer program is stored in the RAM. The system LSI achieves its functions by the microprocessor operating according to the computer program.
(3) Some or all of the components constituting each apparatus described above may be composed of an IC card attachable to and detachable from each apparatus, or of a stand-alone module. The IC card or the module is a computer system composed of a microprocessor, a ROM, a RAM, and the like. The IC card or the module may include the super-multifunctional LSI described above. The IC card or the module achieves its functions by the microprocessor operating according to the computer program. The IC card or the module may be tamper resistant.
(4) The present disclosure may be the methods described above. It may also be a computer program that realizes these methods by a computer, or a digital signal composed of the computer program.
(5) The present disclosure may also be the computer program or the digital signal recorded on a computer-readable recording medium, for example, a flexible disk, a hard disk, a CD-ROM, an MO, a DVD, a DVD-ROM, a DVD-RAM, a BD (Blu-ray (registered trademark) Disc), or a semiconductor memory. It may also be the digital signal recorded on such a recording medium.
The present disclosure may also transmit the computer program or the digital signal via an electric telecommunication line, a wireless or wired communication line, a network represented by the Internet, data broadcasting, or the like.
The present disclosure may also be a computer system including a microprocessor and a memory, in which the memory stores the computer program and the microprocessor operates according to the computer program.
The program or the digital signal may also be implemented by another independent computer system, by recording the program or the digital signal on the recording medium and transferring it, or by transferring the program or the digital signal via the network or the like.
(6) The above embodiments and their modifications may be combined with one another.
Finally, the mobile robot and the environmental sensor node can take various forms. For example, the mobile robot 100 and the environmental sensor nodes 120 (312-319) shown in FIG. 2 are used indoors, but the forms are not limited to this. For example, the mobile robot may take the form of a vehicle that travels automatically on an outdoor road or the like. In that case, the environmental sensor nodes are placed along the road like road signs, or are provided on buildings near the road.
The present disclosure is applicable to a map generation method, a self-position estimation method, a robot system, and a robot.
DESCRIPTION OF REFERENCE NUMERALS
100 mobile robot
101 external sensor
102 internal sensor
103 moving-object/environmental-change information integration unit
104 map generation/self-position estimation unit
105 map information database
106 route planning unit
107 control unit
108 actuator
109 communication unit
120, 312, 313, 314, 315, 316, 317, 318, 319 environmental sensor node
121 environmental sensor
122 moving-object movement/environmental-change information extraction unit
123 environmental sensor information time-series database
124 moving-object movement/environmental-change information database
125 communication unit
Claims (11)

1. A map generation method for a mobile robot that performs map generation using at least one environmental sensor node, the method comprising:
acquiring pre-created first map information including information around the mobile robot;
acquiring second map information including information around the mobile robot with an external sensor mounted on the mobile robot;
receiving third map information including information around the mobile robot from the environmental sensor node;
(i) when the third map information contains information on a place whose amount of temporal change is equal to or greater than a predetermined threshold, performing a removal process of removing the information on the place in the first map information and the information on the place in the second map information that correspond to the place in the third map information whose amount of temporal change is equal to or greater than the predetermined threshold, or performing a variance increase process of increasing the variance of each of the information on the place in the first map information and the information on the place in the second map information that correspond to that place;
(ii) matching the first map information and the second map information after the removal process or the variance increase process has been performed, and generating map information based on the matching result; and
(iii) updating the first map information to the map information generated based on the matching result.

2. The map generation method according to claim 1, wherein, when the third map information contains information on a place whose amount of temporal change between a first timing and a second timing is equal to or greater than the predetermined threshold,
the removal process is performed when the time difference between the first timing and the second timing is within a range between a first time and a second time longer than the first time, and
the variance increase process is performed when the time difference between the first timing and the second timing is equal to or greater than the second time.

3. The map generation method according to claim 2, wherein, when the third map information contains information on a place whose amount of temporal change between the first timing and the second timing is equal to or greater than the predetermined threshold and the time difference between the first timing and the second timing is equal to or less than the first time, the first map information and the second map information are matched without performing the removal process and the variance increase process, and map information is generated based on the matching result.

6. The map generation method according to any one of claims 1 to 5, wherein the third map information includes information on the existence probability of objects around the environmental sensor node, and the amount of temporal change is the amount of change of the existence probability.

7. The map generation method according to any one of claims 1 to 6, wherein the first map information, the second map information, and the third map information are coordinate information in a two-dimensional coordinate system or coordinate information in a three-dimensional coordinate system, and a coordinate transformation process of transforming the coordinate systems of the first map information, the second map information, and the third map information into a common coordinate system is performed before the matching.

8. A self-position estimation method for a mobile robot that performs self-position estimation using at least one sensor node, the method comprising:
acquiring pre-created first map information including information around the mobile robot;
acquiring second map information including information around the mobile robot with an external sensor mounted on the mobile robot;
receiving third map information including information around the mobile robot from an environmental sensor node;
(i) when the third map information contains information on a place whose amount of temporal change is equal to or greater than a predetermined threshold, performing a removal process of removing the information on the place in the first map information and the information on the place in the second map information that correspond to the place in the third map information whose amount of temporal change is equal to or greater than the predetermined threshold, or performing a variance increase process of increasing the variance of each of the information on those places;
(ii) matching the first map information and the second map information after the removal process or the variance increase process has been performed, and generating map information based on the matching result;
(iii) updating the first map information to the map information generated based on the matching result; and
(iv) estimating the self-position of the mobile robot on the updated first map information, based on the updated first map information and the detection result of an internal sensor that detects at least one of the position and the orientation of the mobile robot.

9. The self-position estimation method according to claim 8, further comprising calculating a travel route based on the updated first map information and the estimated self-position, and moving the mobile robot along the travel route.

10. A robot system including at least one environmental sensor node and a mobile robot, wherein
the environmental sensor node acquires third map information including information around the mobile robot, and
the mobile robot comprises:
a database that records pre-created first map information including information around the mobile robot;
an external sensor that acquires second map information including information around the mobile robot;
a communication unit that communicates with the environmental sensor node to acquire the third map information; and
an information processing unit that (i) when the third map information contains information on a place whose amount of temporal change is equal to or greater than a predetermined threshold, performs a removal process of removing the information on the place in the first map information and the information on the place in the second map information that correspond to that place, or performs a variance increase process of increasing the variance of each of the information on those places, (ii) matches the first map information and the second map information after the removal process or the variance increase process has been performed and generates map information based on the matching result, and (iii) updates the previously recorded first map information to the map information generated based on the matching result.

11. A mobile robot comprising:
a database that records pre-created first map information including information around the mobile robot;
an external sensor that acquires second map information including information around the mobile robot;
a communication unit that communicates with at least one environmental sensor node, which is external to the mobile robot and acquires third map information including information around the mobile robot, to acquire the third map information; and
an information processing unit that (i) when the third map information contains information on a place whose amount of temporal change is equal to or greater than a predetermined threshold, performs a removal process of removing the information on the place in the first map information and the information on the place in the second map information that correspond to that place, or performs a variance increase process of increasing the variance of each of the information on those places, (ii) matches the first map information and the second map information after the removal process or the variance increase process has been performed and generates map information based on the matching result, and (iii) updates the previously recorded first map information to the map information generated based on the matching result.
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201680003184.4A CN106796434B (en) | 2015-08-28 | 2016-08-08 | Map generation method, self-position estimation method, robot system, and robot |
| EP16841053.8A EP3343307B1 (en) | 2015-08-28 | 2016-08-08 | Mapping method, localization method, robot system, and robot |
| US15/825,159 US10549430B2 (en) | 2015-08-28 | 2017-11-29 | Mapping method, localization method, robot system, and robot |
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2015-169412 | 2015-08-28 | ||
| JP2015169412 | 2015-08-28 | ||
| JP2016134284A JP6849330B2 (en) | 2015-08-28 | 2016-07-06 | Map generation method, self-position estimation method, robot system, and robot |
| JP2016-134284 | 2016-07-06 |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/825,159 Continuation US10549430B2 (en) | 2015-08-28 | 2017-11-29 | Mapping method, localization method, robot system, and robot |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2017038012A1 true WO2017038012A1 (en) | 2017-03-09 |
Family
ID=58186807
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2016/003634 Ceased WO2017038012A1 (en) | 2015-08-28 | 2016-08-08 | Mapping method, localization method, robot system, and robot |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2017038012A1 (en) |
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2010511957A (en) * | 2006-12-08 | 2010-04-15 | 韓國電子通信研究院 | Apparatus and method for creating an environment map for a moving body capable of quickly adapting to surrounding environmental changes and creating an environment map |
| WO2013002067A1 (en) * | 2011-06-29 | 2013-01-03 | 株式会社日立産機システム | Mobile robot and self location and attitude estimation system installed upon mobile body |
Cited By (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPWO2018179649A1 (en) * | 2017-03-28 | 2019-11-07 | 株式会社日立産機システム | Mapping system and robot system |
| WO2018179649A1 (en) * | 2017-03-28 | 2018-10-04 | 株式会社日立産機システム | Map creation system and robot system |
| CN110069586B (en) * | 2017-09-26 | 2023-06-27 | 卡西欧计算机株式会社 | Map information providing device, system and method, and portable map sending device |
| CN110069586A (en) * | 2017-09-26 | 2019-07-30 | 卡西欧计算机株式会社 | Cartographic information provides device, system and method and pocket map sending device |
| JPWO2019131198A1 (en) * | 2017-12-28 | 2020-12-24 | ソニー株式会社 | Control devices, control methods, programs, and mobiles |
| JP7151725B2 (en) | 2017-12-28 | 2022-10-12 | ソニーグループ株式会社 | CONTROL DEVICE, CONTROL METHOD, PROGRAM, AND MOVING OBJECT |
| WO2019131198A1 (en) * | 2017-12-28 | 2019-07-04 | ソニー株式会社 | Control device, control method, program, and mobile body |
| US11822341B2 (en) | 2017-12-28 | 2023-11-21 | Sony Corporation | Control device, control method, and mobile object to estimate the mobile object's self-position |
| JP2020095435A (en) * | 2018-12-12 | 2020-06-18 | 株式会社日立製作所 | Mobile |
| JP7302966B2 (en) | 2018-12-12 | 2023-07-04 | 株式会社日立製作所 | moving body |
| US11216973B2 (en) | 2019-03-07 | 2022-01-04 | Mitsubishi Heavy Industries, Ltd. | Self-localization device, self-localization method, and non-transitory computer-readable medium |
| CN113519019A (en) * | 2019-03-15 | 2021-10-19 | 日立安斯泰莫株式会社 | Self-position estimation device, automatic driving system equipped with same, and self-generated map sharing device |
| CN113519019B (en) * | 2019-03-15 | 2023-10-20 | 日立安斯泰莫株式会社 | Self-position estimating device, automatic driving system equipped with same, and self-generated map sharing device |
| CN118219280A (en) * | 2024-05-23 | 2024-06-21 | 北京大学 | A multi-machine collaborative exploration system, method, robot and human end |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16841053; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | WWE | Wipo information: entry into national phase | Ref document number: 2016841053; Country of ref document: EP |