
WO2024111893A1 - Robot moving in space using a map, and method for identifying its location - Google Patents

Robot moving in space using a map, and method for identifying its location

Info

Publication number
WO2024111893A1
WO2024111893A1 · PCT/KR2023/016167 · KR2023016167W
Authority
WO
WIPO (PCT)
Prior art keywords
robot
map
clue
area
location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2023/016167
Other languages
English (en)
Korean (ko)
Inventor
이태윤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Publication of WO2024111893A1 publication Critical patent/WO2024111893A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/088Controls for manipulators by means of sensing devices, e.g. viewing or touching devices with position, velocity or acceleration sensors
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/003Controls for manipulators by means of an audio-responsive input
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00Manipulators mounted on wheels or on carriages
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/20Control system inputs
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/20Control system inputs
    • G05D1/24Arrangements for determining position or orientation
    • G05D1/243Means capturing signals occurring naturally from the environment, e.g. ambient optical, acoustic, gravitational or magnetic signals
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/20Control system inputs
    • G05D1/24Arrangements for determining position or orientation
    • G05D1/246Arrangements for determining position or orientation using environment maps, e.g. simultaneous localisation and mapping [SLAM]

Definitions

  • This disclosure relates to a robot and a method of identifying its location, and more specifically, to a robot that travels in space using a map and a method of identifying its location.
  • In general, robots have been developed for industrial purposes and are widely used in various industrial fields. Recently, the fields in which robots are used have expanded further, and robots are now used not only in ordinary homes but also in various stores.
  • In this case, the robot can move within the space where it is located using a map of that space. To do this, the robot can estimate its location on the map.
  • According to an embodiment of the present disclosure, a robot that travels in space using a map includes a camera, a microphone, a memory, and one or more processors.
  • The one or more processors identify, as clues, an object fixedly located in the space and a sound repeatedly received at a specific location in the space, based on images captured through the camera and sounds received through the microphone while the robot travels the space using the map.
  • The one or more processors generate a clue map in which information about the clues is added to the map and store the clue map in the memory. When a preset event occurs, the one or more processors identify the area where the robot is located on the map using the clue map, and perform relocalization on the identified area to identify the location of the robot on the map.
  • Here, the information about the clue may include at least one of a feature point of the object and a location of the object.
  • Additionally, the information about the clue may include at least one of a characteristic of the sound and a location of the sound source that output the sound.
  • When the preset event occurs, the one or more processors may acquire information about the object using an image of the area surrounding the robot captured through the camera, obtain information about the sound using the sound received through the microphone, generate an image patch corresponding to the area where the robot is currently located using the obtained information about the object and the information about the sound, and compare the generated image patch with the clue map to identify the area where the robot is located on the map.
  • When the information about the clue included in one of the plurality of areas of the clue map matches the information about the object and the information about the sound included in the image patch, the one or more processors may identify the area of the clue map that matches the image patch as the area where the robot is located.
  • Meanwhile, if there is no area of the clue map that matches the image patch, the one or more processors may identify an area of the clue map in which no clue exists as the area where the robot is located.
  • The preset event may include an event in which the robot cannot identify its location on the map, or an event in which a robot whose power was turned off is turned back on.
  • According to an embodiment of the present disclosure, a method for identifying the location of a robot that includes a camera and a microphone and travels in a space using a map includes: identifying, as clues, an object fixedly located in the space and a sound repeatedly received at a specific location in the space, based on images captured by the camera and sounds received through the microphone while the robot travels the space using the map; generating a clue map in which information about the clues is added to the map; and, when a preset event occurs, identifying the area where the robot is located on the map using the clue map and performing relocalization on the identified area to identify the location of the robot on the map.
  • According to an embodiment of the present disclosure, a non-transitory computer-readable medium stores computer instructions that, when executed by one or more processors of a robot that includes a camera and a microphone and travels a space using a map, cause the robot to perform operations.
  • The operations may include identifying, as clues, an object fixedly located in the space and a sound repeatedly received at a specific location in the space, based on images captured by the camera and sounds received through the microphone while the robot travels the space using the map, generating a clue map in which information about the clues is added to the map, and, when a preset event occurs, identifying the area where the robot is located on the map using the clue map and performing relocalization on the identified area to identify the location of the robot on the map.
  • FIG. 1 is a diagram for explaining a robot according to an embodiment of the present disclosure.
  • FIG. 2A is a block diagram for explaining the configuration of a robot according to an embodiment of the present disclosure.
  • FIG. 2B is a block diagram for explaining the detailed configuration of a robot according to an embodiment of the present disclosure.
  • FIG. 3 is a diagram for explaining an example of a clue map according to an embodiment of the present disclosure.
  • FIG. 4 is a diagram for explaining a method of updating a clue map according to an embodiment of the present disclosure.
  • FIG. 5 is a diagram for explaining an example of an image patch according to an embodiment of the present disclosure.
  • FIGS. 6 and 7 are diagrams for explaining a method of identifying an area where a robot is currently located using a clue map and performing relocalization on the identified area according to an embodiment of the present disclosure.
  • FIGS. 8A and 8B are diagrams for explaining a method of identifying an area where a robot is currently located using a user's voice according to an embodiment of the present disclosure.
  • FIG. 9 is a flowchart for explaining a method for identifying the location of a robot according to an embodiment of the present disclosure.
  • In the present disclosure, expressions such as “have,” “may have,” “includes,” or “may include” indicate the presence of the corresponding feature (e.g., a numerical value, function, operation, or component such as a part) and do not rule out the presence of additional features.
  • In the present disclosure, expressions such as “A or B,” “at least one of A and/or B,” or “one or more of A and/or B” may include all possible combinations of the items listed together.
  • For example, “A or B,” “at least one of A and B,” or “at least one of A or B” may refer to all cases of (1) including at least one A, (2) including at least one B, or (3) including both at least one A and at least one B.
  • When a component (e.g., a first component) is referred to as being “(operatively or communicatively) coupled with/to” or “connected to” another component (e.g., a second component), the component may be directly connected to the other component or may be connected through another component (e.g., a third component).
  • On the other hand, when a component (e.g., a first component) is referred to as being “directly coupled” or “directly connected” to another component (e.g., a second component), it may be understood that no other component (e.g., a third component) exists between the component and the other component.
  • The expression “configured to” used in the present disclosure may be used interchangeably with, for example, “suitable for,” “having the capacity to,” “designed to,” “adapted to,” “made to,” or “capable of,” depending on the situation.
  • The term “configured (or set to)” may not necessarily mean “specifically designed to” in hardware.
  • Instead, in some situations, the expression “a device configured to” may mean that the device is “capable of” operating together with other devices or components.
  • For example, the phrase “processor configured (or set) to perform A, B, and C” may refer to a processor dedicated to performing the corresponding operations (e.g., an embedded processor), or a general-purpose processor (e.g., a CPU or an application processor) that can perform the corresponding operations by executing one or more software programs stored in a memory device.
  • In the present disclosure, a 'module' or 'unit' performs at least one function or operation, and may be implemented as hardware, software, or a combination of hardware and software. Additionally, a plurality of 'modules' or 'units' may be integrated into at least one module and implemented with at least one processor, except for a 'module' or 'unit' that needs to be implemented with specific hardware.
  • FIG. 1 is a diagram for explaining a robot according to an embodiment of the present disclosure.
  • the robot 100 may be a home service robot. That is, the robot 100 is located at home and can provide various services such as housework, education, healthcare, etc. to the user. However, this is an example, and the type of robot 100 is not limited to home service robots.
  • the robot 100 may be located in an office, store, department store, restaurant, etc. Additionally, the robot 100 can perform various functions, such as guiding the user on a route, explaining a product to the user, or carrying the user's items and moving along with the user.
  • The robot 100 can drive using a map of the space (e.g., a home) where the robot 100 is located.
  • Hereinafter, the map of the space where the robot 100 is located, which is used for the driving of the robot 100, is referred to as a driving map.
  • That is, the robot 100 can drive in the space using the driving map.
  • Specifically, the robot 100 can use SLAM (Simultaneous Localization and Mapping) to create a driving map and estimate the position of the robot 100 on the driving map, thereby performing autonomous driving in the space.
  • To do so, the robot 100 can obtain information about the surrounding environment through the sensors of the robot 100.
  • In some cases, the robot 100 may need to re-estimate its position on the driving map; this process is referred to as relocalization.
  • In this case, according to an embodiment of the present disclosure, the robot 100 does not identify its location by searching the entire driving map. Instead, the robot 100 first identifies the area of the driving map in which it is located, and then performs relocalization on the identified area to identify the location of the robot 100 on the driving map.
  • To identify the area where it is located, the robot 100 can use various information existing within the area as clues.
  • In this way, the robot 100 estimates the area of the driving map in which it is currently located and performs relocalization on the estimated area rather than on the entire driving map, so that the success rate of estimating the position of the robot 100 can be increased.
  • FIG. 2A is a block diagram for explaining the configuration of a robot according to an embodiment of the present disclosure.
  • the robot 100 includes a camera 110, a microphone 120, a memory 130, and one or more processors 140.
  • The camera 110 can generate images.
  • For example, the camera 110 may include at least one of a monocular camera, a stereo camera, and an RGB-D camera.
  • The camera 110 may generate an image by photographing the area surrounding the robot 100. At this time, if there is an object existing around the robot 100, the image may include the object.
  • Here, the objects may be various objects that exist in the space (e.g., a house), such as walls, doors, ceilings, home appliances (e.g., a TV, an air conditioner, a refrigerator, etc.), and furniture (e.g., a bed, a sofa, a dining table, a sink, etc.).
  • the microphone 120 can receive sound.
  • the microphone 120 may receive sound generated around the robot 100.
  • a driving map may be stored in the memory 130.
  • the driving map is a map of the space in which the robot 100 drives, and can be created based on SLAM.
  • the robot 100 can drive in the space where the robot 100 is located using the driving map.
  • a clue map may be stored in the memory 130.
  • The clue map may be a map in which information about clues is added to the driving map.
  • the clue may include objects and sounds that can be used to estimate the area where the robot 100 is located on the driving map.
  • One or more processors 140 are electrically connected to the camera 110, microphone 120, and memory 130 to control the overall operation and functions of the robot 100.
  • The one or more processors 140 may include one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), an Accelerated Processing Unit (APU), a Many Integrated Core (MIC) processor, a Digital Signal Processor (DSP), a Neural Processing Unit (NPU), a hardware accelerator, or a machine learning accelerator.
  • One or more processors 140 may control one or any combination of other components of the robot 100 and may perform operations related to communication or data processing.
  • One or more processors 140 may execute one or more programs or instructions stored in memory. For example, one or more processors 140 may perform a method according to an embodiment of the present disclosure by executing one or more instructions stored in memory.
  • When the method according to an embodiment of the present disclosure includes a plurality of operations, the plurality of operations may be performed by one processor or by a plurality of processors.
  • For example, when a first operation, a second operation, and a third operation are performed, all three operations may be performed by a first processor, or the first operation and the second operation may be performed by a first processor (e.g., a general-purpose processor) and the third operation may be performed by a second processor (e.g., an artificial-intelligence-dedicated processor).
  • The one or more processors 140 may be implemented as a single-core processor including one core, or as one or more multi-core processors including a plurality of cores (e.g., homogeneous multi-core or heterogeneous multi-core). When the one or more processors 140 are implemented as a multi-core processor, each of the plurality of cores included in the multi-core processor may include processor-internal memory such as a cache memory and an on-chip memory, and a common cache shared by the plurality of cores may be included in the multi-core processor.
  • Additionally, each of the plurality of cores (or some of the plurality of cores) included in the multi-core processor may independently read and perform program instructions for implementing the method according to an embodiment of the present disclosure, or all (or some) of the plurality of cores may operate in conjunction to read and perform the program instructions for implementing the method according to an embodiment of the present disclosure.
  • When the method according to an embodiment of the present disclosure includes a plurality of operations, the plurality of operations may be performed by one core among the plurality of cores included in the multi-core processor, or may be performed by a plurality of cores.
  • For example, the first operation, the second operation, and the third operation may all be performed by a first core included in the multi-core processor, or the first operation and the second operation may be performed by the first core and the third operation may be performed by a second core included in the multi-core processor.
  • In the embodiments of the present disclosure, a processor may mean a system-on-chip (SoC) in which one or more processors and other electronic components are integrated, a single-core processor, a multi-core processor, or a core included in a single-core processor or a multi-core processor.
  • Here, the core may be implemented as a CPU, GPU, APU, MIC, DSP, NPU, hardware accelerator, or machine learning accelerator, but embodiments of the present disclosure are not limited thereto.
  • Hereinafter, for convenience of description, the one or more processors 140 will be referred to as the processor 140.
  • The processor 140 identifies, as clues, an object fixedly located in the space and a sound repeatedly received at a specific location in the space, based on images captured through the camera 110 and sounds received through the microphone 120 while the robot 100 travels the space using the driving map. Thereafter, the processor 140 generates a clue map in which information about the clues is added to the driving map and stores it in the memory 130.
  • the clue may include objects and sounds that can be used to estimate the area where the robot 100 is located on the driving map. Additionally, information about the clue may include information about the object and information about the sound.
  • When a preset event occurs, the processor 140 identifies the area where the robot 100 is located on the driving map using the clue map, and performs relocalization on the identified area to identify the location of the robot 100 on the driving map.
  • Specifically, the processor 140 compares the information about the objects and sounds acquired in the area where the robot 100 is currently located with the information about the clues included in the clue map, to identify the area of the driving map in which the robot 100 is currently located. The processor 140 may then identify the location of the robot 100 on the driving map by performing relocalization on the identified area.
  • Accordingly, the success rate of estimating the location of the robot 100 can be increased, in that relocalization is performed on the area where the robot 100 is currently located rather than on the entire driving map.
  • Figure 2b is a block diagram for explaining the detailed configuration of a robot according to an embodiment of the present disclosure.
  • Referring to FIG. 2B, the robot 100 may include a camera 110, an inertial measurement unit (IMU) sensor 111, an encoder 112, a lidar sensor 113, a microphone 120, a memory 130, a processor 140, a driving unit 150, a communication unit 160, an input unit 170, and an output unit 180.
  • However, this configuration is an example, and in carrying out the present disclosure, new components may of course be added or some components may be omitted.
  • In describing FIG. 2B, descriptions that overlap with those already given for FIG. 2A will be omitted or abbreviated.
  • the processor 140 may obtain information about the surrounding environment using at least one sensor among the camera 110, IMU sensor 111, encoder 112, and lidar sensor 113.
  • the IMU sensor 111 can sense the acceleration and angular velocity of the robot 100 using an accelerometer, gyroscope, magnetometer, etc.
  • the encoder 112 can sense the rotation speed and rotation direction of each of the plurality of wheels installed on the robot 100.
  • the plurality of wheels are rotated by a motor and may serve to move the robot 100.
  • The lidar sensor 113 can rotate 360 degrees and output a laser. When the laser is reflected from an object around the robot 100 and received again, the lidar sensor 113 can detect the distance to the object based on the time at which the reflected laser is received.
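  • As a brief illustration (an assumption about the standard time-of-flight relationship, not a quotation from the disclosure), the distance d to the object follows from the round-trip time Δt of the laser pulse and the speed of light c:

```latex
d = \frac{c \,\Delta t}{2}
```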
  • Accordingly, the processor 140 can generate a driving map using the images acquired through the camera 110 and the information obtained from the IMU sensor 111, the encoder 112, and the lidar sensor 113, and can identify the location of the robot 100 on the driving map.
  • the driving unit 150 can move the robot 100.
  • Specifically, the driving unit 150 may include a plurality of wheels 152 and a motor 151 for rotating the plurality of wheels 152.
  • Additionally, the driving unit 150 may further include gears, actuators, etc.
  • The motor 151 rotates the plurality of wheels 152 under the control of the processor 140, thereby controlling various driving operations of the robot 100, such as movement, stopping, speed control, and direction change.
  • the communication unit 160 includes circuitry.
  • the communication unit 160 is a component that performs communication with an external device.
  • the processor 140 can transmit various data to an external device and receive various data from the external device through the communication unit 160.
  • the processor 140 may receive a user command for controlling the operation of the robot 100 through the communication unit 160.
  • For example, the communication unit 160 can communicate with an external device through a wireless communication method such as Bluetooth (BT), Bluetooth Low Energy (BLE), or Wireless Fidelity (Wi-Fi).
  • the input unit 170 includes a circuit.
  • the input unit 170 can receive user commands for setting or selecting various functions supported by the robot 100.
  • the input unit 170 may include at least one button.
  • the input unit 170 may be implemented as a touch screen that can simultaneously perform the functions of the display 181.
  • the processor 140 may control the operation of the robot 100 based on a user command input through the input unit 170.
  • For example, the processor 140 may control the robot 100 to operate according to the input user command.
  • the output unit 180 may include a display 181 and a speaker 182.
  • the display 181 can display various information.
  • the display 181 may be implemented as a liquid crystal display (LCD) or the like.
  • the display 181 may be implemented as a touch screen that can simultaneously perform the functions of the input unit 170.
  • the processor 140 may display information related to the operation of the robot 100 on the display 181.
  • Speaker 182 can output audio. Specifically, the processor 140 may output various notification sounds or voice guidance messages related to the operation of the robot 100 through the speaker 182.
  • the processor 140 may generate a driving map and store the driving map in the memory 130. Additionally, the processor 140 can control the robot 100 to drive in the space where the robot 100 is located using the driving map.
  • the processor 140 may obtain information about the surrounding environment using at least one of the camera 110, the IMU sensor 111, the encoder 112, and the LiDAR sensor 113. Additionally, the processor 140 can generate a driving map using SLAM based on the acquired information and identify the location of the robot 100 in the driving map. Additionally, the processor 140 may control the driving unit 150 so that the robot 100 travels based on the location where the robot 100 is identified.
  • the processor 140 may generate a clue map. To this end, the processor 140 can identify the clue.
  • the clue may include objects and sounds that can be used to estimate the spatial area where the robot 100 is located.
  • Specifically, the processor 140 may identify, as clues, objects fixedly located in the space and sounds repeatedly received at a specific location in the space, based on images captured through the camera 110 and sounds received through the microphone 120 while the robot 100 travels the space.
  • First, the processor 140 may acquire images using the camera 110 while the robot 100 travels the space using the driving map, and identify objects that are fixedly located in the space based on the acquired images.
  • the processor 140 may acquire an image taken through the camera 110 while the robot 100 travels in space, detect an object in the acquired image, and obtain feature points of the object.
  • the characteristic points of the object may include edges, corners, ridges, etc.
  • the processor 140 can identify the location of the object on the driving map.
  • the location of the object may include coordinate values on the driving map.
  • the processor 140 may identify the location of the robot 100 on the driving map.
  • Specifically, the processor 140 may analyze the images captured through the camera 110 using the location of the robot 100, the shooting direction of the camera 110, and the distance between the robot 100 and the object, to identify the location of the object on the driving map.
  • In addition, the processor 140 may identify the name of the area where the object is located, using information about the area names stored in the memory 130.
  • For example, if the driving map is divided into a plurality of areas whose names are “living room,” “kitchen,” and “room,” and the object is located in the area named “living room,” the processor 140 may identify the object as being located in the “living room” area.
  • the processor 140 may store information about the feature points of the object and the location of the object in the memory 130.
  • The processor 140 may repeat the above-described process while the robot 100 travels the space a preset number of times or more (here, the preset number of times is n, where n is a natural number of 2 or more), and thereby identify objects that are fixedly located in the space.
  • Specifically, if an object with the same feature points is detected at the same location a preset number of times or more (here, the preset number is m, where m is a natural number of 2 or more) while the robot 100 travels the space n times or more, the processor 140 may identify the detected object as an object that is fixedly located in the space.
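  • For illustration only, the counting logic described above might be sketched in Python as follows; the input format, the helper name identify_fixed_objects, and the thresholds N_RUNS and M_DETECTIONS (standing in for n and m) are assumptions rather than part of the disclosure:

```python
from collections import defaultdict

# Hypothetical thresholds standing in for the preset numbers n and m above.
N_RUNS = 3          # minimum number of traversals of the space (n)
M_DETECTIONS = 2    # minimum number of consistent detections of one object (m)

def identify_fixed_objects(detections_per_run, pos_tol=0.3):
    """detections_per_run: one list per traversal, each containing
    (object_id, (x, y)) tuples; object_id stands in for a set of matched
    feature points and (x, y) is the object's position on the driving map."""
    if len(detections_per_run) < N_RUNS:
        return []  # not enough traversals yet
    positions_by_object = defaultdict(list)
    for run in detections_per_run:
        for obj_id, pos in run:
            positions_by_object[obj_id].append(pos)
    fixed = []
    for obj_id, positions in positions_by_object.items():
        ref = positions[0]
        # Keep detections that fall within pos_tol metres of the first one.
        close = [p for p in positions
                 if (p[0] - ref[0]) ** 2 + (p[1] - ref[1]) ** 2 <= pos_tol ** 2]
        if len(close) >= M_DETECTIONS:
            # Average the consistent positions to obtain one map location.
            x = sum(p[0] for p in close) / len(close)
            y = sum(p[1] for p in close) / len(close)
            fixed.append((obj_id, (x, y)))
    return fixed
```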
  • Meanwhile, the processor 140 may receive sounds through the microphone 120 while the robot 100 travels the space using the driving map, and identify sounds that are repeatedly received at a specific location in the space based on the received sounds.
  • the processor 140 may obtain characteristics of the sound from the received sound.
  • the characteristics of the sound may include waveform, wavelength, frequency band, amplitude, frequency, etc.
  • the characteristics of a sound may include various parameters that can distinguish the sound from other sounds.
  • the processor 140 may identify the location of the robot 100 on the driving map and identify the location where the robot 100 received the sound on the driving map.
  • the location where the robot 100 received the sound may include coordinate values on the driving map.
  • the processor 140 may identify the location of a sound source that outputs sound based on the identified location.
  • the processor 140 may estimate the location where the robot 100 received the sound on the driving map as the location of the sound source.
  • Alternatively, the processor 140 may estimate, as the location of the sound source, an area including the location where the robot 100 received the sound among the plurality of areas of the driving map. For example, if the location where the robot 100 received the sound is included in area 2 among areas 1 and 2 of the driving map, the processor 140 may estimate that the sound source is located in area 2.
  • In addition, the processor 140 may identify the name of the area where the robot 100 received the sound, using the information about the area names stored in the memory 130.
  • For example, if the driving map is divided into a plurality of areas whose names are “living room,” “kitchen,” and “room,” the processor 140 may identify the locations where the robot 100 received the sound as being in the “living room” area and the “kitchen” area.
  • the processor 140 may store information about the characteristics of the sound and the location of the sound source in the memory 130.
  • The processor 140 may repeat the above-described process while the robot 100 travels the space a preset number of times or more (here, the preset number of times is n, where n is a natural number of 2 or more), and thereby identify sounds that are repeatedly received at a specific location in the space.
  • Specifically, if sounds having the same characteristics are received a preset number of times or more while the robot 100 travels the space n times or more, and the sound source locations corresponding to those sounds are identified as the same, the processor 140 may identify the corresponding sound as a repeatedly received sound.
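  • A comparable hypothetical sketch for the sound clues, assuming each received sound has already been reduced to a simple feature vector and an estimated source area, might group recordings whose characteristics match and whose source locations coincide:

```python
def identify_repeated_sounds(sound_events, min_count=2, feature_tol=0.1):
    """sound_events: (feature_vector, area_id) pairs collected across traversals;
    feature_vector is e.g. a normalized, non-empty frequency profile (list of
    floats) and area_id is the map area estimated as the sound source location."""
    repeated = []
    used = [False] * len(sound_events)
    for i, (feat_i, area_i) in enumerate(sound_events):
        if used[i]:
            continue
        group = [i]
        for j in range(i + 1, len(sound_events)):
            if used[j]:
                continue
            feat_j, area_j = sound_events[j]
            # Same estimated source area and similar sound characteristics.
            diff = sum(abs(a - b) for a, b in zip(feat_i, feat_j)) / len(feat_i)
            if area_j == area_i and diff <= feature_tol:
                group.append(j)
        if len(group) >= min_count:
            for j in group:
                used[j] = True
            repeated.append((feat_i, area_i))
    return repeated
```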
  • the processor 140 can identify fixedly located objects and sounds that are repeatedly received.
  • the processor 140 may identify the identified objects and sounds as a clue, generate a clue map to which information about the clue is added to the driving map, and store it in the memory 130.
  • the information about the clue may include information about the object identified as the clue and information about the sound identified as the clue.
  • information about the object may include at least one of the characteristic points of the object and the location of the object.
  • information about the sound may include at least one of the characteristics of the sound and the location of the sound source that output the sound.
  • Specifically, the processor 140 can display the location of the object on the driving map and map the feature points obtained from the object to the location of the object. Additionally, the processor 140 may display the location of the sound source on the driving map and map the characteristics obtained from the sound to the location of the sound source. Accordingly, the processor 140 may generate a clue map. In this case, the information constituting the clue map can be stored and managed in a clue DB.
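  • As one possible (assumed) way to organize such a clue map in software, the driving map could be kept together with lists of object and sound clue entries, for example:

```python
from dataclasses import dataclass, field

@dataclass
class ObjectClue:
    position: tuple        # (x, y) on the driving map
    feature_points: list   # feature descriptors extracted from the object

@dataclass
class SoundClue:
    source_position: tuple  # estimated (x, y) of the sound source
    features: dict          # e.g. {"frequency_band": (200.0, 400.0), "amplitude": 0.7}

@dataclass
class ClueMap:
    driving_map: object     # occupancy grid (or similar) reused from SLAM
    object_clues: list = field(default_factory=list)
    sound_clues: list = field(default_factory=list)

    def add_object(self, position, feature_points):
        self.object_clues.append(ObjectClue(position, feature_points))

    def add_sound(self, source_position, features):
        self.sound_clues.append(SoundClue(source_position, features))
```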
  • Figure 3 is a diagram for explaining an example of a clue map according to an embodiment of the present disclosure.
  • the processor 140 may generate a clue map 400 by adding information about the clue to the driving map 300 .
  • the locations 411 to 419 of the objects identified as the clues are displayed on the clue map 400, and the characteristic points of the objects may be mapped for each object.
  • Additionally, the clue map 400 displays the locations 421 and 422 of the sound sources corresponding to the sounds identified as clues, and the characteristics of the sounds may be mapped for each sound source.
  • Meanwhile, although it has been described above that the clue map includes the feature points of the objects, this is only an example.
  • As another example, the clue map may include images in which the objects identified as clues are captured.
  • In this case, the processor 140 may obtain the feature points of an object from the corresponding image.
  • the processor 140 may display the location of the object on the driving map. As described above, the processor 140 can travel through space multiple times and identify the location of an object in space each time it travels. At this time, the location of the same object may be measured with different coordinate values due to error. In this case, the processor 140 may determine the position of the object using the average value of the positions of the identified objects, or may determine the position of the object using images taken of the object from a plurality of viewpoints through the camera 110. Additionally, the processor 140 may display the location of the determined object on the driving map.
  • Meanwhile, the processor 140 may update the clue map. Specifically, the processor 140 may identify clues based on the images acquired through the camera 110 and the sounds acquired through the microphone 120 while the robot 100 travels the space.
  • The processor 140 may then compare the information about the objects and sounds identified as clues with the information about the clues included in the clue map, and if the information about an object or sound is not included in the clue map, update the clue map by adding the information about that object or sound to the clue map.
  • Figure 4 is a diagram for explaining a method of updating a clue map according to an embodiment of the present disclosure.
  • For example, as shown in FIG. 4, the processor 140 may display the location 431 of an object newly identified as a clue on the clue map 400, and map the feature points of the newly identified object to that location.
  • In this way, the processor 140 can create and update the clue map.
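  • A minimal sketch of this update check, using a plain-dictionary representation of the object clues (a simplification assumed here for brevity, not prescribed by the disclosure), might look like:

```python
def update_object_clues(object_clues, new_clues, pos_tol=0.3):
    """object_clues: existing entries of the form {"position": (x, y), "features": [...]};
    new_clues: newly identified entries in the same form. Entries whose position
    is not already covered (within pos_tol metres) are appended."""
    for new in new_clues:
        nx, ny = new["position"]
        already_known = any(
            (cx - nx) ** 2 + (cy - ny) ** 2 <= pos_tol ** 2
            for cx, cy in (c["position"] for c in object_clues)
        )
        if not already_known:
            object_clues.append(new)
    return object_clues
```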
  • Meanwhile, when a preset event occurs, the processor 140 identifies the area of the driving map in which the robot 100 is located using the clue map, and performs relocalization on that area to identify the location of the robot 100 on the driving map.
  • To this end, the processor 140 may obtain information about the object using an image of the area surrounding the robot 100 captured through the camera 110, and obtain information about the sound using the sound received through the microphone 120.
  • Here, the preset event may include an event in which the robot 100 cannot identify its location on the driving map. That is, the processor 140 can identify the location of the robot 100 on the driving map using SLAM. However, a situation may occur in which the user arbitrarily changes the location of the robot 100, or in which the processor 140 cannot identify the location of the robot 100 on the driving map for reasons such as the surrounding environment of the robot 100.
  • the preset event may include an event in which the power of the robot 100 that was turned off is turned on.
  • When such a preset event occurs, the processor 140 may capture an image of the area surrounding the robot 100 through the camera 110 and receive sound through the microphone 120.
  • The processor 140 may then obtain information about the object using the captured image and obtain information about the sound using the received sound.
  • information about the object may include at least one of the characteristic points of the object and the location of the object.
  • information about the sound may include at least one of the characteristics of the sound and the location of the sound source that output the sound.
  • the processor 140 may generate an image patch corresponding to the area where the robot 100 is currently located using the acquired information about the object and information about the sound.
  • Here, the image patch may be created by imaging the area corresponding to the current location of the robot 100, based on the positional relationship, with respect to the current position of the robot 100, between the objects existing around the robot 100 and the sound source that output the sound received by the robot 100 at the current position.
  • the processor 140 may obtain an image by photographing the surrounding area of the robot 100 through the camera 110.
  • the processor 140 may obtain an image by photographing the surroundings of the robot 100 in a 360-degree direction using the camera 110.
  • the camera 110 can be installed on the robot 100 so that it can rotate 360 degrees.
  • the processor 140 may control the driving unit 150 so that the robot 100 rotates 360 degrees and perform imaging using the camera 110 while the robot 100 rotates.
  • the processor 140 may analyze an image taken of the surrounding area of the robot 100 and detect a boundary object in the image.
  • the boundary object may be a plane.
  • Specifically, the processor 140 may analyze the images of the area surrounding the robot 100 using the shooting direction of the camera 110 and the distance between the robot 100 and the boundary object, and identify the location of the boundary object with respect to the location of the robot 100.
  • the processor 140 may detect an object in an image taken of the surrounding area of the robot 100 and obtain feature points of the object.
  • feature points of the object may include boundaries, corners, curves, etc. of the object.
  • Additionally, the processor 140 can identify the location of the object. Specifically, the processor 140 may analyze the images of the area surrounding the robot 100 using the shooting direction of the camera 110 and the distance between the robot 100 and the object, and identify the location of the object with respect to the location of the robot 100.
  • the processor 140 may receive sound from around the robot 100 using the microphone 120.
  • the processor 140 may obtain sound characteristics from the received sound.
  • the characteristics of the sound may include waveform, wavelength, frequency band, amplitude, frequency, etc.
  • the processor 140 can identify the location of a sound source that outputs sound. For example, the processor 140 may estimate the current location of the robot 100 as the location of the sound source. Additionally, the processor 140 may set the area within the boundary object as the location of the sound source.
  • the processor 140 may generate an image patch using information about the object and information about the sound.
  • Figure 5 is a diagram for explaining an example of an image patch according to an embodiment of the present disclosure.
  • the processor 140 may acquire an image by photographing the surrounding area of the robot 100 through the camera 110.
  • The processor 140 can detect, from the acquired images, boundary objects located in a first direction of the robot 100 and in a second direction opposite to the first direction, and can detect objects located around the robot 100.
  • the processor 140 may receive sound through the microphone 120.
  • the processor 140 may generate an image patch using the location of the boundary object, the location of the object, and the location of the sound source.
  • the image patch 510 may display the locations of boundary objects 511 and 512, the location of the object 521, and the location of the sound source 531.
  • the characteristic points of the object obtained from the object may be mapped to the location 521 of the object, and the characteristics of the sound obtained from the sound may be mapped to the location 531 of the sound source.
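  • Conceptually, the image patch can be viewed as a small rasterized picture of the robot's surroundings; the following sketch, with an assumed cell encoding and a hypothetical pixels-per-metre scale, illustrates one way such a patch might be built:

```python
import numpy as np

def build_image_patch(boundaries, objects, sources, size=64, scale=10.0):
    """Rasterize the robot's immediate surroundings into a small grid.
    boundaries / objects / sources: lists of (x, y) positions expressed
    relative to the robot's current pose (robot at the patch centre).
    scale: pixels per metre; size: patch width and height in pixels."""
    patch = np.zeros((size, size), dtype=np.uint8)

    def mark(points, value):
        for x, y in points:
            px = int(size / 2 + x * scale)
            py = int(size / 2 + y * scale)
            if 0 <= px < size and 0 <= py < size:
                patch[py, px] = value

    mark(boundaries, 1)  # walls and other boundary objects (cf. 511, 512)
    mark(objects, 2)     # objects detected around the robot (cf. 521)
    mark(sources, 3)     # estimated sound source positions (cf. 531)
    return patch
```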
  • the processor 140 may compare the generated image patch with the clue map to identify the area where the robot is located on the driving map.
  • Specifically, when the information about the clue included in one of the plurality of areas of the clue map matches the information about the object and the information about the sound included in the image patch, the processor 140 can identify the area of the clue map that matches the image patch as the area where the robot 100 is located.
  • the processor 140 may identify an area that matches the image patch in the clue map using a template matching algorithm.
  • the template matching algorithm may refer to an algorithm that overlaps the template image on the original image, moves the template image, and finds an area in the original image that matches the template image.
  • That is, since the clue map is created based on the driving map, the clue map may include boundary objects corresponding to the walls existing in the space in which the robot 100 drives. Additionally, the locations of the objects identified as clues and the locations of the sound sources that output the sounds identified as clues may be displayed on the clue map.
  • an image patch may be created by imaging information about the area around the robot 100 based on the current location of the robot 100. Additionally, the location of the boundary object around the robot 100, the location of the object around the robot 100, and the location of the sound source that output the sound received by the robot 100 may be displayed on the image patch.
  • the processor 140 may overlap the image patch on the clue map, move the image patch, and search for an area that matches the image patch in the clue map. Accordingly, the area of the clue map where the location of the boundary object, the location of the object, and the location of the sound source matches the image patch can be searched.
  • Then, if the feature points of the object included in the searched area match the feature points of the object included in the image patch, and the characteristics of the sound included in the searched area match the characteristics of the sound included in the image patch, the processor 140 can identify the searched area as the area where the robot 100 is located.
  • Subsequently, the processor 140 can identify the area of the driving map at the same location as the area identified in the clue map as the area of the driving map where the robot 100 is located.
  • Meanwhile, the processor 140 may generate an image patch using the information about the object and the information about the sound, and search the clue map for an area that matches the image patch. In this case, an area of the clue map in which the location of the object and the location of the sound source match the image patch can be searched.
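  • In the same spirit, a simple sliding-window comparison can stand in for the template matching described above; the sketch below reuses the cell encoding assumed for build_image_patch, and in a real implementation the stored feature points and sound characteristics would still be compared before the best-scoring area is accepted:

```python
import numpy as np

def find_matching_area(clue_map_grid, patch, stride=4):
    """Slide the image patch over a rasterized clue map and return the top-left
    offset with the highest overlap score, in the spirit of template matching.
    Cell values follow build_image_patch above (1 = boundary, 2 = object,
    3 = sound source); empty (zero) patch cells are ignored."""
    H, W = clue_map_grid.shape
    h, w = patch.shape
    mask = patch > 0
    best_score, best_offset = -1, None
    for y in range(0, H - h + 1, stride):
        for x in range(0, W - w + 1, stride):
            window = clue_map_grid[y:y + h, x:x + w]
            # Count non-empty patch cells that agree with the clue map window.
            score = int(np.count_nonzero(window[mask] == patch[mask]))
            if score > best_score:
                best_score, best_offset = score, (x, y)
    return best_offset, best_score
```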
  • Meanwhile, if there is no area in the clue map that matches the image patch, the processor 140 may identify an area of the clue map in which no clue exists as the area where the robot 100 is located.
  • That is, the processor 140 may identify the area of the clue map in which no object or sound source is located as the area of the driving map where the robot 100 is located.
  • Here, the area where no object is located may include an area of the clue map in which no object would be captured when that area is photographed through the camera 110, which depends on the locations of the objects, the locations of the boundary objects, and the angle of view of the camera 110. Additionally, when the clue map is divided into a plurality of areas by boundary objects such as walls, the area where no object is located may include at least one of the plurality of areas in which no object is located.
  • Additionally, the area in which no sound source is located may include the remaining area excluding the locations of the sound sources determined based on the locations where the sounds identified as clues were received.
  • the processor 140 may identify the location of the robot 100 on the driving map by performing relocalization on the area where the robot is located on the driving map.
  • When a plurality of areas are identified as the area where the robot 100 is located, the processor 140 may perform relocalization for each of the plurality of areas.
  • Figures 6 and 7 are diagrams for explaining a method of identifying an area where a robot is currently located using a clue map and performing relocalization on the identified area according to an embodiment of the present disclosure.
  • the processor 140 may identify an area 441 that matches the image patch 510 in the clue map 400. Additionally, the processor 140 may identify an area 321 at the same location as the area 441 in the driving map 300 and perform relocalization on the area 321.
  • Meanwhile, if there is no area that matches the image patch, the processor 140 can identify the areas 442, 443, and 444 of the clue map in which no objects or sound sources are located.
  • Then, the processor 140 can identify the areas 322, 323, and 324 of the driving map 300 at the same locations as the areas 442, 443, and 444, and perform relocalization for each of the areas 322, 323, and 324.
  • In general, a robot performing autonomous driving creates a map and then drives while finding its own location on the map through location recognition, that is, localization.
  • However, the robot may incorrectly estimate its location due to environmental factors (e.g., a feature-less environment).
  • In addition, in the case of a small home robot, if a person turns the robot off, moves it to a random location, and then turns it back on, the robot will not be able to recognize its current location. In this case, the robot must find its location on the map again, and it does so through relocalization.
  • According to an embodiment of the present disclosure, relocalization is not performed on the entire driving map; instead, the area where the robot 100 is currently located is first identified in the driving map, and relocalization is performed on the identified area to identify the location of the robot 100 on the driving map. Accordingly, the success rate of estimating the position of the robot 100 may increase.
  • the processor 140 may identify the area where the robot 100 is located based on the user's voice.
  • The name of each of a plurality of areas of the clue map may be stored in the memory 130. That is, each of the plurality of areas may be labeled with the name of the area. In this case, the name of an area can be entered according to a user command. Additionally, information about the names labeled for each area can be stored and managed in the clue DB. For example, as shown in FIG. 8A, the area 451 of the clue map 400 may be labeled with the name “kitchen,” and the area 452 may be labeled with the name “room.”
  • the processor 140 may receive the user's voice through the microphone 120 and identify the area where the robot 100 is located based on the user's voice.
  • Specifically, the processor 140 may perform voice recognition on the user's voice received through the microphone 120, obtain information about the area where the robot 100 is located from the user's voice, and compare the obtained information with the names labeled for each area to identify the area where the robot 100 is located.
  • the processor 140 may output a sound such as “Where am I?” through the speaker 182 of the robot 100, and receive a user voice such as “room” in response.
  • the processor 140 may obtain “room” from the user's voice through voice recognition and search the area 452 labeled “room” in the clue map 400. Accordingly, the processor 140 may identify that the robot 100 is currently located in the area 452 in the clue map 400.
  • the processor 140 may update the clue map based on the acquired information.
  • Specifically, the processor 140 may use an image patch to identify the area of the clue map where the robot 100 is located.
  • Then, the processor 140 may update the clue map by storing the information about the area where the robot 100 is located, obtained from the user's voice, as a name of the identified area.
  • For example, the processor 140 may use an image patch to identify the robot 100 as being located in the area 452 and label the area 452 as “large room.” Accordingly, the area 452 may be labeled “room” and “large room.”
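  • The name lookup described here reduces to a string match against the labels stored for each area; a minimal sketch (assuming the labels are kept in a plain dictionary, with the area identifiers 451 and 452 reused from the example above) is:

```python
def identify_area_from_voice(recognized_text, area_labels):
    """area_labels: dict mapping an area identifier to the set of names
    labelled for that area, e.g. {451: {"kitchen"}, 452: {"room", "large room"}}.
    recognized_text: the output of speech recognition on the user's reply."""
    text = recognized_text.strip().lower()
    for area_id, names in area_labels.items():
        if any(name.lower() in text for name in names):
            return area_id
    return None

# Example: after the robot asks "Where am I?" and the user answers "room",
# identify_area_from_voice("room", {451: {"kitchen"}, 452: {"room"}}) returns 452.
```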
  • the area where the robot 100 is currently located may be identified using the user's voice.
  • Figure 9 is a flowchart for explaining a method for identifying the location of a robot according to an embodiment of the present disclosure.
  • Here, the robot includes a camera and a microphone, and travels the space using a map.
  • According to the method, objects fixedly located in the space and sounds repeatedly received at specific locations in the space are identified as clues, based on images captured through the camera and sounds received through the microphone while the robot travels the space using the map, and a clue map in which information about the clues is added to the map is generated. Then, when a preset event occurs, the area where the robot is located on the map is identified using the clue map, and relocalization is performed on the identified area to identify the location of the robot on the map (S930).
  • information about the clue may include at least one of the feature points of the object and the location of the object. Additionally, information about the clue may include at least one of the characteristics of the sound and the location of the sound source that output the sound.
  • In step S930, when the preset event occurs, information about the object may be obtained using an image of the area surrounding the robot captured through the camera, information about the sound may be obtained using the sound received through the microphone, an image patch corresponding to the area where the robot is currently located may be generated using the obtained information about the object and the information about the sound, and the generated image patch may be compared with the clue map to identify the area where the robot is located on the map.
  • Additionally, in step S930, when the information about the clue included in one of the plurality of areas of the clue map matches the information about the object and the information about the sound included in the image patch, the area of the clue map that matches the image patch may be identified as the area where the robot is located.
  • Additionally, in step S930, if there is no area of the clue map that matches the generated image patch, an area of the clue map in which no clue exists may be identified as the area where the robot is located.
  • the preset event may include an event in which the robot cannot identify its location on the map or an event in which the power of the robot that was turned off is turned on.
  • the method according to the embodiments of the present disclosure may be included and provided in a computer program product.
  • Computer program products are commodities and can be traded between sellers and buyers.
  • The computer program product may be distributed in the form of a machine-readable storage medium (e.g., a compact disc read only memory (CD-ROM)), or may be distributed (e.g., downloaded or uploaded) through an application store (e.g., Play Store™), directly between two user devices (e.g., smartphones), or online.
  • In the case of online distribution, at least a portion of the computer program product (e.g., a downloadable app) may be temporarily stored or temporarily created in a machine-readable storage medium such as the memory of a manufacturer's server, an application store's server, or a relay server.
  • Each component (e.g., a module or program) according to the various embodiments described above may be composed of a single entity or multiple entities, and some of the sub-components described above may be omitted, or other sub-components may be further included in the various embodiments. Alternatively or additionally, some components (e.g., modules or programs) may be integrated into a single entity and perform the same or similar functions performed by each corresponding component prior to integration.
  • Operations performed by a module, program, or other component may be executed sequentially, in parallel, iteratively, or heuristically, or at least some operations may be executed in a different order or omitted, or other operations may be added.
  • The term “unit” or “module” used in the present disclosure includes a unit comprised of hardware, software, or firmware, and may be used interchangeably with terms such as logic, logic block, component, or circuit.
  • a “part” or “module” may be an integrated part, a minimum unit that performs one or more functions, or a part thereof.
  • a module may be comprised of an application-specific integrated circuit (ASIC).
  • According to an embodiment, there may be provided a non-transitory computer-readable medium storing a program that sequentially performs the control method according to the present disclosure.
  • a non-transitory readable medium refers to a medium that stores data semi-permanently and can be read by a device, rather than a medium that stores data for a short period of time, such as registers, caches, and memories.
  • the various applications or programs described above may be stored and provided on non-transitory readable media such as CD, DVD, hard disk, Blu-ray disk, USB, memory card, ROM, etc.
  • Meanwhile, embodiments of the present disclosure may be implemented as software including instructions stored in a storage medium readable by a machine (e.g., a computer).
  • Here, the machine is a device capable of calling the stored instructions and operating according to the called instructions, and may include the electronic device (e.g., the robot 100) according to the disclosed embodiments.
  • When the instructions are executed by a processor, the processor may perform the functions corresponding to the instructions directly or using other components under the control of the processor.
  • Instructions may contain code generated or executed by a compiler or interpreter.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to a robot that moves in a space using a map. The robot comprises: a camera; a microphone; a memory; and one or more processors that identify, as clues, objects present at fixed locations in the space and sounds repeatedly received from a specific location in the space, on the basis of images captured through the camera and sounds received through the microphone while the robot moves in the space using the map, generate a clue map by adding information about the clues to the map and store the clue map in the memory, and, when a preset event occurs, use the clue map to identify the area in which the robot is located on the map and perform relocalization on the identified area to identify the location of the robot on the map.
PCT/KR2023/016167 2022-11-23 2023-10-18 Robot se déplaçant dans l'espace en utilisant une carte, et procédé d'identification de son emplacement Ceased WO2024111893A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2022-0158184 2022-11-23
KR1020220158184A KR20240076871A (ko) 2022-11-23 2022-11-23 맵을 이용하여 공간을 주행하는 로봇 및 그의 위치 식별 방법

Publications (1)

Publication Number Publication Date
WO2024111893A1 true WO2024111893A1 (fr) 2024-05-30

Family

ID=91195879

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/016167 Ceased WO2024111893A1 (fr) 2022-11-23 2023-10-18 Robot se déplaçant dans l'espace en utilisant une carte, et procédé d'identification de son emplacement

Country Status (2)

Country Link
KR (1) KR20240076871A (fr)
WO (1) WO2024111893A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006035381A (ja) * 2004-07-28 2006-02-09 Honda Motor Co Ltd 移動ロボットの制御装置
KR20130103204A (ko) * 2012-03-09 2013-09-23 엘지전자 주식회사 로봇 청소기 및 이의 제어 방법
JP6157598B2 (ja) * 2013-04-12 2017-07-05 株式会社日立製作所 移動ロボット、及び、音源位置推定システム
JP2018165759A (ja) * 2017-03-28 2018-10-25 カシオ計算機株式会社 音声検出装置、音声検出方法、及びプログラム
KR20210004677A (ko) * 2019-07-05 2021-01-13 엘지전자 주식회사 이동 로봇 및 그 제어방법

Also Published As

Publication number Publication date
KR20240076871A (ko) 2024-05-31

Similar Documents

Publication Publication Date Title
WO2020141924A1 (fr) Appareil et procédé de génération de données cartographiques d'espace de nettoyage
WO2019059505A1 (fr) Procédé et appareil de reconnaissance d'objet
EP3664974A1 (fr) Robot domestique mobile et procédé de commande du robot domestique mobile
WO2021133053A1 (fr) Dispositif électronique et son procédé de commande
WO2020055112A1 (fr) Dispositif électronique, et procédé pour l'identification d'une position par un dispositif électronique
WO2014069722A1 (fr) Dispositif d'affichage en trois dimensions, et procédé correspondant pour la mise en œuvre d'une interface utilisateur
WO2019156480A1 (fr) Procédé de détection d'une région d'intérêt sur la base de la direction du regard et dispositif électronique associé
WO2019164092A1 (fr) Dispositif électronique de fourniture d'un second contenu pour un premier contenu affiché sur un dispositif d'affichage selon le mouvement d'un objet externe, et son procédé de fonctionnement
WO2023287103A1 (fr) Dispositif électronique pour commander un robot de nettoyage, et son procédé de fonctionnement
WO2021230503A1 (fr) Appareil électronique et son procédé de commande
WO2022035054A1 (fr) Robot et son procédé de commande
WO2023136549A1 (fr) Procédé et dispositif de fourniture de contenu augmenté par l'intermédiaire d'une vue en réalité augmentée sur la base d'un espace unitaire préétabli
WO2020153818A1 (fr) Procédé de commande d'un dispositif électronique externe et dispositif électronique le prenant en charge
WO2020111727A1 (fr) Dispositif électronique et procédé de commande de dispositif électronique
WO2022186598A1 (fr) Robot nettoyeur et son procédé de commande
WO2024090942A1 (fr) Procédé et dispositif électronique pour l'entraînement de modèle de réseau neuronal par augmentation d'images représentant des objets capturés par de multiples caméras
WO2024111893A1 (fr) Robot se déplaçant dans l'espace en utilisant une carte, et procédé d'identification de son emplacement
WO2020022617A1 (fr) Dispositif électronique permettant de détecter un objet externe au moyen d'une pluralité d'antennes réseau disposées dans une pluralité de directions et procédé associé
WO2022075686A1 (fr) Dispositif électronique et son procédé de fonctionnement
WO2021107200A1 (fr) Terminal mobile et procédé de commande de terminal mobile
WO2020251151A1 (fr) Procédé et appareil d'estimation de la pose d'un utilisateur en utilisant un modèle virtuel d'espace tridimensionnel
WO2021162381A1 (fr) Procédé de fourniture de carte de guidage et dispositif électronique le prenant en charge
WO2020091182A1 (fr) Dispositif électronique pour fournir des données d'image à l'aide de la réalité augmentée et son procédé de commande
WO2023027341A1 (fr) Robot et procédé de commande associé
WO2024080592A1 (fr) Robot de partage de carte et son procédé de partage de carte

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23894822

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 23894822

Country of ref document: EP

Kind code of ref document: A1