
CN116184892A - AI identification control method and system for robot object taking - Google Patents


Info

Publication number
CN116184892A (application CN202310062938.5A)
Authority
CN (China)
Prior art keywords
image, fetching, area, item, robot
Legal status
Granted (the legal status listed is an assumption, not a legal conclusion)
Other languages
Chinese (zh)
Other versions
CN116184892B (en)
Inventors
白雪飞, 刘丹丹, 秦生升
Current Assignee
Yancheng Jianhaoyue Intelligent Technology Co., Ltd.
Original Assignee
Yancheng Institute of Technology
Events
Application CN202310062938.5A filed by Yancheng Institute of Technology
Priority to CN202310062938.5A
Publication of CN116184892A
Application granted
Publication of CN116184892B
Legal status: Active

Classifications

    • G06V 10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
    • G06V 10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; using context analysis; selection of dictionaries
    • G05B 19/0423: Programme control other than numerical control, i.e. in sequence controllers or logic controllers, using digital processors; input/output
    • G05B 2219/25257: Programme-control systems; PC structure of the system; microcontroller
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an AI recognition control method and system for robot object fetching. The method includes: obtaining a fetching instruction; parsing the fetching instruction to determine information on the object to be fetched; acquiring a perimeter image; determining the position of the target object from the perimeter image based on an AI recognition model; determining a gripping point based on the position and the structure of the robot's fetching device; and controlling the robot to fetch the object based on the gripping point. The AI recognition control method for robot object fetching of the present invention realizes the grasping of different types of objects.

Description

AI recognition control method and system for robot object fetching

Technical Field

The present invention relates to the field of intelligent control technology, and in particular to an AI recognition control method and system for robot object fetching.

Background

At present, robots are widely used in automated factories, unmanned warehouses and other fields. However, when fetching objects, existing robots generally grasp only a single type of object and cannot accurately grasp multiple different types of objects.

Summary of the Invention

One object of the present invention is to provide an AI recognition control method for robot object fetching that realizes the grasping of different types of objects.

An AI recognition control method for robot object fetching provided by an embodiment of the present invention includes:

obtaining a fetching instruction;

parsing the fetching instruction to determine information on the object to be fetched;

acquiring a perimeter image;

determining the position of the target object from the perimeter image based on an AI recognition model;

determining a gripping point based on the position and the structure of the robot's fetching device;

controlling the robot to fetch the object based on the gripping point.

Preferably, obtaining the fetching instruction includes:

obtaining fetching request information from a server;

parsing the fetching request information and generating the fetching instruction;

or,

obtaining first audio collected by an audio collection module arranged in the fetching area;

parsing the first audio to determine whether to enter a fetching-instruction generation mode;

when the fetching-instruction generation mode is entered, obtaining a first image of the fetching area collected by a first image acquisition module arranged in the fetching area;

parsing the first image to determine whether a first type of item is present in the first image;

when a first type of item is present, extracting a first area image of the first type of item from the first image and generating the fetching instruction based on the extracted first area image;

wherein the first type of item includes: a form or a marker authorizing the fetching of items.

Preferably, when parsing the first image determines that no first type of item is present in the first image, the method further includes:

determining whether a second type of item is present;

when a second type of item is present, building a list of items to be fetched and outputting first inquiry information through a touch screen arranged in the fetching area, the first inquiry information including: whether to fetch items and whether to perform identity verification;

upon receiving affirmative feedback to the inquiry information entered by the fetching person through the touch screen, acquiring a second image of the fetching person collected by a second image acquisition module arranged in the fetching area;

verifying the authority of the fetching person based on the second image;

when the verification passes, outputting second inquiry information through the touch screen, the second inquiry information including: the quantity of each item on the list of items to be fetched;

receiving the input information corresponding to the inquiry information entered by the fetching person through the touch screen, and generating the fetching instruction based on the input information and the list of items to be fetched.

Preferably, determining the position of the target object from the perimeter image based on the AI recognition model includes:

determining, based on the information on the object to be fetched, a plurality of standard images corresponding to the object to be fetched from a preset standard image library;

performing edge extraction and segmentation on the perimeter image to determine a plurality of second area images;

inputting a standard image and a second area image into the AI recognition model to determine whether the two match;

determining a matching standard image and second area image;

taking the object corresponding to the second area image that matches the standard image as the target object;

determining a first direction vector corresponding to the center of the perimeter image containing the target object;

determining a second direction vector corresponding to the center of the second area image based on the position of the second area image within the perimeter image and the first direction vector;

determining the distance and pose of the target object based on the second area image and a preset object distance and pose recognition library corresponding to the standard image;

determining the position of the target object based on the second direction vector, the distance and the pose.

Preferably, determining the gripping point based on the position and the structure of the robot's fetching device includes:

retrieving the gripping-point determination library corresponding to the structure of the fetching device;

quantizing the position based on a preset first quantization model to determine a first quantization parameter;

quantizing the information on the object to be fetched based on a preset second quantization model to determine a second quantization parameter;

determining a parameter set based on the first quantization parameter and the second quantization parameter;

determining the gripping point based on the parameter set and the gripping-point determination library.

The present invention also provides an AI recognition control system for robot object fetching, including:

a first acquisition module, configured to obtain a fetching instruction;

a parsing module, configured to parse the fetching instruction and determine information on the object to be fetched;

a second acquisition module, configured to acquire a perimeter image;

a first determination module, configured to determine the position of the target object from the perimeter image based on an AI recognition model;

a second determination module, configured to determine a gripping point based on the position and the structure of the robot's fetching device;

a control module, configured to control the robot to fetch the object based on the gripping point.

Preferably, the first acquisition module obtains the fetching instruction by performing the following operations:

obtaining fetching request information from a server;

parsing the fetching request information and generating the fetching instruction;

or,

obtaining first audio collected by an audio collection module arranged in the fetching area;

parsing the first audio to determine whether to enter a fetching-instruction generation mode;

when the fetching-instruction generation mode is entered, obtaining a first image of the fetching area collected by a first image acquisition module arranged in the fetching area;

parsing the first image to determine whether a first type of item is present in the first image;

when a first type of item is present, extracting a first area image of the first type of item from the first image and generating the fetching instruction based on the extracted first area image;

wherein the first type of item includes: a form or a marker authorizing the fetching of items.

Preferably, when parsing the first image determines that no first type of item is present in the first image, the first acquisition module further performs the following operations:

determining whether a second type of item is present;

when a second type of item is present, building a list of items to be fetched and outputting first inquiry information through a touch screen arranged in the fetching area, the first inquiry information including: whether to fetch items and whether to perform identity verification;

upon receiving affirmative feedback to the inquiry information entered by the fetching person through the touch screen, acquiring a second image of the fetching person collected by a second image acquisition module arranged in the fetching area;

verifying the authority of the fetching person based on the second image;

when the verification passes, outputting second inquiry information through the touch screen, the second inquiry information including: the quantity of each item on the list of items to be fetched;

receiving the input information corresponding to the inquiry information entered by the fetching person through the touch screen, and generating the fetching instruction based on the input information and the list of items to be fetched.

Preferably, the first determination module determines the position of the target object from the perimeter image based on the AI recognition model by performing the following operations:

determining, based on the information on the object to be fetched, a plurality of standard images corresponding to the object to be fetched from a preset standard image library;

performing edge extraction and segmentation on the perimeter image to determine a plurality of second area images;

inputting a standard image and a second area image into the AI recognition model to determine whether the two match;

determining a matching standard image and second area image;

taking the object corresponding to the second area image that matches the standard image as the target object;

determining a first direction vector corresponding to the center of the perimeter image containing the target object;

determining a second direction vector corresponding to the center of the second area image based on the position of the second area image within the perimeter image and the first direction vector;

determining the distance and pose of the target object based on the second area image and a preset object distance and pose recognition library corresponding to the standard image;

determining the position of the target object based on the second direction vector, the distance and the pose.

Preferably, the second determination module determines the gripping point based on the position and the structure of the robot's fetching device by performing the following operations:

retrieving the gripping-point determination library corresponding to the structure of the fetching device;

quantizing the position based on a preset first quantization model to determine a first quantization parameter;

quantizing the information on the object to be fetched based on a preset second quantization model to determine a second quantization parameter;

determining a parameter set based on the first quantization parameter and the second quantization parameter;

determining the gripping point based on the parameter set and the gripping-point determination library.

Additional features and advantages of the present invention will be set forth in the description that follows and will in part become apparent from the description or be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description, the claims and the accompanying drawings.

The technical solutions of the present invention are described in further detail below with reference to the accompanying drawings and embodiments.

Description of the Drawings

The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the description; together with the embodiments of the present invention, they serve to explain the present invention and do not limit it. In the drawings:

Fig. 1 is a schematic diagram of an AI recognition control method for robot object fetching in an embodiment of the present invention;

Fig. 2 is a schematic diagram of an AI recognition control system for robot object fetching in an embodiment of the present invention.

Detailed Description

Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described here are only used to illustrate and explain the present invention and are not intended to limit it.

An embodiment of the present invention provides an AI recognition control method for robot object fetching, as shown in Fig. 1, including:

Step S1: obtaining a fetching instruction;

Step S2: parsing the fetching instruction to determine information on the object to be fetched;

Step S3: acquiring a perimeter image;

Step S4: determining the position of the target object from the perimeter image based on an AI recognition model;

Step S5: determining a gripping point based on the position and the structure of the robot's fetching device;

Step S6: controlling the robot to fetch the object based on the gripping point.

The working principle and beneficial effects of the above technical solution are as follows:

First, the robot obtains a fetching instruction from the outside. The fetching instruction may be sent by a user through a terminal, or generated automatically by the robot based on its assessment of the external situation. By parsing the fetching instruction, the object to be fetched corresponding to the instruction is determined, mainly through the information on the object to be fetched (for example, three-dimensional information such as length, width and height, color, and pictures). Based on this information, the AI recognition model determines from the robot's perimeter image whether the object to be fetched is present; when it is determined to be present, the object to be fetched is taken as the target object, and the position of the target object (the region it occupies in space) is then further determined. According to the position of the target object and the specific structure of the fetching device fitted to the robot, a gripping point is determined on the target object; the grasping point of the fetching device is then aligned with the gripping point, realizing robot fetching. When determining the gripping point based on the position and the structure of the fetching device, the gripping points differ for objects of different shapes; for example, the gripping points of a spherical object are three points evenly distributed on a circumference in the horizontal plane, while the gripping points of a rectangular object are two symmetrical points located on two symmetry planes. The AI recognition control method for robot object fetching of the present invention accurately realizes the grasping of multiple different types of objects.
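The control flow described above can be condensed into a short illustrative sketch. This is a hedged Python outline, not the patented implementation: the interfaces (`instruction.parse`, `camera.capture_perimeter_image`, `recognizer.locate`, `robot.grip_and_fetch`) and the simple shape rules for spheres and boxes are assumptions introduced here only to make the sequence of steps concrete.

```python
import math
from dataclasses import dataclass

@dataclass
class TargetInfo:
    shape: str     # e.g. "sphere" or "box"
    center: tuple  # (x, y, z) in the robot base frame
    size: tuple    # (radius,) for a sphere, (length, width, height) for a box

def gripping_points(target: TargetInfo):
    """Shape-dependent gripping points following the example in the text:
    three points spaced 120 degrees on a horizontal circle for a sphere,
    two symmetric points on opposite faces for a box."""
    cx, cy, cz = target.center
    if target.shape == "sphere":
        r = target.size[0]
        return [(cx + r * math.cos(a), cy + r * math.sin(a), cz)
                for a in (0.0, 2 * math.pi / 3, 4 * math.pi / 3)]
    if target.shape == "box":
        half_w = target.size[1] / 2
        return [(cx, cy - half_w, cz), (cx, cy + half_w, cz)]
    raise ValueError(f"no gripping rule for shape {target.shape!r}")

def fetch(instruction, camera, recognizer, robot):
    """Steps S1-S6 as a single control loop (hypothetical interfaces)."""
    object_info = instruction.parse()               # S2: info on the object to be fetched
    image = camera.capture_perimeter_image()        # S3: perimeter image
    target = recognizer.locate(object_info, image)  # S4: AI recognition -> TargetInfo
    points = gripping_points(target)                # S5: gripping points
    robot.grip_and_fetch(points)                    # S6: execute the grasp
```

In practice the gripping-point rule would come from the gripping-point determination library discussed later, rather than from hard-coded geometry.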

In one embodiment, obtaining the fetching instruction includes:

obtaining fetching request information from a server;

parsing the fetching request information and generating the fetching instruction;

or,

obtaining first audio collected by an audio collection module arranged in the fetching area;

parsing the first audio to determine whether to enter a fetching-instruction generation mode;

when the fetching-instruction generation mode is entered, obtaining a first image of the fetching area collected by the first image acquisition module arranged in the fetching area;

parsing the first image to determine whether a first type of item is present in the first image;

when a first type of item is present, extracting a first area image of the first type of item from the first image and generating the fetching instruction based on the extracted first area image;

wherein the first type of item includes: a form or a marker authorizing the fetching of items.

The working principle and beneficial effects of the above technical solution are as follows:

This embodiment provides two ways to obtain the fetching instruction. The first is to obtain it from the server, mainly for remote operation by the user or remote automatic control by the server: the user communicates with the server through a mobile terminal and selects the desired items on a fetching interface; the server generates fetching request information from the items selected by the user and sends it to the robot, and the robot's control module parses the fetching request information and generates the fetching instruction. The second is to analyze the actual situation and generate the fetching instruction automatically, mainly for pick-up scenarios in unmanned warehouses. For example, the picker holds an authorized pick-up form (a pick-up list signed off by a superior), places it in the fetching area, and then speaks a trigger keyword such as "pick up". When the robot, through the audio collection module arranged in the fetching area, collects the first audio containing the trigger keyword, it enters the fetching-instruction generation mode. In this mode, the robot collects a first image of the fetching area through the first image acquisition module, analyzes the pick-up list in the first image, extracts the text on the list, determines the information on the items to be fetched (item number, shape, name, quantity, etc.), and automatically generates the fetching instruction from that information. In addition, the item placed in the fetching area may also be a marker, each marker corresponding to a unique item; for example, a red tag represents item A, a yellow tag represents item B, and so on, and the number of tags represents the quantity to be fetched. Further, the marker may include a first display area and a second display area: the first display area shows a QR code generated from the information on the item to be fetched, and the second display area shows a QR code generated from the information on the picker. The first image acquisition module photographs the two display areas, the information on the item to be fetched and on the picker is extracted, and the fetching information is associated with the picker, ensuring standardized and secure management of unmanned warehouses and the like.
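As an illustration of the marker-based path, the sketch below decodes the two QR display areas of a marker with OpenCV and assembles a minimal fetching instruction. It assumes the trigger audio has already been transcribed to text upstream and that OpenCV (`cv2`) is available; the dictionary layout of the resulting instruction is an assumption, not the patent's format.

```python
import cv2  # OpenCV, assumed available in the deployment

TRIGGER_KEYWORDS = ("pick up", "取货")  # illustrative trigger words

def should_enter_generation_mode(transcript: str) -> bool:
    """The first audio is assumed to be transcribed to text by an upstream component."""
    return any(keyword in transcript for keyword in TRIGGER_KEYWORDS)

def read_marker(first_image):
    """Decode all QR codes in the first image, i.e. the marker's two display areas
    (item information and picker information)."""
    detector = cv2.QRCodeDetector()
    ok, texts, _points, _codes = detector.detectAndDecodeMulti(first_image)
    return [t for t in texts if t] if ok else []

def build_fetch_instruction(first_image):
    """Assemble a minimal instruction from the decoded marker payloads."""
    payloads = read_marker(first_image)
    if len(payloads) >= 2:
        item_info, picker_info = payloads[0], payloads[1]
        return {"item": item_info, "picker": picker_info, "source": "marker"}
    return None  # fall back to OCR of the pick-up form (not shown here)
```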

In an unmanned warehouse application scenario, after the fetching instruction is obtained and before the perimeter image is acquired, the point to which the robot needs to move must be determined. Within the warehouse, each storage location is associated with a point at which the robot can stop. The storage location of the item to be fetched is determined, and the robot moves to the corresponding stop point; at that stop point, the perimeter image is collected through the image acquisition module on the robot. The position of the target object is then determined from the perimeter image based on the AI recognition model; the gripping point is determined based on the position and the structure of the robot's fetching device; and the robot is controlled to fetch the object based on the gripping point. The fetched goods are transported to the pick-up area by an automatic mobile trolley. The points at which the robot can stop are located in the aisles between storage locations.

In addition, the method can also be applied to the put-away scenario of an unmanned warehouse. The image acquisition module in the put-away area collects the put-away list placed by the staff, and a fetching instruction corresponding to the put-away is established. The robot photographs the item storage area corresponding to the put-away list, determines the correspondence between each item in the storage area and the put-away list, and determines the storage location for each item. The items are then grouped according to the correspondence between storage locations and the robot's stop points, and the groups are sorted by the distance between the stop point and the item storage area, with the farthest first and the nearest last. The items in each group are picked in turn and placed on the automatic mobile trolley assigned to the corresponding stop point; when the items of a group in the put-away area have all been taken, or the automatic mobile trolley reaches its maximum load, picking stops and each automatic mobile trolley moves to its corresponding stop point. The robot starts from the farthest point and transfers the items from the automatic mobile trolley into the storage locations. Once the items on the trolley at the nearest point have been placed, the next round of grabbing items from the item storage area begins; automatic put-away by the robot is thus realized.
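A minimal sketch of the grouping and farthest-first ordering described above, assuming the warehouse management layer supplies the item-to-location and location-to-stop-point mappings and a distance function (all three are assumptions introduced for illustration):

```python
from collections import defaultdict

def plan_put_away(items, location_of, stop_point_of, distance_to_storage_area):
    """Group items by the stop point of their storage location and order the
    groups farthest-first relative to the item storage area."""
    groups = defaultdict(list)
    for item in items:
        location = location_of[item]                 # storage location of this item
        groups[stop_point_of[location]].append(item)
    # the stop point farthest from the item storage area is served first
    ordered = sorted(groups.items(),
                     key=lambda kv: distance_to_storage_area(kv[0]),
                     reverse=True)
    return ordered  # list of (stop_point, [items]) in processing order
```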

In one embodiment, when parsing the first image determines that no first type of item is present in the first image, the method further includes:

determining whether a second type of item is present; a second type of item is a physical object identical to the item to be fetched, its housing, or the like;

when a second type of item is present, building a list of items to be fetched and outputting first inquiry information through the touch screen arranged in the fetching area, the first inquiry information including: whether to fetch items and whether to perform identity verification;

upon receiving affirmative feedback to the inquiry information entered by the fetching person through the touch screen, acquiring a second image of the fetching person collected by the second image acquisition module arranged in the fetching area;

verifying the authority of the fetching person based on the second image, mainly determining whether the fetching person has the authority to collect the item to be fetched;

when the verification passes, outputting second inquiry information through the touch screen, the second inquiry information including: the quantity of each item on the list of items to be fetched;

receiving the input information corresponding to the inquiry information entered by the fetching person through the touch screen, and generating the fetching instruction based on the input information and the list of items to be fetched.

The working principle and beneficial effects of the above technical solution are as follows:

The main application scenario of this embodiment is when the fetching person is a maintenance worker who urgently needs spare parts but has not obtained a signed pick-up list or other credentials. The worker directly places the part to be replaced (the removed part) in the fetching area; after identity verification and entry of the quantity by the fetching person, a fetching instruction is generated, ensuring that parts can be fetched in emergency situations.
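The emergency flow can be sketched as follows. Every interface here (`touch_screen`, `camera`, `verifier`, `item_recognizer`) is hypothetical and stands in for the touch screen, the second image acquisition module and the authority check described above; it is an outline of the interaction order, not a concrete implementation.

```python
def emergency_fetch_flow(touch_screen, camera, verifier, item_recognizer, first_image):
    """Hypothetical flow for the second-type-item case: recognize the placed part,
    ask the two inquiries on the touch screen, verify the picker's authority and
    assemble a fetching instruction."""
    pending_items = item_recognizer.match_parts(first_image)  # list of items to be fetched
    if not pending_items:
        return None
    if not touch_screen.ask_yes_no("Fetch these items and verify identity?"):  # first inquiry
        return None
    picker_image = camera.capture_picker()                    # second image
    if not verifier.has_authority(picker_image, pending_items):
        touch_screen.show("Verification failed")
        return None
    quantities = {item: touch_screen.ask_quantity(item)       # second inquiry
                  for item in pending_items}
    return {"items": quantities, "source": "emergency"}
```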

In one embodiment, determining the position of the target object from the perimeter image based on the AI recognition model includes:

determining, based on the information on the object to be fetched, a plurality of standard images corresponding to the object to be fetched from a preset standard image library; the plurality of standard images are images of the item to be fetched taken from different angles, for example from the front, the side and diagonal directions;

performing edge extraction and segmentation on the perimeter image to determine a plurality of second area images; through edge extraction and segmentation, all items in the perimeter image are segmented out, one item corresponding to one second area image;

inputting a standard image and a second area image into the AI recognition model to determine whether the two match;

determining a matching standard image and second area image;

taking the object corresponding to the second area image that matches the standard image as the target object;

determining a first direction vector corresponding to the center of the perimeter image containing the target object; in general, when the perimeter image is taken, the direction vector of the central axis of the image acquisition module in the spatial coordinate system is the first direction vector corresponding to the center of the perimeter image;

determining a second direction vector corresponding to the center of the second area image based on the position of the second area image within the perimeter image and the first direction vector; mainly, a deflection vector is determined from the offset between the center of the second area image and the center of the perimeter image, and this deflection vector is added to the first direction vector to obtain the second direction vector;
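The offset-to-deflection step lends itself to a short numerical sketch. The pinhole-style conversion below, with a focal length given in pixels and the camera frame assumed aligned with the spatial coordinate system, is an assumption made for illustration; the patent does not fix a particular camera model.

```python
import numpy as np

def second_direction_vector(first_dir, region_center_px, image_center_px, focal_px):
    """Deflection derived from the pixel offset between the second-area-image center
    and the perimeter-image center, added to the first direction vector.
    Assumes a simple pinhole model and, for simplicity, that the camera frame
    is aligned with the spatial frame in which first_dir is expressed."""
    du = (region_center_px[0] - image_center_px[0]) / focal_px
    dv = (region_center_px[1] - image_center_px[1]) / focal_px
    deflection = np.array([du, dv, 0.0])  # small-angle offset in the camera frame
    second_dir = np.asarray(first_dir, dtype=float) + deflection
    return second_dir / np.linalg.norm(second_dir)

# e.g. second_direction_vector([0.0, 0.0, 1.0], (720, 380), (640, 360), 800.0)
```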

determining the distance and pose of the target object based on the second area image and the preset object distance and pose recognition library corresponding to the standard image; the specific operation is: determining the ratio between the area occupied by the object in the second area image and the area occupied by the object in the standard image, scaling the standard image to the same or a similar size as the second area image, dividing the second area image and the standard image into blocks using the same blocking rule, determining the similarity corresponding to each block, and arranging the similarities in order to form a similarity set; the ratio and the similarity set are merged into an analysis set; the analysis set is matched against the standard sets in the library, and the distance and pose parameters corresponding to the matched standard set are extracted; the pose parameter represents the deflection of the object relative to a reference position defined in space; for example, the object lying flat in its frontal orientation may be defined as the reference position; in actual placement not every object lies in that state, and expressing the deflection from the reference position through the pose represents the object's position more precisely;

determining the position of the target object based on the second direction vector, the distance and the pose. The elements of the position include the second direction vector, the distance, the pose, and the like.
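The block-wise comparison that produces the analysis set, and the lookup of distance and pose, can be sketched as follows. The 4x4 block grid, the normalized-correlation similarity, the use of whole-image sizes as a proxy for the object areas, and the nearest-neighbour library match are all assumptions for illustration; the patent leaves the blocking rule, the similarity measure and the library format unspecified.

```python
import cv2
import numpy as np

def analysis_set(second_area_img, standard_img, grid=(4, 4)):
    """Build the analysis set: an area ratio plus the per-block similarities
    between the region image and the rescaled standard image (block order kept)."""
    h, w = second_area_img.shape[:2]
    # whole-image sizes used here as a proxy for the object areas in each image
    ratio = (h * w) / float(standard_img.shape[0] * standard_img.shape[1])
    std = cv2.resize(standard_img, (w, h))  # scale the standard image to the region size
    sims = []
    bh, bw = h // grid[0], w // grid[1]
    for i in range(grid[0]):
        for j in range(grid[1]):
            a = second_area_img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            b = std[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            sims.append(float(cv2.matchTemplate(a, b, cv2.TM_CCOEFF_NORMED)[0, 0]))
    return ratio, sims

def lookup_distance_pose(analysis, library):
    """Match the analysis set against the library's standard sets (nearest neighbour
    as a stand-in) and return the associated distance and pose parameters."""
    ratio, sims = analysis
    key = np.array([ratio] + sims)
    best = min(library, key=lambda entry: np.linalg.norm(key - np.array(entry["set"])))
    return best["distance"], best["pose"]
```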

In one embodiment, determining the gripping point based on the position and the structure of the robot's fetching device includes:

retrieving the gripping-point determination library corresponding to the structure of the fetching device;

quantizing the position based on a preset first quantization model to determine a first quantization parameter;

quantizing the information on the object to be fetched based on a preset second quantization model to determine a second quantization parameter;

determining a parameter set based on the first quantization parameter and the second quantization parameter;

determining the gripping point based on the parameter set and the gripping-point determination library.

The working principle and beneficial effects of the above technical solution are as follows:

Fetching devices of different structures are provided with correspondingly different gripping-point determination libraries.

By comprehensively analyzing the structure of the fetching device together with the shape and position of the object, the gripping point is selected adaptively, ensuring a firm grasp and enabling the robot to adapt its fetching to the grasping of different objects, which improves the applicability of robot fetching. Specifically, the parameter information of the position is quantized by the first quantization model to determine at least one first quantization parameter, and the information on the object to be fetched is quantized by the second quantization model to determine at least one second quantization parameter; a parameter set is then constructed, the parameter set is matched against the analysis sets of the gripping-point determination library, and the gripping-point determination set associated with the matched analysis set is extracted; the gripping-point determination set is parsed to determine the offsets relative to the center coordinates of the object at the given position, and the coordinates of the gripping points are thereby determined, realizing gripping by the robot. The second quantization parameters include a parameter corresponding to the weight of the object, parameters corresponding to the three-dimensional dimensions of the object, and the like.
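An illustrative reading of the quantization-and-lookup procedure is given below. The particular features chosen for the two quantization models (distance, direction, pose angles, weight, bounding-box dimensions) and the nearest-neighbour matching against the library are assumptions; the patent does not specify the quantization models or the library format.

```python
import numpy as np

def first_quantization(position):
    """Quantize the position (distance, second direction vector, pose angles)."""
    return [position["distance"], *position["direction"], *position["pose"]]

def second_quantization(object_info):
    """Quantize the object information (weight and three-dimensional size)."""
    return [object_info["weight"], *object_info["dimensions"]]

def determine_gripping_points(position, object_info, library, object_center):
    """Match the parameter set against the library's analysis sets and apply the
    associated gripping-point offsets to the object's center coordinates."""
    params = np.array(first_quantization(position) + second_quantization(object_info))
    entry = min(library,
                key=lambda e: np.linalg.norm(params - np.array(e["analysis_set"])))
    center = np.asarray(object_center, dtype=float)
    return [tuple(center + np.array(offset)) for offset in entry["offsets"]]
```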

The present invention also provides an AI recognition control system for robot object fetching, as shown in Fig. 2, including:

a first acquisition module 1, configured to obtain a fetching instruction;

a parsing module 2, configured to parse the fetching instruction and determine information on the object to be fetched;

a second acquisition module 3, configured to acquire a perimeter image;

a first determination module 4, configured to determine the position of the target object from the perimeter image based on an AI recognition model;

a second determination module 5, configured to determine a gripping point based on the position and the structure of the robot's fetching device;

a control module 6, configured to control the robot to fetch the object based on the gripping point.

In one embodiment, the first acquisition module 1 obtains the fetching instruction by performing the following operations:

obtaining fetching request information from a server;

parsing the fetching request information and generating the fetching instruction;

or,

obtaining first audio collected by an audio collection module arranged in the fetching area;

parsing the first audio to determine whether to enter a fetching-instruction generation mode;

when the fetching-instruction generation mode is entered, obtaining a first image of the fetching area collected by a first image acquisition module arranged in the fetching area;

parsing the first image to determine whether a first type of item is present in the first image;

when a first type of item is present, extracting a first area image of the first type of item from the first image and generating the fetching instruction based on the extracted first area image;

wherein the first type of item includes: a form or a marker authorizing the fetching of items.

In one embodiment, when parsing the first image determines that no first type of item is present in the first image, the first acquisition module 1 further performs the following operations:

determining whether a second type of item is present;

when a second type of item is present, building a list of items to be fetched and outputting first inquiry information through the touch screen arranged in the fetching area, the first inquiry information including: whether to fetch items and whether to perform identity verification;

upon receiving affirmative feedback to the inquiry information entered by the fetching person through the touch screen, acquiring a second image of the fetching person collected by the second image acquisition module arranged in the fetching area;

verifying the authority of the fetching person based on the second image;

when the verification passes, outputting second inquiry information through the touch screen, the second inquiry information including: the quantity of each item on the list of items to be fetched;

receiving the input information corresponding to the inquiry information entered by the fetching person through the touch screen, and generating the fetching instruction based on the input information and the list of items to be fetched.

In one embodiment, the first determination module 4 determines the position of the target object from the perimeter image based on the AI recognition model by performing the following operations:

determining, based on the information on the object to be fetched, a plurality of standard images corresponding to the object to be fetched from a preset standard image library;

performing edge extraction and segmentation on the perimeter image to determine a plurality of second area images;

inputting a standard image and a second area image into the AI recognition model to determine whether the two match;

determining a matching standard image and second area image;

taking the object corresponding to the second area image that matches the standard image as the target object;

determining a first direction vector corresponding to the center of the perimeter image containing the target object;

determining a second direction vector corresponding to the center of the second area image based on the position of the second area image within the perimeter image and the first direction vector;

determining the distance and pose of the target object based on the second area image and a preset object distance and pose recognition library corresponding to the standard image;

determining the position of the target object based on the second direction vector, the distance and the pose.

In one embodiment, the second determination module 5 determines the gripping point based on the position and the structure of the robot's fetching device by performing the following operations:

retrieving the gripping-point determination library corresponding to the structure of the fetching device;

quantizing the position based on a preset first quantization model to determine a first quantization parameter;

quantizing the information on the object to be fetched based on a preset second quantization model to determine a second quantization parameter;

determining a parameter set based on the first quantization parameter and the second quantization parameter;

determining the gripping point based on the parameter set and the gripping-point determination library.

Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. Thus, if these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.

Claims (10)

1.一种机器人取物的AI识别控制方法,其特征在于,包括:1. A kind of AI recognition control method of robot fetching, it is characterized in that, comprising: 获取取物指令;Obtain fetch instruction; 解析取物指令,确定待取物体的信息;Analyze the fetching instruction to determine the information of the object to be fetched; 获取周界图像;Get the perimeter image; 基于AI识别模型从所述周界图像中确定目标物体的位置;Determining the position of the target object from the perimeter image based on the AI recognition model; 基于所述位置及机器人的取物装置的结构,确定夹取点;Determine the gripping point based on the position and the structure of the fetching device of the robot; 基于所述夹取点,控制所述机器人进行取物。Based on the gripping point, the robot is controlled to fetch objects. 2.如权利要求1所述的机器人取物的AI识别控制方法,其特征在于,所述获取取物指令,包括:2. The AI recognition control method of robot fetching as claimed in claim 1, wherein said acquiring fetching instructions includes: 从服务器获取取物请求信息;Obtain fetch request information from the server; 解析所述取物请求信息,生成所述取物指令;Analyzing the retrieval request information to generate the retrieval instruction; 或,or, 获取设置在取物区域的音频采集模块采集的第一音频;Obtain the first audio collected by the audio collection module arranged in the fetching area; 解析所述第一音频,确定是否进入取物指令生成模式;Analyzing the first audio to determine whether to enter the fetch instruction generation mode; 当进入取物指令生成模式时,获取设置在取物区域的第一图像采集模块采集的取物区域内的第一图像;When entering the fetching instruction generation mode, acquire the first image in the fetching area collected by the first image acquisition module set in the fetching area; 解析所述第一图像,确定所述第一图像中是否存在第一类物品;Analyzing the first image to determine whether there is a first type of item in the first image; 当存在所述第一类物品时,从所述第一图像中提取第一类物品的第一区域图像并基于提取的所述第一区域图像生成所述取物指令;When the first type of item exists, extracting a first area image of the first type of item from the first image and generating the retrieval instruction based on the extracted first area image; 其中,所述第一类物品包括:授权取物的表单或标识物。Wherein, the first type of item includes: a form or an identifier for authorizing the retrieval of the item. 3.如权利要求2所述的机器人取物的AI识别控制方法,其特征在于,在解析所述第一图像时,确定所述第一图像中不存在所述第一类物品时,还包括:3. 
The AI recognition control method for robot fetching objects according to claim 2, wherein when analyzing the first image, when it is determined that the first type of item does not exist in the first image, further comprising: : 确定是否存在第二类物品;determine whether a second class item is present; 当存在第二类物品时,构建待取物品清单并通过设置在所述取物区域的触摸屏输出第一问询信息;所述第一问询信息包括:是否进行取物以及是否进行身份核验;When there is a second type of item, build a list of items to be picked up and output the first inquiry information through the touch screen arranged in the picking area; the first inquiry information includes: whether to carry out picking and whether to perform identity verification; 接收取物人员通过所述触摸屏输入的对应所述问询信息的肯定反馈时,通过设置在所述取物区域的第二图像采集模块采集的取物人员的第二图像;When receiving the affirmative feedback corresponding to the inquiry information input by the pick-up personnel through the touch screen, the second image of the pick-up personnel collected by the second image acquisition module arranged in the pick-up area; 基于所述第二图像,对所述取物人员的权限进行核验;Based on the second image, verify the authority of the fetching person; 当核验通过时,通过所述触摸屏输出第二问询信息;所述第二问询信息包括:所述待取物品清单上各个待取物品的数量;When the verification is passed, the second inquiry information is output through the touch screen; the second inquiry information includes: the quantity of each item to be picked up on the item list to be picked up; 接收取物人员通过所述触摸屏输入的对应所述问询信息的输入信息并基于所述输入信息和所述待取物品清单生成所述取物指令。receiving the input information corresponding to the query information input by the pick-up personnel through the touch screen, and generating the pick-up instruction based on the input information and the list of items to be picked. 4.如权利要求1所述的机器人取物的AI识别控制方法,其特征在于,所述基于所述AI识别模型从所述周界图像中确定目标物体的位置,包括:4. The AI recognition control method of robot fetching as claimed in claim 1, wherein the determining the position of the target object from the perimeter image based on the AI recognition model comprises: 基于待取物体的信息,从预设的标准图像库,确定对应待取物体的多个标准图像;Determining a plurality of standard images corresponding to the object to be obtained from a preset standard image library based on the information of the object to be obtained; 对所述周界图像进行边缘提取并分割,确定多个第二区域图像;performing edge extraction and segmentation on the perimeter image to determine a plurality of second area images; 将所述标准图像和所述第二区域图像输入所述AI识别模型,确定两者的是否匹配;Input the standard image and the second area image into the AI recognition model, and determine whether the two match; 确定相匹配的所述标准图像和所述第二区域图像;determining matching of the standard image and the second area image; 将与所述标准图像匹配的所述第二区域图像对应物体作为所述目标物体;taking the object corresponding to the second region image that matches the standard image as the target object; 确定包含所述目标物体的所述周界图像的中心对应的第一方向向量;determining a first direction vector corresponding to the center of the perimeter image containing the target object; 基于所述第二区域图像在所述周界图像中的位置和所述第一方向向量,确定所述第二区域图像的中心对应的第二方向向量;determining a second direction vector corresponding to the center of the second area image based on the position of the second area image in the perimeter image and the first direction vector; 基于所述第二区域图像和预设的所述标准图像对应的物体距离及位姿识别库,确定所述目标物体的距离及位姿;Determine the distance and pose of the target object based on the second area image and the preset object distance and pose recognition library corresponding to the standard image; 基于所述第二方向向量、所述距离和所述位姿,确定所述目标物体的位置。A position of the target object is determined based on the second direction vector, the distance, and the pose. 5.如权利要求4所述的机器人取物的AI识别控制方法,其特征在于,所述基于所述位置及机器人的取物装置的结构,确定夹取点,包括:5. 
The AI recognition control method for robot fetching as claimed in claim 4, wherein the determination of the gripping point based on the position and the structure of the fetching device of the robot includes: 调取对应取物装置的结构的夹取点确定库;Calling the library for determining the gripping point corresponding to the structure of the fetching device; 基于预设的第一量化模型对所述位置进行量化,确定第一量化参数Quantify the position based on a preset first quantization model, and determine a first quantization parameter 基于预设的第二量化模型对所述待取物体的信息进行量化,确定第二量化参数;Quantify the information of the object to be acquired based on a preset second quantization model, and determine a second quantization parameter; 基于所述第一量化参数和所述第二量化参数,确定参数集;determining a parameter set based on the first quantization parameter and the second quantization parameter; 基于所述参数集和所述夹取点确定库,确定所述夹取点。The gripping point is determined based on the parameter set and the gripping point determination library. 6.一种机器人取物的AI识别控制系统,其特征在于,包括:6. An AI recognition control system for robot fetching, characterized in that it comprises: 第一获取模块,用于获取取物指令;The first obtaining module is used to obtain the retrieval instruction; 解析模块,用于解析取物指令,确定待取物体的信息;The analysis module is used to analyze the fetching instruction and determine the information of the object to be fetched; 第二获取模块,用于获取周界图像;The second acquisition module is used to acquire the perimeter image; 第一确定模块,用于基于AI识别模型从所述周界图像中确定目标物体的位置;The first determination module is used to determine the position of the target object from the perimeter image based on the AI recognition model; 第二确定模块,用于基于所述位置及机器人的取物装置的结构,确定夹取点;The second determination module is used to determine the gripping point based on the position and the structure of the object fetching device of the robot; 控制模块,用于基于所述夹取点,控制所述机器人进行取物。A control module, configured to control the robot to pick up objects based on the gripping point. 7.如权利要求6所述的机器人取物的AI识别控制系统,其特征在于,所述第一获取模块获取取物指令,执行如下操作:7. The AI recognition control system for robot fetching as claimed in claim 6, wherein the first acquisition module obtains the fetching instruction and performs the following operations: 从服务器获取取物请求信息;Obtain fetch request information from the server; 解析所述取物请求信息,生成所述取物指令;Analyzing the retrieval request information to generate the retrieval instruction; 或,or, 获取设置在取物区域的音频采集模块采集的第一音频;Obtain the first audio collected by the audio collection module arranged in the fetching area; 解析所述第一音频,确定是否进入取物指令生成模式;Analyzing the first audio to determine whether to enter the fetch instruction generation mode; 当进入取物指令生成模式时,获取设置在取物区域的第一图像采集模块采集的取物区域内的第一图像;When entering the fetching instruction generation mode, acquire the first image in the fetching area collected by the first image acquisition module set in the fetching area; 解析所述第一图像,确定所述第一图像中是否存在第一类物品;Analyzing the first image to determine whether there is a first type of item in the first image; 当存在所述第一类物品时,从所述第一图像中提取第一类物品的第一区域图像并基于提取的所述第一区域图像生成所述取物指令;When the first type of item exists, extracting a first area image of the first type of item from the first image and generating the retrieval instruction based on the extracted first area image; 其中,所述第一类物品包括:授权取物的表单或标识物。Wherein, the first type of item includes: a form or an identifier for authorizing the retrieval of the item. 8.如权利要求7所述的机器人取物的AI识别控制系统,其特征在于,在解析所述第一图像时,确定所述第一图像中不存在所述第一类物品时,所述第一获取模块还执行如下操作:8. 
8. The AI recognition control system for robot fetching according to claim 7, wherein, when the first image is analyzed and it is determined that the first type of item does not exist in the first image, the first acquisition module further performs the following operations:
determining whether a second type of item is present;
when a second type of item is present, building a list of items to be fetched and outputting first inquiry information through a touch screen arranged in the fetching area, the first inquiry information including whether to fetch and whether to perform identity verification;
upon receiving, through the touch screen, affirmative feedback from the fetching person corresponding to the inquiry information, acquiring a second image of the fetching person collected by a second image acquisition module arranged in the fetching area;
verifying the authority of the fetching person based on the second image;
when the verification passes, outputting second inquiry information through the touch screen, the second inquiry information including the quantity of each item on the list of items to be fetched;
receiving, through the touch screen, input information from the fetching person corresponding to the inquiry information, and generating the fetching instruction based on the input information and the list of items to be fetched.

9. The AI recognition control system for robot fetching according to claim 6, wherein the first determination module determines the position of the target object from the perimeter image based on the AI recognition model by performing the following operations:
determining, based on the information of the object to be fetched, a plurality of standard images corresponding to the object to be fetched from a preset standard image library;
performing edge extraction and segmentation on the perimeter image to determine a plurality of second area images;
inputting the standard images and the second area images into the AI recognition model to determine whether they match;
determining a matching standard image and second area image;
taking the object corresponding to the second area image that matches the standard image as the target object;
determining a first direction vector corresponding to the center of the perimeter image containing the target object;
determining a second direction vector corresponding to the center of the second area image based on the position of the second area image within the perimeter image and the first direction vector;
determining the distance and pose of the target object based on the second area image and a preset object distance and pose recognition library corresponding to the standard image;
determining the position of the target object based on the second direction vector, the distance, and the pose.
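The direction-vector step of claim 9 (deriving the region's "second" direction vector from the image-centre "first" direction vector and the region's pixel position) can be spelled out without a full intrinsics matrix; the short Python sketch below assumes a pinhole model with focal lengths fx and fy and a world z-up convention, none of which are defined by the patent itself.

# Derive the region-centre ray from the optical-axis ray plus the pixel offset.
import numpy as np

def second_direction_vector(first_dir, region_center_px, image_center_px, fx, fy):
    # first_dir: unit vector of the optical axis (the "first direction vector").
    # region_center_px / image_center_px: (u, v) pixel coordinates.
    du = (region_center_px[0] - image_center_px[0]) / fx
    dv = (region_center_px[1] - image_center_px[1]) / fy
    # Build an orthonormal (right, down) basis around the optical axis;
    # assumes first_dir is not parallel to the world up direction.
    up_hint = np.array([0.0, 0.0, 1.0])
    right = np.cross(first_dir, up_hint)
    right /= np.linalg.norm(right)
    down = np.cross(first_dir, right)
    ray = first_dir + du * right + dv * down
    return ray / np.linalg.norm(ray)

# Example: camera looking along +x, region centre 40 px right of image centre, fx = 800.
ray = second_direction_vector(np.array([1.0, 0.0, 0.0]), (680, 360), (640, 360), 800.0, 800.0)

Scaling this unit ray by the distance from the distance-and-pose library gives the target position used in the final step of claim 9.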
10. The AI recognition control system for robot fetching according to claim 9, wherein the second determination module determines the gripping point based on the position and the structure of the fetching device of the robot by performing the following operations:
calling a gripping-point determination library corresponding to the structure of the fetching device;
quantizing the position based on a preset first quantization model to determine a first quantization parameter;
quantizing the information of the object to be fetched based on a preset second quantization model to determine a second quantization parameter;
determining a parameter set based on the first quantization parameter and the second quantization parameter;
determining the gripping point based on the parameter set and the gripping-point determination library.
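To make the quantize-then-look-up idea of claims 5 and 10 concrete, here is a minimal Python sketch; the binning scheme, the attribute codes, and the contents of the gripping-point library are illustrative assumptions of this sketch only, not the patent's actual quantization models or library.

# Quantize position and object information into a parameter set, then look the
# set up in a gripper-specific "gripping-point determination library".
import numpy as np

def first_quantization(position, bin_size=0.05):
    # Discretize a 3D position (metres) into bin indices (first quantization parameter).
    return tuple(int(round(c / bin_size)) for c in position)

def second_quantization(object_info):
    # Map object attributes to a discrete code (second quantization parameter).
    return (object_info["shape"], object_info["size_class"])

def determine_gripping_point(position, object_info, grip_library,
                             default_offset=(0.0, 0.0, 0.02)):
    params = (first_quantization(position), second_quantization(object_info))
    # The library maps a parameter set to an offset from the object position
    # at which the gripper should close.
    offset = grip_library.get(params, default_offset)
    return tuple(np.asarray(position, dtype=float) + np.asarray(offset, dtype=float))

# Example entry: grasp a small cylinder near (0.40, 0.10, 0.05) m two centimetres above centre.
grip_library = {((8, 2, 1), ("cylinder", "small")): (0.0, 0.0, 0.02)}
point = determine_gripping_point((0.40, 0.10, 0.05),
                                 {"shape": "cylinder", "size_class": "small"},
                                 grip_library)
# point == (0.4, 0.1, 0.07)

Keying the library on both quantization parameters mirrors the claim's "parameter set"; a real system would populate the library per gripper structure, which is why the claim first calls the library that corresponds to the fetching device.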
CN202310062938.5A 2023-01-19 2023-01-19 An AI recognition control method and system for robots to pick up objects Active CN116184892B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310062938.5A CN116184892B (en) 2023-01-19 2023-01-19 An AI recognition control method and system for robots to pick up objects

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310062938.5A CN116184892B (en) 2023-01-19 2023-01-19 An AI recognition control method and system for robots to pick up objects

Publications (2)

Publication Number Publication Date
CN116184892A true CN116184892A (en) 2023-05-30
CN116184892B CN116184892B (en) 2024-02-06

Family

ID=86432111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310062938.5A Active CN116184892B (en) 2023-01-19 2023-01-19 An AI recognition control method and system for robots to pick up objects

Country Status (1)

Country Link
CN (1) CN116184892B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118061196A (en) * 2024-04-17 2024-05-24 中建八局西南建设工程有限公司 A clamping device adaptation system based on object characteristics

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190099891A1 (en) * 2017-10-02 2019-04-04 Canon Kabushiki Kaisha Information processing apparatus, method, and robot system
CN109910018A (en) * 2019-04-26 2019-06-21 清华大学 Robotic virtual-real interactive operation execution system and method with visual semantic perception
EP3587044A1 (en) * 2018-06-28 2020-01-01 Sick Ag Method for gripping objects in a search area, control unit and positioning system
CN110712202A (en) * 2019-09-24 2020-01-21 鲁班嫡系机器人(深圳)有限公司 Special-shaped component grabbing method, device and system, control device and storage medium
CN110963209A (en) * 2019-12-27 2020-04-07 中电海康集团有限公司 Garbage sorting device and method based on deep reinforcement learning
CN111145257A (en) * 2019-12-27 2020-05-12 深圳市越疆科技有限公司 Article grabbing method and system and article grabbing robot
CN112365654A (en) * 2020-11-16 2021-02-12 深圳华侨城文化旅游科技集团有限公司 AI vending machine and system
CN113352314A (en) * 2020-03-06 2021-09-07 思特威(上海)电子科技股份有限公司 Robot motion control system and method based on closed-loop feedback
WO2021198053A1 (en) * 2020-04-03 2021-10-07 Beumer Group A/S Pick and place robot system, method, use and sorter system
CN113688825A (en) * 2021-05-17 2021-11-23 海南师范大学 An AI intelligent garbage identification and classification system and method
US20220016764A1 (en) * 2019-01-29 2022-01-20 Japan Cash Machine Co., Ltd. Object grasping system
DE102021104001B3 (en) * 2021-02-19 2022-04-28 Gerhard Schubert Gesellschaft mit beschränkter Haftung Method for automatically grasping, in particular moving, objects
CN114882214A (en) * 2022-04-02 2022-08-09 华南理工大学 Method for predicting object grabbing sequence from image based on deep learning
CN218195270U (en) * 2022-07-19 2023-01-03 湖南云集环保科技有限公司 AI image recognition combines multi-angle grabbing device for 3D positioning robot

Also Published As

Publication number Publication date
CN116184892B (en) 2024-02-06

Similar Documents

Publication Publication Date Title
KR100651010B1 (en) Computer-readable recording medium recording image contrast system, image contrast method and image contrast program using three-dimensional object model
JP6397226B2 (en) Apparatus, apparatus control method, and program
JP2019040227A (en) Inventory management system
CN113657565A (en) Method, device, robot and cloud server for moving robot across floors
CN104903892A (en) Object-based Image Retrieval System and Retrieval Method
CN106156888A (en) A kind of polling path method and device for planning of crusing robot
US12387494B2 (en) Method for warehouse storage-location monitoring, computer device, and non-volatile storage medium
CN116187718A (en) Method and system for identifying and sorting intelligent goods based on computer vision
JP2014056486A (en) Image network system, image display terminal, and image processing method
US12288415B2 (en) Selecting image to display based on facial distance between target person and another person
US20250037434A1 (en) Method and apparatus for updating target detection model
CN113901911A (en) Image recognition method, image recognition device, model training method, model training device, electronic equipment and storage medium
CN116184892B (en) An AI recognition control method and system for robots to pick up objects
CN113592390A (en) Warehousing digital twin method and system based on multi-sensor fusion
CN105760844A (en) Video stream data processing method, apparatus and system
GB2523776A (en) Methods for 3D object recognition and registration
JP2018185240A (en) Positioning device
CN114770559B (en) A robot fetching control system and method
CN113298453A (en) Data processing method and system and electronic equipment
CN116229469A (en) Multi-target goods picking system and method based on AR technology
CN113127758A (en) Article storage point pushing method and device, electronic equipment and storage medium
CN117216001A (en) File management system and method based on cloud platform
CN117196473A (en) Machine vision-based management method, system and media for the entry and exit of electric power tools into the warehouse
WO2022015480A1 (en) Directional guidance and layout compliance for item collection
WO2022132025A1 (en) Autonomous drone inventory of palleted collections placed within pallet bays of an indoor warehouse

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20250224

Address after: Room 117-2, Building 4, No. 77 Wengang South Road, Xinhe Street, Yannan High tech Zone, Yancheng City, Jiangsu Province, China 224000

Patentee after: Yancheng Surui Information Technology Co.,Ltd.

Country or region after: China

Address before: 224000 No. 1 Hope Avenue Middle Road, Tinghu District, Yancheng City, Jiangsu Province

Patentee before: YANCHENG INSTITUTE OF TECHNOLOGY

Country or region before: China

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20250331

Address after: Building 2, 2nd Floor, Standard Factory Building, No. 2 Fengyi Road, Longgang Town Industrial Park, Yandu District, Yancheng City, Jiangsu Province, China 224000

Patentee after: Yancheng Jianhaoyue Intelligent Technology Co.,Ltd.

Country or region after: China

Address before: Room 117-2, Building 4, No. 77 Wengang South Road, Xinhe Street, Yannan High tech Zone, Yancheng City, Jiangsu Province, China 224000

Patentee before: Yancheng Surui Information Technology Co.,Ltd.

Country or region before: China