
WO2021012681A1 - Object carrying method applied to carrying robot, and carrying robot thereof - Google Patents

Object carrying method applied to carrying robot, and carrying robot thereof

Info

Publication number
WO2021012681A1
WO2021012681A1 (PCT/CN2020/078286)
Authority
WO
WIPO (PCT)
Prior art keywords
handling
target
axis
phase
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2020/078286
Other languages
French (fr)
Chinese (zh)
Inventor
张建民
龙佳乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuyi University Fujian
Original Assignee
Wuyi University Fujian
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuyi University Fujian
Publication of WO2021012681A1 publication Critical patent/WO2021012681A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/021Optical sensing devices
    • B25J19/023Optical sensing devices including video camera means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/002Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Manipulator (AREA)

Abstract

Provided are an object carrying method applied to a carrying robot, and a carrying robot thereof. A first camera (8) photographs a top-view image, from which the shape of the carried object and its X-axis and Y-axis coordinates are acquired by means of OpenCV; a second camera (6) photographs fringe patterns of the carried object, from which the Z-axis coordinate of the carried object is measured by means of fringe projection measurement; the X-axis, Y-axis and Z-axis coordinates are combined to obtain the three-dimensional coordinates of the carried object, and the carrying robot controls a robot arm to grab the carried object according to these coordinates, so that the morphological features of the carried object are automatically recognized and the object is grabbed accordingly. The method and the robot are highly intelligent, such that multiple carried objects at different positions can be carried and diversified carrying tasks can be completed.

Description

Target handling method applied to a handling robot, and handling robot thereof

Technical Field

The invention relates to the field of intelligent robots, and in particular to a target handling method applied to a handling robot and a handling robot thereof.

Background Art

Industrial robots are used more and more widely in industrial production, and industrial production is becoming increasingly intelligent. However, most industrial robots currently used in handling processes run according to built-in, pre-set programs and can only transport a single type of workpiece placed at a specific fixed position. Once the position or the shape of the workpiece changes, the industrial robot cannot respond to the change and cannot complete the handling task.

Summary of the Invention

The purpose of the present invention is to solve at least one of the technical problems existing in the prior art by providing a target handling method applied to a handling robot, and a handling robot thereof, which can cope with diversified handling tasks.

The technical solutions adopted by the present invention to solve the above problems are as follows:

In a first aspect of the present invention, a target handling method applied to a handling robot is provided. The handling robot includes a traveling mechanism, a first camera, a second camera, a projector, a controller and a mechanical arm. The target handling method includes the following steps:

the handling robot moves to a position in front of the handling target; the handling robot photographs the handling target from above with the first camera to obtain a top-view image of the handling target;

the handling robot projects fringes onto the handling target with the projector, and photographs the handling target from the front with the second camera to obtain two fringe patterns of different wavelengths;

the handling robot uses OpenCV to obtain, from the top-view image, the shape of the handling target and the X-axis and Y-axis coordinates of the handling target;

the handling robot uses fringe projection measurement to measure the Z-axis coordinate of the handling target from the fringe patterns;

the handling robot controls the mechanical arm to grab the handling target according to the three-dimensional coordinates of the handling target.

The above target handling method applied to a handling robot has at least the following beneficial effects: the first camera takes a top-view image, from which the shape of the handling target and its X-axis and Y-axis coordinates are obtained with OpenCV; at the same time, the second camera captures fringe patterns of the handling target, from which the Z-axis coordinate is measured by fringe projection measurement; these are combined to give the three-dimensional coordinates of the handling target, and the handling robot controls the mechanical arm to grab the handling target according to those coordinates. The method enables the handling robot to automatically recognize the morphological characteristics of the handling target and carry it accordingly. It is highly intelligent and can handle a variety of handling targets at different positions, completing diversified handling tasks.

According to the first aspect of the present invention, obtaining the shape of the handling target and its X-axis and Y-axis coordinates from the top-view image using OpenCV includes the following steps:

binarizing the top-view image so that it presents a black area and a white area;

marking the handling target with the largest bounding rectangle, and taking the coordinates of the center point of that rectangle as the X-axis and Y-axis coordinates of the handling target;

collecting connected regions in the white area of the top-view image;

performing edge recognition on the connected regions after noise processing to obtain the shape of the handling target.

According to the first aspect of the present invention, measuring the Z-axis coordinate of the handling target from the fringe patterns using fringe projection measurement specifically includes:

processing the fringe patterns with a six-step phase-shifting fringe analysis method to obtain wrapped phase maps φ₁(y) and φ₂(y);

phase-unwrapping the wrapped phase maps with a dual-wavelength fringe phase unwrapping algorithm based on wavelength selection to obtain unwrapped phase maps;

reconstructing a three-dimensional point cloud by combining the calibration parameters and the unwrapped phase maps;

filtering the three-dimensional point cloud to obtain the object point cloud;

calculating the Z-axis coordinate of the handling target from the object point cloud data.

According to the first aspect of the present invention, phase-unwrapping the wrapped phase maps with the dual-wavelength fringe phase unwrapping algorithm based on wavelength selection to obtain the unwrapped phase maps includes the following steps:

establishing a lookup table of the relationship between m₁(y), m₂(y) and [m₂(y)λ₂ − m₁(y)λ₁], where λ₁ and λ₂ are the wavelengths of the two fringes projected by the projector onto the handling target, and m₁(y) and m₂(y) are the fringe orders of the two fringes, respectively;

calculating [λ₁φ₁(y) − λ₂φ₂(y)], rounding it to obtain a result value, and finding in the lookup table the row whose value of [m₂(y)λ₂ − m₁(y)λ₁] is closest to the result value, to obtain the corresponding target values of m₁(y) and m₂(y);

performing phase unwrapping according to the relationship between the absolute phase and the wrapped phase (formula image PCTCN2020078286-appb-000001) and the target values of m₁(y) and m₂(y), to obtain the unwrapped phase map, where Φ₁(y) and Φ₂(y) are the absolute phases.

According to the first aspect of the present invention, reconstructing the three-dimensional point cloud by combining the calibration parameters and the unwrapped phase map specifically includes:

establishing an X-axis digital matrix, a Y-axis digital matrix and a Z-axis digital matrix from the unwrapped phase map;

combining the calibration parameters to convert from the pixel coordinate system to the world coordinate system, converting the X-axis digital matrix, the Y-axis digital matrix and the Z-axis digital matrix into an X-axis spatial matrix, a Y-axis spatial matrix and a Z-axis spatial matrix, respectively;

reconstructing the three-dimensional point cloud from the X-axis, Y-axis and Z-axis coordinate data of each point.

According to the first aspect of the present invention, filtering the three-dimensional point cloud to obtain the object point cloud specifically includes:

designing a background noise filter according to the modulated light intensity of the projected fringes reflected from the surface of the handling target;

using the background noise filter to filter out pixels and noise points in the three-dimensional point cloud that are little affected by the fringes, to obtain the object point cloud.

According to the first aspect of the present invention, calculating the Z-axis coordinate of the handling target from the object point cloud data specifically includes: extracting the spatial coordinates of each point from the object point cloud, and taking the difference between the height of the highest point and the height of the lowest point of the carried object to obtain the height of the carried object.

According to the first aspect of the present invention, the mechanical arm is a six-degree-of-freedom mechanical arm including a first arm, a second arm and a third arm; controlling the mechanical arm to grab the handling target according to the three-dimensional coordinates of the handling target specifically includes:

setting the origin O as the connection point between the mechanical arm and the handling robot, the lengths of the first arm, the second arm and the third arm being L₁, L₂ and L₃, respectively;

calculating the distance between the origin O and the handling target (formula image PCTCN2020078286-appb-000002) and the angle between the origin O and the handling target (formula image PCTCN2020078286-appb-000003);

calculating the bisector angle of the angle formed by the first arm and the second arm (formula image PCTCN2020078286-appb-000004), and the angle between the line from the origin O to the end of the second arm and the line from the origin O to the handling target (formula image PCTCN2020078286-appb-000005);

calculating the rotation angle of the first arm (formula image PCTCN2020078286-appb-000006), the rotation angle of the second arm (formula image PCTCN2020078286-appb-000007) and the rotation angle of the third arm (formula image PCTCN2020078286-appb-000008), where S is an angle coefficient with a value of 1 or -1.

In a second aspect of the present invention, a handling robot is provided, including a traveling mechanism, a first camera, a second camera, a projector, a mechanical arm, a controller and a memory; the memory is connected to the controller and stores instructions; the traveling mechanism, the first camera, the second camera, the projector and the mechanical arm are each connected to the controller; the controller executes the instructions to control the traveling mechanism, the first camera, the second camera, the projector and the mechanical arm so as to implement the target handling method applied to a handling robot according to the first aspect of the present invention.

The above handling robot has at least the following beneficial effects: the first camera takes a top-view image, from which the shape of the handling target and its X-axis and Y-axis coordinates are obtained with OpenCV; at the same time, the second camera captures fringe patterns of the handling target, from which the Z-axis coordinate is measured by fringe projection measurement; these are combined to give the three-dimensional coordinates of the handling target, and the handling robot controls the mechanical arm to grab the handling target according to those coordinates. The robot can automatically recognize the morphological characteristics of the handling target and grab it accordingly; it is highly intelligent and can handle a variety of handling targets at different positions, completing diversified handling tasks.

Description of the Drawings

The present invention is further described below with reference to the drawings and examples.

FIG. 1 is a flowchart of a target handling method applied to a handling robot according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of a handling robot according to an embodiment of the present invention;

FIG. 3 is a schematic diagram of the movement of the mechanical arm.

Detailed Description of the Embodiments

This section describes specific embodiments of the present invention in detail. Preferred embodiments of the present invention are shown in the drawings, whose function is to supplement the textual description with graphics so that each technical feature and the overall technical solution of the present invention can be understood intuitively and vividly; they should not be construed as limiting the scope of protection of the present invention.

In the description of the present invention, it should be understood that orientation descriptions, such as up, down, front, back, left and right, are based on the orientations or positional relationships shown in the drawings. They are used only to facilitate and simplify the description of the present invention, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, so they should not be construed as limiting the present invention.

If "first" and "second" are mentioned, they are used only to distinguish technical features, and should not be understood as indicating or implying relative importance, implicitly specifying the number of the indicated technical features, or implicitly specifying the order of the indicated technical features.

In the description of the present invention, unless otherwise clearly defined, terms such as arrange, install and connect should be understood in a broad sense, and those skilled in the art can reasonably determine the specific meanings of these terms in the present invention in combination with the specific content of the technical solution.

Referring to FIG. 1 and FIG. 2, an embodiment of the present invention provides a target handling method applied to a handling robot. The handling robot includes a traveling mechanism 4, a first camera 8, a second camera 6, a projector 7, a controller 2 and a mechanical arm 5.

The target handling method includes the following steps:

Step S100: the handling robot moves to a position in front of the handling target;

Step S200: the handling robot photographs the handling target from above with the first camera 8 to obtain a top-view image of the handling target;

Step S300: the handling robot projects fringes onto the handling target with the projector 7, and photographs the handling target from the front with the second camera 6 to obtain two fringe patterns of different wavelengths;

Step S400: the handling robot uses OpenCV to obtain, from the top-view image, the shape of the handling target and the X-axis and Y-axis coordinates of the handling target;

Step S500: the handling robot uses fringe projection measurement to measure the Z-axis coordinate of the handling target from the fringe patterns;

Step S600: the handling robot controls the mechanical arm 5 to grab the handling target according to the three-dimensional coordinates of the handling target.

In this embodiment, the first camera 8 takes a top-view image, from which the shape of the handling target and its X-axis and Y-axis coordinates are obtained with OpenCV; at the same time, the second camera 6 captures fringe patterns of the handling target, from which the Z-axis coordinate is measured by fringe projection measurement; these are combined to give the three-dimensional coordinates of the handling target, and the handling robot controls the mechanical arm 5 to grab the handling target according to those coordinates. The method enables the handling robot to automatically recognize the morphological characteristics of the handling target and carry it accordingly. It is highly intelligent and can handle a variety of handling targets at different positions, completing diversified handling tasks.

Further, step S400 specifically includes the following steps:

Step S410: binarize the top-view image so that it presents a black area and a white area, where the black area is mainly the background and the white area is mainly the handling target;

Step S420: mark the handling target with the largest bounding rectangle, and take the coordinates of the center point of that rectangle as the X-axis and Y-axis coordinates of the handling target;

Step S430: collect connected regions in the white area of the top-view image;

Step S440: perform edge recognition on the connected regions after noise processing to obtain the shape of the handling target.

Through step S400, OpenCV can quickly and accurately measure the shape of the handling target and its X-axis and Y-axis coordinates, which contributes to precise handling.
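
As a non-limiting illustration (this code is not part of the original disclosure), steps S410 to S440 can be sketched with OpenCV in Python as follows. The Otsu threshold, the morphological opening used for noise processing, and the assumption that the target appears brighter than the background are choices made for this sketch, not values specified in the patent.

```python
import cv2
import numpy as np

def locate_target_2d(top_view_gray):
    """Sketch of steps S410-S440 on a grayscale top-view image."""
    # S410: binarize so the background is black and the target is (mostly) white.
    _, binary = cv2.threshold(top_view_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # S430: collect connected white regions and keep the largest one as the target.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    if num < 2:
        return None
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    mask = (labels == largest).astype(np.uint8) * 255

    # S440: remove small noise, then find the outline (edge) of the target.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)

    # S420: bounding rectangle of the target; its center gives the X/Y coordinates.
    x, y, w, h = cv2.boundingRect(contour)
    center_x, center_y = x + w / 2.0, y + h / 2.0
    return contour, center_x, center_y
```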

Further, step S500 specifically includes the following steps:

Step S510: process the fringe patterns with a six-step phase-shifting fringe analysis method to obtain wrapped phase maps φ₁(y) and φ₂(y);

Step S520: phase-unwrap the wrapped phase maps with a dual-wavelength fringe phase unwrapping algorithm based on wavelength selection to obtain unwrapped phase maps;

Step S530: reconstruct a three-dimensional point cloud by combining the calibration parameters and the unwrapped phase maps;

Step S540: filter the three-dimensional point cloud to obtain the object point cloud;

Step S550: calculate the Z-axis coordinate of the handling target from the object point cloud data.
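
For context, the wrapped phase of step S510 is conventionally obtained with an N-step phase-shifting formula. The sketch below (not part of the original disclosure) assumes six fringe images per wavelength with equal phase shifts of 2π/6, which is the usual convention for a six-step method but is not spelled out in the patent text.

```python
import numpy as np

def wrapped_phase(images):
    """Six-step phase-shifting analysis (step S510, assumed shift sequence).

    `images` is a list of 6 fringe images of the same wavelength, assumed to be
    shifted by 2*pi/6 between successive shots. Returns the wrapped phase map.
    """
    n = len(images)                                  # n = 6 for the six-step method
    deltas = 2.0 * np.pi * np.arange(n) / n
    stack = np.stack([img.astype(np.float64) for img in images])
    num = -np.tensordot(np.sin(deltas), stack, axes=1)
    den = np.tensordot(np.cos(deltas), stack, axes=1)
    return np.arctan2(num, den)                      # wrapped phase in (-pi, pi]
```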

Further, step S520 includes the following steps:

Step S521: establish a lookup table of the relationship between m₁(y), m₂(y) and [m₂(y)λ₂ − m₁(y)λ₁], where λ₁ and λ₂ are the wavelengths of the two fringes projected by the projector 7 onto the handling target, and m₁(y) and m₂(y) are the fringe orders of the two fringes, respectively;

Step S522: calculate [λ₁φ₁(y) − λ₂φ₂(y)], round it to obtain a result value, and find in the lookup table the row whose value of [m₂(y)λ₂ − m₁(y)λ₁] is closest to the result value, to obtain the corresponding target values of m₁(y) and m₂(y);

Step S523: perform phase unwrapping according to the relationship between the absolute phase and the wrapped phase (formula image PCTCN2020078286-appb-000009) and the target values of m₁(y) and m₂(y), to obtain the unwrapped phase map, where Φ₁(y) and Φ₂(y) are the absolute phases.
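
The lookup-table unwrapping of steps S521 to S523 can be sketched as follows (not part of the original disclosure). Because the patent's formula images do not reproduce here, the sketch assumes the standard relation Φᵢ(y) = φᵢ(y) + 2π·mᵢ(y) between absolute and wrapped phase, and it normalises λ₁φ₁(y) − λ₂φ₂(y) by 2π before the nearest-row search; both are assumptions consistent with, but not quoted from, the patent.

```python
import numpy as np

def build_order_lut(lambda1, lambda2, max_m1, max_m2):
    """Step S521 sketch: table of (m1, m2, m2*lambda2 - m1*lambda1) over candidate orders."""
    rows = [(m1, m2, m2 * lambda2 - m1 * lambda1)
            for m1 in range(max_m1 + 1)
            for m2 in range(max_m2 + 1)]
    return np.array(rows, dtype=float)

def unwrap_dual_wavelength(phi1, phi2, lut, lambda1, lambda2):
    """Steps S522-S523 sketch; inputs are flattened wrapped-phase arrays.

    Assumes Phi_i(y) = phi_i(y) + 2*pi*m_i(y); the 2*pi normalisation of the
    lookup key is an assumption made here to keep that relation consistent.
    """
    phi1 = np.ravel(np.asarray(phi1, dtype=float))
    phi2 = np.ravel(np.asarray(phi2, dtype=float))
    key = (lambda1 * phi1 - lambda2 * phi2) / (2.0 * np.pi)
    # Nearest row in the lookup table for every pixel (step S522).
    idx = np.argmin(np.abs(lut[:, 2][:, None] - key[None, :]), axis=0)
    m1, m2 = lut[idx, 0], lut[idx, 1]
    # Phase unwrapping with the selected fringe orders (step S523).
    Phi1 = phi1 + 2.0 * np.pi * m1
    Phi2 = phi2 + 2.0 * np.pi * m2
    return Phi1, Phi2
```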

Further, step S530 specifically includes the following steps:

Step S531: from the unwrapped phase map, establish an X-axis digital matrix, a Y-axis digital matrix and a Z-axis digital matrix;

Step S532: combine the calibration parameters to convert from the pixel coordinate system to the world coordinate system, converting the X-axis digital matrix, the Y-axis digital matrix and the Z-axis digital matrix into an X-axis spatial matrix, a Y-axis spatial matrix and a Z-axis spatial matrix, respectively;

Step S533: reconstruct the three-dimensional point cloud from the X-axis, Y-axis and Z-axis coordinate data of each point.
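
One common way to realise the pixel-to-world conversion of steps S532 and S533 is to back-project through a calibrated pinhole model and then apply a rigid camera-to-world transform; the sketch below (not part of the original disclosure) assumes that model and that the calibration parameters are available as an intrinsic matrix K plus a rotation R and translation t.

```python
import numpy as np

def pixels_to_world(u_grid, v_grid, z_map, K, R, t):
    """Sketch of steps S532-S533: lift pixel coordinates plus per-pixel depth to 3-D points.

    u_grid, v_grid : pixel coordinate matrices (the "X/Y digital matrices")
    z_map          : per-pixel depth derived from the unwrapped phase (the "Z digital matrix")
    K              : 3x3 camera intrinsic matrix from calibration
    R, t           : camera-to-world rotation (3x3) and translation (3,)
    Returns an (N, 3) point cloud in world coordinates.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Back-project to camera coordinates using the pinhole model.
    x_cam = (u_grid - cx) * z_map / fx
    y_cam = (v_grid - cy) * z_map / fy
    pts_cam = np.stack([x_cam, y_cam, z_map], axis=-1).reshape(-1, 3)
    # Rigid transform into the world coordinate system.
    return pts_cam @ R.T + t
```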

Further, step S540 specifically includes the following steps:

Step S541: design a background noise filter according to the modulated light intensity of the projected fringes reflected from the surface of the handling target;

Step S542: use the background noise filter to filter out pixels and noise points in the three-dimensional point cloud that are little affected by the fringes, to obtain the object point cloud.

Further, step S550 specifically includes: extracting the spatial coordinates of each point from the object point cloud, and taking the difference between the height of the highest point and the height of the lowest point of the carried object to obtain the height of the carried object and hence its Z-axis coordinate.

Through step S500, fringe projection measurement can quickly and accurately measure the Z-axis coordinate of the handling target, which contributes to precise handling.
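
Steps S541 to S550 can be sketched as follows (not part of the original disclosure). Using the fringe modulation as the filtered quantity, and the particular threshold value, are assumptions made for this sketch; the patent only states that the background noise filter is designed from the modulated light intensity reflected by the target surface.

```python
import numpy as np

def modulation(images):
    """Per-pixel fringe modulation for an N-step phase-shift sequence (assumed formula)."""
    n = len(images)
    deltas = 2.0 * np.pi * np.arange(n) / n
    stack = np.stack([img.astype(np.float64) for img in images])
    s = np.tensordot(np.sin(deltas), stack, axes=1)
    c = np.tensordot(np.cos(deltas), stack, axes=1)
    return 2.0 / n * np.sqrt(s ** 2 + c ** 2)

def object_height(points, mod_map, mod_threshold=5.0):
    """Sketch of steps S541-S550: drop low-modulation (background/noise) points,
    then take highest minus lowest Z as the object height.
    `points` is an (H*W, 3) cloud aligned with the flattened modulation map."""
    keep = mod_map.reshape(-1) > mod_threshold     # S542: background noise filter
    obj = points[keep]                             # object point cloud
    z = obj[:, 2]
    return z.max() - z.min()                       # S550: height of the carried object
```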

Referring to FIG. 3, further, the mechanical arm 5 is a six-degree-of-freedom mechanical arm including a first arm 51, a second arm 52 and a third arm 53.

Step S600 specifically includes the following steps:

Step S610: set the origin O as the connection point between the mechanical arm 5 and the handling robot, the lengths of the first arm 51, the second arm 52 and the third arm 53 being L₁, L₂ and L₃, respectively;

Step S620: calculate the distance between the origin O and the handling target (formula image PCTCN2020078286-appb-000010) and the angle between the origin O and the handling target (formula image PCTCN2020078286-appb-000011);

Step S630: calculate the bisector angle of the angle formed by the first arm 51 and the second arm 52 (formula image PCTCN2020078286-appb-000012), and the angle between the line from the origin O to the end of the second arm 52 and the line from the origin O to the handling target (formula image PCTCN2020078286-appb-000013);

Step S640: calculate the rotation angle of the first arm 51 (formula image PCTCN2020078286-appb-000014), the rotation angle of the second arm 52 (formula image PCTCN2020078286-appb-000015) and the rotation angle of the third arm 53 (formula image PCTCN2020078286-appb-000016), where S is an angle coefficient with a value of 1 or -1.
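
The rotation-angle formulas of steps S620 to S640 are contained in formula images that do not reproduce here. For illustration only (this is not the patent's own derivation), the sketch below solves the closely related textbook problem of a planar two-link arm of lengths L₁ and L₂ reaching a point at distance d, using the law of cosines; the sign argument `elbow` plays a role loosely analogous to the angle coefficient S, and the third arm is not modelled.

```python
import math

def planar_two_link_ik(x, y, z, L1, L2, elbow=1):
    """Illustrative two-link inverse kinematics (not the patent's exact formulas).

    Returns (theta1, theta2): shoulder and elbow angles for reaching the target,
    with `elbow` = +1 or -1 selecting the elbow-up / elbow-down solution.
    """
    d = math.sqrt(x * x + y * y + z * z)            # distance from origin O to the target
    if d > L1 + L2 or d < abs(L1 - L2):
        raise ValueError("target out of reach")
    alpha = math.atan2(z, math.hypot(x, y))         # elevation angle of the target
    # Law of cosines in the triangle with sides L1, L2 and d.
    cos_elbow = (L1 * L1 + L2 * L2 - d * d) / (2.0 * L1 * L2)
    cos_shoulder = (L1 * L1 + d * d - L2 * L2) / (2.0 * L1 * d)
    theta2 = elbow * (math.pi - math.acos(cos_elbow))      # elbow joint angle
    theta1 = alpha + elbow * math.acos(cos_shoulder)       # shoulder joint angle
    return theta1, theta2
```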

Referring to FIG. 2, another embodiment of the present invention provides a handling robot, including a main frame 1, a traveling mechanism 4, a first camera 8, a second camera 6, a projector 7, a mechanical arm 5, a controller 2 and a memory 3. The projector 7 and the second camera 6 are located at the front of the main frame 1, the traveling mechanism 4 is located at the bottom of the main frame 1, the first camera 8 is mounted on the top of the main frame 1 through a telescopic rod, and the controller 2 and the memory 3 are arranged inside the main frame 1. The mechanical arm 5 is located at the center of the front of the main frame 1; it is a six-degree-of-freedom mechanical arm including a first arm 51, a second arm 52 and a third arm 53. The memory 3 is connected to the controller 2 and stores instructions; the traveling mechanism 4, the first camera 8, the second camera 6, the projector 7 and the mechanical arm 5 are each connected to the controller 2; the controller 2 executes the instructions to control the traveling mechanism 4, the first camera 8, the second camera 6, the projector 7 and the mechanical arm 5 so as to implement the target handling method applied to a handling robot as described above.

In this embodiment, the first camera 8 takes a top-view image, from which the shape of the handling target and its X-axis and Y-axis coordinates are obtained with OpenCV; at the same time, the second camera 6 captures fringe patterns of the handling target, from which the Z-axis coordinate is measured by fringe projection measurement; these are combined to give the three-dimensional coordinates of the handling target, and the handling robot controls the mechanical arm 5 to grab the handling target according to those coordinates. The robot can automatically recognize the morphological characteristics of the handling target and grab it accordingly; it is highly intelligent and can handle a variety of handling targets at different positions, completing diversified handling tasks.

The above are only preferred embodiments of the present invention. The present invention is not limited to the above embodiments; any implementation that achieves the technical effects of the present invention by the same means shall fall within the protection scope of the present invention.

Claims (9)

1. A target handling method applied to a handling robot, characterized in that the handling robot comprises a traveling mechanism, a first camera, a second camera, a projector, a controller and a mechanical arm; the target handling method comprises the following steps:
the handling robot moves to a position in front of the handling target;
the handling robot photographs the handling target from above with the first camera to obtain a top-view image of the handling target;
the handling robot projects fringes onto the handling target with the projector, and photographs the handling target from the front with the second camera to obtain two fringe patterns of different wavelengths;
the handling robot uses OpenCV to obtain, from the top-view image, the shape of the handling target and the X-axis and Y-axis coordinates of the handling target;
the handling robot uses fringe projection measurement to measure the Z-axis coordinate of the handling target from the fringe patterns;
the handling robot controls the mechanical arm to grab the handling target according to the three-dimensional coordinates of the handling target.

2. The target handling method applied to a handling robot according to claim 1, characterized in that obtaining the shape of the handling target and its X-axis and Y-axis coordinates from the top-view image using OpenCV comprises the following steps:
binarizing the top-view image so that it presents a black area and a white area;
marking the handling target with the largest bounding rectangle, and taking the coordinates of the center point of that rectangle as the X-axis and Y-axis coordinates of the handling target;
collecting connected regions in the white area of the top-view image;
performing edge recognition on the connected regions after noise processing to obtain the shape of the handling target.

3. The target handling method applied to a handling robot according to claim 1, characterized in that measuring the Z-axis coordinate of the handling target from the fringe patterns using fringe projection measurement specifically comprises:
processing the two fringe patterns with a six-step phase-shifting fringe analysis method to obtain wrapped phase maps φ₁(y) and φ₂(y);
phase-unwrapping the wrapped phase maps with a dual-wavelength fringe phase unwrapping algorithm based on wavelength selection to obtain unwrapped phase maps;
reconstructing a three-dimensional point cloud by combining the calibration parameters and the unwrapped phase maps;
filtering the three-dimensional point cloud to obtain the object point cloud;
calculating the Z-axis coordinate of the handling target from the object point cloud data.
4. The target handling method applied to a handling robot according to claim 3, characterized in that phase-unwrapping the wrapped phase maps with the dual-wavelength fringe phase unwrapping algorithm based on wavelength selection to obtain the unwrapped phase maps comprises the following steps:
establishing a lookup table of the relationship between m₁(y), m₂(y) and [m₂(y)λ₂ − m₁(y)λ₁], where λ₁ and λ₂ are the wavelengths of the two fringes projected by the projector onto the handling target, and m₁(y) and m₂(y) are the fringe orders of the two fringes, respectively;
calculating [λ₁φ₁(y) − λ₂φ₂(y)], rounding it to obtain a result value, and finding in the lookup table the row whose value of [m₂(y)λ₂ − m₁(y)λ₁] is closest to the result value, to obtain the corresponding target values of m₁(y) and m₂(y);
performing phase unwrapping according to the relationship between the absolute phase and the wrapped phase (formula image PCTCN2020078286-appb-100001) and the target values of m₁(y) and m₂(y), to obtain the unwrapped phase map, where Φ₁(y) and Φ₂(y) are the absolute phases.
5. The target handling method applied to a handling robot according to claim 3, wherein reconstructing the three-dimensional point cloud from the unwrapped phase maps in combination with the calibration parameters comprises: building an X-axis digital matrix, a Y-axis digital matrix and a Z-axis digital matrix from the unwrapped phase maps; using the calibration parameters to convert from the pixel coordinate system to the world coordinate system, thereby converting the X-axis, Y-axis and Z-axis digital matrices into an X-axis spatial matrix, a Y-axis spatial matrix and a Z-axis spatial matrix respectively; and reconstructing the three-dimensional point cloud from the X-axis, Y-axis and Z-axis coordinate data of every point.

6. The target handling method applied to a handling robot according to claim 3, wherein filtering the three-dimensional point cloud to obtain the object point cloud comprises: designing a background-noise filter according to the modulated light intensity of the projected fringes reflected from the surface of the handling target; and using the background-noise filter to remove from the three-dimensional point cloud the pixels that are only weakly affected by the fringes, together with noise points, to obtain the object point cloud.

7. The target handling method applied to a handling robot according to claim 3, wherein calculating the Z-axis coordinate of the handling target from the object point cloud data comprises: extracting the spatial coordinates of each point from the object point cloud, and taking the difference between the height data of the highest point and the height data of the lowest point of the handled object to obtain the height of the handled object.

8. The target handling method applied to a handling robot according to claim 1, wherein the mechanical arm is a six-degree-of-freedom mechanical arm comprising a first arm link, a second arm link and a third arm link, and controlling the mechanical arm to grasp the handling target according to the three-dimensional coordinates of the handling target comprises: setting the origin O as the connection point between the mechanical arm and the handling robot, the lengths of the first, second and third arm links being L1, L2 and L3 respectively; calculating the distance from the origin O to the handling target (equation image PCTCN2020078286-appb-100002, not reproduced) and the angle between the origin O and the handling target (equation image PCTCN2020078286-appb-100003, not reproduced); calculating the bisecting angle of the angle formed by the first arm link and the second arm link (equation image PCTCN2020078286-appb-100004, not reproduced), and the angle between the line from the origin O to the end of the second arm link and the line from the origin O to the handling target (equation image PCTCN2020078286-appb-100005, not reproduced); and calculating the rotation angle of the first arm link (equation image PCTCN2020078286-appb-100006, not reproduced), the rotation angle of the second arm link (equation image PCTCN2020078286-appb-100007, not reproduced) and the rotation angle of the third arm link (equation image PCTCN2020078286-appb-100008, not reproduced), where S is an angle coefficient taking the value 1 or -1.
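Claims 6 and 7 can be illustrated with the short numpy sketch below. The modulation estimate is the standard one for N-step phase-shifted fringes and is only assumed to match the patent's background-noise filter in spirit; the threshold is an illustrative tuning parameter, and the object point cloud is assumed to be an (N, 3) array of X, Y, Z coordinates, since the patent does not fix these details.

import numpy as np

def modulation_mask(frames, threshold):
    """Background-noise filter in the spirit of claim 6: keep only pixels whose fringe
    modulation (the B term in I_k = A + B*cos(...)) exceeds an assumed threshold."""
    deltas = 2 * np.pi * np.arange(len(frames)) / len(frames)
    s = sum(I * np.sin(d) for I, d in zip(frames, deltas))
    c = sum(I * np.cos(d) for I, d in zip(frames, deltas))
    modulation = (2.0 / len(frames)) * np.sqrt(s ** 2 + c ** 2)
    return modulation > threshold  # True where the fringes are well modulated

def object_height(object_points):
    """Claim 7: the handled object's height is the difference between the highest and
    lowest Z values of the filtered object point cloud (object_points is an (N, 3) array)."""
    z = object_points[:, 2]
    return float(z.max() - z.min())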
9. A handling robot, comprising a traveling mechanism, a first camera, a second camera, a projector, a mechanical arm, a controller and a memory, wherein the memory is connected to the controller and stores instructions; the traveling mechanism, the first camera, the second camera, the projector and the mechanical arm are each connected to the controller; and the controller executes the instructions to control the traveling mechanism, the first camera, the second camera, the projector and the mechanical arm to carry out the target handling method applied to a handling robot according to any one of claims 1 to 8.
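Claim 8 leaves its angle formulas as equation images, so they cannot be restated here. Purely to illustrate the kind of geometry involved, the sketch below combines a base rotation with a standard planar two-link inverse-kinematics solution, using a sign S in {1, -1} to select the elbow configuration; this is not the patent's derivation, and the third arm link and gripper orientation are ignored.

import math

def arm_angles(x, y, z, L1, L2, S=1):
    """Hedged two-link IK sketch (not the patent's equations): returns the base rotation
    and the first and second joint angles that place the end of the second link at the
    target (x, y, z), measured from the arm's mounting point O. S = 1 or -1 selects the
    elbow-up or elbow-down configuration."""
    base = math.atan2(y, x)                      # rotate the arm plane towards the target
    d = math.hypot(x, y)                         # horizontal distance O -> target
    r = math.hypot(d, z)                         # straight-line distance O -> target
    gamma = math.atan2(z, d)                     # elevation of the target seen from O
    # Interior elbow angle from the law of cosines (0 means the two links are straight).
    cos_elbow = (r * r - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    cos_elbow = max(-1.0, min(1.0, cos_elbow))   # clamp against numerical noise / unreachable targets
    elbow = S * math.acos(cos_elbow)
    shoulder = gamma - math.atan2(L2 * math.sin(elbow), L1 + L2 * math.cos(elbow))
    return base, shoulder, elbow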
PCT/CN2020/078286 2019-07-19 2020-03-06 Object carrying method applied to carrying robot, and carrying robot thereof Ceased WO2021012681A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910653564.8A CN110480631A (en) 2019-07-19 2019-07-19 A kind of target method for carrying and its transfer robot applied to transfer robot
CN201910653564.8 2019-07-19

Publications (1)

Publication Number Publication Date
WO2021012681A1 true WO2021012681A1 (en) 2021-01-28

Family

ID=68546162

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/078286 Ceased WO2021012681A1 (en) 2019-07-19 2020-03-06 Object carrying method applied to carrying robot, and carrying robot thereof

Country Status (2)

Country Link
CN (1) CN110480631A (en)
WO (1) WO2021012681A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110480631A (en) * 2019-07-19 2019-11-22 五邑大学 A kind of target method for carrying and its transfer robot applied to transfer robot
CN111220093A (en) * 2020-02-24 2020-06-02 五邑大学 Trolley image identification method and device with three-dimensional vision and storage medium
CN111319052A (en) * 2020-02-28 2020-06-23 五邑大学 Multi-stain cleaning robot and moving path control method based on same
CN113146653A (en) * 2021-04-21 2021-07-23 贵州工程应用技术学院 Mechanical arm capable of recognizing, positioning, bearing and grabbing without damage based on machine learning
CN113858199B (en) * 2021-09-27 2023-02-28 河南松源企业管理咨询有限公司 Furniture transfer robot

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012168180A (en) * 2011-02-16 2012-09-06 Steinbichler Optotechnik Gmbh Device and method for determining 3d coordinates of object and calibrating industrial robot
CN106485746A (en) * 2016-10-17 2017-03-08 广东技术师范学院 Visual servo mechanical hand based on image no demarcation and its control method
US20170136626A1 (en) * 2015-11-16 2017-05-18 Abb Technology Ag Facilitating robot positioning
CN106969704A (en) * 2015-12-16 2017-07-21 精工爱普生株式会社 Measuring system, measuring method, robot control method, robot, robot system and pick device
CN108789414A (en) * 2018-07-17 2018-11-13 五邑大学 Intelligent machine arm system based on three-dimensional machine vision and its control method
CN109551472A (en) * 2017-09-27 2019-04-02 精工爱普生株式会社 Robot system
CN110480631A (en) * 2019-07-19 2019-11-22 五邑大学 A kind of target method for carrying and its transfer robot applied to transfer robot

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102679908A (en) * 2012-05-10 2012-09-19 天津大学 Dynamic measurement method of three-dimensional shape projected by dual-wavelength fiber interference fringe
CN106530276B (en) * 2016-10-13 2019-04-09 中科金睛视觉科技(北京)有限公司 A kind of manipulator localization method and positioning system for non-standard component crawl
JP7003462B2 (en) * 2017-07-11 2022-01-20 セイコーエプソン株式会社 Robot control device, robot system, and camera calibration method
CN108274476B (en) * 2018-03-01 2020-09-04 华侨大学 Method for grabbing ball by humanoid robot

Also Published As

Publication number Publication date
CN110480631A (en) 2019-11-22

Similar Documents

Publication Publication Date Title
WO2021012681A1 (en) Object carrying method applied to carrying robot, and carrying robot thereof
CN110555878B (en) Method, device, storage medium and robot for determining the spatial position and shape of an object
EP2870428B1 (en) System and method for 3d measurement of the surface geometry of an object
JP4309439B2 (en) Object take-out device
US20080123937A1 (en) Fast Three Dimensional Recovery Method and Apparatus
CN102141398A (en) Monocular vision-based method for measuring positions and postures of multiple robots
CN111815717A (en) A Multi-sensor Fusion External Reference Joint Semi-autonomous Calibration Method
CN110793464A (en) Large-field fringe projection vision three-dimensional measurement system and method
CN115082538A (en) 3D reconstruction system and method of multi-vision gimbal parts surface based on line structured light projection
CN114413790A (en) Large-view-field three-dimensional scanning device and method for fixedly connecting photogrammetric camera
CN113674402A (en) Plant three-dimensional hyperspectral point cloud model generation method, correction method and device
CN114140534A (en) Combined calibration method for laser radar and camera
CN104240233A (en) Method for solving camera homography matrix and projector homography matrix
CN108362205B (en) Spatial ranging method based on fringe projection
JP3668769B2 (en) Method for calculating position / orientation of target object and method for calculating position / orientation of observation camera
JP2012177671A (en) Fine aperiodic pattern projection device and method and three-dimensional measuring device using the same
CN117934625A (en) Three-dimensional space rapid calibration and positioning method based on laser radar
CN112509062B (en) Calibration board, calibration system and calibration method
CN114897981A (en) Hanger pose identification method based on visual detection
Bastuscheck et al. Object recognition by three‐dimensional curve matching
CN114463444A (en) Non-contact type relative pose detection method and system
JP3343583B2 (en) 3D surface shape estimation method using stereo camera with focal light source
KR101314101B1 (en) System for three-dimensional measurement and method therefor
JP5330341B2 (en) Ranging device using in-vehicle camera
JP3519296B2 (en) Automatic measurement method and automatic measurement device for thermal image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20843288

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20843288

Country of ref document: EP

Kind code of ref document: A1