CN108858199B - Method for grabbing target object by service robot based on vision - Google Patents
- Publication number
- CN108858199B (Application CN201810841533.0A)
- Authority
- CN
- China
- Prior art keywords
- target object
- point cloud
- obstacle
- plane
- cloud data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
- B25J11/00—Manipulators not otherwise provided for
- B25J11/008—Manipulators for service tasks
Abstract
The invention relates to the technical field of service robots, and in particular to a vision-based method for a service robot to grasp a target object, aiming to solve the problem of robotic grasping when obstacles prevent the target object from being grasped directly. The grasping method of the invention includes: acquiring a color image and raw three-dimensional point cloud data in the camera coordinate system; detecting the target object, extracting its point cloud data and the environmental point cloud data around it, and transforming these point clouds into the manipulator coordinate system; fitting, in the manipulator coordinate system, the equation of the plane on which the target object rests; obtaining, on this basis, the position and size of each obstacle and of the target object; and, based on this position and size information, first moving aside the obstacles that prevent a direct grasp and then completing the grasp of the target object. The invention effectively improves the grasping capability of service robots.
Description
Technical Field

The invention relates to the technical field of service robots, and in particular to a vision-based method for a service robot to grasp a target object.
Background Art

With the continuous development of service robots and artificial-intelligence technology, their fields of application keep expanding. Besides environment perception, localization and navigation, object grasping is a core technology of intelligent service robots and a prerequisite for providing high-quality services. In home, office and healthcare environments, for example, fetch-and-carry services require the robot to be able to grasp objects. To let a service robot grasp objects, a manipulator is mounted on the robot's mobile platform; once the platform reaches the designated operating area, the robot controls the manipulator to perform the grasp. Service-robot grasping involves both target detection and manipulator motion planning, and among these, vision-based methods for grasping a target object are the most widely studied.

Researchers at home and abroad have studied vision-based robotic grasping in depth. Target detection is the prerequisite for grasping; traditional detection methods usually rely on hand-crafted features and suffer from weak generalization and poor robustness. Deep learning has attracted attention in recent years because its feature-extraction stage requires no hand-designed features: the model is trained on large-scale datasets to learn the features of the target object, yielding strong robustness. Common deep-learning detectors include RCNN, Fast-RCNN, Faster-RCNN, YOLO and SSD; among them SSD, which combines the speed of YOLO with the high detection accuracy of Faster-RCNN, has received broad attention. After the target object has been detected, the service robot can plan the motion of its manipulator and then grasp the object. A common way to plan manipulator motion is to call the MoveIt! package, which integrates forward and inverse kinematics, motion planning and collision detection for manipulators; its motion planning uses the Open Motion Planning Library (OMPL), and MoveIt! is now widely used in mainstream manipulator motion planning.

Existing robotic grasping techniques usually grasp the specified target object directly after it has been detected. In real environments, however, there are sometimes obstacles around the target object that prevent the service robot's manipulator from grasping it directly. Deeper study of existing grasping techniques is therefore needed to handle grasping in the presence of obstacles that block a direct grasp.
Summary of the Invention

To solve the above problems in the prior art, the invention proposes a vision-based method for a service robot to grasp a target object. The method enables the robot to grasp the target even when obstacles prevent a direct grasp, improving the grasping capability of the service robot.

The method proposed by the invention comprises the following steps:
Step S1: acquire a color image Is and the raw three-dimensional point cloud data Ds in the camera coordinate system OcXcYcZc through a Kinect sensor mounted on the service robot.

Step S2: based on the color image Is, detect the target object with the deep-learning SSD (Single Shot MultiBox Detector) method, and extract from the raw point cloud Ds the target object's first point cloud data Dto; removing Dto from Ds gives the first environmental point cloud data Drs.

Step S3: using the coordinate transformation from the camera coordinate system OcXcYcZc to the manipulator coordinate system OrXrYrZr of the service robot, transform the first environmental point cloud Drs into the manipulator frame to obtain the second environmental point cloud data Dr, and transform the target object's first point cloud Dto into the manipulator frame to obtain the target object's second point cloud data Dt.

Step S4: restricting attention to the manipulator's workspace Srm in the XrYr plane, remove from the second environmental point cloud Dr the data whose Yr values lie outside [ymin, ymax] and the data whose Xr values lie outside [xmin, xmax], obtaining the third environmental point cloud data Df; the lower-left and upper-right corners of Srm are (xmin, ymin) and (xmax, ymax), where xmin, ymin, xmax and ymax are preset thresholds.
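The workspace filtering of step S4 amounts to a rectangular crop of the point cloud. A minimal sketch (hypothetical helper, assuming the cloud is an N×3 NumPy array of XrYrZr coordinates):

```python
import numpy as np

def filter_workspace(points, x_range, y_range):
    """Keep only points whose Xr and Yr coordinates fall inside the
    rectangular workspace Srm = [xmin, xmax] x [ymin, ymax]."""
    (xmin, xmax), (ymin, ymax) = x_range, y_range
    mask = ((points[:, 0] >= xmin) & (points[:, 0] <= xmax) &
            (points[:, 1] >= ymin) & (points[:, 1] <= ymax))
    return points[mask]
```

Filtering on Zr is not needed here because points below the supporting plane are removed later, in steps S6 and S7.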
Step S5: based on the third environmental point cloud Df, fit the plane on which the target object rests with the Random Sample Consensus (RANSAC) algorithm, obtaining the equation of that plane.

Step S6: using the plane equation, remove from Df the data corresponding to the plane and to the points below it, obtaining the fourth environmental point cloud data Do; then cluster Do into point cloud clusters and obtain the position and size of the obstacle corresponding to each cluster.

Step S7: using the plane equation, remove from the target object's second point cloud Dt the data corresponding to the plane and to the points below it, obtaining the target object's third point cloud data Dtr, and compute the position and size of the target object.

Step S8: based on the position and size of the target object and of each obstacle, first move aside the obstacles that prevent the target object from being grasped directly, and then complete the grasp.
Preferably, the coordinate transformation in step S3 from the camera coordinate system OcXcYcZc to the manipulator coordinate system OrXrYrZr is:

(xrp, yrp, zrp, 1)^T = Tm · (xcp, ycp, zcp, 1)^T

where (xcp, ycp, zcp)^T and (xrp, yrp, zrp)^T are the coordinates of a point of the raw point cloud Ds in the camera frame OcXcYcZc and in the manipulator frame OrXrYrZr respectively, and Tm is the hand-eye relationship matrix.
The hand-eye relationship matrix is computed as follows:

Move the manipulator end-effector to ng different positions and record, for each position t, its three-dimensional coordinates in the camera frame OcXcYcZc and in the manipulator frame OrXrYrZr together with the corresponding homogeneous coordinates, where ng is a preset constant and t = 1, 2, ..., ng.

The hand-eye relationship matrix is then computed as

Tm = Mr · Mc^+

where Mc and Mr denote the 4×ng matrices whose columns are the homogeneous coordinates of the ng end-effector positions in the camera frame and in the manipulator frame respectively, and Mc^+ denotes the pseudo-inverse of the matrix Mc.
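Because the relation is linear in homogeneous coordinates, Tm can be recovered by an ordinary least-squares pseudo-inverse. A sketch under the assumption that the paired end-effector positions are given as N×3 arrays (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def estimate_hand_eye(cam_pts, arm_pts):
    """Estimate the 4x4 hand-eye matrix Tm such that, in homogeneous
    coordinates, arm_h ~= Tm @ cam_h in the least-squares sense."""
    cam_h = np.vstack([np.asarray(cam_pts).T, np.ones(len(cam_pts))])  # 4 x ng
    arm_h = np.vstack([np.asarray(arm_pts).T, np.ones(len(arm_pts))])  # 4 x ng
    return arm_h @ np.linalg.pinv(cam_h)
```

With at least four non-coplanar calibration positions the system is well determined; additional positions average out measurement noise.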
Preferably, the coordinates of the manipulator end-effector in the camera frame OcXcYcZc are obtained as follows:

A small red ball is fixed to the end of the manipulator. Based on the color image provided by the Kinect sensor, a target recognition algorithm based on color features identifies the red ball and yields the coordinates (xc, yc, zc)^T of its center in the camera frame OcXcYcZc, which are taken as the coordinates of the manipulator end-effector in that frame.

Preferably, the coordinates of the manipulator end-effector in the manipulator frame OrXrYrZr are obtained as follows:

The MoveIt! package solves the forward kinematics to obtain the end-effector's three-dimensional coordinates (xr, yr, zr)^T in the manipulator frame OrXrYrZr.
Preferably, fitting the plane on which the target object rests to the third environmental point cloud Df with the random sample consensus algorithm in step S5 specifically includes:

Step S51: randomly select three data from Df and compute a plane from the three corresponding points.

Step S52: compute the distance to this plane from the points corresponding to the remaining data of Df; the points within distance dh are taken as inliers and form the inlier set of the plane, where dh is a preset distance threshold.

Step S53: repeat steps S51-S52 until the ratio of the number of inliers in the inlier set to the total number of data in Df exceeds λ, or a preset number of iterations MK is reached, where λ is a preset threshold on this ratio.

Step S54: take the plane equation with the largest inlier count as the equation of the plane on which the target object rests; in the manipulator frame OrXrYrZr it is expressed as

Aop·x + Bop·y + Cop·z + Dop = 0

where Aop, Bop, Cop and Dop are the four coefficients of the plane equation.
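Steps S51-S54 can be sketched as follows (a minimal NumPy version with illustrative parameter names; the patent's thresholds dh, λ and MK appear as inputs):

```python
import numpy as np

def ransac_plane(points, d_h=0.01, lam=0.9, max_iters=200, seed=0):
    """RANSAC plane fit: sample 3 points, build a plane, count inliers
    within d_h, stop when the inlier ratio exceeds lam or max_iters is
    reached; return the coefficients (A, B, C, D) of the best plane."""
    rng = np.random.default_rng(seed)
    n, best, best_inliers = len(points), None, 0
    for _ in range(max_iters):
        p1, p2, p3 = points[rng.choice(n, 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:
            continue  # the three sampled points are collinear
        normal /= norm
        d = -normal @ p1                      # plane: normal . x + d = 0
        dist = np.abs(points @ normal + d)
        inliers = int((dist <= d_h).sum())
        if inliers > best_inliers:
            best_inliers, best = inliers, np.append(normal, d)
        if best_inliers / n > lam:
            break
    return best
```

In practice the equivalent `pcl::SACSegmentation` routine of the Point Cloud Library performs the same fit.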
Preferably, step S6, in which the plane and the points below it are removed from the third environmental point cloud Df to obtain the fourth environmental point cloud data Do, the point cloud is clustered, and the position and size of the obstacle of each cluster are obtained, includes:

Step S61: using the plane equation, remove from Df the data corresponding to the plane and to the points below it, obtaining the fourth environmental point cloud data Do.

Step S62: based on Do, perform point cloud clustering with the Euclidean clustering algorithm and obtain the position and size of the obstacle corresponding to each cluster.

Step S62 specifically includes:

Step S621: for Do, create a cluster set C and a queue Q, both initially empty, and mark every datum of Do as unprocessed.

Step S622: examine the point pi corresponding to each datum of Do: if the datum of pi is already marked processed, move on to the next point; when all data of Do are processed, go to step S626; if the datum of pi is unprocessed, go to step S623. Here i = 0, 1, ..., Nd−1 and Nd is the total number of data in Do.

Step S623: empty the queue Q, put the point pi into Q, and mark its datum in Do as processed.

Step S624: read a point from Q, denoted q, and compute the distance to q from every point of Do whose datum is still unprocessed; every point whose distance does not exceed dth is added to Q and its datum in Do marked processed. Repeat until every point in Q has been traversed and no new point joins Q, where dth is a preset threshold.

Step S625: if the number of points in Q is greater than Nmin, record all points of Q as one point cloud cluster and add it to the cluster set C; then go to step S622. Nmin is a preset threshold.

Step S626: if the cluster set C is not empty, compute the position and size of the obstacle corresponding to each point cloud cluster.
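The queue-based region growing of steps S621-S626 can be sketched as follows (a brute-force neighbour search is used for clarity; a kd-tree would normally replace it):

```python
import numpy as np
from collections import deque

def euclidean_cluster(points, d_th, n_min):
    """Grow a queue from each unprocessed point, repeatedly adding
    unprocessed neighbours within distance d_th; a grown group is kept
    as a cluster only if it contains more than n_min points."""
    processed = np.zeros(len(points), dtype=bool)
    clusters = []
    for i in range(len(points)):
        if processed[i]:
            continue
        queue, members = deque([i]), [i]
        processed[i] = True
        while queue:
            q = queue.popleft()
            d = np.linalg.norm(points - points[q], axis=1)
            for j in np.flatnonzero((d <= d_th) & ~processed):
                processed[j] = True
                queue.append(j)
                members.append(j)
        if len(members) > n_min:
            clusters.append(members)
    return clusters
```

Each returned cluster is a list of point indices; an obstacle's position and size then follow from the axis-aligned bounds of its cluster.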
Preferably, step S8, in which the obstacles blocking a direct grasp are moved aside first and the target object is then grasped, specifically includes:

Step S81: based on the position and size of the target object and of each obstacle, use the MoveIt! package to determine whether the target object can be grasped directly; if so, grasp it according to the MoveIt! planning result, otherwise go to step S82. The end-effector grasps vertically downward along the Zr axis.

Step S82: move aside the obstacles that prevent the target object from being grasped directly, and then complete the grasp.

Preferably, step S82 specifically includes:

Step S821: from the target object's third point cloud Dtr, select all data whose Zr values lie within [zmax − zh, zmax] as the point cloud data Dgr of the target's grasping region, where zmax is the maximum Zr value over the data of Dtr and zh is a preset threshold.

Step S822: for each obstacle, compute the minimum distance between the points corresponding to the grasping-region point cloud Dgr and the points of that obstacle's point cloud cluster; record this minimum as the distance from the obstacle to the target object, and combine these distances with the obstacles' ID numbers to form the obstacle influence sequence.
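The obstacle-to-target distance of step S822 is the minimum over all point pairs, which broadcasts directly in NumPy (illustrative helper name):

```python
import numpy as np

def obstacle_to_target_distance(grasp_pts, obstacle_pts):
    """Minimum pairwise Euclidean distance between the grasp-region
    point cloud Dgr and one obstacle's point cloud cluster."""
    diff = grasp_pts[:, None, :] - obstacle_pts[None, :, :]
    return float(np.sqrt((diff ** 2).sum(axis=-1)).min())
```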
Step S823: divide the manipulator's workspace Srm in the XrYr plane into a grid, obtaining a grid map of Srm; project all points of the target object's third point cloud Dtr, and the point cloud cluster of each obstacle, onto the XrYr plane of the manipulator frame OrXrYrZr to obtain the corresponding bounding rectangles.

Step S824: according to the distances recorded in the obstacle influence sequence, select the obstacle closest to the target object as the obstacle to be moved, Obs_mov.

Step S825: assign a value to every grid cell of the workspace Srm, generating the attractive potential field grid map of Obs_mov.

Step S826: for every other object, reassign every grid cell of Srm to generate that object's repulsive potential field grid map; the other objects are the obstacles other than Obs_mov together with the target object.

Step S827: superimpose the attractive field grid map of Obs_mov and the repulsive field grid maps of the other objects to obtain the final potential field grid map; select the position of the cell with the smallest value as the placement position of Obs_mov. If more than one cell attains the minimum, select the cell closest to the center of the bounding rectangle obtained by projecting Obs_mov onto the XrYr plane.

Step S828: based on the current position and size of Obs_mov, its placement position, and the positions and sizes of the other obstacles and of the target object, plan the manipulator motion with the MoveIt! package and move Obs_mov. The moved obstacle's placement position becomes its new position; remove the moved obstacle from the obstacle influence sequence and update the center of its bounding rectangle projected on the XrYr plane with the new position.

Step S829: based on the position and size of the target object, the new position and size of the moved obstacle, and the current positions and sizes of the other obstacles, use the MoveIt! package to determine whether the target object can now be grasped directly; if so, plan the manipulator motion with MoveIt! and grasp the target object, otherwise return to step S824.
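The placement selection of step S827, including the tie-break by distance to the projected rectangle's center, can be sketched as follows (assumes the summed potential field is a 2-D array and the center is given as a row/column pair):

```python
import numpy as np

def choose_placement(potential, center_rc):
    """Pick the grid cell with the smallest summed potential; on ties,
    take the candidate nearest the center of the moved obstacle's
    projected bounding rectangle."""
    cand = np.argwhere(potential == potential.min())    # (k, 2) row/col pairs
    d = np.linalg.norm(cand - np.asarray(center_rc), axis=1)
    return tuple(cand[np.argmin(d)])
```

The tie-break keeps the manipulator's travel short, since the nearest low-potential cell minimizes how far the obstacle must be carried.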
Preferably, generating the attractive potential field grid map of the obstacle to be moved Obs_mov in step S825 specifically includes:

Assign to every grid cell inside the bounding rectangle obtained by projecting Obs_mov onto the XrYr plane the height of the obstacle along the Zr axis.

Starting from this bounding rectangle, expand outward cell by cell, each time extending one cell simultaneously in the positive Xr, positive Yr, negative Xr and negative Yr directions, producing a series of expansion rectangles until the outermost one contains every cell of the workspace Srm. Each expansion rectangle is characterized by its expansion count. A cell Gat that lies outside the projected bounding rectangle of Obs_mov but inside Srm is assigned a value according to

Fat = nat · ka

where Fat is the value of the cell Gat, ka is a preset attractive-field coefficient, and nat is the expansion count of the smallest expansion rectangle containing Gat, nat = 1, 2, 3, ....
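Since each expansion step grows the rectangle by one cell on every side, the expansion count nat of a cell equals its Chebyshev distance to the footprint rectangle, which allows a direct assignment (illustrative grid representation, not from the patent):

```python
import numpy as np

def attractive_field(shape, rect, height, k_a):
    """Attractive field of the obstacle to be moved: cells inside its
    projected footprint get the obstacle height; a cell n_at rings away
    (Chebyshev distance to the rectangle) gets F_at = n_at * k_a."""
    field = np.empty(shape)
    r0, c0, r1, c1 = rect                       # inclusive footprint bounds
    for r in range(shape[0]):
        for c in range(shape[1]):
            n_at = max(r0 - r, r - r1, c0 - c, c - c1, 0)
            field[r, c] = height if n_at == 0 else n_at * k_a
    return field
```

Values grow with distance from the footprint, so minima of the summed field stay close to the obstacle's current position.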
Preferably, generating the repulsive potential field grid map of each other object in step S826 specifically includes:

Assign to every grid cell inside the bounding rectangle obtained by projecting the object onto the XrYr plane the height of the object along the Zr axis.

Starting from this bounding rectangle, expand outward cell by cell, each time extending one cell simultaneously in the positive Xr, positive Yr, negative Xr and negative Yr directions, producing a series of expansion rectangles until the outermost one contains every cell of the workspace Srm; each expansion rectangle is characterized by its expansion count. A cell Gre that lies outside the object's projected bounding rectangle but inside Srm is assigned a value according to
where Fre is the value of grid cell Gre, kr is a preset repulsion-field coefficient, Oh is the height of the object, nre is the number of expansions corresponding to the smallest-area expanded rectangle containing grid cell Gre, nre = 1, 2, 3, …, μ is the mean of a given Gaussian distribution, and σ is the standard deviation of the given Gaussian distribution.
With the grasping method of the present invention, when an obstacle prevents the target object from being grasped directly, the obstacle can first be moved away and the target object then grasped, providing technical support for grasping tasks performed by service robots in home, office, medical-care and similar environments.
Description of the drawings
FIG. 1 is a schematic flowchart of an embodiment of a vision-based method for a service robot to grasp a target object according to the present invention;
FIGS. 2a-2e are schematic diagrams of a method for generating the attraction potential field grid map corresponding to an obstacle to be moved in an embodiment of the present invention.
Detailed description
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. Those skilled in the art should understand that these embodiments are merely intended to explain the technical principles of the present invention and are not intended to limit its scope of protection.
Referring to FIG. 1, which shows a schematic flowchart of an embodiment of the vision-based method for a service robot to grasp a target object according to the present invention, the grasping method of this embodiment includes the following steps.
Step S1: the service robot acquires a color image Is and original three-dimensional point cloud data Ds in the camera coordinate system OcXcYcZc through a Kinect sensor mounted on the robot, the Kinect sensor being mounted tilted downward. The camera coordinate system OcXcYcZc is right-handed; its origin Oc is located at the center of the Kinect sensor, the Yc axis is perpendicular to the bottom surface of the Kinect sensor and points upward, and the Zc axis is perpendicular to the Yc axis and aligned with the forward direction of the Kinect sensor.
Step S2: based on the color image Is, the target object is detected with the deep-learning SSD (Single Shot MultiBox Detector) object detection method, yielding the bounding box of the target object; the three-dimensional data corresponding to this bounding box are then extracted from the original three-dimensional point cloud data Ds, and these data constitute the first point cloud data Dto of the target object. The first point cloud data Dto are removed from the original three-dimensional point cloud data Ds, and the remaining three-dimensional point cloud data are recorded as the first environmental point cloud data Drs.
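The point-cloud split in step S2 can be sketched for an organized point cloud whose pixels align with the color image; the array shapes, the `split_by_bbox` helper and the bounding-box layout are illustrative assumptions, not part of the patent:

```python
import numpy as np

def split_by_bbox(cloud, bbox):
    """Split an organized H x W x 3 point cloud into target / environment parts.

    cloud: H x W x 3 array aligned pixel-for-pixel with the color image Is.
    bbox:  (u_min, v_min, u_max, v_max) detector bounding box in pixels.
    Returns (D_to, D_rs): the target's first point cloud and the first
    environmental point cloud, each as an N x 3 array.
    """
    u0, v0, u1, v1 = bbox
    mask = np.zeros(cloud.shape[:2], dtype=bool)
    mask[v0:v1, u0:u1] = True           # rows = image v, cols = image u
    d_to = cloud[mask].reshape(-1, 3)   # points inside the bounding box
    d_rs = cloud[~mask].reshape(-1, 3)  # everything else
    return d_to, d_rs
```

In practice invalid depth readings (NaN points) would also be filtered out at this stage.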
Step S3: according to the coordinate transformation from the camera coordinate system OcXcYcZc to the manipulator coordinate system OrXrYrZr of the service robot, the first environmental point cloud data Drs in the camera coordinate system are transformed into the manipulator coordinate system, yielding the second environmental point cloud data Dr; likewise, the first point cloud data Dto of the target object are transformed into the manipulator coordinate system, yielding the second point cloud data Dt of the target object. The manipulator coordinate system OrXrYrZr is right-handed; its origin Or is located at the center of the base of the service robot's manipulator, the Zr axis is perpendicular to the ground and points upward, and the Xr axis is perpendicular to the Zr axis and aligned with the forward direction of the service robot, the bottom surface of the manipulator base being parallel to the ground.
In this step, the coordinate transformation from the camera coordinate system OcXcYcZc to the manipulator coordinate system OrXrYrZr is given by formula (1):
where (xcp, ycp, zcp)T and (xrp, yrp, zrp)T are the coordinates of a point of the original three-dimensional point cloud data Ds in the camera coordinate system OcXcYcZc and in the manipulator coordinate system OrXrYrZr, respectively, and Tm is the hand-eye relationship matrix, obtained as follows:
First, a small red ball is fixed to the end of the manipulator. Based on the color image provided by the Kinect sensor, a target recognition algorithm based on HSV (Hue, Saturation, Value) color features is used to identify the red ball, obtaining the coordinates (xc, yc, zc)T of the ball's center in the camera coordinate system OcXcYcZc; this center point is used to describe the manipulator end-effector, so that the three-dimensional coordinates of the end-effector in the camera coordinate system OcXcYcZc are (xc, yc, zc)T, with homogeneous coordinates pc = (xc, yc, zc, 1)T. The MoveIt! package is then used to obtain, by solving the forward kinematics, the three-dimensional coordinates (xr, yr, zr)T of the end-effector in the manipulator coordinate system OrXrYrZr, with homogeneous coordinates pr = (xr, yr, zr, 1)T. The end-effector is then moved to ng different positions, yielding ng three-dimensional coordinates of the end-effector in the camera coordinate system OcXcYcZc and in the manipulator coordinate system OrXrYrZr, t = 1, 2, …, ng, where ng is a preset constant, together with the corresponding homogeneous coordinates, t = 1, 2, …, ng. From these homogeneous coordinates and formula (1), formula (2) is obtained:
The hand-eye relationship matrix Tm is computed as:
where the superscript notation denotes taking the pseudo-inverse of the matrix.
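Formulas (2)-(3) amount to a linear least-squares estimate of Tm over the ng homogeneous coordinate pairs. A minimal NumPy sketch (the function name and the synthetic data in the usage below are illustrative; a real calibration would also need the red-ball detection step):

```python
import numpy as np

def solve_hand_eye(P_c, P_r):
    """Estimate the hand-eye matrix T_m such that p_r = T_m @ p_c.

    P_c, P_r: n_g x 3 arrays of end-effector positions in the camera
    frame and the manipulator frame. Returns the 4 x 4 matrix T_m via
    the pseudo-inverse of the stacked homogeneous camera coordinates.
    """
    n = P_c.shape[0]
    Pc_h = np.hstack([P_c, np.ones((n, 1))])  # n_g x 4, rows are p_c^T
    Pr_h = np.hstack([P_r, np.ones((n, 1))])  # n_g x 4, rows are p_r^T
    # Pr_h^T = T_m @ Pc_h^T  =>  T_m = Pr_h^T @ pinv(Pc_h^T)
    return Pr_h.T @ np.linalg.pinv(Pc_h.T)
```

With at least four non-coplanar measurement positions, Pc_h has full rank and the pseudo-inverse recovers Tm exactly in the noise-free case.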
Step S4: in combination with the workspace Srm of the manipulator in the XrYr plane, the second environmental point cloud data Dr are reduced in range. The workspace Srm of the manipulator in the XrYr plane is a rectangular region whose lower-left and upper-right corner coordinates are (xmin, ymin) and (xmax, ymax), respectively, with xmin, ymin, xmax and ymax all preset thresholds. The second environmental point cloud data Dr are processed by removing data whose Yr-axis values lie outside the interval [ymin, ymax] and data whose Xr-axis values lie outside the interval [xmin, xmax], yielding the third environmental point cloud data Df.
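The range reduction of step S4 is a boolean mask over the Xr and Yr columns; a minimal sketch, assuming the point cloud is stored as an N×3 array:

```python
import numpy as np

def crop_to_workspace(D_r, x_min, x_max, y_min, y_max):
    """Keep only points of D_r (N x 3, columns X_r / Y_r / Z_r) that lie
    inside the rectangular workspace S_rm, yielding D_f."""
    keep = ((D_r[:, 0] >= x_min) & (D_r[:, 0] <= x_max) &
            (D_r[:, 1] >= y_min) & (D_r[:, 1] <= y_max))
    return D_r[keep]
```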
Step S5: based on the third environmental point cloud data Df, the RANSAC (Random Sample Consensus) algorithm is used to fit the plane on which the target object rests, yielding the plane equation of that plane. This specifically includes steps S51-S54:
Step S51: three data items are selected at random from the third environmental point cloud data Df, and a plane is computed from the three corresponding points;
Step S52: the distances to this plane are computed for the points corresponding to the remaining data in the third environmental point cloud data Df (i.e., data other than the three items selected in step S51), and the points whose distance is within dh are taken as inliers forming the inlier set of this plane, where dh is a preset distance threshold;
Step S53: steps S51-S52 are repeated until the ratio of the number of inliers in the inlier set to the total number of data in the third environmental point cloud data Df exceeds λ, or until the preset number of iterations MK is reached, where λ is a preset threshold on that ratio;
Step S54: the plane equation corresponding to the largest number of inliers is taken as the plane equation of the plane on which the target object rests; in the manipulator coordinate system OrXrYrZr this plane equation is expressed as Aop·x + Bop·y + Cop·z + Dop = 0, where Aop, Bop, Cop and Dop are its four coefficients.
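Steps S51-S54 can be sketched as a plain RANSAC loop (a simplified illustration, not the patent's implementation; the parameter names mirror dh, λ and MK):

```python
import numpy as np

def ransac_plane(D_f, d_h=0.01, lam=0.5, M_K=100, seed=0):
    """Fit the supporting plane of D_f (N x 3) per steps S51-S54.

    Returns (A, B, C, D) with A*x + B*y + C*z + D = 0, where (A, B, C)
    is a unit normal."""
    rng = np.random.default_rng(seed)
    n_pts = D_f.shape[0]
    best_coeffs, best_inliers = None, -1
    for _ in range(M_K):
        p1, p2, p3 = D_f[rng.choice(n_pts, 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                    # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p1
        dist = np.abs(D_f @ normal + d)    # point-to-plane distances
        inliers = int((dist <= d_h).sum()) - 3   # exclude the sample itself
        if inliers > best_inliers:
            best_inliers, best_coeffs = inliers, (*normal, d)
        if inliers / n_pts > lam:          # early-exit criterion of step S53
            break
    return best_coeffs
```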
Step S6: using the plane equation of the plane on which the target object rests, the data corresponding to that plane and to the points below it are removed from the third environmental point cloud data Df, yielding the fourth environmental point cloud data Do; Euclidean clustering is then applied to obtain point cloud clusters and the position and size information of the obstacle corresponding to each cluster. This specifically includes steps S61-S62:
Step S61: using the plane equation of the plane on which the target object rests, the data corresponding to that plane and to the points below it are removed from the third environmental point cloud data Df, yielding the fourth environmental point cloud data Do.
Step S62: based on the fourth environmental point cloud data Do, the Euclidean clustering algorithm is used to form point cloud clusters and to obtain the position and size information of the obstacle corresponding to each cluster. This specifically includes steps S621-S626:
Step S621: for the fourth environmental point cloud data Do, a cluster set C and a queue Q are created, both initially empty, and every data item in Do is marked as unprocessed;
Step S622: each point pi corresponding to a data item in the fourth environmental point cloud data Do is examined (i = 0, 1, …, Nd−1, where Nd is the total number of data in Do; the coordinates of point pi are its corresponding data item in Do). If the data item corresponding to point pi is already in the processed state, examination proceeds to the next point; when all data in Do are in the processed state, the procedure jumps to step S626. If the data item corresponding to point pi is in the unprocessed state, step S623 is executed;
Step S623: the queue Q is emptied, the point pi is placed in Q, and its corresponding data item in the fourth environmental point cloud data Do is marked as processed;
Step S624: a point, denoted q, is read from the queue Q, and the distance between q and every point corresponding to an unprocessed data item in the fourth environmental point cloud data Do is computed; whenever the distance does not exceed dth, the corresponding point is added to Q and its data item in Do is marked as processed, where dth is a preset threshold. This step is repeated until all points in Q have been traversed and no new point joins Q;
Step S625: if the number of points in the queue Q is greater than Nmin, all points in Q are recorded as one point cloud cluster and placed in the cluster set C, where Nmin is a preset threshold; the procedure then returns to step S622;
Step S626: the number of point cloud clusters in the cluster set C, denoted Nc, is obtained; each cluster corresponds to one obstacle. If C is empty, i.e., Nc = 0, no cluster was produced. When C is not empty, the position and size information of the obstacle corresponding to each cluster is obtained: the average of the coordinates of all points of the obstacle's cluster is recorded as the obstacle's position, and the size is obtained from the values of all points of the cluster along the Xr, Yr and Zr axes — the difference between the maximum and minimum values along the Xr axis is recorded as the obstacle's length along the Xr axis, the corresponding difference along the Yr axis as its length along the Yr axis, and the corresponding difference along the Zr axis as its height along the Zr axis.
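Steps S621-S626 describe a queue-based (breadth-first) Euclidean clustering. A compact sketch, under the assumption that the cloud fits in memory as an N×3 array (in practice a KD-tree would accelerate the neighbor queries):

```python
import numpy as np
from collections import deque

def euclidean_cluster(D_o, d_th=0.02, N_min=2):
    """Cluster D_o (N x 3) per steps S621-S626.

    Returns a list of index arrays, one per cluster whose size
    exceeds N_min; each cluster corresponds to one obstacle."""
    n = D_o.shape[0]
    processed = np.zeros(n, dtype=bool)
    clusters = []
    for i in range(n):
        if processed[i]:
            continue
        processed[i] = True
        queue, members = deque([i]), [i]
        while queue:                      # grow the cluster around seed i
            q = queue.popleft()
            d = np.linalg.norm(D_o - D_o[q], axis=1)
            near = np.where(~processed & (d <= d_th))[0]
            processed[near] = True        # mark before enqueueing: no repeats
            queue.extend(near)
            members.extend(near)
        if len(members) > N_min:
            clusters.append(np.array(members))
    return clusters
```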
Step S7: using the plane equation of the plane on which the target object rests, the data corresponding to that plane and to the points below it are removed from the second point cloud data Dt of the target object, yielding the third point cloud data Dtr of the target object. The average of all data in Dtr is recorded as the target object's position, and the size is obtained from the values of all data in Dtr along the Xr, Yr and Zr axes — the difference between the maximum and minimum values along the Xr axis is recorded as the target object's length along the Xr axis, the corresponding difference along the Yr axis as its length along the Yr axis, and the corresponding difference along the Zr axis as its height along the Zr axis;
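The position and size extraction used in steps S626 and S7 reduces to a per-axis mean and max-minus-min extent; a minimal sketch:

```python
import numpy as np

def position_and_size(points):
    """Position = mean of all points; size = per-axis max - min extent
    (length along X_r, length along Y_r, height along Z_r), as in
    steps S626 and S7. points: N x 3 array."""
    position = points.mean(axis=0)
    size = points.max(axis=0) - points.min(axis=0)
    return position, size
```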
Step S8: based on the position and size information of the target object and of each obstacle, the obstacles that prevent the target object from being grasped directly are moved first, after which the target object is grasped. This specifically includes steps S81-S82:
Step S81: based on the position and size information of the target object and of each obstacle, the MoveIt! package is used to determine whether the manipulator of the service robot can grasp the target object directly. If it can, the manipulator grasps the target object directly according to the planning result of the MoveIt! package; otherwise, step S82 is executed. In all cases the end-effector grasps vertically downward along the Zr axis;
Step S82: the obstacles that prevent the target object from being grasped directly are moved, after which the target object is grasped. This specifically includes steps S821-S829:
Step S821: in the third point cloud data Dtr of the target object, all data whose Zr-axis values lie within [zmax − zh, zmax] are selected as the point cloud data Dgr of the target object's grasping region, where zmax is the maximum Zr-axis value among the data in Dtr and zh is a preset threshold;
Step S822: for each obstacle, the minimum of the distances between the points of the point set corresponding to the grasping-region point cloud data Dgr and the points of the obstacle's cluster is computed; this minimum is recorded as the distance from the obstacle to the target object, and this distance together with the obstacle's ID number forms an entry of the obstacle influence sequence;
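The obstacle-to-target distance of step S822 is the minimum pairwise distance between two point sets; a brute-force sketch (quadratic in the number of points, acceptable only for small clusters):

```python
import numpy as np

def obstacle_to_target_distance(D_gr, cluster):
    """Minimum pairwise distance between the grasping-region points D_gr
    (N x 3) and an obstacle's cluster points (M x 3), as in step S822."""
    diff = D_gr[:, None, :] - cluster[None, :, :]     # N x M x 3 differences
    return float(np.sqrt((diff ** 2).sum(axis=2)).min())
```

Sorting the obstacles by this distance yields the obstacle influence sequence used in step S824.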
Step S823: the workspace Srm of the manipulator in the XrYr plane is divided into grid cells of size cx×cx, giving the grid map of the workspace, where cx is the preset grid cell size, chosen such that cx divides xmax − xmin and cx divides ymax − ymin ((xmin, ymin) and (xmax, ymax) being the lower-left and upper-right corner coordinates of the workspace Srm from step S4). All points corresponding to the third point cloud data Dtr of the target object, as well as the cluster corresponding to each obstacle, are projected onto the XrYr plane of the manipulator coordinate system OrXrYrZr, and the corresponding circumscribed rectangles are obtained;
Step S824: according to the distance to the target object of each obstacle in the obstacle influence sequence, the obstacle with the smallest distance to the target object is selected as the obstacle Obs_mov to be moved this time;
Step S825: every grid cell of the manipulator's workspace Srm in the XrYr plane is assigned a value, generating the attraction potential field grid map corresponding to the obstacle to be moved Obs_mov. The assignment is as follows: all grid cells within the circumscribed rectangle obtained by projecting Obs_mov onto the XrYr plane are assigned the height of the obstacle along the Zr axis; based on this circumscribed rectangle, expansion proceeds outward in units of one grid cell, each expansion simultaneously extending one cell in the positive Xr, positive Yr, negative Xr and negative Yr directions, producing a series of expanded rectangles until the outermost expanded rectangle contains all grid cells of the workspace Srm, wherein each expanded rectangle is characterized by the number of expansions performed when it was generated, and the value Fat of each grid cell Gat outside the projected circumscribed rectangle and within Srm is assigned according to formula (4):
Fat = nat · ka (4)
where ka is the preset attraction-field coefficient and nat is the number of expansions corresponding to the smallest-area expanded rectangle containing grid cell Gat, nat = 1, 2, 3, ….
To describe the method of generating the attraction potential field grid map more clearly, consider an example in which the workspace Srm of the manipulator in the XrYr plane consists of 8×6 grid cells and the circumscribed rectangle of the projection of the obstacle to be moved on the XrYr plane covers 3×2 cells. FIGS. 2a-2e illustrate the method. FIG. 2a shows the workspace Srm, drawn as 8×6 dashed grid cells. In FIG. 2b, the rectangle of 3×2 solid cells is the circumscribed rectangle of the projection of Obs_mov on the XrYr plane; each cell inside it is assigned the obstacle's height along the Zr axis, 0.1 m in this example. In FIG. 2c, the rectangle of 5×4 solid cells is the expanded rectangle obtained after the first expansion, characterized by the expansion count 1; the cells added in this expansion are assigned 1×ka. In FIG. 2d, the rectangle of 7×6 solid cells is the expanded rectangle obtained after the second expansion, characterized by the expansion count 2; part of the cells added in this expansion already lies outside the workspace Srm, but those cells are of no concern, and only the cells inside Srm are assigned 2×ka. In FIG. 2e, the rectangle of 9×8 solid cells is the expanded rectangle obtained after the third expansion, characterized by the expansion count 3; again only the cells inside Srm are assigned, with the value 3×ka. It can be seen that after three expansions the outermost expanded rectangle contains all grid cells of the workspace Srm; all assigned cells in FIG. 2e together constitute the attraction potential field grid map corresponding to the obstacle to be moved in this example.
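The worked example of FIGS. 2a-2e can be reproduced with a few lines of NumPy, noting that the expansion ring in which a cell is first reached equals its Chebyshev distance to the projected rectangle (the function name and grid-indexing convention are illustrative):

```python
import numpy as np

def attraction_field(n_cols, n_rows, rect, height, k_a):
    """Attraction potential field grid map of step S825 / FIGS. 2a-2e.

    rect = (c0, r0, c1, r1): half-open column/row span of the projected
    circumscribed rectangle of O_bs_mov in the n_rows x n_cols workspace.
    Cells inside the rectangle get the obstacle height; a cell first
    reached in the n-th expansion gets n * k_a (formula (4))."""
    c0, r0, c1, r1 = rect
    rows, cols = np.indices((n_rows, n_cols))
    # expansion count = Chebyshev distance to the rectangle (0 inside it)
    ring = np.maximum.reduce([r0 - rows, rows - (r1 - 1),
                              c0 - cols, cols - (c1 - 1),
                              np.zeros_like(rows)])
    field = ring * k_a
    field[ring == 0] = height
    return field
```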
Step S826: for each other object (including the obstacles other than the obstacle to be moved Obs_mov, as well as the target object), every grid cell of the manipulator's workspace Srm in the XrYr plane is reassigned, generating the repulsive potential field grid map corresponding to that object. The assignment is as follows: all grid cells within the circumscribed rectangle obtained by projecting the object onto the XrYr plane are assigned the height of the object along the Zr axis; based on this circumscribed rectangle, expansion proceeds outward in units of one grid cell, each expansion simultaneously extending one cell in the positive Xr, positive Yr, negative Xr and negative Yr directions, producing a series of expanded rectangles until the outermost expanded rectangle contains all grid cells of the workspace Srm, wherein each expanded rectangle is characterized by the number of expansions performed when it was generated, and the value Fre of each grid cell Gre outside the projected circumscribed rectangle and within Srm is assigned according to formula (5):
where kr is the preset repulsion-field coefficient, Oh is the height of the object, nre is the number of expansions corresponding to the smallest-area expanded rectangle containing grid cell Gre, nre = 1, 2, 3, …, μ is the mean of a given Gaussian distribution, and σ is the standard deviation of the given Gaussian distribution;
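Formula (5) itself appears only as an image in the source and is not reproduced above. Purely as an illustration, a Gaussian-decay form consistent with the listed variables kr, Oh, nre, μ and σ — an assumption, not the patent's exact expression — would be Fre = kr · Oh · exp(−(nre − μ)² / (2σ²)):

```python
import numpy as np

def repulsion_value(n_re, O_h, k_r=0.1, mu=0.0, sigma=10.0):
    """Assumed Gaussian-decay repulsion for a cell first reached in the
    n_re-th expansion ring. Formula (5) is not reproduced in the source,
    so this exact expression is a guess consistent with its variables."""
    return k_r * O_h * np.exp(-(n_re - mu) ** 2 / (2 * sigma ** 2))
```

Any such form has the qualitative behavior the surrounding text implies: repulsion is strongest near the object (proportional to its height) and decays with the expansion count.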
Step S827: the attraction potential field grid map corresponding to the obstacle to be moved Obs_mov and the repulsive potential field grid maps generated for the other obstacles and the target object are superposed (i.e., the values of the same grid cell in these maps are added), generating the final potential field grid map. The position corresponding to the grid cell with the smallest value in this map is selected as the placement position for the obstacle Obs_mov; if more than one cell has the smallest value, the cell closest to the center of the circumscribed rectangle of the projection of Obs_mov on the XrYr plane is selected as the placement position;
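The superposition and placement selection of step S827 can be sketched as a cell-wise sum followed by an argmin with a distance-based tie-break (the function name and the (row, col) convention are illustrative):

```python
import numpy as np

def choose_placement(attract, repulse_maps, rect_center):
    """Step S827: superpose the attraction map and all repulsion maps
    (cell-wise sum), then pick the minimum-value cell; ties are broken
    by distance to rect_center = (row, col) of the projected rectangle
    of O_bs_mov. Returns the (row, col) of the chosen cell."""
    total = attract + sum(repulse_maps)
    rows, cols = np.where(total == total.min())   # all minimum-value cells
    d2 = (rows - rect_center[0]) ** 2 + (cols - rect_center[1]) ** 2
    k = int(np.argmin(d2))                        # closest to the rectangle
    return int(rows[k]), int(cols[k])
```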
Step S828: based on the current position and size information of the obstacle to be moved Obs_mov, its placement position, and the position and size information of the other obstacles and the target object, the MoveIt! package plans the manipulator's motion and the obstacle Obs_mov is moved (from this point on, the "placement position" mentioned above is referred to as the obstacle's "new position", and Obs_mov changes from "obstacle to be moved" to "moved obstacle"). The obstacle Obs_mov is removed from the obstacle influence sequence, and the position of the center of the circumscribed rectangle of its projection on the XrYr plane is updated to the obstacle's new position;
Step S829: based on the position and size information of the target object, the new position and size information of the moved obstacle Obs_mov, and the current position and size information of the other obstacles, the MoveIt! package determines whether the target object can be grasped directly. If it cannot, the procedure returns to step S824; otherwise, the MoveIt! package plans the manipulator's motion and the manipulator grasps the target object.
In a specific embodiment, the Kinect sensor is mounted tilted downward at an angle of 45°, with ng = 15, xmin = 0.1 m, xmax = 0.5 m, ymin = −0.25 m, ymax = 0.25 m, dh = 0.01 m, λ = 0.5, MK = 100, dth = 0.02 m, Nmin = 80, zh = 0.05 m, cx = 0.01 m, ka = 0.015, kr = 0.1, μ = 0, and σ = 10.
With the grasping method of the present invention, when an obstacle prevents the target object from being grasped directly, the obstacle can first be moved away and the target object then grasped, providing technical support for grasping tasks performed by service robots in home, office, medical-care and similar environments and achieving good technical results.
Although the steps of the above embodiment are described in the above order, those skilled in the art will understand that, to achieve the effect of this embodiment, the steps need not be executed in that order; they may be executed simultaneously (in parallel) or in reversed order, and such simple variations fall within the protection scope of the present invention.
Those skilled in the art should appreciate that the method steps of the examples described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of electronic hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are executed in electronic hardware or in software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered to go beyond the scope of the present invention.
The technical solutions of the present invention have thus been described with reference to the preferred embodiments shown in the accompanying drawings. However, those skilled in the art will readily understand that the protection scope of the present invention is obviously not limited to these specific embodiments. Without departing from the principles of the present invention, those skilled in the art may make equivalent changes or substitutions to the relevant technical features, and the technical solutions resulting from such changes or substitutions will also fall within the protection scope of the present invention.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810841533.0A CN108858199B (en) | 2018-07-27 | 2018-07-27 | Method for grabbing target object by service robot based on vision |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN108858199A (en) | 2018-11-23 |
| CN108858199B (en) | 2020-04-07 |
Family
ID=64305734
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810841533.0A Active CN108858199B (en) | 2018-07-27 | 2018-07-27 | Method for grabbing target object by service robot based on vision |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN108858199B (en) |
Families Citing this family (30)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109767452A (en) * | 2018-12-24 | 2019-05-17 | 深圳市道通智能航空技术有限公司 | A target positioning method and device, and an unmanned aerial vehicle |
| CN109934871A (en) * | 2019-02-18 | 2019-06-25 | 武汉大学 | A system and method for intelligent unmanned aerial vehicle to grab target for high-risk environment |
| CN109978949B (en) * | 2019-03-26 | 2023-04-28 | 南开大学 | A method of crop recognition and three-dimensional coordinate extraction of feature points based on computer vision |
| CN109910011B (en) * | 2019-03-29 | 2024-08-30 | 齐鲁工业大学 | Multi-sensor-based mechanical arm obstacle avoidance method and mechanical arm |
| CN110000783B (en) * | 2019-04-04 | 2021-04-30 | 上海节卡机器人科技有限公司 | Visual grabbing method and device for robot |
| CN110182529B (en) * | 2019-04-11 | 2020-12-18 | 上海快仓智能科技有限公司 | Inbound and outbound control method and handling system for rack array |
| CN110182527B (en) * | 2019-04-11 | 2021-03-23 | 上海快仓智能科技有限公司 | Warehouse-in and warehouse-out control method and conveying system for shelf array |
| CN110116067A (en) * | 2019-05-26 | 2019-08-13 | 天津大学 | A kind of axle automatic spray apparatus and method |
| CN110405730B (en) * | 2019-06-06 | 2022-07-08 | 大连理工大学 | Human-computer interaction mechanical arm teaching system based on RGB-D image |
| CN110298885B (en) * | 2019-06-18 | 2023-06-27 | 仲恺农业工程学院 | A Stereo Vision Recognition Method for Non-smooth Spherical Objects, a Positioning and Clamping Detection Device and Its Application |
| CN110231035B (en) * | 2019-06-27 | 2020-03-20 | 北京克莱明科技有限公司 | Climbing mobile robot path guiding method |
| US11548152B2 (en) | 2019-08-02 | 2023-01-10 | Dextrous Robotics, Inc. | Systems and methods for robotic control under contact |
| CN110568866B (en) * | 2019-08-23 | 2023-01-17 | 成都新西旺自动化科技有限公司 | Three-dimensional curved surface visual guidance alignment system and alignment method |
| CN110455189B (en) * | 2019-08-26 | 2021-04-06 | 广东博智林机器人有限公司 | Visual positioning method for large-size materials and carrying robot |
| CN110706288A (en) * | 2019-10-10 | 2020-01-17 | 上海眼控科技股份有限公司 | Target detection method, device, equipment and readable storage medium |
| CN110744544B (en) * | 2019-10-31 | 2021-03-02 | 昆山市工研院智能制造技术有限公司 | Service robot vision grabbing method and service robot |
| CN110989631B (en) * | 2019-12-30 | 2022-07-12 | 科沃斯机器人股份有限公司 | Self-moving robot control method, device, self-moving robot and storage medium |
| CN113325832B (en) * | 2020-02-28 | 2023-08-11 | 杭州萤石软件有限公司 | Movable robot obstacle avoidance method and movable robot |
| CN111882610B (en) * | 2020-07-15 | 2022-09-20 | 中国科学院自动化研究所 | Method for grabbing target object by service robot based on elliptical cone artificial potential field |
| CN112053398B (en) * | 2020-08-11 | 2021-08-27 | 浙江大华技术股份有限公司 | Object grabbing method and device, computing equipment and storage medium |
| CN112060087B (en) * | 2020-08-28 | 2021-08-03 | 佛山隆深机器人有限公司 | A point cloud collision detection method for robot grasping scene |
| CN112508933B (en) * | 2020-12-21 | 2024-06-14 | 航天东方红卫星有限公司 | A flexible robotic arm obstacle avoidance method based on complex spatial obstacle positioning |
| CN112914727A (en) * | 2021-03-19 | 2021-06-08 | 联仁健康医疗大数据科技股份有限公司 | Non-target obstacle separating method, system, medical robot and storage medium |
| CN113084817B (en) * | 2021-04-15 | 2022-08-19 | 中国科学院自动化研究所 | Object searching and grabbing control method of underwater robot in turbulent flow environment |
| CN112907594B (en) * | 2021-04-19 | 2024-10-22 | 联仁健康医疗大数据科技股份有限公司 | Non-target object auxiliary separation method, system, medical robot and storage medium |
| CN115476348A (en) * | 2021-05-31 | 2022-12-16 | 梅卡曼德(北京)机器人科技有限公司 | Clamp control method and device, electronic equipment and storage medium |
| CN113925742B (en) * | 2021-10-20 | 2022-06-21 | 南通大学 | Control method and control system of target-driven upper limb exoskeleton rehabilitation robot |
| CN114004954B (en) * | 2021-10-29 | 2025-09-16 | 珠海格力电器股份有限公司 | Job cell merging method, device, computer equipment and storage medium |
| WO2023205176A1 (en) | 2022-04-18 | 2023-10-26 | Dextrous Robotics, Inc. | System and/or method for grasping objects |
| CN114862875A (en) * | 2022-05-20 | 2022-08-05 | 中国工商银行股份有限公司 | Method, device and electronic device for determining movement path of robot |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10112303B2 (en) * | 2013-10-25 | 2018-10-30 | Aleksandar Vakanski | Image-based trajectory robot programming planning approach |
| WO2016155787A1 (en) * | 2015-03-31 | 2016-10-06 | Abb Technology Ltd | A method for controlling an industrial robot by touch |
| CN106407947B (en) * | 2016-09-29 | 2019-10-22 | 百度在线网络技术(北京)有限公司 | Target object recognition methods and device for automatic driving vehicle |
| CN107053173A (en) * | 2016-12-29 | 2017-08-18 | 芜湖哈特机器人产业技术研究院有限公司 | The method of robot grasping system and grabbing workpiece |
- 2018-07-27: CN application CN201810841533.0A granted as patent CN108858199B (status: Active)
Also Published As
| Publication number | Publication date |
|---|---|
| CN108858199A (en) | 2018-11-23 |
Similar Documents
| Publication | Title |
|---|---|
| CN108858199B (en) | Method for grabbing target object by service robot based on vision |
| TWI662388B (en) | Obstacle avoidance control system and method for a robot |
| CN112060087B (en) | A point cloud collision detection method for robot grasping scene |
| TWI620627B (en) | Robots and methods for localization of the robots |
| CN115213896A (en) | Object grasping method, system, device and storage medium based on robotic arm |
| JP7632469B2 (en) | Robot control device, robot control method, and program |
| CN108247635B (en) | A method of deep vision robot grasping objects |
| WO2017133453A1 (en) | Method and system for tracking moving body |
| CN111462154A (en) | Target positioning method and device based on depth vision sensor and automatic grabbing robot |
| CN117325170A (en) | Method for grasping hard disk rack by robotic arm guided by depth vision |
| CN108942923A (en) | A kind of mechanical arm crawl control method |
| JP2020512646A (en) | Imaging system for localization and mapping of scenes containing static and dynamic objects |
| WO2022042304A1 (en) | Method and apparatus for identifying scene contour, and computer-readable medium and electronic device |
| JP2019121136A (en) | Information processing apparatus, information processing system and information processing method |
| CN108908334A (en) | A kind of intelligent grabbing system and method based on deep learning |
| CN111383263A (en) | System, method and device for grabbing object by robot |
| CN115157269A (en) | A three-dimensional robotic arm automatic grasping method and device based on segmentation recognition |
| WO2022021156A1 (en) | Method and apparatus for robot to grab three-dimensional object |
| CN115578460A (en) | Robot Grasping Method and System Based on Multimodal Feature Extraction and Dense Prediction |
| CN107009357A (en) | A kind of method that object is captured based on NAO robots |
| CN114683251A (en) | Robot grasping method, device, electronic device and readable storage medium |
| CN113538576A (en) | Grasping method and device based on dual-arm robot and dual-arm robot |
| CN115773759A (en) | Indoor positioning method, device, equipment and storage medium for autonomous mobile robot |
| WO2022259600A1 (en) | Information processing device, information processing system, information processing method, and program |
| TW202231426A (en) | Robot controlling method, motion computing device and robot system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |