
WO1993017375A1 - Attitude control system for robots with a redundant degree of freedom


Info

Publication number: WO1993017375A1
Authority: WO (WIPO, PCT)
Prior art keywords: freedom, robot, degree, data, redundant
Prior art date: 1992-02-25
Application number: PCT/JP1993/000132
Other languages: English (en), Japanese (ja)
Inventor: Hiroshi Sugimura
Original Assignee: Fanuc Ltd.
Priority date: 1992-02-25
Filing date: 1993-02-03
Publication date: 1993-09-02
Application filed by Fanuc Ltd.
Publication of WO1993017375A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1628 Programme controls characterised by the control loop
    • B25J 9/1643 Programme controls characterised by the control loop redundant control

Definitions

  • The present invention relates to an attitude control method for a robot with a redundant degree of freedom, for controlling the posture of such a robot, and in particular to an attitude control method that uniquely determines and controls the posture of the redundant-degree-of-freedom robot by means of an elbow angle.

Background Art
  • In a manipulator with six or fewer degrees of freedom, the number of configurations the manipulator can take when its hand is positioned at a given point in space is finite. Therefore, when obstacles exist in the working environment or when work must be done in a confined space, the working posture of the manipulator is restricted, which may reduce work efficiency or make the manipulator unusable.
  • To address this, robots with a redundant degree of freedom, that is, with seven or more degrees of freedom, have been developed.
  • With such a redundant-degree-of-freedom robot, the number of configurations the manipulator can take when its hand is positioned at a given point in space is infinite. Therefore, a method is needed to select and specify one configuration.
  • In the case of a human-arm-type robot with seven degrees of freedom, the redundant degree of freedom can be constrained by specifying the elbow angle, exploiting the arm's mechanical features. From this elbow-angle setting and the hand position at that time, the positions (angles) of the seven drive axes can be determined analytically, and the configuration is uniquely determined.
  • A redundant-degree-of-freedom robot with seven or more drive axes offers better workability than a robot manipulator with six or fewer degrees of freedom. However, because no control technique existed for constraining the redundant degree of freedom that comes with using the redundancy, the extra degree of freedom could not be used effectively.

Disclosure of the Invention
  • The present invention has been made in view of these points, and an object of the present invention is to provide an attitude control method for a redundant-degree-of-freedom robot that can determine the configuration of the robot in real time according to the work target and the work environment.
  • To this end, an attitude control method is proposed in which a neural network obtains the elbow angle of the redundant-degree-of-freedom robot, and the attitude (configuration) of the robot is uniquely determined and controlled from the data of the work object and the data of the elbow angle.
Brief Description of the Drawings

  • FIG. 1 is a flowchart showing the procedure for determining the configuration of the robot during actual work;
  • FIG. 2 is a diagram showing the overall configuration of a system implementing the attitude control method for a redundant-degree-of-freedom robot according to the present invention;
  • FIG. 3 is a diagram showing the schematic configuration of the robot;
  • FIG. 4 is an explanatory diagram of the elbow joint;
  • FIG. 5 is an explanatory diagram of the neural network; and
  • FIG. 6 is a flowchart showing the procedure of neural network learning.

BEST MODE FOR CARRYING OUT THE INVENTION

  • An embodiment of the present invention will be described below with reference to the drawings.
  • FIG. 2 is a diagram showing the overall configuration of the redundant-degree-of-freedom robot attitude control system of the present invention.
  • Robot 1 is a human-arm-type robot with seven degrees of freedom, which operates in response to commands from the robot controller 3. An obstacle 41 and a work object 40 are located in the working environment of robot 1.
  • The camera 201, the light source 202, and the visual sensor control device 2 constitute a vision system.
  • The camera 201 captures an image of the obstacle 41 and the work object 40, and the imaging data is sent to the visual sensor control device 2.
  • The visual sensor control device 2 is built around a host processor (CPU) 20.
  • Imaging data from the camera 201 is temporarily stored in the image memory 26 via the camera interface 29.
  • The host processor 20 reads out the image data stored in the image memory 26 and processes it according to a processing program stored in the ROM 21. The three-dimensional position information of the obstacle 41 and the work object 40 obtained as a result is output through the LAN interface 24 to the robot controller 3.
  • The coprocessor 25 and the image processor 27 are coupled to the host processor 20 via a bus 200 and perform processing such as floating-point arithmetic and shading of the imaging data.
  • The RAM 22 and the nonvolatile RAM 23 store various data used in the arithmetic processing.
  • The host processor 20 controls the on/off state of the light source 202; a command signal is output to the light source 202 via the light source interface 28.
  • The robot controller 3 is likewise built around a host processor (CPU) 30.
  • The three-dimensional position information of the obstacle 41 and the work object 40 sent from the visual sensor control device 2 is temporarily stored in the RAM 32 via the LAN interface 37.
  • The host processor 30 reads out the three-dimensional position information stored in the RAM 32 and, in accordance with a processing program stored in the ROM 31, performs neural network learning or estimates the elbow angle using the neural network; the details are described later. The host processor 30 also obtains each joint angle of robot 1 and outputs command signals to the servomotors (not shown) of robot 1 via the servo amplifier 33.
  • The coprocessor 35 is connected to the host processor 30 by a bus 300 and performs floating-point operations and the like.
  • The RAM 32 temporarily stores data for arithmetic processing.
  • The coupling weight coefficients used in neural network learning are also stored in the RAM 32.
  • The nonvolatile RAM 36 stores the number of units in the neural network, the type of activation function, and the coupling weight coefficients finally determined by the neural network learning.
  • A teaching operation panel (TP) 38 is connected to the robot controller 3 via the serial interface 34. The operator operates the teaching operation panel 38 to move robot 1 manually.
  • FIG. 3 is a diagram showing the schematic configuration of the robot.
  • Robot 1 is, as described above, a human-arm-type robot with seven degrees of freedom. It consists of seven joints 11, 12, 13, 14, 15, 16, and 17, and a hand 18. Joints 11, 12, 15, and 17 are rotation axes, and joints 13, 14, and 16 are flexion axes.
  • Joint 14 can be regarded as an elbow joint. Next, the elbow joint will be described.
  • FIG. 4 is an explanatory diagram of the elbow joint.
  • When joint 13 is defined as the shoulder point Os and joint 16 is defined as the wrist point Ow, joint 14 can be rotated about the axis 100 connecting the shoulder point Os and the wrist point Ow.
  • Even when joint 14 is rotated in this way, the position and posture of the hand 18 at the tip of robot 1 are unchanged. That is, with the position and posture of the hand 18 fixed, the position of joint 14 can be set freely, and joint 14 can be regarded as an elbow joint. Robot 1 is therefore given its redundancy by joint 14, and the position (elbow point) Oe of joint 14 can be used as the parameter that defines the redundant degree of freedom.
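As a minimal sketch of this parameterization (an added illustration; the Cartesian point values and the Python/NumPy realization are assumptions, not taken from the patent), the elbow point Oe can be swept about the axis 100 through Os and Ow with Rodrigues' rotation formula, which leaves the hand pose unaffected:

```python
import numpy as np

def rotate_elbow(Os, Ow, Oe, angle):
    """Rotate the elbow point Oe by `angle` (rad) about the Os-Ow axis 100."""
    k = (Ow - Os) / np.linalg.norm(Ow - Os)   # unit vector along axis 100
    v = Oe - Os                               # elbow point relative to the shoulder
    c, s = np.cos(angle), np.sin(angle)
    # Rodrigues' formula: v*c + (k x v)*s + k*(k.v)*(1 - c)
    v_rot = v * c + np.cross(k, v) * s + k * np.dot(k, v) * (1.0 - c)
    return Os + v_rot

# Placeholder geometry: the shoulder-to-elbow distance is preserved, so
# every angle yields a valid arm posture with the same hand pose.
Os = np.array([0.0, 0.0, 1.0])
Ow = np.array([0.6, 0.0, 1.0])
Oe = np.array([0.3, 0.0, 1.3])
print(rotate_elbow(Os, Ow, Oe, np.pi / 4))
```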
  • The elbow point Oe is represented by an elbow angle α.
  • The elbow angle α is defined as the angle between the plane PL0, which passes through the two points Ow (wrist) and Os (shoulder) and is perpendicular to the X-Y plane of the base coordinate system, and the plane PL1, which passes through the three points Ow, Os, and the elbow point Oe.
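A sketch of this definition (an added illustration; it assumes the Z axis of the base coordinate system is vertical and that the Os-Ow axis is not itself vertical, since the vertical plane PL0 is degenerate in that case):

```python
import numpy as np

def elbow_angle(Os, Ow, Oe):
    """Angle between plane PL0 (vertical plane through Os and Ow) and
    plane PL1 (through Os, Ow and Oe), compared via their normals."""
    sw = Ow - Os                                   # shoulder-to-wrist axis
    n0 = np.cross(sw, np.array([0.0, 0.0, 1.0]))   # normal of PL0
    n1 = np.cross(sw, Oe - Os)                     # normal of PL1
    cos_a = np.dot(n0, n1) / (np.linalg.norm(n0) * np.linalg.norm(n1))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))
```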
  • Next, a method for determining the elbow angle α will be described.
  • FIG. 5 is an explanatory diagram of the neural network.
  • The neural network is a hierarchical neural network composed of an input layer 51, a middle layer 52, and an output layer 53.
  • The three-dimensional position information Xs of the obstacle 41 and Xd of the work object 40, obtained on the basis of the imaging data of the camera 201, is input to the input layer 51 of the neural network.
  • Each of the three-dimensional position information Xs and Xd has six degrees of freedom, and the input layer 51 is provided with units corresponding to the six components of Xs (Xs1 to Xs6) and the six components of Xd (Xd1 to Xd6).
  • The output layer 53 is provided with one unit 53N corresponding to the elbow angle α.
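The topology can be sketched as follows (the sigmoid units, linear output, and middle-layer size of 10 are assumptions; the 12 inputs and the single output unit follow the text):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_network(n_in=12, n_hid=10, n_out=1, seed=0):
    """Coupling weight matrices: input -> middle layer, middle -> output layer."""
    rng = np.random.default_rng(seed)
    return (rng.normal(0.0, 0.1, (n_hid, n_in)),
            rng.normal(0.0, 0.1, (n_out, n_hid)))

def forward(W1, W2, x):
    """x = (Xs1..Xs6, Xd1..Xd6); returns middle-layer activations and the
    elbow-angle estimate produced by the single output unit 53N."""
    h = sigmoid(W1 @ x)
    return h, (W2 @ h)[0]
```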
  • To collect teaching data, the operator moves robot 1 manually using the teaching operation panel 38 so that robot 1 avoids the obstacle 41 and reaches the target work object 40. The elbow angle of robot 1 at this time is taken as the taught elbow angle αd.
  • The three-dimensional position information Xs and Xd of the obstacle 41 and the work object 40, together with the elbow angle αd of robot 1, are used as an example pattern, and the required number of example patterns is acquired repeatedly, according to the accuracy required of robot 1 in the work.
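Such example patterns might be collected in the following form (a hypothetical layout with placeholder values; only the twelve-input-plus-one-angle structure is taken from the text):

```python
import numpy as np

# Each example pattern pairs the 12 position values (Xs1..Xs6, Xd1..Xd6)
# with the elbow angle alpha_d taught by manual operation for that scene.
patterns = [
    (np.array([0.40, 0.10, 0.80, 0.00, 0.00, 0.00,    # Xs: obstacle 41
               0.70, -0.20, 0.50, 0.00, 1.57, 0.00]), # Xd: work object 40
     0.35),                                           # taught alpha_d (rad)
]
```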
  • The learning of the mapping relation is executed in the robot controller 3 according to the neural network learning program, as described above. That is, the coupling weight coefficients of the neural network are learned by the back-propagation method so as to minimize the error Σ(αd − α)² between the elbow angle αd given as teacher data and the elbow angle α estimated by the neural network.
  • As a result, a mapping (Xs, Xd) → α is generated in the neural network, and an estimated value of the elbow angle α is output in response to the input (Xs, Xd).
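A self-contained sketch of this back-propagation step (learning rate, epoch count, and hidden-layer size are assumptions; the squared error on αd comes from the text):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(patterns, n_hid=10, lr=0.05, epochs=2000, seed=0):
    """patterns: list of (x, alpha_d), x a 12-vector of (Xs, Xd) values."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.1, (n_hid, 12))
    W2 = rng.normal(0.0, 0.1, (1, n_hid))
    for _ in range(epochs):
        for x, alpha_d in patterns:
            h = sigmoid(W1 @ x)                  # middle-layer activations
            alpha = (W2 @ h)[0]                  # estimated elbow angle
            err = alpha - alpha_d                # gradient of 0.5*(alpha - alpha_d)**2
            dh = (W2[0] * err) * h * (1.0 - h)   # delta back-propagated to middle layer
            W2 -= lr * err * h[np.newaxis, :]    # output-layer update
            W1 -= lr * np.outer(dh, x)           # input-layer update
    return W1, W2
```

With the patterns list sketched earlier, `W1, W2 = train(patterns)` would yield weights realizing the mapping (Xs, Xd) → α.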
  • From the work-object data Xd and the estimated elbow angle α, the angles of joints 11 to 17 are obtained by analytical calculation, and the configuration of robot 1 is determined.
  • As a result, the hand 18 of robot 1 reaches the target work object 40 while avoiding the obstacle 41, and the elbow angle α estimated by the neural network from the example patterns takes a value close to the teacher data αd.
  • FIG. 6 is a flowchart showing the procedure of neural network learning. The numbers following S in the figure indicate step numbers.
  • [S1] Robot 1 is operated manually using the teaching operation panel 38, and when robot 1 has avoided the obstacle 41 and reached the target work object 40, the configuration data of robot 1 (the angle data of each joint) is acquired.
  • FIG. 1 is a flowchart showing the procedure for determining the robot configuration during actual work. This flowchart is executed after the neural network learning has generated the mapping relation (Xs, Xd) → α in the neural network. [S1] When robot 1 is made to perform actual work, the three-dimensional position information Xs and Xd of the obstacle 41 and the work object 40, obtained from the imaging data of the camera 201, is acquired from the visual sensor control device 2.
  • [S2] The elbow angle α corresponding to the positions of the obstacle 41 and the work object 40 is estimated from the mapping relation generated in the neural network.
  • [S3] Using the estimated elbow angle α and the work-object data Xd, the configuration of robot 1 is uniquely determined.
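Steps S1 to S3 can be sketched end to end as follows (estimate_alpha wraps the trained network; analytic_joint_angles is a hypothetical stand-in for the robot-specific closed-form calculation, which the text does not spell out):

```python
import numpy as np

def estimate_alpha(W1, W2, Xs, Xd):
    """[S2] Map the position data (Xs, Xd) to an elbow-angle estimate."""
    x = np.concatenate([Xs, Xd])                 # 12 input values, as in [S1]
    h = 1.0 / (1.0 + np.exp(-(W1 @ x)))          # middle-layer activations
    return (W2 @ h)[0]

def determine_configuration(W1, W2, Xs, Xd, analytic_joint_angles):
    """[S3] Fix the elbow angle, then solve joints 11..17 analytically."""
    alpha = estimate_alpha(W1, W2, Xs, Xd)
    return analytic_joint_angles(Xd, alpha)      # angle data for each joint
```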
  • As described above, the estimation of the elbow angle α and the determination of the configuration of robot 1 can be performed in real time during actual work. For this reason, even for robot 1, a redundant-degree-of-freedom robot whose configuration has been difficult to determine in the past, a configuration appropriate to the work environment can be determined in real time at the time of actual work. The redundant degree of freedom of robot 1 can therefore be used effectively, and the working capability originally possessed by robot 1 can be fully exhibited.
  • In the above description, the number of obstacles in the working environment is one; however, the present invention can be applied in the same way to a plurality of obstacles by changing the number of input units of the neural network.
  • In the above description, the neural network is a three-layer hierarchical neural network; however, the present invention can be applied in the same way to other types of neural networks, such as a hierarchical neural network with feedback coupling or a four-layer hierarchical neural network.
  • As described above, according to the present invention, the elbow angle is estimated from the mapping relation generated in the neural network, in correspondence with the position data of the obstacle and the work object, and the configuration of the redundant-degree-of-freedom robot is uniquely determined using that elbow angle. In the determined configuration, the elbow angle is set so that the obstacle is avoided and the hand reaches the target work object.
  • The estimation of the elbow angle and the determination of the configuration of the redundant-degree-of-freedom robot can thus be performed in real time during actual work. For this reason, even for a redundant-degree-of-freedom robot, whose configuration has been difficult to determine in the past, a configuration suited to the work environment can be determined in real time at the time of actual work. The redundant-degree-of-freedom robot can therefore be used effectively, and the working capability inherent in it can be fully exhibited.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Numerical Control (AREA)
  • Feedback Control In General (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to an attitude control system for robots with a redundant degree of freedom, used to determine the configuration of a redundant-degree-of-freedom robot in real time according to a work target and a work environment. Data (Xd, Xs) on a work object and an obstacle are obtained from a visual sensor (step S1), and a neural network determines an estimated value of an elbow angle (α) on the basis of the mapping relation from the input data (Xd, Xs) of the work object and the obstacle (step S2). When the estimated value of the elbow angle (α) is output, the angle of each joint of the redundant-degree-of-freedom robot is determined by analytical calculation on the basis of the estimated elbow angle (α) and the work-object data (Xd) (step S3). The estimation of the elbow angle (α) and the determination of the configuration of the redundant-degree-of-freedom robot can be carried out in real time during actual operation. Thus, even with a robot having a redundant degree of freedom, a configuration exactly suited to the work environment can be determined in real time during actual operation.
PCT/JP1993/000132 1992-02-25 1993-02-03 Attitude control system for robots with a redundant degree of freedom WO1993017375A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP3745492A JPH05233042A (ja) 1992-02-25 1992-02-25 Posture control system for a robot with redundant degrees of freedom
JP4/37454 1992-02-25

Publications (1)

Publication Number Publication Date
WO1993017375A1 (fr) 1993-09-02

Family

ID=12497962

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP1993/000132 WO1993017375A1 (fr) 1993-02-03 Attitude control system for robots with a redundant degree of freedom

Country Status (2)

Country Link
JP (1) JPH05233042A (fr)
WO (1) WO1993017375A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5948932B2 (ja) * 2012-02-16 2016-07-06 Seiko Epson Corporation Robot control device, robot control method, robot control program, and robot system
JP6616170B2 (ja) * 2015-12-07 2019-12-04 Fanuc Corporation Machine learning device that learns the stacking operation of core sheets, laminated core manufacturing apparatus, laminated core manufacturing system, and machine learning method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0253582A (ja) * 1988-08-19 1990-02-22 Nippon Telegr &amp; Teleph Corp &lt;Ntt&gt; Manipulator learning control method
JPH02211576A (ja) * 1989-02-10 1990-08-22 Nippon Telegr &amp; Teleph Corp &lt;Ntt&gt; Self-organizing device
JPH0349845A (ja) * 1989-07-13 1991-03-04 Omron Corp Adaptive control device
JPH0415704A (ja) * 1990-05-02 1992-01-21 Nippon Telegr &amp; Teleph Corp &lt;Ntt&gt; Identification and simulation method for a nonlinear direction control system of a small-bore tunnel robot

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106945041A (zh) * 2017-03-27 2017-07-14 South China University Of Technology Repetitive motion planning method for a redundant manipulator
CN106945041B (zh) * 2017-03-27 2019-08-20 South China University Of Technology Repetitive motion planning method for a redundant manipulator
US11409263B2 2017-03-27 2022-08-09 South China University Of Technology Method for programming repeating motion of redundant robotic arm
CN107490958A (zh) * 2017-07-31 2017-12-19 Tianjin University Fuzzy adaptive control method for a five-degree-of-freedom hybrid robot
CN110076770A (zh) * 2019-03-28 2019-08-02 Shaanxi University of Technology Self-motion method for a redundant manipulator

Also Published As

Publication number Publication date
JPH05233042A (ja) 1993-09-10

Similar Documents

Publication Publication Date Title
CN108883533B (zh) Robot control
JP5114019B2 (ja) Method for controlling the trajectory of an effector
JP7339806B2 (ja) Control system, robot system, and control method
KR950000814B1 (ko) Robot operation instruction method and control device
WO1992001539A1 (fr) Method for calibrating a visual sensor
WO1992009019A1 (fr) Method for selecting the coordinate system of a robot
Lippiello et al. A position-based visual impedance control for robot manipulators
JP3349652B2 (ja) Offline teaching method
JP2874238B2 (ja) Control method for an articulated robot
WO1989008878A1 (fr) Method for controlling the orientation of a tool in a robot
WO1993017375A1 (fr) Attitude control system for robots with a redundant degree of freedom
WO2020027106A1 (fr) Robot system
JPS6334609A (ja) Dual-arm device
JPH0693209B2 (ja) Circular-interpolation attitude control device for a robot
JP2629291B2 (ja) Manipulator learning control method
JPH05345291A (ja) Robot operating-range limitation system
JP2703767B2 (ja) Method for creating robot teaching data
JPH06304893A (ja) Calibration method for a positioning mechanism
JPH05177563A (ja) Control method for a master-slave manipulator
WO1987000311A1 (fr) Articulated robot control system
JP2011224745A (ja) Robot teaching device, controller for the device, and program
JP3085814B2 (ja) Articulated master-slave manipulator
JPH0929673A (ja) Manipulator control device
JPH05261691A (ja) Redundant manipulator control system
JPS62199383A (ja) Robot control system

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): KR US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

122 Ep: pct application non-entry in european phase