
WO1993017375A1 - Attitude control system for robots with redundant degree of freedom - Google Patents

Attitude control system for robots with redundant degree of freedom

Info

Publication number
WO1993017375A1
WO1993017375A1 PCT/JP1993/000132 JP9300132W WO9317375A1 WO 1993017375 A1 WO1993017375 A1 WO 1993017375A1 JP 9300132 W JP9300132 W JP 9300132W WO 9317375 A1 WO9317375 A1 WO 9317375A1
Authority
WO
WIPO (PCT)
Prior art keywords
freedom
robot
degree
data
redundant
Prior art date
Application number
PCT/JP1993/000132
Other languages
French (fr)
Japanese (ja)
Inventor
Hiroshi Sugimura
Original Assignee
Fanuc Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fanuc Ltd. filed Critical Fanuc Ltd.
Publication of WO1993017375A1 publication Critical patent/WO1993017375A1/en


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/1643Programme controls characterised by the control loop redundant control

Definitions

  • The present invention relates to a posture control system for a redundant-degree-of-freedom robot that controls the posture of such a robot, and in particular to a system that uniquely determines and controls the robot's posture, including its elbow angle. Background Art
  • In general, for a robot manipulator with a drive mechanism of six or fewer degrees of freedom, the number of manipulator configurations (the angles of the drive axes) that position the hand at a given point in space is finite. Therefore, when there are obstacles in the working environment, or when working in a confined space, the manipulator's working posture is restricted, which can reduce work efficiency or make the manipulator unusable.
  • As a mechanism for solving this problem, redundant-degree-of-freedom robots with seven or more drive axes have been developed.
  • With such a redundant-degree-of-freedom robot, the number of manipulator configurations that position the hand at a given point in space is infinite, so a method is needed to select and specify one configuration.
  • For example, in the case of a human-arm-type seven-degree-of-freedom robot, the redundant degree of freedom can be constrained by exploiting the mechanism's structure and specifying the elbow angle.
  • From this elbow position and the hand position at that time, the positions (angles) of the seven drive axes can be determined analytically, and the configuration is uniquely determined.
  • However, no method has yet been proposed that automatically calculates and specifies the elbow angle according to the work target and the work environment (spatial constraints), nor can the elbow angle be determined in real time according to the task: it is difficult to describe analytically the constraint conditions that determine the elbow angle, and even where such a description is possible, solving it in real time cannot be expected with current computing power.
  • Thus, although a redundant-degree-of-freedom robot with seven or more drive axes has greater working capability than a robot manipulator with six or fewer degrees of freedom, the redundant degree of freedom could not be exploited effectively, because no control technique existed for constraining it. Disclosure of the Invention
  • The present invention was made in view of these points, and its object is to provide a posture control system for a redundant-degree-of-freedom robot that can determine the robot's configuration in real time according to the work target and the work environment.
  • To solve this problem, the present invention proposes a posture control system for a redundant-degree-of-freedom robot that uniquely determines and controls the robot's posture, in which data on the work object and the obstacle are acquired and input to a neural network, the elbow angle of the robot is obtained by the neural network, and the robot's posture (configuration) is uniquely determined and controlled from the work-object data and the elbow-angle data.
  • Brief Description of the Drawings: Fig. 1 is a flowchart showing the procedure for determining the robot's configuration during actual work.
  • Fig. 2 is a diagram showing the overall configuration of the posture control system for a redundant-degree-of-freedom robot according to the present invention.
  • Fig. 3 is a diagram showing the schematic configuration of the robot.
  • Fig. 4 is an explanatory diagram of the elbow joint.
  • Fig. 5 is an explanatory diagram of the neural network.
  • Fig. 6 is a flowchart showing the procedure of neural network learning. Best Mode for Carrying Out the Invention
  • An embodiment of the present invention will now be described with reference to the drawings.
  • Fig. 2 is a diagram showing the overall configuration of the posture control system for a redundant-degree-of-freedom robot of the present invention.
  • In the figure, a robot 1 is a human-arm-type seven-degree-of-freedom robot, which operates in response to commands from the robot controller 3. An obstacle 41 and a work object 40 are located in the working environment of the robot 1.
  • The camera 201, the light source 202, and the visual sensor control device 2 constitute a vision system.
  • The camera 201 captures images of the obstacle 41 and the work object 40 and sends the imaging data to the visual sensor control device 2.
  • The visual sensor control device 2 is built around a host processor (CPU) 20.
  • The imaging data from the camera 201 are stored temporarily in the image memory 26 via the camera interface 29.
  • The host processor 20 reads out the imaging data stored in the image memory 26 and processes them according to the processing program stored in the ROM 21. The resulting three-dimensional position data of the obstacle 41 and the work object 40 are output from the LAN interface 24 to the robot controller 3.
  • The coprocessor 25 and the image processor 27 are coupled to the host processor 20 via a bus 200 and perform processing such as floating-point arithmetic and gray-scale processing of the imaging data.
  • The RAM 22 and the nonvolatile RAM 23 store various data, including data for arithmetic processing.
  • To control the on/off switching of the light source 202, the host processor 20 outputs a command signal to the light source 202 via the light source interface 28.
  • The robot controller 3 is built around a host processor (CPU) 30.
  • The three-dimensional position data of the obstacle 41 and the work object 40 sent from the visual sensor control device 2 are stored temporarily in the RAM 32 via the LAN interface 37.
  • The host processor 30 reads out the three-dimensional position data stored in the RAM 32 and, according to the processing program stored in the ROM 31, performs neural network learning and elbow-angle estimation by the neural network; the details are described later. The host processor 30 also obtains each joint angle of the robot 1 and outputs command signals to the servo motors (not shown) of the robot 1 via the servo amplifier 33.
  • The coprocessor 35 is coupled to the host processor 30 by a bus 300 and performs floating-point arithmetic and the like.
  • The RAM 32 temporarily stores data for arithmetic processing.
  • The connection weight coefficients used during neural network learning are also stored in the RAM 32.
  • The nonvolatile RAM 36 stores the number of units in the neural network, the neuron function type, and the connection weight coefficients finally obtained by the neural network learning.
  • A teaching operation panel (TP) 38 is connected to the robot controller 3 via the serial interface 34. The operator uses the teaching operation panel 38 to operate the robot 1 manually.
  • Fig. 3 is a diagram showing the schematic configuration of the robot.
  • As described above, the robot 1 is a human-arm-type 7-DOF robot consisting of seven joints 11, 12, 13, 14, 15, 16, 17 and a hand 18. Joints 11, 12, 15, and 17 are rotation axes, and joints 13, 14, and 16 are bending axes.
  • In this robot 1, joint 14 can be regarded as an elbow joint. Next, the elbow joint will be described.
  • Fig. 4 is an explanatory diagram of the elbow joint.
  • As shown in the figure, joint 13 is taken as the shoulder point Os and joint 16 as the wrist point Ow.
  • Even if joint 14 is rotated about the line 100 connecting the shoulder point Os and the wrist point Ow, the position and posture of the hand 18 at the tip of the robot 1 remain unchanged. That is, with the position and posture of the hand 18 fixed, the position of joint 14 can be chosen freely, so joint 14 can be regarded as an elbow joint. The robot 1 therefore owes its redundancy to joint 14, and the position of joint 14 (the elbow point Oe) can be used as the parameter that specifies the redundant degree of freedom.
  • Here, the elbow point Oe is represented by an elbow angle φ.
  • The elbow angle φ is defined as the angle between the plane PL0, which passes through the two points Ow (wrist) and Os (shoulder) and is perpendicular to the X-Y plane of the base coordinate system, and the plane PL1, which passes through the three points Ow, Os, and Oe.
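This geometric definition can be made concrete with a short numerical sketch. The following Python/NumPy code is illustrative only and not part of the patent text; the point names Os, Ow, Oe follow the figure, while the sign convention for φ (and the assumption that line 100 is not vertical, so that PL0 is well defined) are assumptions of the sketch.

```python
import numpy as np

Z_AXIS = np.array([0.0, 0.0, 1.0])  # normal of the base-coordinate X-Y plane

def elbow_angle(o_s, o_w, o_e):
    """Elbow angle phi: the angle between plane PL0 (through Os and Ow,
    perpendicular to the base X-Y plane) and plane PL1 (through Os, Ow, Oe)."""
    axis = o_w - o_s                        # line 100, shoulder to wrist
    n0 = np.cross(axis, Z_AXIS)             # normal of PL0 (axis must not be vertical)
    n1 = np.cross(axis, o_e - o_s)          # normal of PL1
    cos_phi = n0 @ n1 / (np.linalg.norm(n0) * np.linalg.norm(n1))
    phi = np.arccos(np.clip(cos_phi, -1.0, 1.0))
    # assumed sign convention: positive when Oe lies on the +n0 side of PL0
    return phi if np.dot(np.cross(n0, n1), axis) >= 0 else -phi

def elbow_point(o_s, o_w, o_e_ref, phi):
    """Rotate a reference elbow point about line 100 by phi (Rodrigues' formula).
    This rotation leaves Os and Ow -- and hence the hand position and posture -- fixed."""
    k = (o_w - o_s) / np.linalg.norm(o_w - o_s)
    v = o_e_ref - o_s
    v_rot = (v * np.cos(phi) + np.cross(k, v) * np.sin(phi)
             + k * (k @ v) * (1.0 - np.cos(phi)))
    return o_s + v_rot
```

The pair of functions mirrors the two directions used in the text: elbow_point sweeps out the circle of elbow positions permitted by the redundant degree of freedom, and elbow_angle recovers the parameter φ from a given configuration.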
  • Next, a method for determining this elbow angle φ is described.
  • Fig. 5 is an explanatory diagram of the neural network.
  • The neural network is a hierarchical network composed of three layers: an input layer 51, a hidden layer 52, and an output layer 53.
  • In this neural network, the mapping from the obstacle 41 and the work object 40 to the elbow angle φ is learned first. The teacher data for learning are the three-dimensional position data Xs and Xd of the obstacle 41 and the work object 40 and the elbow angle φd of the robot 1.
  • The position data Xs and Xd are obtained from the imaging data of the camera 201 and are input to the input layer 51 of the neural network.
  • Each of Xs and Xd has six degrees of freedom, and the input layer 51 has twelve units 51N corresponding to Xs (Xs1 to Xs6) and Xd (Xd1 to Xd6).
  • The output layer 53 has one unit 53N corresponding to the elbow angle φ.
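As a rough sketch of this architecture, the network below has the twelve input units and single output unit stated in the text; the hidden-layer size and the use of sigmoid hidden units with a linear output unit are assumptions, since the patent does not specify them.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class ElbowNet:
    """Three-layer network: 12 inputs (Xs1..Xs6, Xd1..Xd6) -> hidden -> 1 output (phi)."""

    def __init__(self, n_hidden=20, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (n_hidden, 12))  # input -> hidden weights
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.1, (1, n_hidden))   # hidden -> output weights
        self.b2 = np.zeros(1)

    def forward(self, x):
        """x: 12-vector formed by concatenating Xs and Xd."""
        self.x = np.asarray(x, dtype=float)
        self.h = sigmoid(self.w1 @ self.x + self.b1)    # hidden layer 52
        self.y = self.w2 @ self.h + self.b2             # output unit 53N (angle estimate)
        return self.y[0]
```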
  • The elbow angle φd is the elbow angle of the robot 1 when the operator has operated the robot manually with the teaching operation panel 38 so that it avoids the obstacle 41 and reaches the target work object 40.
  • For learning, the three-dimensional position data Xs and Xd of the obstacle 41 and the work object 40 and the elbow angle φd of the robot 1 form one example pattern, and as many example patterns are acquired as the accuracy required by the robot's task demands.
  • As described above, the learning of the mapping is executed in the robot controller 3 according to the neural network learning program. That is, by the backpropagation method, the connection weight coefficients of the neural network are adjusted in the direction that minimizes the error Σ(φd − φ)² between the elbow angle φd given as teacher data and the elbow angle φ estimated by the neural network.
  • When the learning converges, a mapping (Xs, Xd) → φ has been generated in the neural network, which then outputs an estimate of the elbow angle φ for any input (Xs, Xd).
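A minimal backpropagation loop over the example patterns, doing plain gradient descent on the squared error Σ(φd − φ)² named in the text, might look as follows. It assumes the ElbowNet sketch above; the learning rate and epoch count are arbitrary placeholders.

```python
def train(net, patterns, lr=0.05, epochs=5000):
    """patterns: list of (x, phi_d) pairs, where x is the 12-dim (Xs, Xd)
    vector and phi_d the taught elbow angle."""
    for _ in range(epochs):
        for x, phi_d in patterns:
            phi = net.forward(x)
            err = phi - phi_d                     # d(0.5 * err^2) / d(phi)
            # output-layer gradients
            g_w2 = err * net.h[None, :]
            g_b2 = np.array([err])
            # hidden-layer gradients (sigmoid derivative is h * (1 - h))
            g_a1 = (err * net.w2[0]) * net.h * (1.0 - net.h)
            g_w1 = np.outer(g_a1, net.x)
            g_b1 = g_a1
            # gradient-descent update of the connection weight coefficients
            net.w2 -= lr * g_w2
            net.b2 -= lr * g_b2
            net.w1 -= lr * g_w1
            net.b1 -= lr * g_b1
    return net
```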
  • Once the elbow angle φ is obtained, the angles of joints 11, 12, 13, 15, 16, and 17 are obtained from (Xd, φ) by analytic calculation, and the configuration of the robot 1 is determined.
  • In this configuration the hand 18 of the robot 1 avoids the obstacle 41 and reaches the target work object 40, and the elbow angle φ estimated by the neural network from the example patterns approximates the teacher data φd.
  • Fig. 6 is a flowchart showing the procedure of neural network learning. The numbers following S in the figure are step numbers.
  • [S21] The three-dimensional position data Xs and Xd of the obstacle 41 and the work object 40 are acquired from the visual sensor control device 2.
  • [S22] The robot 1 is operated manually with the teaching operation panel 38, and when the robot 1 has avoided the obstacle 41 and reached the target work object 40, its configuration data (the angle data of each joint) are acquired.
  • [S23] The elbow angle φd is calculated from this configuration.
  • [S24] Example patterns, each consisting of the position data Xs and Xd and the elbow angle φd, are acquired repeatedly.
  • [S25] Neural network learning is performed by the backpropagation method.
  • [S26] The learning converges, generating the mapping (Xs, Xd) → φ in the neural network.
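Steps S21 to S24 amount to a data-collection loop, sketched below under assumptions: get_positions, teach_to_target, and shoulder_elbow_wrist are hypothetical helper names standing in for the vision query, the manual teaching session, and the robot's forward kinematics, and elbow_angle is the geometry sketch given earlier.

```python
def collect_patterns(vision, robot, n_patterns):
    """Gather example patterns (x, phi_d) for neural network learning."""
    patterns = []
    for _ in range(n_patterns):
        x_s, x_d = vision.get_positions()              # [S21] obstacle / object data
        joints = robot.teach_to_target()               # [S22] operator teaches a good pose
        o_s, o_e, o_w = robot.shoulder_elbow_wrist(joints)  # positions of joints 13, 14, 16
        phi_d = elbow_angle(o_s, o_w, o_e)             # [S23] elbow angle of the taught pose
        patterns.append((np.concatenate([x_s, x_d]), phi_d))  # [S24] one example pattern
    return patterns                                    # then train(net, patterns)  [S25-S26]
```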
  • Fig. 1 is a flowchart showing the procedure for determining the robot's configuration during actual work. This flowchart is executed after the mapping (Xs, Xd) → φ has been generated in the neural network by learning.
  • [S1] When the robot 1 is to perform actual work, the three-dimensional position data Xs and Xd of the obstacle 41 and the work object 40, obtained from the imaging data of the camera 201, are first acquired from the visual sensor control device 2.
  • [S2] The elbow angle φ corresponding to the position data Xs and Xd is estimated from the mapping generated in the neural network.
  • [S3] From the three-dimensional position data Xd of the work object and the elbow angle φ, the angles of joints 11, 12, 13, 15, 16, and 17 of the robot 1 are obtained by analytic calculation. The configuration of the robot 1 is thereby uniquely determined.
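At run time, steps S1 to S3 reduce to one network evaluation plus one closed-form calculation, as in this sketch; analytic_joint_angles is a hypothetical placeholder for the closed-form seven-axis inverse kinematics, which the patent invokes but does not spell out.

```python
def determine_configuration(net, vision, robot):
    """One pass of the Fig. 1 procedure: vision -> elbow angle -> joint angles."""
    x_s, x_d = vision.get_positions()                     # [S1] 3-D data of obstacle and object
    phi = net.forward(np.concatenate([x_s, x_d]))         # [S2] elbow angle from learned mapping
    joint_angles = robot.analytic_joint_angles(x_d, phi)  # [S3] remaining joints, closed form
    return joint_angles                                   # configuration is uniquely determined
```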
  • The estimation of the elbow angle φ and the determination of the configuration of the robot 1 can be performed in real time during actual work. Therefore, even for the robot 1, whose redundant degree of freedom formerly made the configuration difficult to determine, a configuration appropriate to the work environment can be determined in real time during actual work. The redundant degree of freedom of the robot 1 can thus be used effectively, and the robot's inherent working capability can be fully exercised.
  • In the above description the number of obstacles in the working environment was one, but the present invention applies equally to a plurality of obstacles if the number of input units of the neural network is changed accordingly.
  • Although the neural network described here is a three-layer hierarchical network, the present invention applies equally to other types of neural networks, such as a hierarchical network with feedback connections or a four-layer hierarchical network.
  • As explained above, in the present invention the elbow angle is estimated from the mapping generated in the neural network, corresponding to the position data of the obstacle and the work object, and the configuration of the redundant-degree-of-freedom robot is uniquely determined using that elbow angle. In that configuration the elbow angle avoids the obstacle while the hand reaches the target work object.
  • The estimation of the elbow angle and the determination of the robot's configuration can be performed in real time during actual work. Therefore, even for redundant-degree-of-freedom robots, whose configurations were formerly difficult to determine, a configuration suited to the work environment can be determined in real time during actual work. The redundant degrees of freedom can thus be used effectively, and the robot's inherent working capability can be fully exercised.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Numerical Control (AREA)
  • Feedback Control In General (AREA)
  • Manipulator (AREA)

Abstract

An attitude control system for robots with a redundant degree of freedom, used to determine the configuration of such a robot in real time in accordance with the work target and the work environment. Data (Xd, Xs) on a work object and an obstacle are obtained from a visual sensor (Step S1), and a neural network determines an estimated value of the elbow angle (φ) on the basis of the mapping it has learned from the input data (Xd, Xs) of the work object and obstacle (Step S2). When the estimated value of the elbow angle (φ) is output, the angle of each joint of the robot with a redundant degree of freedom is determined by analytic calculation on the basis of the estimated elbow angle (φ) and the work-object data (Xd) (Step S3). The estimation of the elbow angle (φ) and the determination of the robot's configuration can be carried out in real time during practical operation. Therefore, even with a robot having a redundant degree of freedom, a configuration properly suited to the work environment can be determined in real time during practical operation.

Description

Posture Control System for a Robot with Redundant Degrees of Freedom

Technical Field

The present invention relates to a posture control system for a redundant-degree-of-freedom robot that controls the posture of such a robot, and in particular to a system that uniquely determines and controls the robot's posture, including its elbow angle.

Background Art

In general, for a robot manipulator with a drive mechanism of six or fewer degrees of freedom, the number of manipulator configurations (the angles of the drive axes) that position the hand at a given point in space is finite. Therefore, when there are obstacles in the working environment, or when working in a confined space, the manipulator's working posture is restricted, which can reduce work efficiency or make the manipulator unusable.

As a mechanism for solving this problem, redundant-degree-of-freedom robots with seven or more drive axes have been developed. With such a robot, the number of manipulator configurations that position the hand at a given point in space is infinite, so a method is needed to select and specify one configuration. For example, in the case of a human-arm-type seven-degree-of-freedom robot, the redundant degree of freedom can be constrained by exploiting the mechanism's structure and specifying the elbow angle. From this elbow position and the hand position at that time, the positions (angles) of the seven drive axes can be determined analytically, and the configuration is uniquely determined.

However, no method has yet been proposed that automatically calculates and specifies the elbow angle according to the work target and the work environment (spatial constraints), nor can the elbow angle be determined in real time according to the task. This is because it is difficult to describe analytically the constraint conditions that determine the elbow angle; and even where such a description is possible, solving it in real time cannot be expected with current computing power.

Thus, although a redundant-degree-of-freedom robot with seven or more drive axes has greater working capability than a robot manipulator with six or fewer degrees of freedom, the redundant degree of freedom could not be exploited effectively, because no control technique existed for constraining it.

Disclosure of the Invention

The present invention was made in view of these points, and its object is to provide a posture control system for a redundant-degree-of-freedom robot that can determine the robot's configuration in real time according to the work target and the work environment.

To solve this problem, the present invention proposes a posture control system for a redundant-degree-of-freedom robot that uniquely determines and controls the robot's posture, in which data on the work object and the obstacle are acquired and input to a neural network, the neural network obtains the elbow angle of the robot, and the robot's posture (configuration) is uniquely determined and controlled from the work-object data and the elbow-angle data.

Data on the work object and the obstacle are acquired and input to the neural network. From the mapping between these input data and the elbow angle, the neural network outputs an estimate of the elbow angle. Once this estimate is output, the angle of each joint of the redundant-degree-of-freedom robot is obtained by analytic calculation from the estimated elbow angle and the work-object data, and the configuration is uniquely determined. In this configuration the elbow angle avoids the obstacle while the hand reaches the target work object; that is, the redundant-degree-of-freedom robot can determine, in real time, a configuration that avoids the obstacle and reaches the target.

Brief Description of the Drawings

Fig. 1 is a flowchart showing the procedure for determining the robot's configuration during actual work;
Fig. 2 is a diagram showing the overall configuration of the posture control system for a redundant-degree-of-freedom robot according to the present invention;
Fig. 3 is a diagram showing the schematic configuration of the robot;
Fig. 4 is an explanatory diagram of the elbow joint;
Fig. 5 is an explanatory diagram of the neural network; and
Fig. 6 is a flowchart showing the procedure of neural network learning.

Best Mode for Carrying Out the Invention

An embodiment of the present invention will now be described with reference to the drawings.
Fig. 2 is a diagram showing the overall configuration of the posture control system for a redundant-degree-of-freedom robot of the present invention. In the figure, a robot 1 is a human-arm-type seven-degree-of-freedom robot, which operates in response to commands from a robot controller 3. An obstacle 41 and a work object 40 are located in the working environment of the robot 1.

A camera 201, a light source 202, and a visual sensor control device 2 constitute a vision system. The camera 201 captures images of the obstacle 41 and the work object 40 and sends the imaging data to the visual sensor control device 2.

The visual sensor control device 2 is built around a host processor (CPU) 20. Imaging data from the camera 201 are stored temporarily in an image memory 26 via a camera interface 29. The host processor 20 reads out the imaging data stored in the image memory 26 and processes them according to a processing program stored in a ROM 21. The resulting three-dimensional position data of the obstacle 41 and the work object 40 are output from a LAN interface 24 to the robot controller 3. A coprocessor 25 and an image processor 27 are coupled to the host processor 20 via a bus 200 and perform processing such as floating-point arithmetic and gray-scale processing of the imaging data. A RAM 22 and a nonvolatile RAM 23 store various data, including data for arithmetic processing. To control the on/off switching of the light source 202, the host processor 20 outputs a command signal to the light source 202 via a light source interface 28.

The robot controller 3 is built around a host processor (CPU) 30. The three-dimensional position data of the obstacle 41 and the work object 40 sent from the visual sensor control device 2 are stored temporarily in a RAM 32 via a LAN interface 37. The host processor 30 reads out the three-dimensional position data stored in the RAM 32 and, according to a processing program stored in a ROM 31, performs neural network learning and elbow-angle estimation by the neural network; the details are described later. The host processor 30 also obtains each joint angle of the robot 1 and outputs command signals to the servo motors (not shown) of the robot 1 via a servo amplifier 33. A coprocessor 35 is coupled to the host processor 30 by a bus 300 and performs floating-point arithmetic and the like.

The RAM 32 temporarily stores data for arithmetic processing; the connection weight coefficients used during neural network learning are also stored in the RAM 32. A nonvolatile RAM 36 stores the number of units in the neural network, the neuron function type, and the connection weight coefficients finally obtained by the neural network learning.

A teaching operation panel (TP) 38 is connected to the robot controller 3 via a serial interface 34. The operator uses the teaching operation panel 38 to operate the robot 1 manually.
Fig. 3 is a diagram showing the schematic configuration of the robot. As described above, the robot 1 is a human-arm-type seven-degree-of-freedom robot consisting of seven joints 11, 12, 13, 14, 15, 16, 17 and a hand 18. Joints 11, 12, 15, and 17 are rotation axes, and joints 13, 14, and 16 are bending axes. In this robot 1, joint 14 can be regarded as an elbow joint, which is described next.

Fig. 4 is an explanatory diagram of the elbow joint. As shown in the figure, joint 13 is taken as the shoulder point Os and joint 16 as the wrist point Ow. Even if joint 14 is rotated about the line 100 connecting the shoulder point Os and the wrist point Ow, the position and posture of the hand 18 at the tip of the robot 1 remain unchanged. That is, with the position and posture of the hand 18 fixed, the position of joint 14 can be chosen freely, so joint 14 can be regarded as an elbow joint. The robot 1 therefore owes its redundancy to joint 14, and the position of joint 14 (the elbow point Oe) can be used as the parameter that specifies the redundant degree of freedom.

Here, the elbow point Oe is represented by an elbow angle φ. The elbow angle φ is defined as the angle between the plane PL0, which passes through the two points Ow (wrist) and Os (shoulder) and is perpendicular to the X-Y plane of the base coordinate system, and the plane PL1, which passes through the three points Ow, Os, and Oe. Next, the method for determining this elbow angle φ is described.
Fig. 5 is an explanatory diagram of the neural network. As shown in the figure, the neural network is a hierarchical network composed of three layers: an input layer 51, a hidden layer 52, and an output layer 53.

In this neural network, the mapping from the obstacle 41 and the work object 40 to the elbow angle φ is learned first. The teacher data for learning are the three-dimensional position data Xs and Xd of the obstacle 41 and the work object 40 and the elbow angle φd of the robot 1. The position data Xs and Xd are obtained from the imaging data of the camera 201 and are input to the input layer 51 of the neural network. Each of Xs and Xd has six degrees of freedom, and the input layer 51 has twelve units 51N corresponding to Xs (Xs1 to Xs6) and Xd (Xd1 to Xd6). The output layer 53 has one unit 53N corresponding to the elbow angle φ.

The elbow angle φd is the elbow angle of the robot 1 when the operator has operated the robot manually with the teaching operation panel 38 so that it avoids the obstacle 41 and reaches the target work object 40.

For learning, the position data Xs and Xd of the obstacle 41 and the work object 40 and the elbow angle φd of the robot 1 form one example pattern, and as many example patterns are acquired as the accuracy required by the robot's task demands.

The learning of the mapping is executed in the robot controller 3 according to the neural network learning program, as described above. That is, by the backpropagation method, the connection weight coefficients of the neural network are adjusted in the direction that minimizes the error Σ(φd − φ)² between the elbow angle φd given as teacher data and the elbow angle φ estimated by the neural network. When the learning converges, a mapping (Xs, Xd) → φ has been generated in the neural network, which then outputs an estimate of the elbow angle φ for any input (Xs, Xd).

Once the elbow angle φ is obtained, the angles of joints 11, 12, 13, 15, 16, and 17 are obtained from (Xd, φ) by analytic calculation, and the configuration of the robot 1 is determined. In this configuration the hand 18 of the robot 1 avoids the obstacle 41 and reaches the target work object 40, and the elbow angle φ estimated by the neural network from the example patterns approximates the teacher data φd.
Fig. 6 is a flowchart showing the procedure of neural network learning. The numbers following S in the figure are step numbers.

[S21] The three-dimensional position data Xs and Xd of the obstacle 41 and the work object 40 are acquired from the visual sensor control device 2.

[S22] The robot 1 is operated manually with the teaching operation panel 38, and when the robot 1 has avoided the obstacle 41 and reached the target work object 40, its configuration data (the angle data of each joint) are acquired.

[S23] The elbow angle φd is calculated from this configuration.

[S24] Example patterns, each consisting of the position data Xs and Xd and the elbow angle φd, are acquired repeatedly.

[S25] Neural network learning is performed by the backpropagation method.

[S26] The learning converges, generating the mapping (Xs, Xd) → φ in the neural network.

Fig. 1 is a flowchart showing the procedure for determining the robot's configuration during actual work. This flowchart is executed after the mapping (Xs, Xd) → φ has been generated in the neural network by learning.

[S1] When the robot 1 is to perform actual work, the three-dimensional position data Xs and Xd of the obstacle 41 and the work object 40, obtained from the imaging data of the camera 201, are first acquired from the visual sensor control device 2.

[S2] The elbow angle φ corresponding to the position data Xs and Xd is estimated from the mapping generated in the neural network.

[S3] From the three-dimensional position data Xd of the work object and the elbow angle φ, the angles of joints 11, 12, 13, 15, 16, and 17 of the robot 1 are obtained by analytic calculation. The configuration of the robot 1 is thereby uniquely determined.
As described above, the elbow angle φ is estimated from the mapping generated in the neural network, corresponding to the positions of the obstacle 41 and the work object 40, and the configuration of the robot 1 is uniquely determined using that elbow angle. The estimation of the elbow angle φ and the determination of the configuration of the robot 1 can be performed in real time during actual work. Therefore, even for the robot 1, whose redundant degree of freedom formerly made the configuration difficult to determine, a configuration appropriate to the work environment can be determined in real time during actual work; the redundant degree of freedom of the robot 1 can thus be used effectively, and the robot's inherent working capability can be fully exercised.

In the above description the number of obstacles in the working environment was one, but the present invention applies equally to a plurality of obstacles if the number of input units of the neural network is changed accordingly.

Also, although the neural network described here is a three-layer hierarchical network, the present invention applies equally to other types of neural networks, such as a hierarchical network with feedback connections or a four-layer hierarchical network.

As explained above, according to the present invention the elbow angle is estimated from the mapping generated in the neural network, corresponding to the position data of the obstacle and the work object, and the configuration of the redundant-degree-of-freedom robot is uniquely determined using that elbow angle. In that configuration the elbow angle avoids the obstacle while the hand reaches the target work object. The estimation of the elbow angle and the determination of the robot's configuration can be performed in real time during actual work; therefore, even for redundant-degree-of-freedom robots, whose configurations were formerly difficult to determine, a configuration suited to the work environment can be determined in real time during actual work. The redundant degrees of freedom can thus be used effectively, and the robot's inherent working capability can be fully exercised.

Claims

The Scope of the Claims

1. In a posture control system for a redundant-degree-of-freedom robot that uniquely determines and controls the posture of the robot, a posture control system characterized in that:
data on a work object and an obstacle are acquired and input to a neural network;
the elbow angle of the redundant-degree-of-freedom robot is obtained by the neural network; and
the posture of the redundant-degree-of-freedom robot is uniquely determined and controlled from the data on the work object and the data on the elbow angle.

2. The posture control system for a redundant-degree-of-freedom robot according to claim 1, wherein the data on the work object and the obstacle are data obtained by a visual sensor.

3. The posture control system for a redundant-degree-of-freedom robot according to claim 1, wherein a mapping of the elbow angle with respect to the work object and the obstacle has been generated in the neural network by learning from examples.

4. The posture control system for a redundant-degree-of-freedom robot according to claim 3, wherein the learning from examples is performed using, as example patterns, teacher data consisting of the data on the work object and the obstacle obtained by a visual sensor and the elbow-angle data of the redundant-degree-of-freedom robot obtained when the robot is operated manually so that its hand reaches the work object.

5. The posture control system for a redundant-degree-of-freedom robot according to claim 1, wherein the neural network is a hierarchical neural network.
PCT/JP1993/000132 1992-02-25 1993-02-03 Attitude control system for robots with redundant degree of freedom WO1993017375A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP3745492A JPH05233042A (en) 1992-02-25 1992-02-25 Posture control system for robot having redundant degree of freedom
JP4/37454 1992-02-25

Publications (1)

Publication Number Publication Date
WO1993017375A1 true WO1993017375A1 (en) 1993-09-02

Family

ID=12497962

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP1993/000132 WO1993017375A1 (en) 1992-02-25 1993-02-03 Attitude control system for robots with redundant degree of freedom

Country Status (2)

Country Link
JP (1) JPH05233042A (en)
WO (1) WO1993017375A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106945041A (en) * 2017-03-27 2017-07-14 华南理工大学 A kind of repetitive motion planning method for redundant manipulator
CN107490958A (en) * 2017-07-31 2017-12-19 天津大学 A kind of Fuzzy Adaptive Control Scheme of series parallel robot in five degrees of freedom
CN110076770A (en) * 2019-03-28 2019-08-02 陕西理工大学 A kind of autokinesis method for redundant mechanical arm

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5948932B2 (en) * 2012-02-16 2016-07-06 セイコーエプソン株式会社 Robot control apparatus, robot control method, robot control program, and robot system
JP6616170B2 (en) * 2015-12-07 2019-12-04 ファナック株式会社 Machine learning device, laminated core manufacturing apparatus, laminated core manufacturing system, and machine learning method for learning stacking operation of core sheet

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0253582A (en) * 1988-08-19 1990-02-22 Nippon Telegr & Teleph Corp <Ntt> Control method for learning manipulator
JPH02211576A (en) * 1989-02-10 1990-08-22 Nippon Telegr & Teleph Corp <Ntt> Self-organizing device
JPH0349845A (en) * 1989-07-13 1991-03-04 Omron Corp Control device adapted to cutting work
JPH0415704A (en) * 1990-05-02 1992-01-21 Nippon Telegr & Teleph Corp <Ntt> Identifying/simulating method for nonlinear direction control system of small-caliber tunnel robot

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0253582A (en) * 1988-08-19 1990-02-22 Nippon Telegr & Teleph Corp <Ntt> Control method for learning manipulator
JPH02211576A (en) * 1989-02-10 1990-08-22 Nippon Telegr & Teleph Corp <Ntt> Self-organizing device
JPH0349845A (en) * 1989-07-13 1991-03-04 Omron Corp Control device adapted to cutting work
JPH0415704A (en) * 1990-05-02 1992-01-21 Nippon Telegr & Teleph Corp <Ntt> Identifying/simulating method for nonlinear direction control system of small-caliber tunnel robot

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106945041A (en) * 2017-03-27 2017-07-14 华南理工大学 A kind of repetitive motion planning method for redundant manipulator
CN106945041B (en) * 2017-03-27 2019-08-20 华南理工大学 A Redundant Manipulator Repeated Motion Planning Method
US11409263B2 (en) 2017-03-27 2022-08-09 South China University Of Technology Method for programming repeating motion of redundant robotic arm
CN107490958A (en) * 2017-07-31 2017-12-19 天津大学 A kind of Fuzzy Adaptive Control Scheme of series parallel robot in five degrees of freedom
CN110076770A (en) * 2019-03-28 2019-08-02 陕西理工大学 A kind of autokinesis method for redundant mechanical arm

Also Published As

Publication number Publication date
JPH05233042A (en) 1993-09-10

Similar Documents

Publication Publication Date Title
CN108883533B (en) Robot control
JP5114019B2 (en) Method for controlling the trajectory of an effector
JP7339806B2 (en) Control system, robot system and control method
KR950000814B1 (en) Position teaching method and control apparatus for robot
WO1992001539A1 (en) Method of calibrating visual sensor
WO1992009019A1 (en) Method for setting coordinate system of robot
Lippiello et al. A position-based visual impedance control for robot manipulators
JP3349652B2 (en) Offline teaching method
JP2874238B2 (en) Control method of articulated robot
WO1989008878A1 (en) Method of controlling tool attitude of a robot
WO1993017375A1 (en) Attitude control system for robots with redundant degree of freedom
WO2020027106A1 (en) Robot system
JPS6334609A (en) Plural arms device
JPH0693209B2 (en) Robot&#39;s circular interpolation attitude control device
JP2629291B2 (en) Manipulator learning control method
JPH05345291A (en) Working area limitation for robot
JP2703767B2 (en) Robot teaching data creation method
JPH06304893A (en) Calibration system for positioning mechanism
JPH05177563A (en) Control method for master slave manipulator
WO1987000311A1 (en) System for controlling articulated robot
JP2011224745A (en) Robot teaching device and controller for the same, and program
JP3085814B2 (en) Articulated master / slave manipulator
JPH0929673A (en) Manipulator controller
JPH05261691A (en) Redundant manipulator control method
JPS62199383A (en) Robot control method

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): KR US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

122 Ep: pct application non-entry in european phase