WO2019146007A1 - Position control device and position control method - Google Patents
Position control device and position control method
- Publication number
- WO2019146007A1 (PCT/JP2018/002053; JP2018002053W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- unit
- control
- control amount
- value
- learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B23—MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
- B23P—METAL-WORKING NOT OTHERWISE PROVIDED FOR; COMBINED OPERATIONS; UNIVERSAL MACHINE TOOLS
- B23P19/00—Machines for simply fitting together or separating metal parts or objects, or metal and non-metal parts, whether or not involving some deformation; Tools or devices therefor so far as not provided for in other classes
- B23P19/02—Machines for simply fitting together or separating metal parts or objects, or metal and non-metal parts, whether or not involving some deformation; Tools or devices therefor so far as not provided for in other classes for connecting objects by press fit or for detaching same
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/10—Programme-controlled manipulators characterised by positioning means for manipulator elements
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
Definitions
- The present invention relates to a position control device and a position control method.
- When constructing a production system that performs assembly operations with a robot arm, it is common to perform teaching, in which a human operator teaches the positions by hand. However, because the robot then only repeats the operation at the stored positions, it may be unable to cope when errors arise from manufacturing or mounting tolerances. A position correction technology that absorbs such individual errors would therefore improve productivity and broaden the situations in which robots can play an active part.
- In Patent Document 1, position correction is performed immediately before a connector insertion operation using a camera image. If multiple devices such as a force sensor and a stereo camera are used, positional errors related to assembly (insertion, workpiece gripping, etc.) can be absorbed. However, to determine the position correction amount, the center coordinates of the gripped connector and of the receiving connector must be explicitly computed from the image information, as described in that reference. This computation depends on the connector shape and must be set up by the designer for each connector used. It is relatively easy if three-dimensional information can be obtained from a depth camera or the like, but obtaining it from two-dimensional image information requires developing an image-processing algorithm for each connector, which takes time.
- The present invention has been made to solve the above-described problems, and is intended to collect learning data while preventing excessive load on objects, even when the function for learning and the function for position control of the robot are different.
- When alignment of two objects involving insertion is performed, a path determination unit designates the control amount for insertion based on the image acquired from the imaging unit and the value of the force sensor, and learns from the alignment results; and a combining unit outputs a cycle control amount adjustment value based on the cycle control amount set for each control cycle to reach the control amount and on a control amount adapted to the external force based on the value of the force sensor.
- According to the present invention, even if the function for learning and the function for position control of the robot are different, learning data can be collected while preventing excessive load on objects.
- FIG. 1 is a diagram in which a robot arm 100, a male connector 110, and a female connector 120 according to Embodiment 1 are arranged.
- FIG. 2 is a functional configuration diagram of a position control device according to Embodiment 1.
- FIG. 3 is a hardware configuration diagram of a position control device according to Embodiment 1.
- FIG. 4 is a flowchart of position control of the position control device according to Embodiment 1.
- FIG. 5 shows an example of an insertion start position captured by the monocular camera 102 according to Embodiment 1, together with camera images and control amounts near that position.
- FIG. 6 is a diagram showing an example of a neural network according to Embodiment 1 and a learning rule of the neural network.
- FIG. 7 is a flowchart using a plurality of networks in the neural network in Embodiment 1.
- FIG. 8 is a functional configuration diagram of a position control device in Embodiment 2.
- FIG. 9 is a hardware configuration diagram of a position control device in Embodiment 2.
- FIG. 10 is a view showing a trial of fitting of the male connector 110 and the female connector 120 in Embodiment 2.
- FIG. 11 is a flowchart of path learning of the position control device according to Embodiment 2.
- FIG. 12 is a flowchart of path learning in the position control device in Embodiment 3.
- FIG. 13 is a diagram showing an example of a neural network according to Embodiment 3 and a learning rule of the neural network.
- FIG. 14 is a functional configuration diagram of a position control device in Embodiment 4.
- FIG. 15 is a flowchart of path learning of the position control device in Embodiment 4.
- Embodiment 1. Hereinafter, embodiments of the present invention will be described.
- FIG. 1 is a view in which a robot arm 100, a male side connector 110, and a female side connector 120 according to the first embodiment are arranged.
- The robot arm 100 is provided with a gripping portion 101 for gripping the male connector 110, and a monocular camera 102 is attached to the robot arm 100 so that the gripping portion can be seen.
- The monocular camera 102 is installed at a position such that, when the gripping portion 101 at the tip of the robot arm 100 grips the male connector 110, both the tip of the gripped male connector 110 and the female connector 120 on the receiving side are visible.
- FIG. 2 is a functional configuration diagram of the position control device in the first embodiment. In FIG. 2, the device is composed of: an imaging unit 201, a function of the monocular camera 102 in FIG. 1, which captures images; a control parameter generation unit 202, which generates a control amount for the position of the robot arm 100 using the captured image; a control unit 203, which uses that control amount to control the current and voltage values supplied to the drive unit 204 of the robot arm 100; and a drive unit 204, which changes the position of the robot arm 100 based on the current and voltage values output from the control unit 203.
- When an image is acquired from the imaging unit 201, which is a function of the monocular camera 102, the control parameter generation unit 202 determines the control amounts (ΔX, ΔY, ΔZ, ΔAx, ΔAy, ΔAz) for the position values (X, Y, Z, Ax, Ay, Az) of the robot arm 100 and outputs them to the control unit 203.
- The control unit 203 determines and controls the current and voltage values for each device constituting the drive unit 204, based on the received position values (X, Y, Z, Ax, Ay, Az) of the robot arm 100 and the received control amounts (ΔX, ΔY, ΔZ, ΔAx, ΔAy, ΔAz).
- the drive unit 204 moves the robot arm 100 to the position of (X + ⁇ X, Y + ⁇ Y, Z + ⁇ Z, Ax + ⁇ Ax, Ay + ⁇ Ay, Az + ⁇ Az) by operating with the current / voltage value for each device received from the control unit 203.
- FIG. 3 is a hardware block diagram of the position control device in the first embodiment.
- The monocular camera 102 is communicably connected, whether by wire or wirelessly, to the processor 302 and the memory 303 via the input/output interface 301.
- The input/output interface 301, the processor 302, and the memory 303 implement the function of the control parameter generation unit 202 in FIG. 2.
- The input/output interface 301 is likewise communicably connected, whether by wire or wirelessly, to the control circuit 304 corresponding to the control unit 203.
- The control circuit 304 is in turn electrically connected to the motor 305.
- The motor 305 corresponds to the drive unit 204 in FIG. 2 and is the component that controls the position of each device.
- Although a motor 305 is used here as the hardware form corresponding to the drive unit 204, any hardware capable of controlling position may be used. The monocular camera 102 and the input/output interface 301, and the input/output interface 301 and the control circuit 304, may each be provided as separate units.
- FIG. 4 is a flowchart of position control of the position control device according to the first embodiment.
- In step S101, the gripping unit 101 of the robot arm 100 grips the male connector 110.
- The position and orientation of the male connector 110 are registered in advance in the control unit 203 of FIG. 2, and the arm is operated based on a control program registered in advance in the control unit 203.
- In step S102, the robot arm 100 is brought close to the insertion position of the female connector 120.
- The approximate position and orientation of the female connector 120 are registered in advance in the control unit 203 of FIG. 2, and the male connector 110 is moved to that position based on a control program registered in advance in the control unit 203.
- In step S103, the control parameter generation unit 202 instructs the imaging unit 201 of the monocular camera 102 to capture an image, and the monocular camera 102 captures an image in which both the male connector 110 held by the gripping unit 101 and the female connector 120 to be inserted appear.
- In step S104, the control parameter generation unit 202 acquires the image from the imaging unit 201 and determines the control amounts (ΔX, ΔY, ΔZ, ΔAx, ΔAy, ΔAz).
- The control parameter generation unit 202 uses the processor 302 and the memory 303 of FIG. 3 as its hardware and calculates the control amounts (ΔX, ΔY, ΔZ, ΔAx, ΔAy, ΔAz) using a neural network. The calculation method using the neural network is described later.
- In step S105, the control unit 203 acquires the control amounts (ΔX, ΔY, ΔZ, ΔAx, ΔAy, ΔAz) output by the control parameter generation unit 202 and compares every component against a predetermined threshold. If all components of the control amount are equal to or less than the threshold, the process proceeds to step S107, and the control unit 203 controls the drive unit 204 to insert the male connector 110 into the female connector 120. If any component of the control amount is larger than the threshold, then in step S106 the control unit 203 controls the drive unit 204 using the control amounts (ΔX, ΔY, ΔZ, ΔAx, ΔAy, ΔAz) output by the control parameter generation unit 202, and the process returns to step S103.
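- A minimal sketch of this S103-S107 loop, assuming hypothetical interfaces `camera.capture`, `network.predict` (the neural network of step S104), and `robot.move_by`/`robot.insert`; the threshold values are placeholders:

```python
import numpy as np

# Per-axis thresholds for step S105 (placeholder values; units assumed mm and deg).
THRESHOLD = np.array([0.5, 0.5, 0.5, 0.2, 0.2, 0.2])

def alignment_loop(camera, network, robot, max_iters=20):
    """Repeat steps S103-S106 until every control component falls below the threshold."""
    for _ in range(max_iters):
        image = camera.capture()                # S103: image showing both connectors
        delta = network.predict(image)          # S104: (dX, dY, dZ, dAx, dAy, dAz)
        if np.all(np.abs(delta) <= THRESHOLD):  # S105: compare all components
            robot.insert()                      # S107: perform the insertion
            return True
        robot.move_by(delta)                    # S106: apply the correction and retry
    return False
```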
- Next, the method of calculating the control amount using the neural network in step S104 of FIG. 4 will be described.
- For learning, sets of images and the corresponding required movement amounts are collected in advance.
- Starting with the male connector 110 and the female connector 120 in a fitted state whose positions are known, the male connector 110 is gripped by the gripping portion 101 of the robot arm 100. Then, while the gripping unit 101 is moved in the known extraction direction to the insertion start position, the monocular camera 102 acquires a plurality of images.
- FIG. 5 is an example of a diagram showing an insertion start position photographed by the monocular camera 102 according to the first embodiment, together with camera images and control amounts in the vicinity of that position.
- FIG. 6 is a diagram showing an example of a neural network in the first embodiment and a learning rule of the neural network.
- The input layer receives the image obtained from the monocular camera 102 (for example, the luminance and color-difference values of each pixel), and the output layer outputs the control amounts (ΔX, ΔY, ΔZ, ΔAx, ΔAy, ΔAz).
- The parameters of the intermediate layers are optimized so that the output values obtained by passing the input image through the intermediate layers approximate the control amounts stored with the image set.
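- A minimal training sketch of such a network; PyTorch is an assumed framework choice (the patent specifies none), and the architecture and hyperparameters are illustrative only:

```python
import torch
import torch.nn as nn

class ControlNet(nn.Module):
    """Maps a camera image to the six control amounts (dX, dY, dZ, dAx, dAy, dAz)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.head = nn.Linear(32 * 4 * 4, 6)

    def forward(self, x):
        return self.head(self.features(x))

def train(model, loader, epochs=50):
    """Optimize the intermediate layers so outputs approximate the stored control amounts."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for images, deltas in loader:  # pairs collected while retracting from the fitted state
            opt.zero_grad()
            loss_fn(model(images), deltas).backward()
            opt.step()
```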
- In FIG. 5, the male connector 110 is fixed in position with respect to the monocular camera 102, and only the position of the female connector 120 is changed. However, the male connector 110 is not always gripped at the correct position; its position may be shifted by individual differences and the like. Therefore, during learning, control amounts and images are also acquired for insertion start positions and nearby positions in which the male connector 110 deviates from the correct position, so that the learning can cope with individual differences of both the male connector 110 and the female connector 120.
- The control amounts (ΔX, ΔY, ΔZ, ΔAx, ΔAy, ΔAz) stored with each image are calculated excluding the movement from the fitted-state position at the time of shooting to the insertion start position.
- The movement amount from the insertion start position to the fitted-state position must be stored separately, for use in step S107 of FIG. 4.
- Note that if the coordinate system of the robot arm 100 as a whole differs from that of the monocular camera 102, the control unit 203 must convert the control amounts from the camera coordinate system before controlling the robot arm 100.
- Since the monocular camera 102 is fixed to the robot arm 100, the coordinate system in which the female connector 120 is placed differs from the coordinate system of the monocular camera 102. If the monocular camera 102 shares the coordinate system of the female connector 120's position, the conversion from the camera coordinate system to the robot arm coordinate system becomes unnecessary.
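- A sketch of that conversion for a positional correction, assuming the camera-to-robot rotation R is known from hand-eye calibration (for a pure displacement the translation part drops out):

```python
import numpy as np

def camera_to_robot(delta_cam, R):
    """Rotate a displacement (dX, dY, dZ) from the camera frame to the robot-base frame."""
    return R @ np.asarray(delta_cam)

# Example: camera rotated 90 degrees about Z relative to the robot base.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
print(camera_to_robot([2.0, 0.0, 0.0], R))  # -> [0. 2. 0.]
```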
- In step S101, the robot arm 100 grips the male connector 110 according to the operation registered in advance.
- The arm is then moved to a position substantially above the female connector 120.
- However, the position of the male connector 110 immediately before gripping is not always constant; a slight error can occur, for example from small positioning deviations of the machine that places the male connector 110. Similarly, the position of the female connector 120 may also contain some error.
- In step S103, it is important that the image captured by the imaging unit 201 of the monocular camera 102 attached to the robot arm 100 shows both the male connector 110 and the female connector 120. Since the position of the monocular camera 102 with respect to the robot arm 100 is always fixed, this image reflects the relative positional information between the male connector 110 and the female connector 120.
- In step S104, the control amounts (ΔX, ΔY, ΔZ, ΔAx, ΔAy, ΔAz) are calculated by the control parameter generation unit 202, whose neural network, as shown in FIG. 6, has learned this relative position information in advance.
- A single control amount output from the control parameter generation unit 202 may not bring the arm all the way to the insertion start position.
- In that case, the loop of steps S103 to S106 is repeated several times: the control parameter generation unit 202 recalculates the control amount until no component exceeds the threshold of step S105, and the control unit 203 and the drive unit 204 control the position accordingly.
- The threshold in step S105 is determined by the accuracy required for mating the male connector 110 and the female connector 120. For example, if the connector fitting is loose and high accuracy is not needed given the connector's characteristics, the threshold can be set large; in the opposite case it is set smaller. In a manufacturing process, the tolerable manufacturing error is often already defined and can be used as this value.
- A plurality of insertion start positions may also be set. If the insertion start position is set without a sufficient clearance between the male connector 110 and the female connector 120, the connectors may abut before insertion starts, with a risk of breaking one of them.
- In that case, the insertion start position may be set according to the number of loops between step S103 and step S106 in FIG. 4, for example with a clearance between the male connector 110 and the female connector 120 of 5 mm at first, 20 mm the next time, and 10 mm the time after that.
- Although the present embodiment has been described using connectors, the application of this technology is not limited to connector fitting.
- For example, the present invention can be applied to mounting an IC on a substrate, and a similar method is also effective when inserting a component such as a capacitor, whose lead dimensions have large errors, into holes in a substrate.
- The present invention is not necessarily limited to insertion into a substrate; it can be used for general position control in which the control amount is obtained from the relationship between images and control amounts.
- Learning the relationship between images and control amounts with a neural network has the advantage of absorbing individual differences when aligning one object with another.
- As described above, this embodiment comprises: an imaging unit 201 that captures an image in which two objects appear; a control parameter generation unit 202 whose neural network takes the captured image information as its input layer and outputs, as its output layer, a control amount for controlling the positional relationship between the two objects;
- a control unit 203 that, using the output control amount, controls the current or voltage for controlling the positional relationship between the two objects; and a drive unit 204 that moves one of the two objects using that current or voltage.
- FIG. 7 is a flowchart using a plurality of networks in the neural network in the first embodiment; it shows the detailed steps of step S104 in FIG. 4. The control parameter generation unit 202 of FIG. 2 contains a plurality of neural networks.
- In step S701, the control parameter generation unit 202 selects which network to use based on the input image. If this is the first loop iteration, or the obtained control amount is 25 mm or more, neural network 1 is selected and the process proceeds to step S702. If the control amount obtained in a second or later iteration is at least 5 mm and less than 25 mm, neural network 2 is selected and the process proceeds to step S703. If the control amount obtained in a second or later iteration is less than 5 mm, neural network 3 is selected and the process proceeds to step S704. In steps S702 to S704, the control amount is calculated using the selected neural network.
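- A sketch of the step S701 selection logic; the 25 mm and 5 mm boundaries follow the text, while the function and the `networks` mapping are illustrative:

```python
def select_network(networks, loop_count, prev_delta_mm):
    """Step S701: choose a network from the loop count and the last control amount."""
    if loop_count == 1 or prev_delta_mm >= 25.0:
        return networks[1]  # coarse network for large errors
    if prev_delta_mm >= 5.0:
        return networks[2]  # mid-range network (±1 to ±10 mm, ±1 to ±5 deg data)
    return networks[3]      # fine network (±1 mm, ±1 deg data)
```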
- Each neural network is trained according to the distance, i.e. the control amount, between the male connector 110 and the female connector 120: neural network 3 in the figure is trained on data with errors of ±1 mm and ±1 degree,
- while neural network 2 is trained on data in the range of ±1 to ±10 mm and ±1 to ±5 degrees; the data range thus changes stepwise from network to network.
- The number of networks is not particularly limited. When such a scheme is used, the determination function of step S701, which decides which network to use, must be prepared as a "network selection switch".
- The network selection switch can itself be configured as a neural network.
- In that case, the input layer receives the image and the output layer outputs the network number.
- Its training data consists of the image/network-number pairs used across all the networks.
- As in the first embodiment, the application of this technology is not limited to connector fitting.
- For example, it can be applied to mounting an IC on a substrate, and a similar method is also effective when inserting a component such as a capacitor, whose lead dimensions have large errors, into holes in a substrate.
- The example using a plurality of neural networks is likewise not limited to insertion into a substrate, but can be used for general position control in which the control amount is obtained from the relationship between images and control amounts.
- Learning the relationship between images and control amounts with a neural network has the advantage of absorbing individual differences when aligning objects, and allows the control amount to be calculated more accurately.
- As described above, this example comprises: an imaging unit 201 that captures an image in which two objects appear; a control parameter generation unit 202 whose neural network takes the captured image information as its input layer and outputs, as its output layer, a control amount for controlling the positional relationship between the two objects;
- a control unit 203 that, using the output control amount, controls the current or voltage for controlling the positional relationship between the two objects;
- and a drive unit 204 that moves one of the two objects using that current or voltage. Because the control parameter generation unit 202 selects among a plurality of neural networks according to the magnitude of the control amount, alignment can be performed accurately even when the objects have individual differences or their positional relationship contains errors.
- In the first embodiment, the male connector 110 was gripped by the gripping portion 101 of the robot arm 100 with the male connector 110 and the female connector 120 in a fitted state whose positions were known, and the monocular camera 102 acquired a plurality of images while the gripping unit 101 was moved in the known extraction direction to the insertion start position.
- In the second embodiment, the case where the fitting position of the male connector 110 and the female connector 120 is unknown will be described.
- Reinforcement learning has been studied as a prior approach by which a robot learns by itself and acquires appropriate behavior.
- In reinforcement learning, the robot performs various motions by trial and error and optimizes its behavior while memorizing the actions that produced good results, but a large number of trials is required for this optimization.
- A framework called "on-policy" learning is commonly used in reinforcement learning.
- However, reducing the number of trials requires devising various contrivances specialized to the particular robot arm and its control signals, which is difficult, and such methods have not been put to practical use.
- This embodiment explains a form in which the robot, as in the first embodiment, performs various operations by trial and error and stores the behavior that produced good results, while reducing the number of trials needed to optimize the behavior.
- The overall hardware configuration is the same as in FIG. 1 of the first embodiment, except that a force sensor 801 (not shown in FIG. 1) for measuring the load applied to the gripping unit 101 is added to the robot arm 100.
- FIG. 8 shows a functional block diagram of the position control device in the second embodiment.
- Compared with FIG. 2, a force sensor 801 and a path determination unit 802 are added; the path determination unit 802 consists of a Critic unit 803, an Actor unit 804, an evaluation unit 805, and a path setting unit 806.
- FIG. 9 is a hardware block diagram of the position control device in the second embodiment.
- the force sensor 801 is electrically or communicably connected to the input / output interface 301.
- The input/output interface 301, the processor 302, and the memory 303 implement the function of the control parameter generation unit 202 in FIG. 8 and also the function of the path determination unit 802. The force sensor 801, the monocular camera 102 and the input/output interface 301, and the input/output interface 301 and the control circuit 304, may each be provided as separate units.
- The force sensor 801 measures the load applied to the gripping portion 101 of the robot arm 100; for example, it can measure the force generated when the male connector 110 and the female connector 120 in FIG. 1 come into contact.
- The Critic unit 803 and the Actor unit 804 are the same as the Critic and the Actor in conventional reinforcement learning.
- First, the conventional reinforcement learning method will be described.
- Among reinforcement learning methods, a model called the Actor-Critic model is used here (reference: Reinforcement Learning, R. S. Sutton and A. G. Barto, December 2000).
- the Actor unit 804 and the Critic unit 803 acquire the state of the environment through the imaging unit 201 and the force sensor 801.
- the Actor unit 804 is a function that receives the environmental condition I acquired using the sensor device and outputs the control amount A to the robot controller.
- The Critic unit 803 is a mechanism that enables the Actor unit 804 to appropriately learn the output A for the input I so that the fitting succeeds.
- The procedure of the conventional reinforcement learning method is as follows.
- In reinforcement learning, a quantity called the reward R is defined, and the Actor unit 804 learns the action A that maximizes R.
- X, Y, Z indicate position coordinates with the central portion of the robot as the origin
- Ax, Ay, Az indicate the amounts of rotation about the X axis, Y axis, and Z axis, respectively.
- The movement correction amount is the control amount, from the current point, to the fitting start position for the next attempt at fitting the male connector 110.
- the observation of the environmental condition, that is, the trial result is obtained from the image from the imaging unit 201 and the value of the force sensor 801.
- The Critic unit 803 learns a function called the state value function V(I).
- Suppose that when action A(1) is taken in state I(1), the state transitions to I(2) and the reward R(2), the result of the first fitting trial, is obtained. V is then updated with the standard temporal-difference rule

  δ = R(2) + γV(I(2)) − V(I(1))
  V(I(1)) ← V(I(1)) + αδ

  where δ is the prediction error, α is a learning coefficient, and γ is a discount rate.
- σ indicates the standard deviation of the output: in state I, the Actor adds to A(I) a random number drawn from a distribution with mean 0 and variance σ². That is, regardless of the result of the trial, the second movement correction amount is determined randomly.
- The above update formulas are one example; various update formulas exist for the Actor-Critic model, and any generally used variant may be substituted.
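- A compact sketch of this conventional update, with the state value function kept as a simple lookup for illustration (in the embodiment, V and A are realized by neural networks over images and force values, and the coefficient values are assumptions):

```python
import numpy as np

ALPHA, GAMMA, SIGMA = 0.1, 0.95, 1.0  # learning coefficient, discount rate, output std (assumed)

def critic_update(V, s1, s2, reward):
    """TD update: delta = R + gamma*V(I2) - V(I1); V(I1) += alpha*delta."""
    delta = reward + GAMMA * V.get(s2, 0.0) - V.get(s1, 0.0)  # prediction error
    V[s1] = V.get(s1, 0.0) + ALPHA * delta
    return delta

def actor_explore(mean_action):
    """Exploration: add zero-mean Gaussian noise with variance SIGMA^2 to A(I)."""
    return mean_action + np.random.normal(0.0, SIGMA, size=np.shape(mean_action))
```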
- With the above configuration, the Actor unit 804 learns the appropriate action in each state.
- When this learning is completed, operation proceeds as in the first embodiment.
- the evaluation unit 805 generates a function that performs evaluation at each fitting trial.
- FIG. 10 shows a trial of fitting the male connector 110 and the female connector 120 in the second embodiment. Suppose, for example, that an image like FIG. 10(A) is obtained as the result of a trial: the fitting position of the connector is largely misaligned and the trial fails. In that case, how close the trial came to success is measured and quantified to obtain an evaluation value indicating the degree of success.
- As a quantification method, for example, as shown in FIG. 10(B), the surface area (number of pixels) of the insertion-side connector in the image can be calculated.
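- One way such a pixel count could be computed, sketched with a simple color-threshold segmentation (the patent does not specify the segmentation method; the color bounds are assumptions):

```python
import numpy as np

def evaluation_value(image_rgb, lower, upper):
    """Count connector-surface pixels visible on the insertion side.

    The deeper the insertion, the fewer surface pixels remain visible, so a
    smaller count indicates a trial closer to success. `lower`/`upper` are
    assumed per-channel color bounds segmenting the connector surface.
    """
    mask = np.all((image_rgb >= lower) & (image_rgb <= upper), axis=-1)
    return int(mask.sum())  # evaluation value E
```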
- The processing of the path setting unit 806 is divided into two steps.
- First, the evaluation result produced by the evaluation unit 805 and the motion the robot actually performed are learned.
- Specifically, the path setting unit 806 prepares and fits a function that takes the action A as input and outputs the evaluation value E.
- An RBF (Radial Basis Function) network, for example, is used for this approximation.
- RBF networks are known to approximate various unknown functions easily. For the k-th input A_k, the output is approximated as a weighted sum of Gaussian basis functions

  f(A) = Σ_j w_j exp(−‖A − μ_j‖² / (2σ²))

  where σ is the standard deviation and μ_j is the center of each RBF.
- The minimum of the fitted RBF network is then found by a general optimization method such as steepest descent or PSO (Particle Swarm Optimization).
- This minimizing value is given to the Actor unit 804 as the next recommended value.
- That is, the surface areas (two-dimensional pixel counts) obtained for the movement correction amounts of failed trials are arranged as a time series by trial number as evaluation values, and the optimal solution is determined from those values.
- More simply, the movement correction amount may be determined by moving at a constant rate in the direction that decreases the two-dimensional pixel count.
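- A sketch of this surrogate-based search, fitting one Gaussian basis per tried action and refining with SciPy's general-purpose minimizer (the text names steepest descent or PSO; the exact formulation is not specified):

```python
import numpy as np
from scipy.optimize import minimize

def fit_rbf(actions, evals, sigma=1.0, ridge=1e-6):
    """Fit an RBF surrogate E ≈ f(A) with one Gaussian basis centered on each tried action."""
    d2 = ((actions[:, None, :] - actions[None, :, :]) ** 2).sum(-1)
    w = np.linalg.solve(np.exp(-d2 / (2 * sigma**2)) + ridge * np.eye(len(actions)), evals)
    return lambda a: np.exp(-((actions - a) ** 2).sum(-1) / (2 * sigma**2)) @ w

def next_recommended(actions, evals):
    """Minimize the surrogate; the minimizer becomes the next recommended value."""
    actions = np.atleast_2d(np.asarray(actions, dtype=float))
    f = fit_rbf(actions, np.asarray(evals, dtype=float))
    x0 = actions[np.argmin(evals)]  # start from the best trial so far
    return minimize(f, x0).x        # gradient-based refinement of the minimum
```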
- FIG. 11 is a flowchart of path learning of the position control device according to the second embodiment.
- In step S1101, the gripping unit 101 of the robot arm 100 grips the male connector 110.
- The position and orientation of the male connector 110 are registered in advance in the control unit 203 of FIG. 8, and the arm is operated based on a control program registered in advance in the control unit 203.
- In step S1102, the robot arm 100 is brought close to the insertion position of the female connector 120.
- The approximate position and orientation of the female connector 120 are registered in advance in the control unit 203 of FIG. 8, and the male connector 110 is moved to that position based on a control program registered in advance in the control unit 203. Up to this point, the procedure is the same as steps S101 to S102 of the flowchart of FIG. 4 in the first embodiment.
- In step S1103, the path determination unit 802 instructs the imaging unit 201 of the monocular camera 102 to capture an image, and the monocular camera 102 captures an image in which both the male connector 110 held by the gripping unit 101 and the female connector 120 to be inserted appear. The path determination unit 802 also instructs the control unit 203 to capture images near the current position: the drive unit 204 moves the arm according to a plurality of movement values given to the control unit 203, and at each such position the monocular camera captures an image in which both the male connector 110 and the female connector 120 appear.
- In step S1104, the Actor unit 804 of the path determination unit 802 gives a control amount for fitting to the control unit 203, the drive unit 204 moves the robot arm 100, and fitting of the male connector 110 into the female connector 120 is attempted.
- In step S1105, while the robot arm 100 is being moved by the drive unit 204 with the connectors in contact, the value of the force sensor 801 and the image from the monocular camera 102 are recorded for each unit of movement, and the evaluation unit 805 and the Critic unit 803 of the path determination unit 802 store them.
- In step S1106, the evaluation unit 805 and the Critic unit 803 check whether the fitting succeeded. Usually the fitting does not succeed at this stage, so in step S1108 the evaluation unit 805 evaluates the degree of success by the method described with reference to FIG. 10 and provides the path setting unit 806 with an evaluation value indicating the degree of success of the alignment. Then, in step S1109, the path setting unit 806 performs learning by the method described above and gives the next recommended value to the Actor unit 804, while the Critic unit 803 outputs a value obtained according to the amount of reward.
- In step S1110, the Actor unit 804 adds the value obtained according to the reward amount output by the Critic unit 803 and the next recommended value output by the path setting unit 806 to obtain the movement correction amount.
- The Actor unit 804 may set the ratio at which the value obtained according to the reward amount output by the Critic unit 803 and the next recommended value output by the path setting unit 806 are added, and this ratio may be changed according to the situation.
- In step S1111, the Actor unit 804 gives the movement correction amount to the control unit 203 to move the gripping unit 101 of the robot arm 100. The process then returns to step S1103: an image is captured at the corrected position and the fitting operation is attempted again. This is repeated until the fitting succeeds. When the fitting succeeds, in step S1107 the Actor unit 804 and the Critic unit 803 are trained on the states I from steps S1102 to S1106 of the successful trial. Finally, the path determination unit 802 supplies the learned neural network data to the control parameter generation unit 202, enabling the operation described in the first embodiment.
- Here, the Actor unit 804 and the Critic unit 803 are trained on I from the successful fitting, but they may instead be trained using the data of all trials from the start of the fitting attempts to success.
- In that case, recalling that Embodiment 1 described forming a plurality of neural networks according to the control amount, once the successful fitting position is known, a plurality of neural networks suited to different control amount magnitudes can be formed simultaneously using the distance to the successful fitting position.
- As before, the application of this technology is not limited to connector fitting.
- For example, it can be applied to mounting an IC on a substrate, and a similar method is also effective when inserting a component such as a capacitor, whose lead dimensions have large errors, into holes in a substrate.
- It is not limited to insertion into a substrate, but can be used for general position control in which the control amount is obtained from the relationship between images and control amounts.
- Learning the relationship between images and control amounts with a neural network has the advantage of absorbing individual differences when aligning objects, and allows the control amount to be calculated more accurately.
- As described above, the Actor unit 804 obtains the movement correction amount for each trial from both the value the Critic unit 803 derives according to the reward amount and the recommended value the path setting unit 806 derives from the evaluation values. Whereas the normal Actor-Critic model requires a large amount of trial and error before alignment succeeds, the invention makes it possible to significantly reduce the number of alignment trials.
- In this embodiment, the number of alignment trials is reduced by evaluating the image from the imaging unit 201 when alignment fails, but the value of the force sensor 801 during a trial can also be used to reduce the number of trials. For example, in alignment involving connector fitting or insertion of two objects, the Actor unit 804 can determine, when the value of the force sensor 801 exceeds a certain threshold during a failed trial, whether the two objects have reached the completely fitted or inserted position. Two situations are conceivable at the moment the threshold is reached: (a) the parts are still in the process of being fitted or inserted, or (b) the parts have been fitted or inserted, and the force sensor 801 then shows a certain value.
- FIG. 12 shows a flowchart in path learning of the position control device in the third embodiment.
- the variable i is the number of times of learning of the robot arm 100
- the variable k is the number of times of learning from when the male connector 110 and the female connector 120 are disengaged
- The variable j is the loop counter in the flowchart of FIG. 12.
- In step S1202, the path setting unit 806 gives, via the Actor unit 804, a movement amount to the control unit 203 so as to back off 1 mm from the movement amount given to perform the fitting in FIG. 11, and the drive unit 204 moves the robot arm 100.
- Then 1 is added to the variable i.
- Here the instruction backs off 1 mm from the movement amount, but it is not necessarily limited to 1 mm; a unit amount such as 0.5 mm or 2 mm may be used.
- In step S1204, the path setting unit 806 randomly determines control amounts (ΔX, ΔY, ΔZ, ΔAx, ΔAy, ΔAz) centered on O(i) and gives them to the control unit 203 via the Actor unit 804, and the drive unit 204 moves the robot arm 100. The maximum magnitude of this control amount can be set arbitrarily within the movable range.
- In step S1205, at the position reached in step S1204, the Actor unit 804 collects the values of the force sensor 801 corresponding to the movement amounts (ΔX, ΔY, ΔZ, ΔAx, ΔAy, ΔAz).
- In step S1206, the Critic unit 803 and the Actor unit 804 record, as one item of learning data, the movement amount multiplied by −1, i.e. (−ΔX, −ΔY, −ΔZ, −ΔAx, −ΔAy, −ΔAz), paired with the values of the force sensor 801 measured while the male connector 110 is held.
- In step S1207, the path setting unit 806 determines whether the number of collected data items has reached the specified number J. If the data are insufficient, 1 is added to the variable j in step S1208 and the process returns to step S1204, where the control amounts (ΔX, ΔY, ΔZ, ΔAx, ΔAy, ΔAz) are changed by random numbers and data are acquired; steps S1204 to S1207 are repeated until the specified number of data items has accumulated. When the specified number of data items has accumulated, the path setting unit 806 sets the variable j to 1 in step S1209 and then, in step S1210, checks whether the male connector 110 and the female connector 120 have disengaged.
- If they have not disengaged, in step S1211 the path setting unit 806 gives a control amount to the control unit 203 via the Actor unit 804 so as to return the robot arm 100 to the coordinates O(i) it had before the control amount was given, and the drive unit 204 moves the robot arm 100. Thereafter, the loop from step S1202 to step S1210 is repeated: the arm is backed off from the fitting movement by 1 mm (or the chosen unit amount) at a time, control amounts centered on each such position are given, and force sensor 801 data are collected, until the fitting between the male connector 110 and the female connector 120 is released. When the male connector 110 and the female connector 120 have disengaged, the process proceeds to step S1212.
- In step S1212, the path setting unit 806 sets the variable i to I (an integer larger than the value of i at the moment the male connector 110 and the female connector 120 were determined to have disengaged), gives a control amount to the control unit 203 via the Actor unit 804 so as to back off, for example, 10 mm (another value may also be used) from the movement given to perform the fitting, and the drive unit 204 moves the arm accordingly.
- In step S1213, the path setting unit 806 stores the coordinates of the robot arm 100 after the movement of step S1212 as the center position O(i+k).
- In step S1214, the path setting unit 806 again randomly determines control amounts (ΔX, ΔY, ΔZ, ΔAx, ΔAy, ΔAz) centered on the center position O(i+k), gives them to the control unit 203 via the Actor unit 804, and the drive unit 204 moves the robot arm 100.
- In step S1215, the Critic unit 803 and the Actor unit 804 obtain the image captured by the imaging unit 201 of the monocular camera 102 at the robot arm 100 position reached by the control amounts (ΔX, ΔY, ΔZ, ΔAx, ΔAy, ΔAz).
- In step S1216, the Critic unit 803 and the Actor unit 804 record the image, paired with the movement amount multiplied by −1, i.e. (−ΔX, −ΔY, −ΔZ, −ΔAx, −ΔAy, −ΔAz), as one item of learning data.
- In step S1217, the path setting unit 806 determines whether the number of collected data items has reached the specified number J. If the data are insufficient, 1 is added to the variable j in step S1218 and the process returns to step S1214, where the control amounts (ΔX, ΔY, ΔZ, ΔAx, ΔAy, ΔAz) are changed by random numbers and data are acquired; steps S1214 to S1217 are repeated until the specified number J of data items has accumulated.
- The maximum value of the control amounts (ΔX, ΔY, ΔZ, ΔAx, ΔAy, ΔAz) in step S1204 and that in step S1214 may be set to different values.
- The Actor unit 804 and the Critic unit 803 are then trained using the learning data acquired by the above method.
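- A sketch of this collection loop (steps S1204-S1207 and S1214-S1217), with all device interfaces illustrative: perturb around a center pose, record the sensor state there, and store the negated perturbation as the corrective label:

```python
import numpy as np

def collect_samples(robot, sensor, center, n_samples, max_delta):
    """Gather (sensor reading, corrective movement) pairs around `center` = O(i).

    `sensor` may be the force sensor 801 (fitted state) or the monocular
    camera 102 (unfitted state); `max_delta` bounds the random control amount.
    """
    samples = []
    for _ in range(n_samples):
        delta = np.random.uniform(-max_delta, max_delta, size=6)  # (dX..dAz)
        robot.move_to(center + delta)
        samples.append((sensor.read(), -delta))  # label: movement back toward center
        robot.move_to(center)                    # return to O(i) for the next draw
    return samples
```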
- The above description assumed that, while the male connector 110 and the female connector 120 are fitted, the robot arm 100 is moved only slightly around the fitting movement for learning, and that these small displacements may produce too little difference in the image of the monocular camera 102, given its pixel resolution, for sufficient learning.
- However, learning may be performed using only the image of the monocular camera 102; and even while the male connector 110 and the female connector 120 are fitted, both the image of the monocular camera 102 and the value of the force sensor 801 may be used.
- The neural network may also distinguish between the state in which the male connector 110 and the female connector 120 are fitted and the state in which they are not.
- By distinguishing between the fitted and the unfitted case, learning can be performed with higher accuracy; even when the input layer is formed from the image alone, accurate learning is possible because the composition of the image differs between the two cases.
- As described above, the path setting unit 806 instructs the control amounts, the value of the force sensor 801 at each moved position serves as the input layer, the moved position serves as the output layer, and the Actor unit 804 acquires the moved positions and the force sensor 801 values, enabling efficient collection of learning data.
- FIG. 14 shows a functional configuration diagram of the position control device in the fourth embodiment.
- The difference from FIG. 8 is that a control parameter adjustment unit 1401 is added; the control parameter adjustment unit 1401 consists of a trajectory generation unit 1402, a coordinate conversion unit 1403, a gravity correction unit 1404, a compliant motion control unit 1405, and a combining unit 1406.
- The force sensor 801 measures the load applied to the gripping portion 101 of the robot arm 100; for example, it can measure the force generated when the male connector 110 and the female connector 120 in FIG. 1 come into contact.
- At the initial stage of learning, the output operations can apply excessive force to the surroundings, which may damage the robot arm 100, the male connector 110, the female connector 120, and the surrounding environment.
- By placing the compliant motion control unit 1405 downstream of the control parameter generation unit 202 and operating according to the external force acquired by the force sensor 801, excessive force on the robot arm 100, the male connector 110, the female connector 120, and the surrounding environment is prevented.
- Given the control period of the robot arm 100 and its maximum velocity, maximum acceleration, and maximum jerk, the trajectory generation unit 1402 calculates the cycle control amount for each control cycle so that none of these limits is exceeded.
- (Non-patent document: KRÖGER, Torsten; PADIAL, José. Simple and robust visual servo control of robot arms using an on-line trajectory generator. 2012 IEEE International Conference on Robotics and Automation (ICRA).)
- The following constants, corresponding to the specifications of the robot arm 100, are given: the maximum velocity, maximum acceleration, and maximum jerk.
- Here x_i, v_i, α_i, and j_i are variables representing the position, velocity, acceleration, and jerk in control cycle i, respectively.
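- A much-simplified, single-axis sketch of such a cycle computation (acceleration ramp only; the cited on-line trajectory generator also handles braking and synchronization across axes):

```python
def cycle_control_amount(dist, v, a, dt, v_max, a_max, j_max):
    """Compute one control cycle's step toward a remaining distance `dist`.

    Acceleration grows within the jerk limit, velocity and acceleration are
    clamped to their maxima, and the step never exceeds the remaining distance.
    """
    a = min(a + j_max * dt, a_max)  # jerk-limited acceleration ramp
    v = min(v + a * dt, v_max)      # integrate to velocity, bounded by v_max
    step = min(v * dt, dist)        # cycle control amount for this period
    return step, v, a
```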
- In step S1502, the robot arm 100 is brought close to the insertion position of the female connector 120.
- The approximate position and orientation of the female connector 120 are registered in advance in the control unit 203 of FIG. 14, and the male connector 110 is moved to that position based on a control program registered in advance in the control unit 203. Up to this point, the procedure is the same as steps S101 to S102 of the flowchart of FIG. 4 in the first embodiment.
Landscapes
- Engineering & Computer Science (AREA)
- Mechanical Engineering (AREA)
- Robotics (AREA)
- Human Computer Interaction (AREA)
- Manipulator (AREA)
- Automatic Assembly (AREA)
Abstract
The present invention comprises: a path determination unit 802 which, when alignment of two objects involving insertion is performed, designates a control amount for the insertion based on an image acquired from an imaging unit 201 and a value from a force sensor 801, and learns from the result of the alignment; and a control parameter adjustment unit 1401 which outputs a cycle control amount adjustment value based on a cycle control amount set for each control cycle so as to reach the control amount, and on a control amount adapted to an external force based on the value of the force sensor 801, a robot arm 100 being operated by a control amount in which trajectory generation control and compliant motion control obtained using the force sensor 801 are added. According to the present invention, a trial can be performed safely even at an initial stage of learning, whereas in a typical reinforcement learning model trial and error is required until learning ends and the environment can be damaged.
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2018/002053 WO2019146007A1 (fr) | 2018-01-24 | 2018-01-24 | Dispositif de commande de position et procédé de commande de position |
| JP2018530627A JP6458912B1 (ja) | 2018-01-24 | 2018-01-24 | 位置制御装置及び位置制御方法 |
| TW107125131A TW201932257A (zh) | 2018-01-24 | 2018-07-20 | 位置控制裝置以及位置控制方法 |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2018/002053 WO2019146007A1 (fr) | 2018-01-24 | 2018-01-24 | Dispositif de commande de position et procédé de commande de position |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019146007A1 true WO2019146007A1 (fr) | 2019-08-01 |
Family
ID=65228992
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2018/002053 Ceased WO2019146007A1 (fr) | 2018-01-24 | 2018-01-24 | Dispositif de commande de position et procédé de commande de position |
Country Status (3)
| Country | Link |
|---|---|
| JP (1) | JP6458912B1 (fr) |
| TW (1) | TW201932257A (fr) |
| WO (1) | WO2019146007A1 (fr) |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2021066794A1 (fr) * | 2019-09-30 | 2021-04-08 | Siemens Aktiengesellschaft | Asservissement visuel activé par apprentissage automatique à accélération matérielle dédiée |
| WO2021170163A1 (fr) * | 2020-02-28 | 2021-09-02 | Rittal Gmbh & Co. Kg | Ensemble pour implanter et câbler des composants électroniques dans la réalisation d'installations de commutation et procédé correspondant |
| WO2022030334A1 (fr) * | 2020-08-03 | 2022-02-10 | キヤノン株式会社 | Dispositif de commande, dispositif de lithographie et procédé de fabrication d'article |
| CN114401829A (zh) * | 2019-10-25 | 2022-04-26 | 聪慧公司 | 机器人配套机器 |
| JP2022065785A (ja) * | 2020-10-16 | 2022-04-28 | セイコーエプソン株式会社 | 力制御パラメーター調整方法 |
| CN115990891A (zh) * | 2023-03-23 | 2023-04-21 | 湖南大学 | 一种基于视觉示教和虚实迁移的机器人强化学习装配的方法 |
| CN116917086A (zh) * | 2021-02-18 | 2023-10-20 | 三菱电机株式会社 | 控制装置、机器人系统、学习装置、控制方法和程序 |
Families Citing this family (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP7239399B2 (ja) * | 2019-06-19 | 2023-03-14 | ファナック株式会社 | 調整支援装置 |
| US20220274256A1 (en) | 2019-08-02 | 2022-09-01 | Dextrous Robotics, Inc. | A robotic system for picking and placing objects from and into a constrained space |
| CN111230469B (zh) * | 2020-03-11 | 2021-05-04 | 苏州科诺机器人有限责任公司 | 一种全自动水接头装配机构及装配方法 |
| TWI766252B (zh) * | 2020-03-18 | 2022-06-01 | 揚明光學股份有限公司 | 光學鏡頭製造系統及應用其之光學鏡頭製造方法 |
| JP7563215B2 (ja) * | 2021-02-10 | 2024-10-08 | オムロン株式会社 | ロボットモデルの学習装置、ロボットモデルの機械学習方法、ロボットモデルの機械学習プログラム、ロボット制御装置、ロボット制御方法、及びロボット制御プログラム |
| CN113140104B (zh) * | 2021-04-14 | 2022-06-21 | 武汉理工大学 | 一种车辆列队跟踪控制方法、装置及计算机可读存储介质 |
| US11845184B2 (en) | 2022-04-18 | 2023-12-19 | Dextrous Robotics, Inc. | System and/or method for grasping objects |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO1998017444A1 (fr) * | 1996-10-24 | 1998-04-30 | Fanuc Ltd | Systeme de robot de commande de forces a capteur optique pour travail d'insertion |
| JP2011230245A (ja) * | 2010-04-28 | 2011-11-17 | Yaskawa Electric Corp | ロボットシステム |
| JP2014054715A (ja) * | 2012-09-13 | 2014-03-27 | Fanuc Ltd | 選択条件に基づいてロボットの保持位置姿勢を決定する物品取出装置 |
| JP2015217486A (ja) * | 2014-05-19 | 2015-12-07 | 富士通株式会社 | 判定装置、判定方法、および判定プログラム |
| JP2016221642A (ja) * | 2015-06-02 | 2016-12-28 | セイコーエプソン株式会社 | ロボット、ロボット制御装置、ロボット制御方法およびロボットシステム |
| JP2016221660A (ja) * | 2015-06-03 | 2016-12-28 | 富士通株式会社 | 判定方法、判定プログラム及び判定装置 |
| WO2017018113A1 (fr) * | 2015-07-29 | 2017-02-02 | 株式会社オートネットワーク技術研究所 | Dispositif de simulation de manipulation d'objet, système de simulation de manipulation d'objet, procédé destiné à la simulation de manipulation d'objet, procédé de fabrication destiné à un objet et programme de simulation de manipulation d'objet |
| JP2017030135A (ja) * | 2015-07-31 | 2017-02-09 | ファナック株式会社 | ワークの取り出し動作を学習する機械学習装置、ロボットシステムおよび機械学習方法 |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP5904635B2 (ja) * | 2012-03-02 | 2016-04-13 | セイコーエプソン株式会社 | 制御装置、制御方法及びロボット装置 |
| JP6248694B2 (ja) * | 2014-02-25 | 2017-12-20 | セイコーエプソン株式会社 | ロボット、ロボットシステム、及び制御装置 |
| WO2018146770A1 (fr) * | 2017-02-09 | 2018-08-16 | 三菱電機株式会社 | Dispositif et procédé de commande de position |
-
2018
- 2018-01-24 WO PCT/JP2018/002053 patent/WO2019146007A1/fr not_active Ceased
- 2018-01-24 JP JP2018530627A patent/JP6458912B1/ja not_active Expired - Fee Related
- 2018-07-20 TW TW107125131A patent/TW201932257A/zh unknown
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO1998017444A1 (fr) * | 1996-10-24 | 1998-04-30 | Fanuc Ltd | Systeme de robot de commande de forces a capteur optique pour travail d'insertion |
| JP2011230245A (ja) * | 2010-04-28 | 2011-11-17 | Yaskawa Electric Corp | ロボットシステム |
| JP2014054715A (ja) * | 2012-09-13 | 2014-03-27 | Fanuc Ltd | 選択条件に基づいてロボットの保持位置姿勢を決定する物品取出装置 |
| JP2015217486A (ja) * | 2014-05-19 | 2015-12-07 | 富士通株式会社 | 判定装置、判定方法、および判定プログラム |
| JP2016221642A (ja) * | 2015-06-02 | 2016-12-28 | セイコーエプソン株式会社 | ロボット、ロボット制御装置、ロボット制御方法およびロボットシステム |
| JP2016221660A (ja) * | 2015-06-03 | 2016-12-28 | 富士通株式会社 | 判定方法、判定プログラム及び判定装置 |
| WO2017018113A1 (fr) * | 2015-07-29 | 2017-02-02 | 株式会社オートネットワーク技術研究所 | Dispositif de simulation de manipulation d'objet, système de simulation de manipulation d'objet, procédé destiné à la simulation de manipulation d'objet, procédé de fabrication destiné à un objet et programme de simulation de manipulation d'objet |
| JP2017030135A (ja) * | 2015-07-31 | 2017-02-09 | ファナック株式会社 | ワークの取り出し動作を学習する機械学習装置、ロボットシステムおよび機械学習方法 |
Non-Patent Citations (1)
| Title |
|---|
| KROEGER, TORSTEN ET AL.: "Simple and Robust Visual Servo Control of Robot Arms Using an On-Line Trajectory Generator", 2012 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, 18 May 2012 (2012-05-18), pages 4862 - 4869, XP032450906 * |
Cited By (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11883947B2 (en) | 2019-09-30 | 2024-01-30 | Siemens Aktiengesellschaft | Machine learning enabled visual servoing with dedicated hardware acceleration |
| CN114630734A (zh) * | 2019-09-30 | 2022-06-14 | 西门子股份公司 | 具有专用硬件加速的支持机器学习的视觉伺服 |
| WO2021066794A1 (fr) * | 2019-09-30 | 2021-04-08 | Siemens Aktiengesellschaft | Asservissement visuel activé par apprentissage automatique à accélération matérielle dédiée |
| US12240123B2 (en) | 2019-10-25 | 2025-03-04 | Dexterity, Inc. | Robotic kitting machine |
| CN114401829A (zh) * | 2019-10-25 | 2022-04-26 | 聪慧公司 | 机器人配套机器 |
| US12218492B2 (en) | 2020-02-28 | 2025-02-04 | Rittal Gmbh & Co. Kg | Arrangement for the assembly and wiring of electrical components in switchgear construction and a corresponding method |
| WO2021170163A1 (fr) * | 2020-02-28 | 2021-09-02 | Rittal Gmbh & Co. Kg | Ensemble pour implanter et câbler des composants électroniques dans la réalisation d'installations de commutation et procédé correspondant |
| JP7466403B2 (ja) | 2020-08-03 | 2024-04-12 | キヤノン株式会社 | 制御装置、リソグラフィー装置、制御方法および物品製造方法 |
| JP2022028489A (ja) * | 2020-08-03 | 2022-02-16 | キヤノン株式会社 | 制御装置、リソグラフィー装置および物品製造方法 |
| WO2022030334A1 (fr) * | 2020-08-03 | 2022-02-10 | キヤノン株式会社 | Dispositif de commande, dispositif de lithographie et procédé de fabrication d'article |
| US12379673B2 (en) | 2020-08-03 | 2025-08-05 | Canon Kabushiki Kaisha | Control apparatus, lithography apparatus, and article manufacturing method |
| JP2022065785A (ja) * | 2020-10-16 | 2022-04-28 | セイコーエプソン株式会社 | 力制御パラメーター調整方法 |
| JP7528709B2 (ja) | 2020-10-16 | 2024-08-06 | セイコーエプソン株式会社 | 力制御パラメーター調整方法 |
| CN116917086A (zh) * | 2021-02-18 | 2023-10-20 | 三菱电机株式会社 | 控制装置、机器人系统、学习装置、控制方法和程序 |
| CN115990891A (zh) * | 2023-03-23 | 2023-04-21 | 湖南大学 | 一种基于视觉示教和虚实迁移的机器人强化学习装配的方法 |
Also Published As
| Publication number | Publication date |
|---|---|
| TW201932257A (zh) | 2019-08-16 |
| JPWO2019146007A1 (ja) | 2020-02-06 |
| JP6458912B1 (ja) | 2019-01-30 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2019146007A1 (fr) | Dispositif de commande de position et procédé de commande de position | |
| JP6376296B1 (ja) | 位置制御装置及び位置制御方法 | |
| JP6587761B2 (ja) | 位置制御装置及び位置制御方法 | |
| CN112109075B (zh) | 控制系统和控制方法 | |
| CN110315505B (zh) | 机器学习装置及方法、机器人控制装置、机器人视觉系统 | |
| JP2022145915A (ja) | 推論方法、推論プログラム、推論装置、学習方法、学習プログラム、学習装置およびモデル生成方法 | |
| CN112757284A (zh) | 机器人控制装置、方法和存储介质 | |
| JP2019093461A (ja) | 把持システム、学習装置、把持方法、及び、モデルの製造方法 | |
| CN111203871A (zh) | 使用独立致动视觉系统的机器人操纵 | |
| CN110942083B (zh) | 拍摄装置以及拍摄系统 | |
| CN115213890A (zh) | 抓取的控制方法、装置、服务器、电子设备及存储介质 | |
| US20250353167A1 (en) | Precision assembly control method and system by robot with visual-tactile fusion | |
| CN113927602A (zh) | 基于视、触觉融合的机器人精密装配控制方法及系统 | |
| CN113298847B (zh) | 基于视场感知的共识自主性追捕与逃逸方法和装置 | |
| Zhao et al. | Fit2Ear: Generating Personalized Earplugs from Smartphone Depth Camera Images | |
| US20240054610A1 (en) | Image generation device, robot control device and computer program | |
| Ramachandruni et al. | Vision-based control of UR5 robot to track a moving object under occlusion using Adaptive Kalman Filter | |
| Zha et al. | Coordination of visual and tactile sensors for pushing operation using multiple autonomous arms | |
| JP2024034668A (ja) | ワイヤ挿入システム、ワイヤ挿入方法、およびワイヤ挿入プログラム | |
| KR20250030828A (ko) | 동물의 3차원 자세 데이터를 생성하는 방법, 장치 및 기록 매체 | |
| JP2024123546A (ja) | 姿勢推定装置、姿勢推定方法、およびプログラム | |
| CN119098965A (zh) | 机械臂运动轨迹的获取方法、装置及其异常检测方法、装置 | |
| Khosla et al. | 3D hierarchical spatial representation and memory of multimodal sensory data | |
| SHARMA | Beckman Institute for Advanced Science and Technology University of Illinois at Urbana-Champaign |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| ENP | Entry into the national phase |
Ref document number: 2018530627 Country of ref document: JP Kind code of ref document: A |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18902988 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 18902988 Country of ref document: EP Kind code of ref document: A1 |