
WO2025225572A1 - Control device, control method, and program - Google Patents

Control device, control method, and program

Info

Publication number
WO2025225572A1
Authority
WO
WIPO (PCT)
Prior art keywords
control
measurement information
unit
control command
sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/JP2025/015420
Other languages
French (fr)
Japanese (ja)
Inventor
晋治 川畑
和之 山本
護 空閑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
COLAB Co Ltd
Original Assignee
COLAB Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by COLAB Co Ltd
Publication of WO2025225572A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00: Controls for manipulators
    • B25J13/08: Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02: Sensing devices
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls

Definitions

  • This disclosure relates to a control device, control method, and program for controlling a robot device.
  • Control devices for controlling robotic devices have long been in wide use.
  • One known technology for such control devices measures the operating state of the robotic device using a visual sensor and a force sensor, and controls the operating state of the robotic device so that it achieves a target state (see, for example, Patent Document 1).
  • A conventional method for teaching a robotic device its movements is to operate the robotic device manually and record the series of movements it performs.
  • This disclosure provides a control device, control method, and program that enable more appropriate control of a robotic device.
  • A control device includes a motion generation unit that generates a control command for executing a reverse motion that changes the motion state of a robotic device from a target state to an arbitrary state different from the target state, an acquisition unit that acquires measurement information obtained using a sensor that measures the motion state of the robotic device, and a data collection unit that repeatedly collects data including a set of the control command and the measurement information while the reverse motion is being executed in accordance with the control command.
  • The sensor includes at least one of a visual sensor and a force sensor.
  • A control method includes generating a control command for executing a reverse motion that changes the motion state of a robotic device from a target state to an arbitrary state different from the target state, acquiring measurement information using a sensor that measures the motion state of the robotic device, and repeatedly collecting data including a set of the control command and the measurement information while the reverse motion is being executed in accordance with the control command.
  • The sensor includes at least one of a visual sensor and a force sensor.
  • A program causes a control device to generate a control command for executing a reverse motion that changes the operating state of a robotic device from a target state to an arbitrary state different from the target state, acquire measurement information using a sensor that measures the operating state of the robotic device, and repeatedly collect data including a set of the control command and the measurement information while the reverse motion is being executed in accordance with the control command.
  • The sensor includes at least one of a visual sensor and a force sensor.
  • FIG. 1 is a diagram illustrating an example of the system configuration of a control system including a control device according to the first embodiment.
  • FIG. 2 is a block diagram showing the functional block configuration of the control device according to the first embodiment.
  • FIG. 3 is a diagram for explaining an example of data collection by the control device according to the first embodiment.
  • FIG. 4 is a diagram for explaining an example of data collection by the control device according to the first embodiment.
  • FIG. 5 is a flow chart for explaining an example of data collection by the control device according to the first embodiment.
  • FIG. 6 is a diagram illustrating an example of a setting library stored in the control device according to the first embodiment.
  • FIG. 7 is a block diagram showing the functional block configuration of a control device according to the second embodiment.
  • FIG. 8 is a flowchart illustrating an example of a control flow by the control device according to the second embodiment.
  • FIG. 9 is a diagram for explaining a specific example of control during the operation of "mounting components on a board" by the control device according to the second embodiment.
  • FIG. 10 is a block diagram showing the functional block configuration of a control device according to the third embodiment.
  • FIG. 11 is a diagram for explaining a specific example of control during the operation of "mounting components on a board" by the control device according to the third embodiment.
  • A control device includes: a motion generating unit that generates a control command for executing a reverse motion that changes the motion state of a robotic device from a target state to an arbitrary state different from the target state; an acquiring unit that acquires measurement information obtained using a sensor that measures the motion state of the robotic device; and a data collecting unit that repeatedly collects data including a set of the control command and the measurement information while the reverse motion is being executed in accordance with the control command.
  • The sensor includes at least one of a visual sensor and a force sensor.
  • The term "robot device" refers to any device that can operate in accordance with control commands output by a control device; any such device is applicable.
  • The robot device may be an industrial robot such as a manipulator, or may include a mobile body that can move automatically.
  • Industrial robots include, for example, vertical articulated robots, SCARA robots, parallel link robots, Cartesian robots, and collaborative robots.
  • Mobile bodies that can move automatically include, for example, drones, vehicles configured to be self-driving, automated guided vehicles, or mobile robots, as well as combinations with the above industrial robots.
  • In the following, an example in which the robot device is a manipulator will be mainly described.
  • Control commands relate to the control of the operation of a robotic device and are, for example, target control amounts, operation amounts, and the like.
  • Outputting control commands may mean directly controlling the robotic device based on the control commands or, if the robotic device is equipped with a controller, outputting the control commands to the controller so that the controller controls the operation of the robotic device.
  • The "operating state of a robotic device" refers to the state related to the operation of a portion of the configuration of the robotic device (e.g., an end effector) and/or the state related to the object of work performed using the robotic device.
  • "Measuring" the operating state of a robotic device refers to measuring the state related to the operation of a portion of the configuration of the robotic device (e.g., an end effector) and/or measuring the state related to the object of work performed using the robotic device.
  • An "object” is an object that may be related to the operation of the robotic device, such as a workpiece.
  • A portion of the configuration of a robotic device (e.g., an end effector) may also be considered an object.
  • Task refers to a job that is performed by a robotic device and may include multiple processes. Examples of tasks include transporting parts, fitting parts, screwing, and processing. Tasks may also be simple tasks such as gripping and releasing a workpiece. Tasks may be assigned in advance or specified by the operator.
  • the "target state” is the state at which the purpose of a task (or process) is achieved.
  • the target state can vary depending on the task being performed. For example, if a robotic device is tasked with attaching an assembly part carried by an end effector to a part being assembled, the target state could be a state in which the assembly part has been attached to the part being assembled.
  • An "arbitrary state” may be any state that is different from the target state, but may also be, for example, the state at the start or midway of a task performed using the robot device.
  • Measurement information is not limited to the sensor measurement data itself, but may also be feature quantities calculated from the measurement data.
  • "Measurement information" may also be information measured and/or calculated using an encoder and/or servo motor provided as basic components of the robot.
  • FIG. 1 is a diagram showing an example of the system configuration of a control system including a control device 100 according to the first embodiment. Here, the description will be focused on the hardware configuration of the control system.
  • The robot device 200 is a manipulator.
  • The robot device 200 (manipulator) is a six-axis vertical articulated industrial robot, and has a base 221 and six joints 211 to 216.
  • Each of the joints 211 to 216 has a built-in servo motor (not shown) and is configured to be rotatable around each axis.
  • The first joint unit 211 is connected to the base unit 221 and rotates its tip portion around the axis of the base.
  • A moving mechanism capable of automatic movement (self-propulsion) may be provided instead of the base unit 221.
  • The second joint unit 212 is connected to the first joint unit 211 and rotates its tip portion back and forth.
  • The third joint unit 213 is connected to the second joint unit 212 via a link 222 and rotates its tip portion up and down.
  • The fourth joint unit 214 is connected to the third joint unit 213 via a link 223 and rotates its tip portion around the axis of the link 223.
  • The fifth joint unit 215 is connected to the fourth joint unit 214 via a link 224 and rotates its tip portion up and down.
  • The sixth joint unit 216 is connected to the fifth joint unit 215 via a link 225 and rotates its tip portion around the axis of the link 225.
  • A force sensor 320 and a gripper 226 are attached to the tip side of the sixth joint 216.
  • The gripper 226 is an example of an end effector.
  • The visual sensor 310 is a sensor that performs image measurement, and is positioned to observe each object (gripper 226, workpiece W1, workpiece W2) present in the environment (work space) in which the robot device 200 operates.
  • The visual sensor 310 is attached to the link 225 and is thus provided integrally with the robot device 200.
  • The visual sensor 310 may instead be fixed to equipment in the work space and provided separately from the robot device 200.
  • A camera such as a digital camera or video camera may be used as the visual sensor 310.
  • The measurement data (i.e., image data) of the visual sensor 310 is an example of visual measurement information.
  • The force sensor 320 is a sensor that measures forces and is configured to measure the forces and moments acting on the robot device 200 (specifically, the gripper 226).
  • The force sensor 320 may be, for example, a six-axis force sensor that measures forces acting on the gripper 226 in the three axial directions of the X-axis, Y-axis, and Z-axis, and moments about the X-axis, Y-axis, and Z-axis.
  • The force sensor 320 can measure forces and moments generated by contact between an object and an assembly part supported by the robot device 200 or the gripper 226.
  • The measurement data from the force sensor 320 may be used to adjust the gripping force of the gripper 226 or to detect whether an abnormal force is acting on the gripper 226.
  • Herein, the term "force" is used to include the meaning of "moment."
  • The force sensor 320 may be realized, for example, by measuring the current value of a motor (not shown) built into each of the joints 211 to 216, thereby measuring forces in the three axial directions of the X, Y, and Z axes, and moments around the X, Y, and Z axes.
  • The force sensor 320 may be, for example, a pressure sensor provided on the surface of the robot device 200, and/or a sensor that utilizes changes in the state of a jacket provided on the surface of the robot device 200.
  • The force sensor 320 may also detect changes in the flow rate of air and/or liquid, and/or changes in capacitance.
  • Each of the joints 211 to 216 may have an encoder (not shown) built in.
  • The encoder is an example of a sensor.
  • The encoder is configured to be able to measure the angle (control amount) of each of the joints 211 to 216.
  • The measurement data from the encoder may be used to control the angle of each of the joints 211 to 216.
  • The control system may have a transport device 510 that transports the workpiece W2.
  • The robot device 200 can perform work using the gripper 226 (end effector) attached to the tip of its arm.
  • The end effector is an external device that can be replaced depending on the application; a welding gun, a tool, or the like may be attached instead of the gripper 226.
  • The robot device 200 can perform work using the force sensor 320 while tracking the workpiece W2 traveling on the transport device 510 using the visual sensor 310.
  • The robot device 200 performs work by fitting the workpiece W1, which is an assembly part grasped by the robot device 200, into a hole in the workpiece W2 (a part to be assembled, such as a circuit board) traveling on the transport device 510.
  • The control device 100 has a processor 101, memory 102, and an external interface (I/F) 103.
  • The processor 101 is configured to include a CPU (Central Processing Unit). Furthermore, the processor 101 may include at least one of a microprocessor, an FPGA (Field-Programmable Gate Array), and a DSP (Digital Signal Processor).
  • The memory 102 is configured to include RAM (Random Access Memory), ROM (Read Only Memory), and an auxiliary storage device (e.g., a hard disk drive or solid state drive).
  • The processor 101 and the memory 102 constitute a computer.
  • The control device 100 may be configured from multiple computers.
  • The control device 100 is not limited to an information processing device designed specifically for the service provided; it may also be a general-purpose information processing device such as a PC (Personal Computer), or a controller such as a PLC (Programmable Logic Controller).
  • Memory 102 stores programs executed by processor 101. By executing the programs stored in memory 102, processor 101, together with memory 102, realizes the functions of each functional block described below.
  • The memory 102 may store, for example, a recognition library containing trained models used for image recognition to recognize objects, and a setting library containing data collected by pre-task data collection and setting information obtained based on the collected data.
  • The recognition library and setting library may be provided for each type of task that the robot device 200 can perform.
  • The setting information included in the setting library may include trained models generated based on the collected data.
  • The external I/F 103 is, for example, a USB (Universal Serial Bus) port or a dedicated port, and is an interface for connecting to an external device so as to be able to communicate with it.
  • The external I/F 103 may be connected to the external device (including the robot device 200) via a wired or wireless connection.
  • The type and number of external I/Fs 103 may be selected appropriately depending on the type and number of external devices to be connected.
  • The control device 100 is connected to the robot device 200, the visual sensor 310, and a user interface (I/F) 400 via the external I/F 103.
  • The user I/F 400 includes a display device 410 and an operation device 420.
  • The user I/F 400 is provided separately from the control device 100, but it may also be provided integrally with the control device 100.
  • The display device 410 may be a liquid crystal display or an organic EL (Electro-Luminescence) display.
  • The display device 410 may also be a display equipped with a speaker.
  • The operation device 420 is a device for inputting operations, such as a keyboard, mouse, or touch panel.
  • The display device 410 and the operation device 420 may be integrated into a touch panel display. The operator can use the display device 410 and the operation device 420 to check the status of the control device 100 and to operate the control device 100.
  • FIG. 2 is a block diagram showing the functional block configuration of the control device 100 according to the first embodiment.
  • The functional block configuration related to data collection, which collects data to be used for control during work before the work is performed, will be mainly described.
  • Data collection may be performed before a user (including an operator) starts using the control device 100 (and the robot device 200).
  • Data collection may also be performed in advance before the control device 100 (and the robot device 200) is shipped.
  • The control device 100 has a motion generation unit 110, an acquisition unit 120, a data collection unit 130, a data storage unit 140, a setting acquisition unit 150, and a library storage unit 160.
  • The control device 100 may or may not have a control unit 170.
  • The motion generation unit 110 generates a control command for executing a forward motion that changes the motion state of the robot device 200 from an arbitrary state to a target state.
  • The motion generation unit 110 also generates a control command for executing a reverse motion that changes the motion state of the robot device 200 from a target state to an arbitrary state different from the target state.
  • The motion generation unit 110 outputs the generated control command to the drive unit 210 of the robot device 200.
  • The drive unit 210 includes servo motors provided in each of the joints 211 to 216 of the robot device 200.
  • The drive unit 210 may include a controller on the robot device 200 side.
  • The drive unit 210 drives the servo motors in accordance with the control command to operate the robot device 200.
  • The target state refers to the state in which the purpose of the work (or process) is achieved, and/or an intermediate state of the work.
  • Here, a series of operations is assumed in which the robot device 200 attaches an assembly part (workpiece W1) carried by the end effector (gripper 226) to a part to be assembled (workpiece W2); the target state is therefore a state in which the assembly part has been attached to the part to be assembled, and/or an intermediate state before attachment.
  • Specifically, the target state is a state in which the workpiece W1 has been fitted into a hole in the workpiece W2, and/or an intermediate state in which the workpiece W1 is in contact with the workpiece W2 at a position where it can be fitted into the hole.
  • The arbitrary state is a state different from the target state, for example, a state in which the workpiece W1 is located away from the hole in the workpiece W2.
  • The arbitrary state may be set by operation input (user input) via the operation device 420.
  • The position of the arbitrary state may be set by operation input (user input) using the position of the target state as a reference.
  • "Position" may mean not only "coordinates" but also "posture."
  • The acquisition unit 120 acquires measurement information obtained using a sensor 300 for measuring the operating state of the robot device 200.
  • The sensor 300 includes a visual sensor 310 and a force sensor 320.
  • The sensor 300 may further include other sensors 330, such as an encoder.
  • The acquisition unit 120 may also include an image recognition unit 121 that performs image recognition on the measurement data (i.e., image data) output by the visual sensor 310.
  • The image recognition unit 121 may recognize objects (e.g., the gripper 226, workpiece W1, and workpiece W2) through image recognition such as feature extraction, and acquire the positions of the objects (e.g., coordinates of characteristic features).
  • Such position information is an example of visual measurement information.
  • The data collection unit 130 repeatedly collects data including a set of a control command output by the motion generation unit 110 and measurement information acquired by the acquisition unit 120 while a reverse motion is being performed in accordance with the control command, as sketched in the code below. This makes it possible to efficiently collect data that can be used to control the motion (forward motion) of the robot device 200 during actual work. For example, during actual work using the robot device 200, the motion of the robot device can be controlled appropriately from an arbitrary state to the target state by generating control commands from the measurement information obtained during work, based on the already collected data. Note that the data collection unit 130 may also repeatedly collect data including a set of a control command and measurement information acquired by the acquisition unit 120 while a forward motion is being performed in accordance with the control command output by the motion generation unit 110.
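  • As a minimal illustrative sketch (in Python), assuming hypothetical helper names such as `generate_reverse_command`, `send_command`, and `read` that are not defined in this disclosure, the repeated collection of (control command, measurement information) sets might look like this:

```python
# Minimal sketch of the data-collection loop described above.
# All names (Sample, generate_reverse_command, send_command, read)
# are hypothetical illustrations, not APIs defined in this disclosure.
from dataclasses import dataclass

@dataclass
class Sample:
    control_command: list[float]   # e.g., joint velocity targets
    measurement: dict              # visual and/or force measurement information

def collect_reverse_motion_data(robot, sensors, n_steps):
    dataset = []
    for _ in range(n_steps):
        command = robot.generate_reverse_command()   # move away from the target state
        robot.send_command(command)
        measurement = sensors.read()                 # visual + force measurement
        dataset.append(Sample(command, measurement)) # one (command, measurement) set
    return dataset
```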
  • The motion generation unit 110 may generate control commands for executing multiple patterns of reverse motion that change the motion state of the robot device 200 from the target state to multiple arbitrary states that differ from one another.
  • The data collection unit 130 may repeatedly collect data for each of the multiple patterns of reverse motion. This makes it possible to collect multiple patterns of data corresponding to multiple movement paths, thereby giving versatility to the motion control of the robot device 200 during actual work. For example, even if the workpiece W2 is not fixed or moves during work, it becomes easy to attach the workpiece W1 to the workpiece W2.
  • The motion generation unit 110 may generate control commands so that more data is collected in areas closer to the target state and less data is collected in areas farther from the target state.
  • The closer an area is to the target state, the more precise and accurate the control required during actual work.
  • By increasing the amount of data collected in areas closer to the target state, it becomes possible to collect a sufficient amount of data to perform precise and accurate control.
  • In areas farther from the target state, precise and accurate control is less necessary. Therefore, by generating control commands so that less data is collected in such areas, unnecessary data collection can be suppressed, enabling efficient data collection.
  • The acquisition unit 120 may shorten the acquisition cycle (e.g., the sampling period) so that more measurement information is acquired in areas closer to the target state, and may lengthen the acquisition cycle so that less measurement information is acquired in areas farther from the target state, as sketched below. This type of processing also allows the amount of data collected to increase in areas closer to the target state and decrease in areas farther from the target state.
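  • A minimal sketch of such a distance-dependent acquisition cycle; all threshold and period values are illustrative assumptions, not values specified in this disclosure:

```python
# Hypothetical sketch: shorten the acquisition cycle near the target state
# so that more measurement information is collected there.
def acquisition_period(distance_to_target, near=0.01, far=0.20,
                       fast_period=0.01, slow_period=0.10):
    """Return a sampling period (seconds) that grows with distance.

    distance_to_target: e.g., Euclidean distance between the current and
    target positions. All thresholds here are illustrative values.
    """
    if distance_to_target <= near:
        return fast_period                    # dense sampling close to the target
    if distance_to_target >= far:
        return slow_period                    # sparse sampling far from the target
    # Linear interpolation in between.
    ratio = (distance_to_target - near) / (far - near)
    return fast_period + ratio * (slow_period - fast_period)
```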
  • The acquisition unit 120 may acquire, as relative measurement information, the difference between target measurement information obtained in the target state and current measurement information obtained during the execution of the reverse motion.
  • The data collection unit 130 may repeatedly collect data including a set of the control command and the relative measurement information while the reverse motion is being executed in accordance with the control command.
  • The acquisition unit 120 may acquire, as relative position information, the difference between the visual measurement information (target position) of the object obtained by the visual sensor 310 in the target state and the current measurement information (current position) of the object obtained during the execution of the reverse motion.
  • For example, the acquisition unit 120 may set the position of the hole in the workpiece W2 as the target position and each position on the movement path of the workpiece W1 as the current position, and acquire the difference between the target position and each current position as the relative position.
  • This allows a control command to be associated with each relative positional relationship between the target position and the current position of the object. Therefore, during actual work, appropriate control commands can be generated based on the relative positional relationship between the target position and the current position of the object. Furthermore, control using relative positional relationships can be applied even when the object moves during work.
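  • As an illustration of relative measurement information, a small sketch that computes the difference between target and current measurements (the function name and dictionary keys are hypothetical):

```python
import numpy as np

# Sketch of relative measurement information: the difference between the
# measurement in the target state and the current measurement.
def relative_measurement(target_position, current_position,
                         target_force=None, current_force=None):
    relative = {"position": np.asarray(target_position) - np.asarray(current_position)}
    if target_force is not None and current_force is not None:
        relative["force"] = np.asarray(target_force) - np.asarray(current_force)
    return relative
```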
  • The acquisition unit 120 may acquire measurement information including a velocity-related value of the robot device 200, based on the output of the sensor 300 or on a control command.
  • The velocity-related value of the robot device 200 may be at least one of the velocity, acceleration, and jerk of the object.
  • The acquisition unit 120 may derive the velocity, acceleration, and jerk of the object from visual measurement information (position information of the object) obtained by the visual sensor 310.
  • The acquisition unit 120 may derive the velocity, acceleration, and jerk of the object from a control command output to the drive unit 210.
  • The acquisition unit 120 may derive the velocity, acceleration, and jerk of the object from measurement information obtained by another sensor 330 (e.g., an encoder).
  • The data collection unit 130 may repeatedly collect data including a set of the control command and measurement information including the velocity-related value while a reverse motion is being performed in accordance with the control command.
  • The velocity-related value collected in this manner can be used to set a velocity-related target value in control during actual work. Control using the velocity-related target value will be described in the third embodiment. A sketch of deriving such values follows.
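  • A sketch of deriving velocity-related values from sampled object positions by finite differences; the fixed sampling interval `dt` is an assumption for illustration:

```python
import numpy as np

# Sketch: deriving velocity-related values from a time series of object
# positions (visual measurement information) by finite differences.
def velocity_related_values(positions, dt):
    """positions: (N, 3) array of object positions sampled every dt seconds."""
    positions = np.asarray(positions, dtype=float)
    velocity = np.gradient(positions, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    jerk = np.gradient(acceleration, dt, axis=0)
    return velocity, acceleration, jerk
```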
  • The data storage unit 140 stores the data collected by the data collection unit 130.
  • The data collected by the data collection unit 130 includes multiple sets of measurement information (relative measurement information) and control commands. Each set may include measurement information (relative measurement information), a velocity-related value, and a control command.
  • The data collected by the data collection unit 130 may include more data sets the closer an area is to the target state.
  • The setting acquisition unit 150 acquires setting information for performing work using the robot device 200 based on the data collected by the data collection unit 130 (specifically, the data stored in the data storage unit 140). The setting information for performing a certain work is referred to as the "setting library" for that work. For example, the setting acquisition unit 150 acquires the setting library after data collection by the data collection unit 130 is complete.
  • The library storage unit 160 stores the setting library acquired by the setting acquisition unit 150.
  • The control unit 170 controls the robot device 200 using the setting library during actual work.
  • The setting library includes operation parameters for each work process, and information on the transition destination and transition conditions for each process (normal values, timeout values, error values, etc. of the transition conditions); an illustrative structure is sketched after this list.
  • The operation parameters may include correspondence information that associates measurement information (relative measurement information) with control commands.
  • The operation parameters may include correspondence information that associates measurement information (relative measurement information), velocity-related values, and control commands.
  • The operation parameters may include a control ratio between visual control, which is robot control based on visual measurement information obtained by the visual sensor 310, and force control, which is robot control based on force measurement information obtained by the force sensor 320.
  • The operation parameters may include information on the type of object (metal, resin, screw, connector, etc.).
  • The operation parameters may include information on features on the object (holes, edge surfaces, connectors, etc.).
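  • An illustrative sketch of what one setting-library entry might contain; all field names and values here are hypothetical, chosen only to mirror the operation parameters and transition information listed above:

```python
# Hypothetical structure of one setting-library entry (illustration only).
setting_library_entry = {
    "task_type": "mounting components on a circuit board",
    "processes": {
        "approach_hole": {
            "control_ratio": {"visual": 0.5, "force": 0.5},
            "correspondence": "path/to/trained_model",   # measurement -> command
            "object_type": "connector",
            "object_features": ["hole", "edge surface"],
            "transitions": {
                "next": "hole_tracing",
                "normal_value": {"relative_position_norm": 0.002},  # meters
                "timeout_s": 5.0,
                "error_value": {"force_norm": 30.0},                # newtons
            },
        },
    },
}
```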
  • The setting acquisition unit 150 may acquire the setting library based at least in part on operation input performed via the operation device 420.
  • The setting acquisition unit 150 may use the data collected by the data collection unit 130 as learning data to acquire, through machine learning, a trained model for deriving control commands from measurement information; a minimal sketch follows this list.
  • In that case, the control unit 170 uses a setting library including the trained model during actual work to control the robot device 200 based on the measurement information obtained during the work.
  • The type of trained model is not particularly limited as long as it can acquire, through machine learning, the ability to make inferences for generating control commands.
  • The type of machine learning is not particularly limited either, but it is typically supervised learning or reinforcement learning.
  • The trained model may be configured, for example, by a neural network such as a deep neural network (DNN).
  • The trained model may also be configured, for example, by a value function such as a state value function or an action value function. In this way, the setting acquisition unit 150 uses machine learning to determine the calculations for inferring appropriate values and to generate appropriate control command values.
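  • As a hedged illustration of using the collected data as learning data, the sketch below fits a simple linear least-squares mapping from relative measurement information to control commands; the disclosure itself leaves the model type open (e.g., a DNN or value function), so this is only one possible, very simple instantiation:

```python
import numpy as np

# Illustrative supervised-learning sketch: fit a linear least-squares map
# from relative measurement information to control commands.
def fit_linear_policy(measurements, commands):
    """measurements: (N, d_in) array-like; commands: (N, d_out) array-like."""
    X = np.hstack([np.asarray(measurements), np.ones((len(measurements), 1))])  # add bias
    W, *_ = np.linalg.lstsq(X, np.asarray(commands), rcond=None)
    return W  # shape (d_in + 1, d_out)

def infer_command(W, measurement):
    """Derive a control command from one measurement using the fitted map."""
    x = np.append(np.asarray(measurement, dtype=float), 1.0)
    return x @ W
```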
  • The setting acquisition unit 150 may acquire a setting library that includes the control ratio between visual control and force control, for example, for each process.
  • In that case, the control unit 170 uses the setting library during actual work to dynamically or stepwise change the control ratio depending on the status of the work. Details of this type of control will be explained in the second embodiment.
  • The setting acquisition unit 150 may acquire a setting library containing velocity-related values related to the robot device 200, for example, for each process.
  • The control unit 170 may derive the velocity-related values related to the robot device 200 from measurement information or control commands obtained during work.
  • The control unit 170 may then repeatedly control the robot device 200 to reduce the difference between the velocity-related value and the velocity-related target value, based on the setting library and the measurement information obtained during work. Details of such control will be described in the third embodiment.
  • The motion generation unit 110 may generate a control command for executing a reverse motion for each type of task performed by the robot device 200. Assuming the environment of FIG. 1, the task type is "mounting components on a circuit board." Other examples of task types include "packing food into boxes," "screw tightening," "pick and place," and "AGV (Automated Guided Vehicle) control."
  • The data collection unit 130 may collect data for each task type by repeatedly performing data collection for each task type.
  • The setting acquisition unit 150 can then acquire a setting library for each type of work based on the data collected by the data collection unit 130 for each type of work.
  • The control unit 170 then controls the robot device 200 to perform the work using the setting library corresponding to the type of work actually being performed.
  • The control unit 170 may display a list of the setting libraries stored in the library storage unit 160 on the display device 410, and control the robot device 200 using a setting library selected from the list using the operation device 420.
  • If the library storage unit 160 also stores a recognition library associated with the setting library, image recognition for performing the work may be performed using the recognition library corresponding to the type of work actually being performed.
  • FIGS. 3 and 4 are diagrams for explaining an example of data collection by the control device 100 according to the first embodiment.
  • In this example, the motion generation unit 110 generates control commands for executing multiple patterns of reverse motion that change the motion state of the robot device 200 from the target state to multiple mutually different arbitrary states.
  • The data collection unit 130 repeatedly collects data for each of the multiple patterns of reverse motion.
  • The motion generation unit 110 generates control commands so that more data is collected in areas closer to the target state and less data is collected in areas farther from the target state.
  • In FIGS. 3 and 4, the x-axis and y-axis indicate directions that are perpendicular to each other in the horizontal plane, and the z-axis indicates the vertical direction.
  • The amount of collected data is indicated by shading: the greater the amount of collected data, the darker the color, and the smaller the amount, the lighter the color.
  • The motion generation unit 110 sets a state in which the workpiece W1 (component) is in the hole in the workpiece W2 (circuit board) as the target state, and causes the robot device 200 to perform a reverse motion along any of eight patterns P1 to P8. While the movement path for each of the patterns P1 to P8 is illustrated as linear, the movement paths do not have to be linear. For example, a movement path may pull the workpiece W1 (component) upward out of the hole in the workpiece W2 (circuit board) and then move it horizontally or diagonally upward. Furthermore, of the patterns P1 to P8, the movements of P2, P4, and P8 end in region R, which is close to the target state. As a result, the closer a point is to the position of the target state, the greater the amount of data collected there. A sketch of generating such waypoint patterns follows.
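  • A sketch of generating waypoints for multiple reverse-motion patterns with denser sampling near the target state; the eight endpoint coordinates and the clustering exponent are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

# Sketch of generating waypoints for multiple reverse-motion patterns
# (cf. patterns P1 to P8 in FIGS. 3 and 4), with waypoints spaced more
# densely near the target state so that more data is collected there.
def reverse_motion_waypoints(target_pos, end_pos, n_points=20, power=2.0):
    """Waypoints from target_pos to end_pos; power > 1 clusters them
    near the target. All parameter values are illustrative."""
    t = np.linspace(0.0, 1.0, n_points) ** power   # dense near t = 0 (the target)
    target_pos, end_pos = np.asarray(target_pos), np.asarray(end_pos)
    return target_pos + t[:, None] * (end_pos - target_pos)

# Eight patterns ending at different arbitrary states around the target:
patterns = [reverse_motion_waypoints([0, 0, 0], end)
            for end in ([0.05, 0, 0.05], [-0.05, 0, 0.05], [0, 0.05, 0.05],
                        [0, -0.05, 0.05], [0.05, 0.05, 0.08], [-0.05, -0.05, 0.08],
                        [0.05, -0.05, 0.08], [-0.05, 0.05, 0.08])]
```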
  • FIG. 5 is a flow chart for explaining an example of data collection by the control device 100 according to the first embodiment.
  • In step S101, a user moves an object (e.g., a workpiece W1 grasped or supported by the end effector) to the target state.
  • The motion generation unit 110 also acquires control command sequence information corresponding to one of the multiple reverse-motion patterns (i.e., to some arbitrary state) and/or position information for that arbitrary state.
  • The acquisition unit 120 acquires the position of the object in the target state as the target position from visual measurement information obtained using the visual sensor 310.
  • The acquisition unit 120 also acquires the force (reaction force) applied to the object in the target state as the target force (target reaction force) from force measurement information obtained using the force sensor 320.
  • In step S102, the motion generation unit 110 outputs a control command to the robot device 200 (drive unit 210) based on the information acquired in step S101, thereby moving the object a predetermined distance.
  • In step S103, the acquisition unit 120 acquires current measurement information.
  • Specifically, the acquisition unit 120 acquires the current position of the object from visual measurement information obtained using the visual sensor 310.
  • The acquisition unit 120 may acquire the difference between the current position of the object and the target position as a relative position (amount of position change).
  • The acquisition unit 120 also acquires the current force (reaction force) from force measurement information obtained using the force sensor 320.
  • The acquisition unit 120 may acquire the difference between the current force (reaction force) and the target force (target reaction force) as a relative force (amount of force change).
  • The acquisition unit 120 may also acquire velocity-related values (velocity, acceleration, and jerk) of the object from the current visual measurement information or from the control command of step S102.
  • In step S104, the data collection unit 130 collects a set of the control command from step S102 and the information acquired in step S103 (position change amount, force change amount, velocity-related values), and stores the set in the data storage unit 140.
  • If the movement of the object to the arbitrary state corresponding to the current reverse-motion pattern has not been completed (step S105: NO), the process returns to step S102, and the motion generation unit 110 outputs a control command to the robot device 200 (drive unit 210) to move the object a further predetermined distance. Then, in step S103, the acquisition unit 120 acquires the information (position change amount, force change amount, velocity-related values), and in step S104, the data collection unit 130 collects a set of the control command from step S102 and the information acquired in step S103 and stores it in the data storage unit 140. This process is repeated until the movement to the arbitrary state corresponding to the current reverse-motion pattern is completed.
  • If the movement of the object to the arbitrary state corresponding to the current reverse-motion pattern is complete (step S105: YES), the motion generation unit 110 checks in step S106 whether the movements have been completed for all of the multiple reverse-motion patterns. If the movements have not been completed for all patterns (step S106: NO), the process moves to the next reverse-motion pattern (step S107) and resumes from step S101.
  • In step S108, the setting acquisition unit 150 determines the operating conditions (operation parameters) for performing work using the robot device 200 based on the data collected by the data collection unit 130 (specifically, the data stored in the data storage unit 140). Then, in step S109, the setting acquisition unit 150 stores setting information including the operating conditions (operation parameters) determined in step S108 in the library storage unit 160 as a setting library.
  • A setting library for each type of work may be stored in the library storage unit 160.
  • FIG. 6 is a diagram for explaining an example of a setting library stored in the control device 100 according to the embodiment.
  • The types of work include "mounting components on a circuit board," "packing food into boxes," "screw tightening," "pick and place" (also known as "picking"), and "AGV control." Each work involves multiple processes.
  • For example, the processes for "mounting components on a circuit board" are: Step 1 "Recognize component → Move" → Step 2 "Grab component" → Step 3 "Move component toward board" → Step 4 "Approach" → Step 5 "Recognize holes on board" → Step 6 "Approach hole" → Step 7 "Hole tracing operation" → Step 8 "Insert hole" → Step 9 "Hole insertion completed" → Step 10 "Release grip."
  • The control device 100 may also control the operation of the robot device 200 in the reverse order.
  • An obstacle avoidance step may further be included.
  • For example, the obstacle avoidance step may be included in Step 4 "Move object toward target object."
  • The setting library includes operation parameters for each work process, and information on the transition destination and transition conditions for each process (normal values, timeout values, error values, etc. for the transition conditions).
  • The operation parameters may include correspondence information that associates measurement information (relative measurement information) with control commands, or correspondence information that associates measurement information (relative measurement information), velocity-related values, and control commands.
  • The operation parameters may include the control ratio between visual control and force control.
  • The operation parameters may include information on the type of object (metal, resin, screws, connectors, etc.).
  • The operation parameters may include information on features on the object (holes, edges, connectors, etc.).
  • Second Embodiment: The second embodiment will be described mainly focusing on the differences from the first embodiment.
  • The system configuration of the second embodiment is the same as that of the first embodiment (see FIG. 1).
  • The second embodiment will be described as an example built on the first embodiment, but it does not necessarily have to be based on any part of the first embodiment.
  • FIG. 7 is a block diagram showing the functional block configuration of the control device 100 according to the second embodiment.
  • The control device 100 includes an acquisition unit 120 that acquires measurement information obtained using a sensor 300 for measuring the operating state of the robot device 200, and a control unit 170 that controls the operation of the robot device 200 based on the measurement information.
  • The sensor 300 includes a visual sensor 310 and a force sensor 320.
  • The control unit 170 dynamically or stepwise changes the control ratio between visual control, which is control based on visual measurement information obtained by the visual sensor 310, and force control, which is control based on force measurement information obtained by the force sensor 320, depending on the status of the work using the robot device 200. This makes it possible to exploit the benefits of both visual control and force control, and to appropriately control the operation of the robot device 200 by using them together.
  • The control unit 170 includes a visual control unit 171A that references the visual measurement information to generate first information indicating the control content for the robot device 200, a force control unit 171B that references the force measurement information to generate second information indicating the control content for the robot device 200, a command generation unit 172 that generates control commands for the robot device 200 based on the first information and the second information, and a weighting unit 173 that changes the control ratio by performing weighting processing between the visual measurement information and the force measurement information, or between the first information and the second information, depending on the status of the work using the robot device 200. This makes it possible to appropriately change the control ratio through the weighting processing; a sketch of such blending follows.
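  • A minimal sketch of such weighting, blending a visual control command and a force control command by the current control ratio (function and variable names are hypothetical):

```python
import numpy as np

# Sketch of the weighting process: blend the visual control command (first
# information) and the force control command (second information) by the
# current control ratio. Names are illustrative, not defined in the patent.
def blended_command(visual_command, force_command, visual_ratio):
    """visual_ratio in [0, 1]; e.g., 0.8 means an 80:20 visual:force ratio."""
    v = np.asarray(visual_command)
    f = np.asarray(force_command)
    return visual_ratio * v + (1.0 - visual_ratio) * f
```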
  • The visual control performed by the visual control unit 171A may be based, for example, on operation parameters (such as correspondence information) in the setting library.
  • For example, the visual control unit 171A identifies the relative positional relationship between the current position and the target position of the object based on visual measurement information obtained using the visual sensor 310.
  • The visual control unit 171A may then use the correspondence information to generate, from the identified positional relationship, a visual control command that reduces the difference between the current position and the target position (i.e., moves the object closer to the target position), and output this visual control command as the first information. At least a portion of this visual control may be performed using a trained model.
  • The force control performed by the force control unit 171B may likewise be based, for example, on operation parameters (e.g., correspondence information) in the setting library.
  • The force control unit 171B may identify the difference between the current force (current reaction force) on the object and the target force (target reaction force) based on force measurement information obtained using the force sensor 320, use the correspondence information to generate from the identified difference a force control command that reduces the difference, and output this force control command as the second information. At least a portion of this force control may be performed using a trained model.
  • The control unit 170 may dynamically or stepwise change the control ratio as the work using the robot device 200 progresses. As described above, the work using the robot device 200 may include multiple predetermined processes. The control unit 170 may change the control ratio for each process. This allows the operation of the robot device 200 to be controlled with an appropriate control ratio for each process.
  • The library storage unit 160 may store a setting library (setting information) that includes settings related to the control ratio for each of the multiple processes.
  • The control unit 170 (weighting unit 173) may change the control ratio for each process based on the setting library. This allows an appropriate control ratio to be set for each process.
  • When transitioning from one process to the next in a task, the control unit 170 (weighting unit 173) may gradually change the control ratio from the control ratio for that process to the control ratio for the next process. This prevents sudden fluctuations in the control ratio, enabling precise operation control and reducing the occurrence of operation errors.
  • For example, suppose the control ratio (visual control : force control) for process A is set to "80:20" and the control ratio for process B, which follows process A, is set to "50:50."
  • In this case, the control unit 170 changes the control ratio in stages, such as "80:20" → "75:25" → "70:30" → "65:35" → "60:40" → "55:45" → "50:50."
  • The control unit 170 (weighting unit 173) may instead change the control ratio continuously, such as "80:20" → "79:21" → "78:22" → "77:23" → and so on.
  • Such changes in the control ratio may be made in control cycle units, or in time units consisting of multiple control cycles.
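  • A sketch of such a stepwise ratio schedule; it reproduces the "80:20" to "50:50" example above in six steps:

```python
# Sketch of gradually changing the control ratio between two processes,
# e.g., 80:20 -> 50:50 in steps over several control cycles.
def ratio_schedule(start_visual, end_visual, n_cycles):
    """Yield the visual-control share for each control cycle."""
    for k in range(n_cycles + 1):
        yield start_visual + (end_visual - start_visual) * k / n_cycles

# Example: a stepwise change from 80:20 to 50:50 over 6 cycles gives
# 0.80, 0.75, 0.70, 0.65, 0.60, 0.55, 0.50.
steps = [round(r, 2) for r in ratio_schedule(0.80, 0.50, 6)]
```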
  • The control unit 170 may determine whether the transition condition from one process to the next process in a task is satisfied based on measurement information obtained using the sensor 300.
  • The transition condition may be included as one of the operation parameters in the setting library.
  • If the transition condition is satisfied, the control unit 170 may transition from the one process to the next process and change to the control ratio corresponding to the next process.
  • If the control does not converge, the control unit 170 may instead return to the previous process and change to the control ratio corresponding to that process. By returning to the previous process and restarting the operation when the control does not converge, it can be expected that the control will converge when transitioning to the next process again. Note that "the control does not converge" may mean that at least one of the following holds: the measurement information does not satisfy the normal value of the transition condition, a timeout has occurred, or the measurement information has become an error value.
  • The control unit 170 may switch the control of the robot device 200 between a first control state (also referred to as "visual-based") in which control is performed with priority given to visual control, a second control state (also referred to as "visual+force-based") in which visual control and force control are performed in cooperation, and a third control state (also referred to as "force-based") in which control is performed with priority given to force control.
  • "Control is performed with priority given to visual control" may mean, for example, that visual control accounts for approximately 65% to 100% of the control ratio.
  • "Visual control and force control are performed in cooperation" may mean, for example, that the visual:force control ratio is approximately 50:50.
  • "Control is performed with priority given to force control" may mean, for example, that force control accounts for approximately 65% to 100% of the control ratio.
  • The control unit 170 may determine whether to switch control among the first control state, the second control state, and the third control state based on measurement information obtained using the sensor 300. If, in the first control state, the difference between the measurement information and the target value for the measurement information does not become equal to or less than a predetermined value (which may be the case when the transition condition is not satisfied), the control unit 170 (weighting unit 173) may switch to the second control state or the third control state and refer to the force measurement information.
  • The control unit 170 may also change the control ratio in accordance with a user instruction. For example, when an instruction to change the control ratio is received via the operation device 420, the control unit 170 (weighting unit 173) may change the control ratio in accordance with that instruction.
  • The control unit 170 may further include a prediction unit 174A that predicts the control result of a subsequent control operation performed on the robot device 200 based on the control operations performed on the robot device 200 and the control results (sensor information) of those control operations, and a correction unit 174B that corrects the subsequent control operation based on the prediction. For example, if the robot device 200 does not perform the operation specified in the control command due to external factors, such as an error factor on the robot device 200 side or on the sensor 300 side, the prediction unit 174A identifies the error (i.e., the difference between the control command content and the control result) and predicts the control results of subsequent control operations.
  • The correction unit 174B may then correct the control command generated by the command generation unit 172 based on the prediction result of the prediction unit 174A and output it to the drive unit 210. For example, the correction unit 174B may correct the control command so as to cancel out the identified error, as sketched below. This enables more accurate robot control even when external error factors are present.
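  • A hedged sketch of the predict-and-correct idea: estimate the recurring difference between commanded and measured motion and offset the next command to cancel it (the exponential-smoothing predictor is an assumption; the disclosure does not specify a prediction method):

```python
import numpy as np

# Sketch of the predict-and-correct idea: track the recurring error
# between commanded and measured motion, then offset the next command
# to cancel it. A real implementation might use a learned dynamics model.
class ErrorCorrector:
    def __init__(self, smoothing=0.5):
        self.predicted_error = None
        self.smoothing = smoothing  # exponential smoothing factor (illustrative)

    def observe(self, commanded, measured):
        error = np.asarray(measured) - np.asarray(commanded)
        if self.predicted_error is None:
            self.predicted_error = error
        else:
            self.predicted_error = (self.smoothing * error
                                    + (1 - self.smoothing) * self.predicted_error)

    def correct(self, next_command):
        if self.predicted_error is None:
            return np.asarray(next_command)
        return np.asarray(next_command) - self.predicted_error  # cancel the error
```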
  • The library storage unit 160 may store multiple setting libraries prepared for each type of work.
  • Each of the multiple setting libraries may include setting information (operation parameters) for control ratios.
  • Each of the multiple setting libraries includes setting information for each process of a series of processes.
  • The control unit 170 controls the robot device 200 using a setting library selected from the multiple setting libraries according to the type of work actually to be performed. The selection may be made by operation input via the operation device 420. Prior to such operation input, the acquisition unit 120 (image recognition unit 121) may estimate the actual work to be performed based on visual measurement information obtained by the visual sensor 310 observing the work site or object. The control unit 170 may then suggest the setting library corresponding to the estimated work to the user (operator) by displaying it on the display device 410.
  • The library storage unit 160 may further store multiple recognition libraries prepared for each type of work. Each of the multiple recognition libraries may contain a trained model used for image recognition processing of an object.
  • The control unit 170 performs image recognition processing using a recognition library selected from the multiple recognition libraries according to the type of work actually to be performed. The selection may be made by operation input via the operation device 420. Prior to such operation input, the acquisition unit 120 (image recognition unit 121) may estimate the actual work to be performed based on visual measurement information obtained by the visual sensor 310 observing the work site or object. The control unit 170 may then suggest the recognition library corresponding to the estimated work to the user (operator) by displaying it on the display device 410.
  • FIG. 8 is a flow diagram for explaining an example of a control flow by the control device 100 according to the second embodiment.
  • In step S201, the control unit 170 starts process n.
  • At the start of the work, the value of n is 1.
  • The control unit 170 applies the setting information (operation parameters including the control ratio) corresponding to process n.
  • In step S202, the control unit 170 controls the operation of the robot device 200 using the setting information (operation parameters including the control ratio) corresponding to process n.
  • In step S203, the control unit 170 determines whether the transition condition from process n to the next process is satisfied (i.e., whether process n has been completed). If process n has been completed (step S203: YES), processing proceeds to step S204. If process n has not been completed (step S203: NO), in step S205 the control unit 170 determines whether the control of process n is converging. If it is determined that the control of process n is converging (step S205: YES), processing returns to step S202.
  • If it is determined that the control of process n is not converging (step S205: NO), the control unit 170 returns to process (n-1), the process before process n. At that time, the control unit 170 may gradually change the control ratio from the control ratio of process n to that of process (n-1). Alternatively, instead of returning to process (n-1), the control unit 170 may change at least some of the operation parameters of process n in accordance with a predetermined rule and then resume process n. For example, if the velocity-related target value is estimated to be too high, it may be lowered by one level before process n is resumed.
  • In step S204, the control unit 170 determines whether all processes of the work have been completed. If all processes have been completed, this flow ends. If not all processes have been completed (step S204: NO), in step S207 the control unit 170 proceeds to process (n+1), the process after process n. At this time, the control unit 170 may gradually change the control ratio from the control ratio for process n to that for process (n+1). A sketch of this flow follows.
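  • A sketch of the flow of FIG. 8 as a simple loop; `apply`, `check_transition`, and `converged` are hypothetical helpers standing in for steps S202, S203, and S205:

```python
# A sketch of the control flow of FIG. 8 as a simple process loop.
def run_work(processes, controller):
    n = 0                                        # current process index (S201)
    while n < len(processes):                    # S204: all processes done?
        params = processes[n]                    # operation params incl. control ratio
        controller.apply(params)                 # S202: control with these params
        if controller.check_transition(params):  # S203: transition condition met?
            n += 1                               # S207: advance to process n+1
        elif not controller.converged(params):   # S205: control not converging?
            n = max(n - 1, 0)                    # return to the previous process
        # else: keep controlling the same process (back to S202)
```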
  • FIG. 9 is a diagram for explaining a specific example of control performed by the control device 100 according to the second embodiment during the operation of "mounting components on a board.”
  • Process 1 "Recognize component ⁇ Move” ⁇ Process 2 "Grab component” ⁇ Process 3 "Move component toward board” ⁇ Process 4 "Approach” ⁇ Process 5 "Recognize board holes” ⁇ Process 6 "Approach hole” ⁇ Process 7 "Hole tracing operation” ⁇ Process 8 "Insert hole” ⁇ Process 9 "Hole insertion completed” ⁇ Process 10 "Release gripping,” and the operation parameters for each of these processes are included in the settings library.
  • Force-based control is set for processes involving contact between objects (the end effector (gripper 226), the component (workpiece W1), and the substrate (workpiece W2)); visual-based control is set for processes in which objects (the end effector, the component) are moved at or above a certain speed; and visual and force-based control is set for processes in which objects are moved at low speed toward contact with each other.
  • The specific control ratio for each process may be set based on the data collected in the data collection according to the first embodiment.
  • Vision-based control is applied to process 1, "Recognize component → Move."
  • However, if the position of the component recognized by image recognition is set as the target position, vision and force-based control may be applied when the current position of the end effector approaches the vicinity of the target position.
  • Force-based control is applied to process 2, "Part grip.”
  • Vision-based control is applied to process 3, "Moving the component toward the board.” Vision-based control is applied to process 4, "Approach.” However, if the position of the board recognized by image recognition is set as the target position, vision and force-based control may be applied when the current position of the component approaches the vicinity of the target position. Vision-based control is applied to process 5, "Recognizing holes in the board.”
  • step 6 "Approaching the hole,” vision and force-based control is applied. From step 7, “Following the hole,” to step 10, “Releasing the grip,” force-based control is applied.
  • Next, the third embodiment will be described, focusing mainly on the differences from the first and second embodiments.
  • The system configuration of the third embodiment is the same as that of the first embodiment (see FIG. 1).
  • The third embodiment is described mainly as an example that builds on the first and second embodiments, but it does not necessarily have to be based on the first and second embodiments, or even on part of them.
  • FIG. 10 is a block diagram showing the functional block configuration of a control device 100 according to the third embodiment.
  • In the third embodiment, the control device 100 includes an acquisition unit 120 that acquires measurement information obtained using a sensor 300 for measuring the operating state of the robot device 200, and a control unit 170 that repeatedly generates control commands for the robot device 200 for each control cycle based on the measurement information.
  • The control unit 170 derives a speed-related value for the robot device 200 from the measurement information or the control command, and generates a control command to reduce the difference between the speed-related value and a speed-related target value that is variable for each control cycle depending on the operating state of the robot device 200. This enables control that prevents operational errors when working with the robot device 200.
  • The speed-related target value includes at least one of speed, acceleration, and jerk.
  • The control unit 170 has a derivation unit 175 that derives a speed-related value from the measurement information or the control command for each control cycle, a target setting unit 176 that variably sets a speed-related target value for each control cycle according to the operating state (work situation) of the robot device 200, and a command generation unit 172 that generates a control command for each control cycle so as to reduce the difference between the speed-related value and the speed-related target value.
  • The target setting unit 176 may variably set the speed-related target value according to a speed-related value included in an operating parameter in the setting library.
  • The command generation unit 172 may incorporate the functions of the visual control unit 171A, the haptic control unit 171B, and the weighting unit 173 described in the second embodiment.
  • Alternatively, the control unit 170 may be provided with the visual control unit 171A, the haptic control unit 171B, and the weighting unit 173 described in the second embodiment, separately from the command generation unit 172.
  • The control unit 170 (command generation unit 172) generates a control command to reduce the difference between the measurement information and the target measurement information (the target value of the measurement information), and to reduce the difference between the speed-related value and the speed-related target value.
  • For example, the control unit 170 may generate a control command to reduce the difference between the visual measurement information (current position) obtained using the visual sensor 310 and the target visual measurement information (target position), and to reduce the difference between the speed-related value and the speed-related target value.
  • The control to reduce the difference between the visual measurement information (current position) and the target visual measurement information (target position) is the same as the visual control described in the second embodiment.
  • Likewise, the control unit 170 may generate a control command to reduce the difference between the force measurement information (current force (reaction force)) obtained using the force sensor 320 and the target force measurement information (target force (reaction force)), and to reduce the difference between the speed-related value and the speed-related target value.
  • The control to reduce the difference between the force measurement information (current force (reaction force)) and the target force measurement information (target force (reaction force)) is the same as the force control described in the second embodiment.
  • The control unit 170 (target setting unit 176) may change the speed-related target value according to the difference between the current measurement information and the target measurement information (the current relative measurement information). For example, the control unit 170 (target setting unit 176) may use association information included in the setting library that associates measurement information (relative measurement information) with speed-related values to set the speed-related value corresponding to the current relative measurement information as the speed-related target value.
  • For example, the control unit 170 may change the speed-related target value in accordance with the difference between the current visual measurement information obtained using the visual sensor 310 and the target visual measurement information.
  • Similarly, the control unit 170 may change the speed-related target value in accordance with the difference between the current force measurement information obtained using the force sensor 320 and the target force measurement information.
  • The control unit 170 may select one or more values from among speed, acceleration, and jerk to be used as the speed-related value (and speed-related target value) depending on the difference between the measurement information and the target measurement information (the current relative measurement information). For example, the control unit 170 (derivation unit 175 and target setting unit 176) may select speed as the speed-related value (and speed-related target value) during visual-based control. During force-based control, the control unit 170 (derivation unit 175 and target setting unit 176) may select jerk as the speed-related value (and speed-related target value). During visual and force-based control, the control unit 170 (derivation unit 175 and target setting unit 176) may select acceleration as the speed-related value (and speed-related target value). One possible per-cycle computation combining these units is sketched below.
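Combining the derivation unit 175, target setting unit 176, and command generation unit 172 as described, one per-cycle computation could look like the following. The finite-difference derivation, the nearest-key table lookup, and the proportional command law are all assumptions made for illustration; the embodiment fixes only the roles of the three units.

```python
import numpy as np

def derive_speed_values(pos_history, dt):
    """Derivation unit 175 (sketch): speed, acceleration, and jerk from the
    last four scalar position samples by finite differences."""
    p = np.asarray(pos_history[-4:], dtype=float)
    v = np.diff(p) / dt
    a = np.diff(v) / dt
    j = np.diff(a) / dt
    return {"speed": v[-1], "acceleration": a[-1], "jerk": j[-1]}

def select_quantity(mode: str) -> str:
    """One possible selection policy matching the text: speed for visual-based,
    acceleration for visual-and-force-based, jerk for force-based control."""
    return {"visual": "speed", "visual+force": "acceleration", "force": "jerk"}[mode]

def speed_target(relative_error: float, association_table: dict) -> float:
    """Target setting unit 176 (sketch): look up the speed-related target for
    the current relative measurement information from association information
    in the setting library (nearest-key lookup is an assumption)."""
    key = min(association_table, key=lambda r: abs(r - abs(relative_error)))
    return association_table[key]

def control_command(relative_error: float, speed_value: float,
                    speed_target_value: float, kp=1.0, kv=0.5) -> float:
    """Command generation unit 172 (sketch): reduce both the measurement error
    and the speed-related error (proportional gains are assumptions)."""
    return kp * relative_error + kv * (speed_target_value - speed_value)
```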
  • The control unit 170 may also have a prediction unit 174A that predicts the control results of subsequent control operations on the robot device 200 based on the control operations performed on the robot device 200 and the control results of those control operations, and a correction unit 174B that corrects the subsequent control operations based on the prediction.
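The prediction unit 174A and correction unit 174B suggest a simple predict-then-correct loop. The sketch below uses a linear one-step extrapolation as the predictor, which is purely an assumption; the patent does not specify a prediction model.

```python
class PredictCorrect:
    """Minimal predict/correct sketch for prediction unit 174A and correction
    unit 174B. The linear extrapolation model and gain are assumptions."""

    def __init__(self, gain: float = 0.5):
        self.gain = gain
        self.prev_result = None

    def predict(self, last_result: float) -> float:
        """Predict the next control result from the last two observed results."""
        if self.prev_result is None:
            prediction = last_result
        else:
            prediction = last_result + (last_result - self.prev_result)
        self.prev_result = last_result
        return prediction

    def correct(self, command: float, predicted: float, desired: float) -> float:
        """Nudge the upcoming control command toward the desired result."""
        return command + self.gain * (desired - predicted)
```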
  • The library storage unit 160 may store multiple setting libraries prepared for each type of work. Each of the multiple setting libraries may include setting information for speed-related values (speed-related target values).
  • The control unit 170 controls the robot device 200 using a setting library selected from the multiple setting libraries according to the type of work actually to be performed. The selection may be made by operation input via the operation device 420. Prior to such operation input, the acquisition unit 120 (image recognition unit 121) may estimate the work actually to be performed based on visual measurement information obtained by the visual sensor 310 observing the work site or object. The control unit 170 may suggest the setting library corresponding to the estimated work to the user (operator) by displaying it on the display device 410.
  • The library storage unit 160 may further store multiple recognition libraries prepared for each type of work. Each of the multiple recognition libraries may contain a trained model used for image recognition processing of an object.
  • The control unit 170 performs image recognition processing using a recognition library selected from the multiple recognition libraries according to the type of work actually to be performed. The selection may be made by operation input via the operation device 420. Prior to such operation input, the acquisition unit 120 (image recognition unit 121) may estimate the work actually to be performed based on visual measurement information obtained by the visual sensor 310 observing the work site or the object. The control unit 170 may suggest the recognition library corresponding to the estimated work to the user (operator) by displaying it on the display device 410.
  • FIG. 11 is a diagram for explaining a specific example of control performed by the control device 100 according to the third embodiment during the operation of "mounting components on a board.”
  • Process 1 "Recognize component ⁇ move” ⁇ Process 2 "Grab component” ⁇ Process 3 "Move component toward board” ⁇ Process 4 "Approach” ⁇ Process 5 "Recognize holes on board” ⁇ Process 6 "Approach hole” ⁇ Process 7 "Hole tracing operation” ⁇ Process 8 "Insert hole” ⁇ Process 9 "Hole insertion completed” ⁇ Process 10 "Release gripping", and the operation parameters for each of these processes are included in the settings library.
  • The control unit 170 (command generation unit 172) generates a control command to reduce the difference between the measurement information obtained using the sensor 300 and the target measurement information (the target value of the measurement information), and to reduce the difference between the speed-related value and the speed-related target value.
  • step 1 "Part recognition ⁇ movement”
  • "part position” can be applied as the target measurement information
  • "speed” can be applied as the speed-related target value.
  • the control unit 170 command generation unit 172 can generate a control command to reduce the difference between the current position of the end effector obtained using the visual sensor 310 and the position of the part, and to reduce the difference between the end effector speed and the speed target value.
  • the control unit 170 target setting unit 176) can change the speed target value for each control cycle depending on the difference between the current position of the end effector obtained using the visual sensor 310 and the position of the part.
  • step 2 "Part gripping”, "target reaction force” can be applied as the target measurement information, and "speed” can be applied as the speed-related value.
  • the control unit 170 command generation unit 172 can generate a control command to reduce the difference between the target reaction force and the current reaction force acting on the end effector obtained using the force sensor 320, and also to reduce the difference between the end effector speed and the speed target value.
  • the control unit 170 target setting unit 176) can change the speed target value for each control cycle depending on the difference between the target reaction force and the reaction force acting on the end effector obtained using the force sensor 320.
  • step 3 "Move the component toward the board,” "board position” may be applied as the target measurement information, and "jerk” may be applied as the velocity-related value.
  • the control unit 170 command generation unit 172 may generate a control command to reduce the difference between the current position of the end effector (or component) obtained using the visual sensor 310 and the position of the board, and to reduce the difference between the jerk of the end effector (or component) and the target jerk value.
  • the control unit 170 target setting unit 176) may change the target jerk value for each control cycle depending on the difference between the current position of the end effector (or component) obtained using the visual sensor 310 and the position of the board.
  • the control unit 170 can generate a control command to reduce the difference between the current position of the end effector (or component) obtained using the visual sensor 310 and the position of the board, and to reduce the difference between the acceleration of the end effector and the target acceleration value.
  • the control unit 170 target setting unit 176) can change the target acceleration value for each control cycle depending on the difference between the current position of the end effector (or component) obtained using the visual sensor 310 and the position of the board.
  • step 5 "Board Hole Recognition,” "Board Hole” can be applied as the target measurement information.
  • the control unit 170 command generation unit 172 recognizes the board hole using the visual sensor 310.
  • step 6 "Hole Approach”, "Board hole position” and “target reaction force” can be applied as target measurement information, and "acceleration” can be applied as a speed-related value.
  • the control unit 170 (command generation unit 172) can generate a control command to reduce the difference between the current position of the component obtained using the visual sensor 310 and the position of the hole on the board, and to reduce the difference between the acceleration of the end effector and the target acceleration value.
  • the control unit 170 (command generation unit 172) can also generate a control command to reduce the difference between the target reaction force and the current reaction force acting on the end effector obtained using the force sensor 320, and to reduce the difference between the acceleration of the end effector and the target acceleration value.
  • a "target reaction force” for each step can be applied as target measurement information, and a “jerk” can be applied as a speed-related value.
  • the control unit 170 command generation unit 172 may generate a control command to reduce the difference between the current reaction force acting on the end effector obtained using the force sensor 320 and the target reaction force for each step, and to reduce the difference between the jerk of the end effector and the target jerk value for each step.
  • the control unit 170 target setting unit 176) may change the target jerk value for each control cycle depending on the difference between the reaction force acting on the end effector obtained using the force sensor 320 and the target reaction force.
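Collected into one place, the FIG. 11 assignments can be read as a small settings-library table. The dictionary below is a hypothetical encoding; the patent does not fix a data format for the setting information.

```python
# Hypothetical encoding of the FIG. 11 setting information for the
# "mounting components on a board" work:
# process number -> (target measurement information, speed-related quantity)
BOARD_MOUNTING_SETTINGS = {
    1: ("part position (visual sensor 310)", "speed"),
    2: ("target reaction force (force sensor 320)", "speed"),
    3: ("board position (visual sensor 310)", "jerk"),
    4: ("board position (visual sensor 310)", "acceleration"),
    5: ("board hole (image recognition)", None),  # recognition only
    6: ("board hole position + target reaction force", "acceleration"),
    7: ("per-process target reaction force", "jerk"),
    8: ("per-process target reaction force", "jerk"),
    9: ("per-process target reaction force", "jerk"),
    10: ("per-process target reaction force", "jerk"),
}
```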
  • A program may be provided that causes a computer to execute the operations of the above-described embodiments.
  • The program may be recorded on a computer-readable medium.
  • Using the computer-readable medium, the program can be installed on a computer.
  • The computer-readable medium on which the program is recorded may be a non-transitory storage medium.
  • The non-transitory storage medium is not particularly limited, but may be, for example, a storage medium such as a CD-ROM or DVD-ROM.
  • The functions described in the embodiments may be implemented by circuitry or processing circuitry including general-purpose processors, application-specific processors, integrated circuits, ASICs (Application Specific Integrated Circuits), CPUs (Central Processing Units), conventional circuits, and/or combinations thereof, programmed to achieve the described functions.
  • A processor includes transistors or other circuits and is considered to be circuitry or processing circuitry.
  • A processor may also be a programmed processor that executes a program stored in memory.
  • In this disclosure, circuitry, units, and means refer to hardware that is programmed to achieve, or that executes, the described functions.
  • The hardware may be any hardware disclosed herein, or any known hardware that is programmed to achieve or is capable of performing the described functions. If the hardware is a processor, which is considered a type of circuitry, the circuitry, means, or unit is a combination of hardware and software, where the software is used to configure the hardware and/or the processor.
  • As used in this disclosure, the terms "based on" and "depending on" do not mean "based only on" or "depending only on," unless expressly stated otherwise.
  • The term "based on" means both "based only on" and "based at least in part on."
  • The term "depending on" means both "depending only on" and "depending at least in part on."
  • The terms "include," "comprise," and variations thereof are not intended to mean including only the listed items; they may mean including only the listed items or including additional items in addition to the listed items.
  • The term "or" as used in this disclosure is not intended to mean an exclusive or. In this disclosure, when articles such as a, an, and the in English are added by translation, these articles are intended to include the plural unless the context clearly dictates otherwise.
  • Appendix 1: A control device comprising: a motion generation unit that generates a control command for executing a reverse motion that changes the motion state of a robot device from a target state to an arbitrary state different from the target state; an acquisition unit that acquires measurement information obtained using a sensor for measuring the operating state of the robot device; and a data collection unit that repeatedly collects data including a set of the control command and the measurement information during execution of the reverse motion in accordance with the control command, wherein the sensor includes at least one of a visual sensor and a force sensor.
  • Appendix 2: The control device according to appendix 1, wherein the motion generation unit generates the control command for executing a plurality of patterns of reverse motions that change the motion state of the robot device from the target state to a plurality of arbitrary states different from each other, and the data collection unit repeatedly collects the data for each of the plurality of patterns of reverse motion.
  • Appendix 3: The control device according to appendix 1 or 2, wherein the motion generation unit generates the control command such that the amount of data collected increases in an area closer to the target state and decreases in an area farther from the target state.
  • Appendix 4: The control device according to any one of appendices 1 to 3, wherein the acquisition unit shortens the acquisition cycle so that the amount of measurement information acquired increases in an area closer to the target state, and extends the acquisition cycle so that the amount of measurement information acquired decreases in an area farther from the target state.
  • Appendix 5: The control device according to any one of appendices 1 to 4, wherein the acquisition unit acquires, as relative measurement information, a difference between target measurement information acquired in the target state and current measurement information acquired during execution of the reverse motion, and the data collection unit repeatedly collects the data including a set of the control command and the relative measurement information while the reverse motion is being performed in accordance with the control command.
  • Appendix 6: The control device according to any one of appendices 1 to 5, wherein the acquisition unit acquires the measurement information including a speed-related value for the robot device in response to the output of the sensor or the control command, and the data collection unit repeatedly collects the data including a set of the control command and the measurement information including the speed-related value during execution of the reverse motion in accordance with the control command.
  • Appendix 7: The control device according to any one of appendices 1 to 6, further comprising: a setting acquisition unit that acquires setting information for performing a task using the robot device based on the data collected by the data collection unit; and a control unit that uses the setting information to control the robot device to perform the task.
  • Appendix 8: The control device according to appendix 7, wherein the setting acquisition unit acquires the setting information including a trained model for deriving the control command from the measurement information by machine learning using the data collected by the data collection unit as learning data, and the control unit performs the control based on the measurement information obtained during the task, using the setting information including the trained model.
  • Appendix 9: The control device according to appendix 7 or 8, wherein the sensor includes the visual sensor and the force sensor, and the setting acquisition unit acquires the setting information including information for setting a control ratio between visual control, which is the control based on visual measurement information obtained using the visual sensor, and force control, which is the control based on force measurement information obtained using the force sensor.
  • Appendix 10: The control device according to any one of appendices 7 to 9, wherein the control unit derives a speed-related value for the robot device from the measurement information or control commands obtained during the task, and generates the control command so as to reduce the difference between the speed-related value and a speed-related target value that is variable for each control cycle.
  • Appendix 11: The control device according to any one of appendices 1 to 10, wherein the motion generation unit generates the control command for executing the reverse motion for each type of work using the robot device, and the data collection unit collects the data for each of the types.
  • Appendix 12: The control device according to appendix 11, further comprising a setting acquisition unit that acquires, for each of the types, setting information for performing work using the robot device based on the data collected for each of the types by the data collection unit.
  • Appendix 13: A control method comprising: generating a control command for executing a reverse motion that changes the motion state of a robot device from a target state to an arbitrary state different from the target state; acquiring measurement information obtained using a sensor for measuring an operating state of the robot device; and repeatedly collecting data including a set of the control command and the measurement information during execution of the reverse motion in accordance with the control command, wherein the sensor includes at least one of a visual sensor and a force sensor.
  • Appendix 14: A program that causes a control device to execute: generating a control command for executing a reverse motion that changes the motion state of a robot device from a target state to an arbitrary state different from the target state; acquiring measurement information obtained using a sensor for measuring an operating state of the robot device; and repeatedly collecting data including a set of the control command and the measurement information during execution of the reverse motion in accordance with the control command, wherein the sensor includes at least one of a visual sensor and a force sensor.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Manipulator (AREA)

Abstract

This control device (100) comprises: a motion generation unit (110) that generates a control command for causing a reverse direction motion for changing the motion state of a robot device (200) from a target state to an arbitrary state different from the target state; an acquisition unit (120) that acquires measurement information obtained by using a sensor (300) for performing measurement pertaining to the motion state of the robot device (200); and a data collection unit (130) that repeatedly collects data including a set of the control command and the measurement information during execution of the reverse direction motion in accordance with the control command. The sensor (300) includes at least one of a visual sensor (310) and a force sensor (320).

Description

Control device, control method, and program

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority from Japanese Patent Application No. 2024-072901 (filed April 26, 2024), the entire contents of which are incorporated herein by reference.

This disclosure relates to a control device, control method, and program for controlling a robot device.

Control devices for controlling robot devices have long been in wide use. In such control devices, a known technique measures the operating state of the robot device using a visual sensor and a force sensor and controls the operating state of the robot device toward a target state (see, for example, Patent Document 1).

In addition, a conventional method for learning the movements of a robot device is teaching, in which the robot device is moved by hand and the series of movements to be executed is recorded.

Patent Document 1: Japanese Patent No. 7295344

With conventional technology, any change in the work environment and/or target object requires new teaching. As a result, it is difficult for conventional technology to efficiently collect data usable for controlling the operation of the robot device.

This disclosure provides a control device, control method, and program that enable more appropriate control of a robot device.

A control device according to a first aspect of the present disclosure includes a motion generation unit that generates a control command for executing a reverse motion that changes the motion state of a robot device from a target state to an arbitrary state different from the target state, an acquisition unit that acquires measurement information obtained using a sensor that measures the motion state of the robot device, and a data collection unit that repeatedly collects data including a set of the control command and the measurement information while the reverse motion is being executed in accordance with the control command. The sensor includes at least one of a visual sensor and a force sensor.

A control method according to a second aspect of the present disclosure includes generating a control command for executing a reverse motion that changes the motion state of a robot device from a target state to an arbitrary state different from the target state, acquiring measurement information obtained using a sensor that measures the motion state of the robot device, and repeatedly collecting data including a set of the control command and the measurement information while the reverse motion is being executed in accordance with the control command. The sensor includes at least one of a visual sensor and a force sensor.

A program according to a third aspect of the present disclosure causes a control device to generate a control command for executing a reverse motion that changes the motion state of a robot device from a target state to an arbitrary state different from the target state, acquire measurement information obtained using a sensor that measures the motion state of the robot device, and repeatedly collect data including a set of the control command and the measurement information while the reverse motion is being executed in accordance with the control command. The sensor includes at least one of a visual sensor and a force sensor.

FIG. 1 is a diagram illustrating an example of a system configuration of a control system including a control device according to an embodiment.
FIG. 2 is a block diagram showing a functional block configuration of the control device according to the first embodiment.
FIG. 3 is a diagram for explaining an example of data collection by the control device according to the first embodiment.
FIG. 4 is a diagram for explaining an example of data collection by the control device according to the first embodiment.
FIG. 5 is a flow diagram for explaining an example of data collection by the control device according to the first embodiment.
FIG. 6 is a diagram for explaining an example of a setting library stored in the control device according to the embodiment.
FIG. 7 is a block diagram showing a functional block configuration of a control device according to the second embodiment.
FIG. 8 is a flow diagram for explaining an example of a control flow by the control device according to the second embodiment.
FIG. 9 is a diagram for explaining a specific example of control by the control device according to the second embodiment during the operation of "mounting components on a board."
FIG. 10 is a block diagram showing a functional block configuration of a control device according to the third embodiment.
FIG. 11 is a diagram for explaining a specific example of control by the control device according to the third embodiment during the operation of "mounting components on a board."

Embodiments will be described with reference to the drawings. In the drawings, identical or similar parts are designated by the same or similar reference numerals.

(1) Overview of Embodiments

A control device according to an embodiment includes: a motion generation unit that generates a control command for executing a reverse motion that changes the motion state of a robot device from a target state to an arbitrary state different from the target state; an acquisition unit that acquires measurement information obtained using a sensor that measures the motion state of the robot device; and a data collection unit that repeatedly collects data including a set of the control command and the measurement information while the reverse motion is being executed in accordance with the control command. The sensor includes at least one of a visual sensor and a force sensor.

In this way, by repeatedly collecting data including a set of the control command and the measurement information during execution of a reverse motion in accordance with that control command, data usable for controlling the motion of the robot device during actual work (that is, forward motion) can be collected efficiently. For example, during actual work using the robot device, a control command can be generated from the measurement information obtained during the work based on the collected data, making it possible to appropriately control the motion of the robot device from an arbitrary state to the target state.
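As a concrete illustration of this collection loop, the sketch below logs a (control command, measurement information) set at each control step of a reverse motion. The robot and sensor interfaces are hypothetical; the embodiment only specifies the pairing and repetition.

```python
# Sketch of data collection during reverse motion: each control step logs a
# (control command, measurement information) set. Interfaces are hypothetical.
def collect_reverse_motion_data(robot, sensor, reverse_commands):
    dataset = []
    for command in reverse_commands:   # reverse motion: target -> arbitrary state
        robot.apply(command)           # execute one control step
        measurement = sensor.read()    # visual and/or force measurement
        dataset.append((command, measurement))
    return dataset
```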

Note that the term "robot device" refers to any device that can operate in accordance with control commands output by a control device, and any robot device is applicable. For example, the robot device may be an industrial robot such as a manipulator, or may include a mobile body capable of moving automatically. Industrial robots include, for example, vertical articulated robots, SCARA robots, parallel link robots, Cartesian robots, and collaborative robots. Mobile bodies capable of moving automatically include, for example, drones, vehicles configured for autonomous driving, automated guided vehicles, and mobile robots, as well as combinations of these with the above industrial robots. The embodiments described below mainly describe an example in which the robot device is a manipulator.

A "control command" relates to the control of the operation of the robot device, and is, for example, a target control amount, an operation amount, or the like. "Outputting a control command" may mean directly controlling the robot device based on the control command, or, when the robot device is equipped with a controller, may include outputting the control command to the controller to cause the controller to control the operation of the robot device.

The "operating state of the robot device" is a state related to the operation of a part of the configuration of the robot device (for example, an end effector) and/or a state related to an object of work performed using the robot device. "Measurement related to the operating state of the robot device" is measurement of a state related to the operation of a part of the configuration of the robot device (for example, an end effector) and/or measurement of a state related to an object of work performed using the robot device. An "object" is an item that may be related to the operation of the robot device, for example, a workpiece. A part of the configuration of the robot device (for example, an end effector) may also be regarded as an object.

"Work" is a job to be performed by the robot device and may include multiple processes. The work is, for example, transporting parts, fitting parts, driving screws, machining, or the like. The work may also be a simple job such as gripping a workpiece or releasing a workpiece. The work may be given in advance or may be given by an operator's designation.

The "target state" is the state at the time the purpose of the work (or process) is achieved. The target state may differ depending on the given work. For example, assuming work in which the robot device attaches an assembly part carried by an end effector to a part to be assembled, the target state may be a state in which the assembly part has been attached to the part to be assembled.

The "arbitrary state" may be any state different from the target state; for example, it may be the state at the start of, or partway through, the work performed using the robot device.

"Measurement information" is not limited to the sensor measurement data itself and may be, for example, a feature amount calculated from the measurement data. The "measurement information" may be measured and/or calculated utilizing, for example, an encoder and/or a servo motor present as a basic component of the robot.

(2) First Embodiment

The first embodiment mainly describes data collection, in which data to be used for control during actual work is collected before the work. Details of the control during actual work are described in the second and third embodiments.

(2.1) System Configuration

FIG. 1 is a diagram showing an example of the system configuration of a control system including the control device 100 according to the first embodiment. The description here focuses on the hardware configuration of the control system.

In the illustrated example, the robot device 200 is a manipulator. Specifically, the robot device 200 (manipulator) is a six-axis vertical articulated industrial robot, and has a base 221 and six joints 211 to 216. Each of the joints 211 to 216 has a built-in servo motor (not shown) and is configured to be rotatable around its axis.

The first joint unit 211 is connected to the base unit 221 and rotates its tip portion around the axis of the base. A moving mechanism capable of automatic movement (self-propelled) may be provided instead of the base unit 221. The second joint unit 212 is connected to the first joint unit 211 and rotates its tip portion back and forth. The third joint unit 213 is connected to the second joint unit 212 via a link 222 and rotates its tip portion up and down. The fourth joint unit 214 is connected to the third joint unit 213 via a link 223 and rotates its tip portion around the axis of the link 223. The fifth joint unit 215 is connected to the fourth joint unit 214 via a link 224 and rotates its tip portion up and down. The sixth joint unit 216 is connected to the fifth joint unit 215 via a link 225 and rotates its tip portion around the axis of the link 225. A force sensor 320 and a gripper 226 are attached to the tip side of the sixth joint 216. The gripper 226 is an example of an end effector.

The visual sensor 310 is a sensor that performs image measurement, and is positioned to observe each object (the gripper 226, workpiece W1, workpiece W2) present in the environment (work space) in which the robot device 200 operates. In the illustrated example, the visual sensor 310 is attached to the link 225 and is provided integrally with the robot device 200. However, the visual sensor 310 may instead be fixed to equipment in the work space and provided separately from the robot device 200. As the visual sensor 310, a camera such as a digital camera or a video camera may be used, for example. The measurement data of the visual sensor 310 (i.e., image data) is an example of visual measurement information.

The force sensor 320 is a sensor that performs force measurement and is configured to measure the forces and moments acting on the robot device 200 (specifically, the gripper 226). The force sensor 320 may be, for example, a six-axis force sensor that measures forces acting on the gripper 226 in the three axial directions of the X-axis, Y-axis, and Z-axis, and moments about the X-axis, Y-axis, and Z-axis. In other words, the force sensor 320 can measure the forces and moments generated by contact between an assembly part supported by the robot device 200, or the gripper 226, and an object. The measurement data of the force sensor 320 may be used to adjust the gripping force of the gripper 226 or to detect whether an abnormal force is acting on the gripper 226. In the following description, the term "force" is used to include the meaning of "moment."

The force sensor 320 may be realized, for example, by a method that measures forces in the three axial directions of the X-axis, Y-axis, and Z-axis, and moments about the X-axis, Y-axis, and Z-axis, by measuring the current values of the motors (not shown) built into the joints 211 to 216. The force sensor 320 may also be, for example, a pressure sensor provided on the surface of the robot device 200, and/or a sensor that utilizes changes in the state of a jacket provided on the surface of the robot device 200. The force sensor 320 may detect changes in the flow rate of air and/or liquid, and/or changes in capacitance.

Note that each of the joints 211 to 216 may have a built-in encoder (not shown). The encoder is an example of a sensor. The encoder is configured to be able to measure the angle (controlled variable) of each of the joints 211 to 216. The measurement data of the encoders may be used to control the angles of the joints 211 to 216.

The control system may have a transport device 510 that transports the workpiece W2. The robot device 200 can perform work using the gripper 226 (end effector) attached to the tip of its arm. The end effector is an external device that can be replaced depending on the application, and a welding gun, a tool, or the like may be attached instead of the gripper 226. The robot device 200 can perform work using the force sensor 320 while tracking the workpiece W2 flowing on the transport device 510 using the visual sensor 310. In the illustrated example, the robot device 200 performs the work of fitting the workpiece W1 (an assembly part that it grips) into a hole in the workpiece W2 (a part to be assembled, for example, a board) flowing on the transport device 510.

The control device 100 has a processor 101, a memory 102, and an external interface (I/F) 103. The processor 101 is configured to include a CPU (Central Processing Unit). Furthermore, the processor 101 may include at least one of a microprocessor, an FPGA (field-programmable gate array), and a DSP (digital signal processor). The memory 102 is configured to include a RAM (Random Access Memory), a ROM (Read Only Memory), and an auxiliary storage device (for example, a hard disk drive or a solid state drive). The processor 101 and the memory 102 constitute a computer. The control device 100 may be configured from multiple computers. Note that the control device 100 is not limited to an information processing device designed specifically for the service provided; it may be a general-purpose information processing device such as a PC (Personal Computer), or a controller such as a PLC (programmable logic controller).

The memory 102 stores programs executed by the processor 101. By executing the programs stored in the memory 102, the processor 101, together with the memory 102, realizes the functions of each functional block described below. Although details will be described later, the memory 102 may store, for example, a recognition library containing a trained model used for image recognition for recognizing objects, data collected by pre-work data collection, and a setting library containing setting information acquired based on the collected data. The recognition library and the setting library may be provided for each type of work that the robot device 200 can perform. The setting information included in the setting library may include a trained model generated based on the collected data.

The external I/F 103 is, for example, a USB (Universal Serial Bus) port or a dedicated port, and is an interface for communicably connecting to external devices. The external I/F 103 may be connected to external devices (including the robot device 200) by wire or wirelessly. The type and number of external I/Fs 103 may be selected appropriately depending on the type and number of external devices to be connected. In the illustrated example, the control device 100 is connected to the robot device 200, the visual sensor 310, and a user interface (I/F) 400 via the external I/F 103. As shown in FIG. 2, the user I/F 400 includes a display device 410 and an operation device 420. In the illustrated example, the user I/F 400 is provided separately from the control device 100, but the user I/F 400 may also be provided integrally with the control device 100. The display device 410 may be a liquid crystal display, an organic EL (Electro-Luminescence) display, or the like. The display device 410 may be a display equipped with a speaker. The operation device 420 is a device for performing operation input, such as a keyboard, a mouse, or a touch panel. The display device 410 and the operation device 420 may be integrally configured as a touch panel display. By using the display device 410 and the operation device 420, the operator can check the status of the control device 100 and operate the control device 100.

(2.2) Functional Block Configuration of the Control Device

FIG. 2 is a block diagram showing the functional block configuration of the control device 100 according to the first embodiment. The first embodiment mainly describes the functional block configuration related to data collection, in which data to be used for control during work is collected before the work. Such data collection may be performed before a user (including an operator) starts using the control device 100 (and the robot device 200). For example, data collection may be performed in advance before the control device 100 (and the robot device 200) is shipped.

The control device 100 has a motion generation unit 110, an acquisition unit 120, a data collection unit 130, a data storage unit 140, a setting acquisition unit 150, and a library storage unit 160. In this embodiment, the control device 100 may or may not have a control unit 170.

The motion generation unit 110 generates a control command for executing a forward motion that changes the motion state of the robot device 200 from an arbitrary state to the target state. The motion generation unit 110 also generates a control command for executing a reverse motion that changes the motion state of the robot device 200 from the target state to an arbitrary state different from the target state. The motion generation unit 110 outputs the generated control commands to the drive unit 210 of the robot device 200. The drive unit 210 includes the servo motors provided in the joints 211 to 216 of the robot device 200. The drive unit 210 may include a controller on the robot device 200 side. The drive unit 210 drives the servo motors in accordance with the control commands to operate the robot device 200.

Here, the target state is the state realized when the purpose of the work (or process) is achieved, and/or an intermediate state of the work. The first embodiment assumes a series of operations in which the robot device 200 attaches an assembly part (workpiece W1) carried by the end effector (gripper 226) to a part to be assembled (workpiece W2); therefore, the target state is a state in which the assembly part has been attached to the part to be assembled, and/or an intermediate state before attachment. Specifically, the target state is a state in which the workpiece W1 has been fitted into the hole of the workpiece W2, and/or an intermediate state in which the workpiece W1 is in contact with the workpiece W2 at a position where it can be fitted into the hole.

The arbitrary state is a state different from the target state, for example, a state in which the workpiece W1 is located away from the hole of the workpiece W2. The arbitrary state may be set by operation input (user input) via the operation device 420. For example, the position of the arbitrary state may be set by operation input (user input) with the position of the target state as a reference. Note that the term "position" may include not only the meaning of "coordinates" but also the meaning of "posture."

The acquisition unit 120 acquires measurement information obtained using a sensor 300 for measuring the operating state of the robot device 200. In the illustrated example, the sensor 300 includes a visual sensor 310 and a force sensor 320. The sensor 300 may further include other sensors 330, such as an encoder. The acquisition unit 120 may include an image recognition unit 121 that performs image recognition on the measurement data (i.e., image data) output by the visual sensor 310. The image recognition unit 121 may recognize objects (for example, the gripper 226, workpiece W1, workpiece W2) through image recognition such as feature extraction, and acquire the positions of the objects (for example, the coordinates of characteristic features). Such position information is an example of visual measurement information.
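As one illustrative way the image recognition unit 121 might derive an object position from camera data, the sketch below uses OpenCV template matching. The embodiment only mentions feature extraction in general terms and does not prescribe this method; the function and parameter names are assumptions.

```python
import cv2
import numpy as np

def locate_object(image: np.ndarray, template: np.ndarray):
    """Find an object's position in a camera frame by template matching.
    This is only one of many possible image-recognition approaches."""
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)  # best match location
    h, w = template.shape[:2]
    center = (top_left[0] + w // 2, top_left[1] + h // 2)
    return center, score  # pixel coordinates of the match and its confidence
```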

The data collection unit 130 repeatedly collects data including a set of a control command output by the motion generation unit 110 and the measurement information acquired by the acquisition unit 120 while a reverse motion is being executed in accordance with that control command. This makes it possible to efficiently collect data usable for controlling the motion (forward motion) of the robot device 200 during actual work. For example, during actual work using the robot device 200, a control command can be generated from the measurement information obtained during the work based on the collected data, making it possible to appropriately control the motion of the robot device from an arbitrary state to the target state. Note that the data collection unit 130 may also repeatedly collect data including a set of a control command and the measurement information acquired by the acquisition unit 120 while a forward motion is being executed in accordance with the control command output by the motion generation unit 110.

The motion generation unit 110 may generate control commands for executing multiple patterns of reverse motion that change the motion state of the robot device 200 from the target state to multiple mutually different arbitrary states. The data collection unit 130 may repeatedly collect data for each of the multiple patterns of reverse motion. This makes it possible to collect multiple patterns of data corresponding to multiple movement paths, giving versatility to the motion control of the robot device 200 during actual work. For example, even if the workpiece W2 is not fixed or moves during work, it becomes easy to attach the workpiece W1 to the workpiece W2.
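As a minimal sketch of generating the mutually different end states for these patterns, the snippet below samples several arbitrary states around the target pose, one per reverse-motion pattern. The uniform box sampling and the `radius` parameter are assumptions; the embodiment does not specify how the arbitrary states are chosen.

```python
import numpy as np

def sample_arbitrary_states(target_pose: np.ndarray, n_patterns: int,
                            radius: float, seed: int = 0) -> np.ndarray:
    """Generate several mutually different arbitrary end states around the
    target pose, one per reverse-motion pattern. Uniform sampling within a
    box of half-width `radius` is an assumption for illustration."""
    rng = np.random.default_rng(seed)
    offsets = rng.uniform(-radius, radius, size=(n_patterns, target_pose.size))
    return target_pose + offsets
```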

The motion generation unit 110 may generate control commands such that more data is collected in regions closer to the target state and less data is collected in regions farther from it. The closer a region is to the target state, the more precise and accurate the control required during actual work. Increasing the amount of data collected in regions close to the target state makes it possible to gather enough data for such precise and accurate control. In regions far from the target state, by contrast, precise and accurate control is less necessary, so generating control commands that collect less data there suppresses unnecessary data collection and enables efficient collection.

The acquisition unit 120 may shorten the acquisition period (i.e., raise the sampling frequency) so that more measurement information is acquired in regions closer to the target state, and may lengthen the acquisition period so that less measurement information is acquired in regions farther from the target state. This kind of processing also increases the amount of data collected near the target state and decreases it far from the target state.
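A minimal sketch of such a distance-dependent acquisition period follows; all numeric values and the linear interpolation are illustrative assumptions, not values from the disclosure.

```python
def acquisition_period(distance_to_target, near=0.005, far=0.05,
                       min_period=0.005, max_period=0.1):
    """Shorter sampling period near the target state, longer far from it.

    Linearly interpolates between min_period (at distance `near`) and
    max_period (at distance `far`); distances in meters, periods in seconds.
    """
    if distance_to_target <= near:
        return min_period
    if distance_to_target >= far:
        return max_period
    ratio = (distance_to_target - near) / (far - near)
    return min_period + ratio * (max_period - min_period)
```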

The acquisition unit 120 may acquire, as relative measurement information, the difference between target measurement information obtained in the target state and current measurement information obtained while the reverse motion is being executed. The data collection unit 130 may repeatedly collect data including a set of the control command and the relative measurement information during execution of the reverse motion in accordance with that control command. For example, the acquisition unit 120 may acquire, as relative position information, the difference between the visual measurement information of the object obtained by the visual sensor 310 in the target state (the target position) and the current measurement information of the object obtained during the reverse motion (the current position). In the environment of FIG. 1, the acquisition unit 120 may take the position of the hole in workpiece W2 as the target position, take each position on the movement path of workpiece W1 as the current position, and acquire the difference between the target position and each current position as the relative position. A control command is thereby associated with each relative positional relationship between the target position and the current position of the object, so during actual work an appropriate control command can be generated from that relative positional relationship. Moreover, control based on relative positional relationships remains applicable even when the object moves during work.
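As a small worked example of the difference computation (the coordinate values below are hypothetical):

```python
import numpy as np

def relative_measurement(target, current):
    """Difference between the target-state measurement and the current one.

    Works for positions (x, y, z), orientations expressed as vectors, or
    force/torque readings; both arguments are array-like of equal length.
    """
    return np.asarray(target, dtype=float) - np.asarray(current, dtype=float)

# Example: hole position of workpiece W2 as target, current W1 position.
rel_pos = relative_measurement([0.40, 0.10, 0.05], [0.35, 0.12, 0.20])
```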

The acquisition unit 120 may acquire measurement information including a speed-related value of the robot device 200 from the output of the sensor 300 or from a control command. The speed-related value of the robot device 200 may be at least one of the velocity, acceleration, and jerk of the object. The acquisition unit 120 may derive the velocity, acceleration, and jerk of the object from the visual measurement information (position information of the object) obtained by the visual sensor 310, from the control commands output to the drive unit 210, or from measurement information obtained by another sensor 330 (e.g., an encoder). The data collection unit 130 may repeatedly collect data including a set of the control command and the measurement information including the speed-related value while the reverse motion is executed in accordance with that control command. The speed-related values collected in this manner can be used to set speed-related target values for control during actual work. Control using speed-related target values is described in the third embodiment.
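One straightforward way to derive these speed-related values from sampled positions is by finite differences, as in the sketch below (an assumption; the disclosure leaves the derivation method open):

```python
import numpy as np

def speed_related_values(positions, dt):
    """Derive velocity, acceleration, and jerk from sampled positions.

    `positions` is an (N, 3) array of object positions sampled every `dt`
    seconds (e.g., from the visual sensor); simple finite differences.
    """
    p = np.asarray(positions, dtype=float)
    velocity = np.diff(p, axis=0) / dt
    acceleration = np.diff(velocity, axis=0) / dt
    jerk = np.diff(acceleration, axis=0) / dt
    return velocity, acceleration, jerk
```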

The data storage unit 140 stores the data collected by the data collection unit 130. The collected data includes multiple sets of measurement information (relative measurement information) and control commands; each set may also include a speed-related value. The collected data may contain more data sets for regions closer to the target state.

The setting acquisition unit 150 acquires setting information for performing work using the robot device 200 based on the data collected by the data collection unit 130 (specifically, the data stored in the data storage unit 140). The setting information for performing a given task is referred to as the "setting library" for that task. For example, the setting acquisition unit 150 acquires the setting library after data collection by the data collection unit 130 is complete. The library storage unit 160 stores the setting library acquired by the setting acquisition unit 150, and the control unit 170 controls the robot device 200 using that setting library during actual work.

The setting library includes operation parameters for each process of the work, together with information on the transition destination and transition conditions of each process (normal values, timeout values, error values, and the like). The operation parameters may include correspondence information that associates measurement information (relative measurement information) with control commands, or that associates measurement information (relative measurement information), speed-related values, and control commands. The operation parameters may include the control ratio between visual control, i.e., robot control based on visual measurement information obtained by the visual sensor 310, and force control, i.e., robot control based on force measurement information obtained by the force sensor 320. The operation parameters may also include information on the type of object (metal, resin, screw, connector, etc.) and on features of the object (holes, edge surfaces, connectors, etc.). The setting acquisition unit 150 may acquire the setting library based at least in part on operation input performed via the operation device 420.
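To make the shape of one setting-library entry concrete, here is a hypothetical data structure; every field name and value below is an assumption chosen for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessSetting:
    """Per-process entry of a setting library (illustrative field names)."""
    name: str
    control_ratio: tuple          # (visual %, force %)
    transition_to: str            # next process
    normal_value: float           # transition condition: normal value
    timeout_s: float              # transition condition: timeout value
    error_value: float            # transition condition: error value
    object_type: str = "metal"    # e.g., metal, resin, screw, connector
    features: list = field(default_factory=list)  # e.g., ["hole", "edge"]

# One hypothetical entry for the "hole approach" process.
step6 = ProcessSetting("hole approach", (50, 50), "hole tracing",
                       normal_value=0.001, timeout_s=5.0, error_value=0.05,
                       features=["hole"])
```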

The setting acquisition unit 150 may use the data collected by the data collection unit 130 as training data and acquire, through machine learning, a trained model for deriving control commands from measurement information. In this case, during actual work, the control unit 170 uses a setting library including the trained model to control the robot device 200 based on the measurement information obtained during that work. The type of learning model is not particularly limited, provided it can acquire, through machine learning, the ability to make inferences for generating control commands. The type of machine learning is likewise not particularly limited, but is typically supervised learning or reinforcement learning. The learning model may be configured, for example, as a neural network such as a deep neural network (DNN), or as a value function such as a state value function or an action value function. In this way, the setting acquisition unit 150 uses machine learning to determine the computation for inferring optimal values, to enable such inference, and to generate optimal-value variables for control commands.
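A minimal supervised-learning sketch using scikit-learn follows. The random arrays stand in for the collected sets (each input row a relative measurement, each output row the command issued in that state); the model choice and sizes are assumptions, not part of the disclosure.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholders for arrays built from the collected sets.
X = np.random.rand(1000, 6)   # relative measurements (and speed values)
y = np.random.rand(1000, 3)   # control commands issued in those states

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
model.fit(X, y)               # supervised learning on the collected data

# At work time: derive a command from the current measurement.
command = model.predict(X[:1])
```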

The setting acquisition unit 150 may acquire a setting library that includes the control ratio between visual control and force control, for example for each process. In this case, during actual work, the control unit 170 uses that setting library to change the control ratio dynamically or stepwise depending on the status of the work. Details of this type of control are described in the second embodiment.

The setting acquisition unit 150 may acquire a setting library containing speed-related values of the robot device 200, for example for each process. In this case, the control unit 170 may derive the speed-related value of the robot device 200 from the measurement information or control commands obtained during work, and may repeatedly control the robot device 200, based on the setting library and on the measurement information obtained during work, so as to reduce the difference between the speed-related value and the speed-related target value. Details of this type of control are described in the third embodiment.

Note that the motion generation unit 110 may generate control commands for executing reverse motions for each type of task performed with the robot device 200. In the environment of FIG. 1, the task type is "mounting components on a board." Other examples of task types include "packing food into boxes," "screw tightening," "pick and place," and "AGV (Automated Guided Vehicle) control." The data collection unit 130 may collect data per task type by repeatedly collecting data for each task type.

In this case, the setting acquisition unit 150 can acquire a setting library for each task type based on the data collected by the data collection unit 130 for that task type. The control unit 170 then controls the robot device 200 to perform a task using the setting library corresponding to the type of task actually being performed. For example, the control unit 170 may display a list of the setting libraries stored in the library storage unit 160 on the display device 410 and control the robot device 200 using the setting library selected from that list with the operation device 420. If the library storage unit 160 also stores recognition libraries in association with the setting libraries, image recognition for the task may be performed using the recognition library corresponding to the type of task actually being performed.

(2.3) Example of Data Collection
FIGS. 3 and 4 are diagrams for explaining an example of data collection by the control device 100 according to the first embodiment.

As described above, the motion generation unit 110 generates control commands for executing multiple patterns of reverse motion that change the motion state of the robot device 200 from the target state to multiple mutually different arbitrary states, and the data collection unit 130 repeatedly collects data for each of these patterns. As shown in FIG. 3, the motion generation unit 110 generates control commands such that more data is collected in regions closer to the target state and less data is collected in regions farther from it. In FIG. 3, the x-axis and y-axis indicate mutually orthogonal directions in the horizontal plane, and the z-axis indicates the vertical direction. The amount of collected data is indicated by shading: the greater the amount, the darker the color; the smaller the amount, the lighter the color.

In the example of FIG. 4, the motion generation unit 110 takes the state in which workpiece W1 (the component) sits in the hole of workpiece W2 (the board) as the target state, and causes the robot device 200 to execute reverse motions toward a total of eight arbitrary states, patterns P1 to P8. Although the movement path of each pattern P1 to P8 is drawn as a straight line, the paths need not be straight. For example, a path may pull workpiece W1 (the component) upward out of the hole in workpiece W2 (the board) and then move it horizontally or diagonally upward. Of the patterns P1 to P8, the movements of P2, P4, and P8 end within region R, which is close to the target state. As a result, more data is collected at points closer to the target-state position.

(2.4) Specific Example of the Operation of the Control Device
FIG. 5 is a flow chart for explaining an example of data collection by the control device 100 according to the first embodiment.

In step S101, a user (e.g., a worker) moves the object (e.g., workpiece W1 grasped or supported by the end effector) to the target state. The motion generation unit 110 also acquires the control-command sequence information corresponding to one of the multiple reverse-motion patterns (i.e., one of the arbitrary states) and/or the position information of that arbitrary state. Here, the acquisition unit 120 acquires, from the visual measurement information obtained using the visual sensor 310, the position of the object in the target state as the target position. The acquisition unit 120 also acquires, from the force measurement information obtained using the force sensor 320, the force (reaction force) applied to the object in the target state as the target force (target reaction force).

In step S102, the motion generation unit 110 outputs a control command to the robot device 200 (the drive unit 210) based on the information acquired in step S101, thereby moving the object by a predetermined amount.

In step S103, the acquisition unit 120 acquires the current measurement information. It acquires the current position of the object from the visual measurement information obtained with the visual sensor 310, and may acquire the difference between the current position and the target position as the relative position (amount of position change). It also acquires the current force (reaction force) from the force measurement information obtained with the force sensor 320, and may acquire the difference between the current force (reaction force) and the target force (reaction force) as the relative force (amount of force change). Furthermore, the acquisition unit 120 may acquire the speed-related values of the object (velocity, acceleration, and jerk) from the current visual measurement information or from the control command of step S102.

In step S104, the data collection unit 130 collects a set consisting of the control command of step S102 and the information acquired in step S103 (position change, force change, and speed-related values), and stores the set in the data storage unit 140.

If the movement of the object to the arbitrary state corresponding to the current reverse-motion pattern has not been completed (step S105: NO), the process returns to step S102, and the motion generation unit 110 again outputs a control command to the robot device 200 (the drive unit 210) to move the object by a predetermined amount. In step S103 the acquisition unit 120 acquires the information (position change, force change, and speed-related values), and in step S104 the data collection unit 130 collects the set of the step S102 control command and the step S103 information and stores it in the data storage unit 140. This processing is repeated until the movement to the arbitrary state corresponding to the current reverse-motion pattern is complete.

When the movement of the object to the arbitrary state corresponding to the current reverse-motion pattern is complete (step S105: YES), the motion generation unit 110 checks whether the motions have been completed for all of the reverse-motion patterns. If not all patterns have been completed (step S106: NO), processing moves to the next reverse-motion pattern (step S107) and resumes from step S101.

When all patterns have been completed (step S106: YES), in step S108 the setting acquisition unit 150 determines the operating conditions (operation parameters) for performing work with the robot device 200 based on the data collected by the data collection unit 130 (specifically, the data stored in the data storage unit 140). Then, in step S109, the setting acquisition unit 150 stores setting information including the operating conditions (operation parameters) determined in step S108 in the library storage unit 160 as a setting library.

Note that by executing the flow shown in FIG. 5 for each task type, a setting library for each task type may be stored in the library storage unit 160.

(2.5) Example of a Setting Library
FIG. 6 is a diagram for explaining an example of a setting library stored by the control device 100 according to the embodiment.

In the illustrated example, the task types are "mounting components on a board," "packing food into boxes," "screw tightening," "pick and place" (so-called picking), and "AGV control." Each task includes multiple processes.

For example, the task type "mounting components on a board" proceeds in the order: process 1 "component recognition → movement" → process 2 "component gripping" → process 3 "moving the component toward the board" → process 4 "approach" → process 5 "board hole recognition" → process 6 "hole approach" → process 7 "hole tracing" → process 8 "hole insertion" → process 9 "hole insertion complete" → process 10 "grip release." During data collection, the control device 100 may control the operation of the robot device 200 in the reverse of this order. In the case of "AGV control," a process for avoiding obstacles may additionally be included; it may be part of process 4, "moving the object toward the target object."

The setting library includes the operation parameters for each process of the task and information on the transition destination and transition conditions of each process (normal values, timeout values, error values, and the like). The operation parameters may include correspondence information associating measurement information (relative measurement information) with control commands, or associating measurement information (relative measurement information), speed-related values, and control commands. They may also include the control ratio between visual control and force control, information on the type of object (metal, resin, screw, connector, etc.), and information on features of the object (holes, edge surfaces, connectors, etc.).

(3) Second Embodiment
The second embodiment is described mainly in terms of its differences from the first embodiment. The system configuration of the second embodiment is the same as that of the first embodiment (see FIG. 1). Although the description mainly treats the second embodiment as building on the first embodiment, the second embodiment does not necessarily presuppose even part of the first embodiment.

(3.1) Functional Block Configuration of the Control Device
FIG. 7 is a block diagram showing the functional block configuration of the control device 100 according to the second embodiment.

The control device 100 according to the second embodiment has an acquisition unit 120 that acquires measurement information obtained using a sensor 300 for measuring the operating state of the robot device 200, and a control unit 170 that controls the operation of the robot device 200 based on the measurement information. The sensor 300 includes a visual sensor 310 and a force sensor 320. The control unit 170 changes the control ratio between visual control, i.e., control based on visual measurement information obtained by the visual sensor 310, and force control, i.e., control based on force measurement information obtained by the force sensor 320, dynamically or stepwise depending on the status of the work being performed with the robot device 200. This exploits the advantages of both visual control and force control and makes it possible to control the operation of the robot device 200 appropriately by combining the two.

In the second embodiment, the control unit 170 has a visual control unit 171A that refers to the visual measurement information and generates first information indicating control content for the robot device 200, a force control unit 171B that refers to the force measurement information and generates second information indicating control content for the robot device 200, a command generation unit 172 that generates a control command for the robot device 200 based on the first information and the second information, and a weighting unit 173 that changes the control ratio by weighting either the visual measurement information against the force measurement information or the first information against the second information, depending on the status of the work being performed with the robot device 200. The control ratio can thus be changed appropriately through weighting.
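One simple way to realize the weighting between the first information and the second information is a linear blend, sketched below under the assumption that both pieces of information are command vectors of equal dimension (the disclosure leaves the weighting scheme open):

```python
import numpy as np

def blended_command(visual_cmd, force_cmd, visual_weight):
    """Combine the first information (visual control) and the second
    information (force control) according to the control ratio.

    `visual_weight` is in [0, 1]; 0.8 corresponds to a ratio of 80:20.
    """
    v = np.asarray(visual_cmd, dtype=float)
    f = np.asarray(force_cmd, dtype=float)
    return visual_weight * v + (1.0 - visual_weight) * f
```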

The visual control performed by the visual control unit 171A may be based, for example, on the operation parameters in the setting library (correspondence information and the like). The visual control unit 171A identifies the relative positional relationship between the current position of the object and the target position based on the visual measurement information obtained with the visual sensor 310. From the identified relationship and the correspondence information, the visual control unit 171A may then generate a visual control command that reduces the difference between the current position and the target position (i.e., brings the object closer to the target position) and output this visual control command as the first information. At least part of such visual control may be performed using a trained model.

Similarly, the force control performed by the force control unit 171B may be based, for example, on the operation parameters in the setting library (correspondence information and the like). The force control unit 171B may identify the difference between the current force (current reaction force) on the object and the target force (target reaction force) based on the force measurement information obtained with the force sensor 320, generate from the identified difference and the correspondence information a force control command that reduces that difference, and output this force control command as the second information. At least part of such force control may be performed using a trained model.

The control unit 170 (weighting unit 173) may change the control ratio dynamically or stepwise as the work with the robot device 200 progresses. As described above, work using the robot device 200 may include multiple predetermined processes, and the control unit 170 may change the control ratio per process. The operation of the robot device 200 can thereby be controlled with a control ratio appropriate to each process.

The library storage unit 160 may store a setting library (setting information) that includes the control-ratio settings for each of the multiple processes. The control unit 170 (weighting unit 173) may change the control ratio per process based on the setting library, so that an appropriate control ratio can be set for each process.

At the transition from one process to the next within a task, the control unit 170 (weighting unit 173) changes the control ratio gently from the ratio of the current process toward the ratio of the next process. Because the control ratio does not fluctuate abruptly, precise motion control becomes possible and the occurrence of operation errors is suppressed.

For example, suppose the control ratio (visual control : force control) of process A is set to "80:20" and that of the following process B to "50:50." When the transition condition from process A to process B is satisfied, the control unit 170 (weighting unit 173) changes the control ratio in steps: "80:20" → "75:25" → "70:30" → "65:35" → "60:40" → "55:45" → "50:50." Alternatively, it may change the ratio continuously: "80:20" → "79:21" → "78:22" → "77:23" → and so on. Such changes may be made per control cycle or per time unit consisting of multiple control cycles.
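The stepwise sequence above can be generated by simple interpolation, as in this sketch (the step count is an assumption; raising it approximates the continuous variant):

```python
def ramp_ratio(start, end, steps):
    """Yield control ratios that move gently from `start` to `end`.

    start=(80, 20), end=(50, 50), steps=6 yields 80:20, 75:25, ..., 50:50,
    matching the stepwise example; a larger `steps` gives a near-continuous
    change per control cycle.
    """
    for i in range(steps + 1):
        t = i / steps
        visual = start[0] + t * (end[0] - start[0])
        yield (visual, 100.0 - visual)

for ratio in ramp_ratio((80, 20), (50, 50), 6):
    print(ratio)
```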

The control unit 170 (weighting unit 173) may determine, based on the measurement information obtained with the sensor 300, whether the transition condition from one process to the next has been satisfied. The transition condition may be included as one of the operation parameters in the setting library. When the transition condition is satisfied, the control unit 170 (weighting unit 173) may transition from the current process to the next process and change to the control ratio corresponding to the next process.

If the control in the next process does not converge after switching to it, the control unit 170 may return to the previous process and change back to the control ratio corresponding to that process. By returning to the preceding process and redoing the motion when control fails to converge, one can expect the control to converge the next time the transition is made. Note that "the control does not converge" may mean that at least one of the following conditions holds: the measurement information does not satisfy the normal value of the transition condition, a timeout has occurred, or the measurement information has taken an error value.

The control unit 170 (weighting unit 173) may switch the control of the robot device 200 among a first control state in which visual control takes priority (also called "vision-based"), a second control state in which visual control and force control are used cooperatively (also called "vision-plus-force-based"), and a third control state in which force control takes priority (also called "force-based"). "Visual control takes priority" may mean, for example, that visual control accounts for roughly 65% to 100% of the control ratio; "used cooperatively" may mean a visual-to-force ratio of roughly 50:50; and "force control takes priority" may mean that force control accounts for roughly 65% to 100%. The control unit 170 may decide on switching among the first to third control states based on the measurement information obtained with the sensor 300. If, in the first control state, the difference between the measurement information and its target value does not fall to or below a predetermined value (which may correspond to the transition condition not being satisfied), the control unit 170 (weighting unit 173) may switch to the second or third control state and refer to the force measurement information.

When an external instruction is received, the control unit 170 (weighting unit 173) may change the control ratio in accordance with that instruction. For example, when a change of the control ratio is instructed via the operation device 420, the control unit 170 (weighting unit 173) may change the ratio accordingly.

The control unit 170 may further have a prediction unit 174A that predicts, from the control content applied to the robot device 200 and the control result (sensor information) for that content, the control result of subsequent control content for the robot device 200, and a correction unit 174B that corrects the subsequent control content according to that prediction. For example, when the robot device 200 does not move exactly as specified by the control command because of external factors such as error sources on the robot device 200 side or the sensor 300 side, the prediction unit 174A identifies the error (i.e., the difference between the content of the control command and the control result) and predicts the control result of subsequent control content. The correction unit 174B may then correct the control command generated by the command generation unit 172 according to the prediction result of the prediction unit 174A and output it to the drive unit 210, for example correcting the command so as to cancel the identified error. More accurate robot control thus becomes possible even in the presence of external error factors.
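A minimal predict-and-correct sketch, assuming three-axis commands and the simplification that the last observed error recurs unchanged (a richer error model would be needed in practice):

```python
import numpy as np

class CommandCorrector:
    """Predict the error of the next control result from the last one
    and pre-compensate for it (a deliberately simple sketch)."""

    def __init__(self):
        self.error = np.zeros(3)

    def observe(self, commanded_motion, measured_motion):
        # Difference between what was commanded and what the sensors saw.
        self.error = np.asarray(commanded_motion) - np.asarray(measured_motion)

    def correct(self, next_command):
        # Assume the same error will recur and cancel it in advance.
        return np.asarray(next_command) + self.error
```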

The library storage unit 160 may store multiple setting libraries prepared per task type. Each setting library may include control-ratio setting information (operation parameters) and includes setting information for each process in the sequence. The control unit 170 controls the robot device 200 using the setting library selected from among the multiple setting libraries according to the type of task actually performed. The selection may be made by operation input via the operation device 420. Prior to such input, the acquisition unit 120 (image recognition unit 121) may estimate the task to be performed based on the visual measurement information obtained by observing the work site or the object with the visual sensor 310, and the control unit 170 may propose the setting library corresponding to the estimated task to the user (operator) by displaying it on the display device 410.

The library storage unit 160 may further store multiple recognition libraries prepared per task type. Each recognition library may include a trained model used for image recognition of objects. The control unit 170 performs image recognition using the recognition library selected from among the multiple recognition libraries according to the type of task actually performed. The selection may be made by operation input via the operation device 420. Prior to such input, the acquisition unit 120 (image recognition unit 121) may estimate the task to be performed based on the visual measurement information obtained by observing the work site or the object with the visual sensor 310, and the control unit 170 may propose the recognition library corresponding to the estimated task to the user (operator) by displaying it on the display device 410.

(3.2) Operation Flow of the Control Device
FIG. 8 is a flow diagram for explaining an example of the control flow of the control device 100 according to the second embodiment.

In step S201, the control unit 170 starts process n. At the start of the work, the value of n is "1." The control unit 170 applies the setting information corresponding to process n (operation parameters including the control ratio).

In step S202, the control unit 170 controls the operation of the robot device 200 by applying the setting information (operation parameters including the control ratio) corresponding to process n.

In step S203, the control unit 170 determines whether the transition condition from process n to the next process is satisfied (i.e., whether process n is complete). If process n is complete (step S203: YES), processing proceeds to step S204. If process n is not complete (step S203: NO), then in step S205 the control unit 170 determines whether the control of process n is converging. If it determines that the control of process n is converging (step S205: YES), processing returns to step S202.

If it determines that the control of process n is not converging (step S205: NO), the control unit 170 returns to process (n-1), the process preceding process n. In doing so, it may change the control ratio gently from that of process n to that of process (n-1). Alternatively, instead of returning to process (n-1), the control unit 170 may change at least some of the operation parameters of process n according to a predetermined rule and then restart process n. For example, if the speed-related target value is presumed to be too high, process n may be restarted with the speed-related target value lowered by one level.

In step S204, the control unit 170 determines whether all processes of the work are complete. If all processes are complete, the flow ends. If not (step S204: NO), then in step S207 the control unit 170 advances to process (n+1), the process following process n. In doing so, it may change the control ratio gently from that of process n to that of process (n+1).
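The flow of FIG. 8 amounts to a small state machine; the sketch below captures its skeleton (the callbacks `execute`, `converged`, and `completed` are hypothetical interfaces, and safeguards such as retry limits are omitted for brevity):

```python
def run_work(processes, execute, converged, completed):
    """Skeleton of the FIG. 8 flow: run each process and fall back one
    process when control fails to converge.

    execute(n): apply process n's parameters for one control step.
    completed(n): transition condition of process n is satisfied.
    converged(n): False when control cannot converge (timeout, error
    value, or the normal value cannot be reached).
    """
    n = 0
    while n < len(processes):
        execute(n)
        if completed(n):
            n += 1                # advance to process n+1
        elif not converged(n):
            n = max(0, n - 1)     # return to the previous process
    # all processes completed
```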

(3.3) Specific Example of the Operation of the Control Device
FIG. 9 is a diagram for explaining a specific example of control by the control device 100 according to the second embodiment during a "mounting components on a board" task.

The task type "mounting components on a board" proceeds in the order: process 1 "component recognition → movement" → process 2 "component gripping" → process 3 "moving the component toward the board" → process 4 "approach" → process 5 "board hole recognition" → process 6 "hole approach" → process 7 "hole tracing" → process 8 "hole insertion" → process 9 "hole insertion complete" → process 10 "grip release," and the operation parameters of each of these processes are included in the setting library.

Among these processes, as a rule, force-based control is set for processes involving contact between objects (the end effector (gripper 226), the component (workpiece W1), and the board (workpiece W2)); vision-based control is set for processes that move an object (end effector or component) at or above a certain speed; and vision-plus-force-based control is set for processes that move an object at low speed toward contact with another object. The specific control ratio of each process, however, is preferably set based on the data collected in the data collection of the first embodiment.

In the illustrated example, vision-based control is applied to process 1, "component recognition → movement." However, taking the position of the component recognized by image recognition as the target position, vision-plus-force-based control may be applied once the current position of the end effector comes close to the target position. Force-based control is applied to process 2, "component gripping."

Vision-based control is applied to process 3, "moving the component toward the board," and to process 4, "approach." However, taking the position of the board recognized by image recognition as the target position, vision-plus-force-based control may be applied once the current position of the component comes close to the target position. Vision-based control is applied to process 5, "board hole recognition."

Vision-plus-force-based control is applied to process 6, "hole approach." Force-based control is applied from process 7, "hole tracing," through process 10, "grip release."
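Collecting the assignments above into one table makes the pattern easier to see. The numeric ratios below are hypothetical stand-ins for the three control states; as noted, actual values should come from the data collected in the first embodiment:

```python
# Illustrative control ratios (visual %, force %) per process.
CONTROL_RATIOS = {
    "1 component recognition/move": (100, 0),   # vision-based
    "2 component gripping":         (0, 100),   # force-based
    "3 move toward board":          (100, 0),
    "4 approach":                   (100, 0),
    "5 hole recognition":           (100, 0),
    "6 hole approach":              (50, 50),   # vision + force
    "7 hole tracing":               (0, 100),
    "8 hole insertion":             (0, 100),
    "9 insertion complete":         (0, 100),
    "10 grip release":              (0, 100),
}
```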

(4) Third Embodiment
The third embodiment is described mainly in terms of its differences from the first and second embodiments. The system configuration of the third embodiment is the same as that of the first embodiment (see FIG. 1). Although the description mainly treats the third embodiment as building on the first and second embodiments, the third embodiment does not necessarily presuppose even part of them.

(4.1) Functional Block Configuration of the Control Device
FIG. 10 is a block diagram showing the functional block configuration of the control device 100 according to the third embodiment.

The control device 100 according to the third embodiment has an acquisition unit 120 that acquires measurement information obtained using a sensor 300 for measuring the operating state of the robot device 200, and a control unit 170 that repeatedly generates, per control cycle, a control command for the robot device 200 based on the measurement information. The control unit 170 derives a speed-related value of the robot device 200 from the measurement information or from the control commands, and generates each control command so as to reduce the difference between the speed-related value and a speed-related target value that is variable per control cycle depending on the operating state of the robot device 200. This enables control that avoids operation errors when working with the robot device 200. Moreover, making the speed-related target value variable per control cycle according to the operating state (work status) of the robot device 200 allows precise tasks involving nonlinear motion to be handled. As described above, the speed-related value includes at least one of velocity, acceleration, and jerk.

In the third embodiment, the control unit 170 has a derivation unit 175 that derives the speed-related value from the measurement information or the control commands per control cycle, a target setting unit 176 that variably sets the speed-related target value per control cycle according to the operating state (work status) of the robot device 200, and a command generation unit 172 that generates a control command per control cycle so as to reduce the difference between the speed-related value and the speed-related target value. The target setting unit 176 may variably set the speed-related target value according to the speed-related values included in the operation parameters of the setting library.
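As a plain proportional sketch of one such control cycle (the control law itself is left open by the disclosure; the gains `kp` and `kv` are assumptions):

```python
import numpy as np

def control_step(measurement, target, velocity, velocity_target,
                 kp=1.0, kv=0.5):
    """One control cycle: reduce both the measurement error and the
    difference between the speed-related value and its target."""
    position_error = np.asarray(target) - np.asarray(measurement)
    velocity_error = np.asarray(velocity_target) - np.asarray(velocity)
    return kp * position_error + kv * velocity_error   # control command
```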

Note that the command generation unit 172 may incorporate the functions of the visual control unit 171A, the force control unit 171B, and the weighting unit 173 described in the second embodiment. Alternatively, the control unit 170 may be provided with the visual control unit 171A, the force control unit 171B, and the weighting unit 173 separately from the command generation unit 172.

The control unit 170 (command generation unit 172) generates each control command so as to reduce the difference between the measurement information and the target measurement information (the target value of the measurement information) and, at the same time, reduce the difference between the speed-related value and the speed-related target value.

For example, the control unit 170 (command generation unit 172) may generate a control command so as to reduce the difference between the visual measurement information (current position) obtained with the visual sensor 310 and the target visual measurement information (target position) while also reducing the difference between the speed-related value and the speed-related target value. The control that reduces the difference between the visual measurement information (current position) and the target visual measurement information (target position) is the same as the visual control described in the second embodiment.

The control unit 170 (command generation unit 172) may likewise generate a control command so as to reduce the difference between the force measurement information (current force (reaction force)) obtained with the force sensor 320 and the target force measurement information (target force (reaction force)) while also reducing the difference between the speed-related value and the speed-related target value. The control that reduces the difference between the force measurement information (current force (reaction force)) and the target force measurement information (target force (reaction force)) is the same as the force control described in the second embodiment.

The control unit 170 (target setting unit 176) may change the speed-related target value according to the difference between the current measurement information and the target measurement information (the current relative measurement information). For example, using the correspondence information in the setting library that associates measurement information (relative measurement information) with speed-related values, the control unit 170 (target setting unit 176) may set the speed-related value corresponding to the current relative measurement information as the speed-related target value; one possible realization is sketched after the next paragraph.

For example, the control unit 170 (target setting unit 176) may change the speed-related target value according to the difference between the current visual measurement information obtained with the visual sensor 310 and the target visual measurement information, or according to the difference between the current force measurement information obtained with the force sensor 320 and the target force measurement information.
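A nearest-neighbor lookup is one simple way to realize the association between the current relative measurement and a speed-related target; this is an assumption for illustration, not a method mandated by the disclosure:

```python
import numpy as np

def velocity_target_from_library(rel_measurement, library):
    """Pick the speed-related target for the current relative measurement.

    `library` is a list of (relative_measurement, speed_related_value)
    pairs taken from the setting library's correspondence information.
    """
    rel = np.asarray(rel_measurement, dtype=float)
    keys = np.asarray([k for k, _ in library], dtype=float)
    idx = int(np.argmin(np.linalg.norm(keys - rel, axis=1)))
    return library[idx][1]
```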

The control unit 170 (derivation unit 175 and target setting unit 176) may select, according to the difference between the measurement information and the target measurement information (the current relative measurement information), one or more of velocity, acceleration, and jerk as the value(s) used for the speed-related value (and the speed-related target value). For example, it may select velocity during vision-based control, jerk during force-based control, and acceleration during vision-plus-force-based control.

 As in the second embodiment, the control unit 170 may have a prediction unit 174A that predicts, from the control performed on the robot device 200 and the control result of that control, the control result of subsequent control on the robot device 200, and a correction unit 174B that corrects the subsequent control in accordance with the prediction.
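
 Schematically, one cycle of such a predict-then-correct arrangement might look as follows; predict_result and correct_command stand in for the prediction unit 174A and the correction unit 174B, whose internals the publication does not specify:

    def control_step(command, history, predict_result, correct_command):
        """One control cycle with prediction-based correction.

        history         -- past (control content, control result) pairs
        predict_result  -- callable standing in for prediction unit 174A
        correct_command -- callable standing in for correction unit 174B
        """
        predicted = predict_result(history, command)     # forecast the outcome
        corrected = correct_command(command, predicted)  # adjust in advance
        return corrected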

 The library storage unit 160 may store a plurality of setting libraries prepared for each type of work. Each of the plurality of setting libraries may include setting information for the speed-related values (speed-related target values). The control unit 170 controls the robot device 200 using the setting library selected from the plurality of setting libraries according to the type of work actually to be performed. The selection may be made by an operation input via the operation device 420. Prior to such an operation input, the acquisition unit 120 (image recognition unit 121) may estimate the work to be actually performed on the basis of visual measurement information obtained by the visual sensor 310 observing the work site or the object. The control unit 170 may then propose the setting library corresponding to the estimated work to the user (operator) by displaying it on the display device 410.

 The library storage unit 160 may further store a plurality of recognition libraries prepared for each type of work. Each of the plurality of recognition libraries may include a trained model used for image recognition processing of an object. The control unit 170 performs the image recognition processing using the recognition library selected from the plurality of recognition libraries according to the type of work actually to be performed. The selection may be made by an operation input via the operation device 420. Prior to such an operation input, the acquisition unit 120 (image recognition unit 121) may estimate the work to be actually performed on the basis of visual measurement information obtained by the visual sensor 310 observing the work site or the object. The control unit 170 may then propose the recognition library corresponding to the estimated work to the user (operator) by displaying it on the display device 410.
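
 A minimal sketch of the proposal flow shared by the two preceding paragraphs, assuming the libraries are stored in dictionaries keyed by a work-type string (the keying scheme and the function name are assumptions):

    def suggest_libraries(estimated_work_type, setting_libs, recognition_libs):
        """Return the setting/recognition library pair to propose to the
        operator for the work type estimated from the visual measurement
        information; None triggers manual selection instead."""
        setting = setting_libs.get(estimated_work_type)
        recognition = recognition_libs.get(estimated_work_type)
        if setting is None or recognition is None:
            return None  # fall back to selection via the operation device 420
        return setting, recognition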

 (4.2) Specific Example of Operation of the Control Device
 FIG. 11 is a diagram for explaining a specific example of the control performed by the control device 100 according to the third embodiment during the "mounting components on a board" task.

 As described above, the work type "mounting components on a board" proceeds in the order of step 1 "component recognition → movement" → step 2 "component gripping" → step 3 "moving the component toward the board" → step 4 "approach" → step 5 "recognition of the hole in the board" → step 6 "hole approach" → step 7 "hole-tracing motion" → step 8 "hole insertion" → step 9 "hole insertion completed" → step 10 "grip release", and the operation parameters of each of these steps are included in the setting library.
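
 One plausible shape for such a setting library is an ordered list of step entries, each carrying its target measurement information and the speed-related value it uses; the field names below are assumptions, and the values follow the example of FIG. 11:

    # Hypothetical setting library for the work type
    # "mounting components on a board" (first four steps shown).
    BOARD_MOUNTING_LIBRARY = [
        {"step": 1, "name": "component recognition -> movement",
         "target": "component position", "speed_related": "speed"},
        {"step": 2, "name": "component gripping",
         "target": "target reaction force", "speed_related": "speed"},
        {"step": 3, "name": "moving the component toward the board",
         "target": "board position", "speed_related": "jerk"},
        {"step": 4, "name": "approach",
         "target": "end effector/board position difference",
         "speed_related": "acceleration"},
        # steps 5-10 continue in the same format
    ]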

 In each step, the control unit 170 (command generation unit 172) generates the control command so as to reduce the difference between the measurement information obtained using the sensor 300 and the target measurement information (the target value of the measurement information), and so as to reduce the difference between the speed-related value and the speed-related target value.

 In the illustrated example, "component position" may be applied as the target measurement information and "speed" as the speed-related target value in step 1 "component recognition → movement". In this case, the control unit 170 (command generation unit 172) may generate the control command so as to reduce the difference between the current position of the end effector obtained using the visual sensor 310 and the position of the component, and so as to reduce the difference between the speed of the end effector and the speed target value. Here, the control unit 170 (target setting unit 176) may change the speed target value every control cycle in accordance with the difference between the current position of the end effector obtained using the visual sensor 310 and the position of the component.
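
 As a rough sketch of one such control cycle for step 1 (the proportional form, the blending factor, and the helper speed_related_target from the earlier lookup sketch are all assumptions):

    import numpy as np

    def step1_cycle(ee_position, part_position, ee_speed, speed_related_target):
        """One control cycle of step 1: move the end effector toward the
        component while pulling its speed toward a per-cycle target."""
        pos_error = np.asarray(part_position, float) - np.asarray(ee_position, float)
        distance = float(np.linalg.norm(pos_error))
        # The speed target is re-evaluated every control cycle from the
        # current end-effector/component position difference.
        speed_target = speed_related_target(distance)
        direction = pos_error / distance if distance > 0 else pos_error
        commanded_speed = ee_speed + 0.5 * (speed_target - ee_speed)
        return commanded_speed * direction  # commanded velocity vector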

 In step 2 "component gripping", "target reaction force" may be applied as the target measurement information and "speed" as the speed-related value. In this case, the control unit 170 (command generation unit 172) may generate the control command so as to reduce the difference between the current reaction force acting on the end effector obtained using the force sensor 320 and the target reaction force, and so as to reduce the difference between the speed of the end effector and the speed target value. Here, the control unit 170 (target setting unit 176) may change the speed target value every control cycle in accordance with the difference between the reaction force acting on the end effector obtained using the force sensor 320 and the target reaction force.

 In step 3 "moving the component toward the board", "board position" may be applied as the target measurement information and "jerk" as the speed-related value. In this case, the control unit 170 (command generation unit 172) may generate the control command so as to reduce the difference between the current position of the end effector (or the component) obtained using the visual sensor 310 and the position of the board, and so as to reduce the difference between the jerk of the end effector (or the component) and the jerk target value. Here, the control unit 170 (target setting unit 176) may change the jerk target value every control cycle in accordance with the difference between the current position of the end effector (or the component) obtained using the visual sensor 310 and the position of the board.

 In step 4 "approach", "the difference between the positions of the end effector and the board" may be applied as the target measurement information and "acceleration" as the speed-related value. In this case, the control unit 170 (command generation unit 172) may generate the control command so as to reduce the difference between the current position of the end effector (or the component) obtained using the visual sensor 310 and the position of the board, and so as to reduce the difference between the acceleration of the end effector and the acceleration target value. Here, the control unit 170 (target setting unit 176) may change the acceleration target value every control cycle in accordance with the difference between the current position of the end effector (or the component) obtained using the visual sensor 310 and the position of the board.

 In step 5 "recognition of the hole in the board", "the hole in the board" may be applied as the target measurement information. In this case, the control unit 170 (command generation unit 172) recognizes the hole in the board using the visual sensor 310.

 In step 6 "hole approach", "the position of the hole in the board" and "target reaction force" may be applied as the target measurement information, and "acceleration" as the speed-related value. In this case, the control unit 170 (command generation unit 172) may generate the control command so as to reduce the difference between the current position of the component obtained using the visual sensor 310 and the position of the hole in the board, and so as to reduce the difference between the acceleration of the end effector and the acceleration target value. The control unit 170 (command generation unit 172) may also generate the control command so as to reduce the difference between the current reaction force acting on the end effector obtained using the force sensor 320 and the target reaction force, and so as to reduce the difference between the acceleration of the end effector and the acceleration target value.

 In each of the steps from step 7 "hole-tracing motion" to step 10 "grip release", a per-step "target reaction force" may be applied as the target measurement information and "jerk" as the speed-related value. In this case, the control unit 170 (command generation unit 172) may generate the control command so as to reduce the difference between the current reaction force acting on the end effector obtained using the force sensor 320 and the per-step target reaction force, and so as to reduce the difference between the jerk of the end effector and the per-step jerk target value. Here, the control unit 170 (target setting unit 176) may change the jerk target value every control cycle in accordance with the difference between the reaction force acting on the end effector obtained using the force sensor 320 and the target reaction force.
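
 The force-based steps 7 to 10 share a common cycle structure; a minimal sketch, assuming a proportional rule and a hypothetical helper jerk_target_for that maps the current force error to a jerk target:

    def force_jerk_cycle(reaction_force, target_force, ee_jerk,
                         jerk_target_for, k_force=0.4, k_jerk=0.1):
        """One control cycle of steps 7-10: the per-cycle jerk target is
        re-derived from the current force error (gains illustrative)."""
        force_error = target_force - reaction_force
        jerk_target = jerk_target_for(abs(force_error))
        jerk_error = jerk_target - ee_jerk
        return k_force * force_error + k_jerk * jerk_error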

 (5) Other Embodiments
 The operation flows and operation examples in the above-described embodiments do not necessarily have to be executed in time series in the order shown in the flow diagrams. For example, the steps of an operation may be executed in an order different from that shown in the flow diagrams, or may be executed in parallel. Some of the steps of an operation may be deleted, and further steps may be added to the processing.

 A program that causes a computer to execute the operations according to the above-described embodiments may be provided. The program may be recorded on a computer-readable medium. With a computer-readable medium, the program can be installed on a computer. Here, the computer-readable medium on which the program is recorded may be a non-transitory storage medium. The non-transitory storage medium is not particularly limited, and may be, for example, a storage medium such as a CD-ROM or a DVD-ROM.

 The functions realized by the devices according to the above-described embodiments may be implemented in circuitry or processing circuitry, including a general-purpose processor, an application-specific processor, an integrated circuit, ASICs (Application Specific Integrated Circuits), a CPU (Central Processing Unit), conventional circuits, and/or combinations thereof, programmed to realize the described functions. A processor includes transistors or other circuits and is regarded as circuitry or processing circuitry. The processor may be a programmed processor that executes a program stored in a memory. In the present disclosure, circuitry, a unit, or a means is hardware programmed to realize, or hardware that executes, the described functions. The hardware may be any hardware disclosed herein, or any hardware known to be programmed to realize, or to execute, the described functions. When the hardware is a processor regarded as a type of circuitry, the circuitry, means, or unit is a combination of the hardware and the software used to configure the hardware and/or the processor.

 As used in the present disclosure, the phrases "based on" and "in accordance with" do not mean "based only on" and "only in accordance with", unless explicitly stated otherwise. The phrase "based on" means both "based only on" and "based at least in part on". Similarly, the phrase "in accordance with" means both "only in accordance with" and "at least in part in accordance with". The terms "include", "comprise", and variations thereof do not mean including only the listed items; they mean that only the listed items may be included, or that further items may be included in addition to the listed items. The term "or" as used in the present disclosure is not intended as an exclusive or. In the present disclosure, where articles such as "a", "an", and "the" in English are added by translation, these articles include the plural unless the context clearly indicates otherwise.

 Although the embodiments have been described in detail above with reference to the drawings, the specific configurations are not limited to those described above, and various design changes and the like can be made without departing from the gist of the disclosure.

 (6) Supplementary Notes
 Features of the above-described embodiments are noted below.

・Appendix 1
 A control device comprising:
 a motion generation unit that generates a control command for executing a reverse motion that changes the operating state of a robot device from a target state to an arbitrary state different from the target state;
 an acquisition unit that acquires measurement information obtained using a sensor for measuring the operating state of the robot device; and
 a data collection unit that repeatedly collects data including a set of the control command and the measurement information during execution of the reverse motion in accordance with the control command,
 wherein the sensor includes at least one of a visual sensor and a force sensor.

・Appendix 2
 The control device according to appendix 1, wherein
 the motion generation unit generates the control command for executing a plurality of patterns of reverse motions that change the operating state of the robot device from the target state to a plurality of arbitrary states different from one another, and
 the data collection unit repeatedly collects the data for each of the plurality of patterns of reverse motions.

・Appendix 3
 The control device according to appendix 1 or 2, wherein the motion generation unit
 generates the control command such that the amount of the data collected increases in a region closer to the target state, and
 generates the control command such that the amount of the data collected decreases in a region farther from the target state.

・Appendix 4
 The control device according to any one of appendices 1 to 3, wherein the acquisition unit
 shortens the acquisition cycle so that the amount of the measurement information acquired increases in a region closer to the target state, and
 extends the acquisition cycle so that the amount of the measurement information acquired decreases in a region farther from the target state.

・Appendix 5
 The control device according to any one of appendices 1 to 4, wherein
 the acquisition unit acquires, as relative measurement information, the difference between target measurement information obtained in the target state and current measurement information obtained during execution of the reverse motion, and
 the data collection unit repeatedly collects the data including a set of the control command and the relative measurement information during execution of the reverse motion in accordance with the control command.

・Appendix 6
 The control device according to any one of appendices 1 to 5, wherein
 the acquisition unit acquires the measurement information including a speed-related value relating to the robot device in accordance with the output of the sensor or the control command, and
 the data collection unit repeatedly collects the data including a set of the control command and the measurement information including the speed-related value during execution of the reverse motion in accordance with the control command.

・Appendix 7
 The control device according to any one of appendices 1 to 6, further comprising:
 a setting acquisition unit that acquires, based on the data collected by the data collection unit, setting information for performing work using the robot device; and
 a control unit that uses the setting information to control the robot device to perform the work.

・Appendix 8
 The control device according to appendix 7, wherein
 the setting acquisition unit uses the data collected by the data collection unit as learning data to acquire, by machine learning, the setting information including a trained model for deriving the control command from the measurement information, and
 the control unit performs the control based on the measurement information obtained during the work, using the setting information including the trained model.

・Appendix 9
 The control device according to appendix 7 or 8, wherein
 the sensor includes the visual sensor and the force sensor,
 the setting acquisition unit acquires the setting information including information for setting a control ratio between visual control, which is the control based on visual measurement information obtained using the visual sensor, and force control, which is the control based on force measurement information obtained using the force sensor, and
 the control unit uses the setting information to change the control ratio dynamically or stepwise in accordance with the status of the work using the robot device.

・Appendix 10
 The control device according to any one of appendices 7 to 9, wherein the control unit
 derives a speed-related value relating to the robot device from the measurement information or the control command obtained during the work, and
 repeatedly performs, on the robot device, control that reduces the difference between the speed-related value and a speed-related target value, based on the setting information and the measurement information.

・Appendix 11
 The control device according to any one of appendices 1 to 10, wherein
 the motion generation unit generates the control command for executing the reverse motion for each type of work using the robot device, and
 the data collection unit collects the data for each type by repeatedly collecting the data for each type.

・Appendix 12
 The control device according to appendix 11, further comprising:
 a setting acquisition unit that acquires, for each type, setting information for performing work using the robot device, based on the data collected for each type by the data collection unit; and
 a control unit that uses the setting information corresponding to the type of work actually to be performed to control the robot device to perform that work.

・Appendix 13
 A control method comprising:
 generating a control command for executing a reverse motion that changes the operating state of a robot device from a target state to an arbitrary state different from the target state;
 acquiring measurement information obtained using a sensor for measuring the operating state of the robot device; and
 repeatedly collecting data including a set of the control command and the measurement information during execution of the reverse motion in accordance with the control command,
 wherein the sensor includes at least one of a visual sensor and a force sensor.

・Appendix 14
 A program that causes a control device to execute:
 generating a control command for executing a reverse motion that changes the operating state of a robot device from a target state to an arbitrary state different from the target state;
 acquiring measurement information obtained using a sensor for measuring the operating state of the robot device; and
 repeatedly collecting data including a set of the control command and the measurement information during execution of the reverse motion in accordance with the control command,
 wherein the sensor includes at least one of a visual sensor and a force sensor.

Claims (14)

A control device comprising:
 a motion generation unit that generates a control command for executing a reverse motion that changes the operating state of a robot device from a target state to an arbitrary state different from the target state;
 an acquisition unit that acquires measurement information obtained using a sensor for measuring the operating state of the robot device; and
 a data collection unit that repeatedly collects data including a set of the control command and the measurement information during execution of the reverse motion in accordance with the control command,
 wherein the sensor includes at least one of a visual sensor and a force sensor.
The control device according to claim 1, wherein
 the motion generation unit generates the control command for executing a plurality of patterns of reverse motions that change the operating state of the robot device from the target state to a plurality of arbitrary states different from one another, and
 the data collection unit repeatedly collects the data for each of the plurality of patterns of reverse motions.
The control device according to claim 1, wherein the motion generation unit
 generates the control command such that the amount of the data collected increases in a region closer to the target state, and
 generates the control command such that the amount of the data collected decreases in a region farther from the target state.
The control device according to claim 1, wherein the acquisition unit
 shortens the acquisition cycle so that the amount of the measurement information acquired increases in a region closer to the target state, and
 extends the acquisition cycle so that the amount of the measurement information acquired decreases in a region farther from the target state.
The control device according to claim 1, wherein
 the acquisition unit acquires, as relative measurement information, the difference between target measurement information obtained in the target state and current measurement information obtained during execution of the reverse motion, and
 the data collection unit repeatedly collects the data including a set of the control command and the relative measurement information during execution of the reverse motion in accordance with the control command.
The control device according to claim 1, wherein
 the acquisition unit acquires the measurement information including a speed-related value relating to the robot device in accordance with the output of the sensor or the control command, and
 the data collection unit repeatedly collects the data including a set of the control command and the measurement information including the speed-related value during execution of the reverse motion in accordance with the control command.
The control device according to any one of claims 1 to 6, further comprising:
 a setting acquisition unit that acquires, based on the data collected by the data collection unit, setting information for performing work using the robot device; and
 a control unit that uses the setting information to control the robot device to perform the work.
The control device according to claim 7, wherein
 the setting acquisition unit uses the data collected by the data collection unit as learning data to acquire, by machine learning, the setting information including a trained model for deriving the control command from the measurement information, and
 the control unit performs the control based on the measurement information obtained during the work, using the setting information including the trained model.
The control device according to claim 7, wherein
 the sensor includes the visual sensor and the force sensor,
 the setting acquisition unit acquires the setting information including information for setting a control ratio between visual control, which is the control based on visual measurement information obtained using the visual sensor, and force control, which is the control based on force measurement information obtained using the force sensor, and
 the control unit uses the setting information to change the control ratio dynamically or stepwise in accordance with the status of the work using the robot device.
The control device according to claim 7, wherein the control unit
 derives a speed-related value relating to the robot device from the measurement information or the control command obtained during the work, and
 repeatedly performs, on the robot device, control that reduces the difference between the speed-related value and a speed-related target value, based on the setting information and the measurement information.
The control device according to claim 1, wherein
 the motion generation unit generates the control command for executing the reverse motion for each type of work using the robot device, and
 the data collection unit collects the data for each type by repeatedly collecting the data for each type.
The control device according to claim 11, further comprising:
 a setting acquisition unit that acquires, for each type, setting information for performing work using the robot device, based on the data collected for each type by the data collection unit; and
 a control unit that uses the setting information corresponding to the type of work actually to be performed to control the robot device to perform that work.
A control method comprising:
 generating a control command for executing a reverse motion that changes the operating state of a robot device from a target state to an arbitrary state different from the target state;
 acquiring measurement information obtained using a sensor for measuring the operating state of the robot device; and
 repeatedly collecting data including a set of the control command and the measurement information during execution of the reverse motion in accordance with the control command,
 wherein the sensor includes at least one of a visual sensor and a force sensor.
A program that causes a control device to execute:
 generating a control command for executing a reverse motion that changes the operating state of a robot device from a target state to an arbitrary state different from the target state;
 acquiring measurement information obtained using a sensor for measuring the operating state of the robot device; and
 repeatedly collecting data including a set of the control command and the measurement information during execution of the reverse motion in accordance with the control command,
 wherein the sensor includes at least one of a visual sensor and a force sensor.
PCT/JP2025/015420 2024-04-26 2025-04-21 Control device, control method, and program Pending WO2025225572A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2024072901A JP7583490B1 (en) 2024-04-26 2024-04-26 Control device, control method, and program
JP2024-072901 2024-04-26

Publications (1)

Publication Number Publication Date
WO2025225572A1 true WO2025225572A1 (en) 2025-10-30

Family

ID=93429255

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2025/015420 Pending WO2025225572A1 (en) 2024-04-26 2025-04-21 Control device, control method, and program

Country Status (2)

Country Link
JP (2) JP7583490B1 (en)
WO (1) WO2025225572A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009279699A (en) * 2008-05-21 2009-12-03 Nagaoka Univ Of Technology Position-force reproducing method and position-force reproducing device
JP2010146238A (en) * 2008-12-18 2010-07-01 Yaskawa Electric Corp Teaching method for traveling object, controlling device for traveling object and traveling object system
WO2019049197A1 (en) * 2017-09-05 2019-03-14 日本電気株式会社 Aircraft, aircraft control device, aircraft control method, and aircraft control program
JP2022163719A (en) * 2021-04-14 2022-10-26 ローベルト ボツシユ ゲゼルシヤフト ミツト ベシユレンクテル ハフツング Device and method for controlling robot to insert object into insertion portion
JP2023072601A (en) * 2021-11-12 2023-05-24 川崎重工業株式会社 ROBOT CONTROL DEVICE, ROBOT SYSTEM AND ROBOT CONTROL METHOD


Also Published As

Publication number Publication date
JP2025167898A (en) 2025-11-07
JP7583490B1 (en) 2024-11-14
JP2025168201A (en) 2025-11-07

Similar Documents

Publication Publication Date Title
JP6484265B2 (en) Robot system having learning control function and learning control method
US10751874B2 (en) Method of teaching robot and robotic arm control device
US8706302B2 (en) Method for offline programming of an NC-controlled manipulator
CN111975746A (en) Robot control method
KR20180069031A (en) Direct teaching method of robot
CN106945043A (en) A kind of master-slave mode telesurgery robot multi-arm cooperative control system
JP2010142910A (en) Robot system
CN114131611A (en) Joint error offline compensation method, system and terminal for robot gravity pose decomposition
JP7679737B2 (en) Command value generating device, method, and program
JP2024177314A (en) ROBOT SYSTEM, PLANNING SYSTEM, ROBOT CONTROL METHOD, AND PLANNING PROGRAM
JP7583490B1 (en) Control device, control method, and program
US20250205890A1 (en) Determination of holding position on workpiece
JP7576891B1 (en) Control device, control method, and program
JP7576890B1 (en) Control device, control method, and program
Wasik et al. A fuzzy behavior-based control system for manipulation
Bailey-Van Kuren Flexible robotic demanufacturing using real time tool path generation
Lei et al. Vision-based position/impedance control for robotic assembly task
JP7424122B2 (en) Simulation equipment and programs
Nguyen et al. Automated inverse kinematics configuration selection for path planning of a 6-DOF Robot
JP7710901B2 (en) Control method, control device, information processing method, information processing device, robot device, article manufacturing method, program, and recording medium
CN112135718B (en) Control of robots
CN113954070A (en) Mechanical arm motion control method and device, storage medium and electronic equipment
Hu et al. A compliant robotic assembly system based on multiple sensors
CN120461443B (en) Robot control method, system, robot, device and medium
US20250242493A1 (en) Determination of constraint for robot control

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 25793538

Country of ref document: EP

Kind code of ref document: A1