
WO2024208916A1 - Module for robotic arm - Google Patents

Module for robotic arm

Info

Publication number: WO2024208916A1
Application number: PCT/EP2024/059085
Authority: WO (WIPO (PCT))
Prior art keywords: module, robot, end effector, information, collision
Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Other languages: French (fr)
Inventors: Valentina SUMINI, Francesca MINCIGRUCCI, Stefano SINIGARDI
Current Assignee: GD SpA (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Original Assignee: GD SpA
Application filed by GD SpA


Classifications

    • B25J9/162 Mobile manipulator, movable base with manipulator arm mounted on it
    • B25J19/02 Accessories fitted to manipulators: sensing devices
    • B25J5/007 Manipulators mounted on wheels or on carriages mounted on wheels
    • B25J9/1676 Programme controls characterised by safety, monitoring, diagnostic: avoiding collision or forbidden zones
    • G05B2219/40298 Robotics: manipulator on vehicle, wheels, mobile
    • G05B2219/40304 Robotics: modular structure

Definitions

  • Example aspects herein relate to robotics, and in particular to a module for a robot, a system, a method for controlling a robot, and a computer program.
  • Robots are used in various industries to process products, and increasingly in environments where the presence of objects (whether inanimate objects, automated devices or persons) that may interfere with the operation of the robot is not controllable. This is particularly the case for processes involving a collaboration between a human operator and a robot (in which case, the robot is also referred to as a collaborative robot, or cobot).
  • According to a first example aspect herein, there is provided a module for a robot.
  • said robot comprises at least one robotic arm, preferably with an end effector.
  • the module comprises an interface unit configured to mechanically and communicatively couple with said robot.
  • the module comprises a plurality of sensors configured to sense a space in a vicinity of said robot and said module for detecting the presence of any object.
  • the module comprises a control unit configured to control at least one of said robot and said module, to minimize an effect of a collision of said robot and an object sensed by said plurality of sensors.
  • the robot may be any suitable robot having at least one robotic arm. Each robotic arm may have one or more segments and at least an end effector for a range of processes, where the process may be at least one of picking and placing objects (e.g. pick-and-place), assembling products, additive manufacturing (e.g. 3D printing), and visual detecting (e.g. detecting a set of points in an environment using imaging/sensing means and using the detected set of points to identify/optimise a trajectory of the end effector in the environment, or to determine how the end effector should contact an object corresponding to the set of points).
  • the mechanical coupling keeps the part of the robot coupled to the interface unit in a substantially static position relative to the module, whilst allowing the movement of the arm(s) of the robot.
  • the robot may have a base from which the robotic arm(s) extends and relative to which the arm(s) moves.
  • the interface unit may be configured to at least mechanically couple (the module) with the base of the robot.
  • the mechanical coupling may be a permanent coupling, for example by using permanent affixing means such as welding, soldering, gluing or crimping, or a semi-permanent coupling, for example by using detachable affixing means such as fasteners (e.g. bolts, screws, etc.), taping, hook-and-loop fasteners, magnetic coupling, vacuum suction, friction, etc., allowing the robot to be removably coupled to the module.
  • the module is configured to support the robot, which may be placed substantially on (i.e. on an upper surface of) the module.
  • the communicative coupling may be established via one or more suitable communication links.
  • Each communication link may be, for example, a wireless communication link, for example Wi-Fi, cellular telephone data link such as LTE/5G, Bluetooth or Bluetooth Low Energy (BLE), or a wired communication link, for example DSL, fibre-optic cable, Ethernet, to an interface of the robot, for example at or near a base of the robot.
  • Each communication link need not be permanent.
  • the plurality of sensors may comprise one or more cameras (e.g. IR, RGB/visible or multispectral cameras), infrared, ultrasonic sensors, LIDARs or other suitable sensors.
  • the sensors comprise a plurality of RGB cameras, to offer an improved range over ultrasonic sensors and avoid reflection related issues that LIDARs experience in industrial environments.
  • the space that is sensed by the plurality of sensors may be in a vicinity of the robot and the module. The space is sensed to detect the presence of an object (e.g. a static or moving object or obstacle, a human operator, etc.) with which any collision should be minimized.
  • the space may be a two-dimensional space, for example a surface defining a limit that may be reached by the robotic arm(s) (e.g. a cylindrical or otherwise curved surface that substantially surrounds the robot and/or module), or a surface defining the space in which the robotic arm(s) is operating (e.g. a substantially planar surface extending away from the robot).
  • the space may be a three-dimensional space, for example a volume comprising at least one of the robot and the module and a volume of all positions that may be reached by the robotic arm(s), or a volume in which the robotic arm is operating. It would be understood that the space needs not to surround the robot and/or the module, and that it may depend on the configuration in which the module and the robot are placed. It would be understood that in cases where the robot comprises more than one robotic arm, a separate space may be defined for each robotic arm.
  • the object that is sensed is an object that is likely to collide with the robot (e.g. with a part of the robotic arm such as the end effector, or with an object held by the end effector), due to a movement of the robot, the sensed object, or both.
  • the collision may be understood to be any undesired or unexpected contact between at least the robot and an object (instead of, for example, the end effector contacting an object to manipulate it), and it may be, in addition to the robot, an undesired/unexpected contact between the module and the object.
  • the control unit may be configured to control the robot (e.g. a part of the robotic arm, such as the end effector, or any segment of the robotic arm), to control the module (e.g. to control a movement of the module), or both, by causing a movement of the robot and/or the module, or by causing an interruption or another modification to an ongoing movement of the robot and/or module (e.g. a change in at least one of a trajectory and speed of an ongoing movement).
  • the control unit may generate one or more control signals for controlling the robot or for controlling means for moving the module.
  • control unit is configured to control a movement of at least one of the robot and the module.
  • the control of the robot and module may be based on the sensed space, for example the control may cause a new movement or a modification to an ongoing movement, where the new movement or the modification is determined based on the sensed space in the vicinity of the robot and module (e.g. to avoid a collision with another detected obstacle).
  • control unit may cause a pre-defined movement or modification to an ongoing movement.
  • control unit may be configured to interrupt all movement of the robot, or to retract the robotic arm(s) (i.e. move parts of the robotic arm so as to reduce a distance between the module and the parts of the robotic arm).
  • the control unit may be configured to, upon sensing an object, cause the module to move away from the sensed object (e.g. by a predetermined distance or until the object is no longer sensed).
  • the control unit can minimize the effect of any collision between the robot and the object.
  • Minimizing the effect of any collision should be understood to encompass an avoidance of any collision, and alternatively a minimization of a safety risk or damage to at least one of the object (which may be a human operator), any part of the robot, an object held by the end effector, and the environment.
  • minimizing an effect of a collision may be understood to be minimizing any collision or an effect of a collision.
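As an illustration only, the Python sketch below shows one way such a control decision could be structured: a detection with an estimated distance triggers one of the mitigation actions named above (interrupting movement, retracting the arm, or moving the module away). The Detection class, the threshold values and the action names are assumptions introduced for this example and do not come from the publication.

```python
from dataclasses import dataclass

# Hypothetical detection record produced by the sensing pipeline.
@dataclass
class Detection:
    distance_m: float      # estimated distance from the module to the sensed object
    approaching: bool      # True if the object is moving towards the robot

STOP_DISTANCE_M = 0.5      # assumed safety thresholds, purely illustrative
RETRACT_DISTANCE_M = 1.0
RELOCATE_DISTANCE_M = 1.5

def react_to_detection(det: Detection) -> str:
    """Choose a mitigation action for a sensed object, mirroring the options
    listed above: interrupt all movement, retract the robotic arm(s),
    or move the module away from the object."""
    if det.distance_m <= STOP_DISTANCE_M:
        return "stop_all_robot_movement"
    if det.distance_m <= RETRACT_DISTANCE_M:
        return "retract_robotic_arm"
    if det.approaching and det.distance_m <= RELOCATE_DISTANCE_M:
        return "move_module_away"
    return "continue_operation"

if __name__ == "__main__":
    print(react_to_detection(Detection(distance_m=0.8, approaching=True)))
```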
  • the safe control of the robot can be improved without requiring a cage to be used, thus improving possible cooperation with the robot (and in particular, with a cobot) and flexibility in the setting in which the robot may be used.
  • said control unit is configured to control at least one of said robot and said module based on one or more physical parameters of at least one of said robot and said module.
  • the physical parameters of the robot and/or the module may be at least one of a dimension (e.g. a length, height, width, depth), a weight, mass/inertia, a position/orientation, a space or volume occupied, a configuration of the robot (e.g. one or more angles between parts of the robot, a rotation, an orientation of the end effector) etc.
  • Each parameter can be used to determine whether a collision is likely to occur, or to determine how the module or the robot should be controlled to avoid or minimize the effect of a collision. Based on these parameters, the control unit can therefore determine whether to move the robot or the module.
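The following minimal sketch illustrates how physical parameters such as the arm's reach and the module's footprint could feed a rough collision-likelihood check; the spherical and rectangular approximations, the parameter names and the margin value are illustrative assumptions, not taken from the publication.

```python
import math

def collision_likely(base_xy, arm_reach_m, module_half_width_m,
                     object_xy, margin_m=0.2):
    """Rough check of whether a sensed object lies within the volume the robot
    or the module could occupy, using simple physical parameters (arm reach,
    module footprint).  All coordinates are in a common horizontal frame."""
    dx = object_xy[0] - base_xy[0]
    dy = object_xy[1] - base_xy[1]
    dist = math.hypot(dx, dy)
    # Object inside the arm's reach envelope (plus a safety margin)?
    within_arm_envelope = dist <= arm_reach_m + margin_m
    # Object inside the module's footprint (plus a safety margin)?
    within_module_footprint = (abs(dx) <= module_half_width_m + margin_m
                               and abs(dy) <= module_half_width_m + margin_m)
    return within_arm_envelope or within_module_footprint

print(collision_likely((0.0, 0.0), 1.2, 0.4, (1.0, 0.5)))
```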
  • the module further comprises a notification unit configured to trigger a notification, to alert an operator of said module of an expected collision.
  • said notification unit comprises at least one of audio, visual and haptic means to notify said operator.
  • the notification may include information related to the expected collision, for example by using different notifications (e.g. different audio and/or visual signals, or haptic signals) each associated with different categories of collisions or safety risk.
  • the notification unit is configured to also trigger a notification to indicate a status of the robot and/or the module to the operator.
  • the status can include an alert for a low battery, an error related to the functioning of an element of the module and/or the robot, or information assisting the operator to control the module.
  • said notification unit is further configured to obtain data related to said robot and to include said data in said notification.
  • the notification unit may obtain the data related to said robot from the control unit, or from the robot via the interface unit.
  • said control unit is further configured to control an automated movement of said module.
  • the control unit may generate one or more signals to control means for moving the module (e.g. wheels coupled to one or more motors) that are either a part of the module or supporting the module (i.e. the module may be placed on the means for moving the module).
  • the automated movement may be, for example, a movement based on a predetermined trajectory of the module, a movement guided by elements present in the environment of the module (e.g. one or more markers indicating a position or trajectory to be followed), or it may be a movement that is remote-controlled by a human operator.
  • said module further comprises an automated guided vehicle (AGV).
  • said control unit is configured to control said automated guided vehicle.
  • said interface unit is further configured to communicate with a second module, to exchange information related to a sensed space in the vicinity of said module and said second module.
  • the second module may be a module substantially the same as the module according to the first example aspect described herein (i.e. it comprises at least a second plurality of sensors, and may comprise an interface unit for coupling with a second robot, a control unit, and further elements, as described herein).
  • the second module may be in the vicinity of the module. Any second robot coupled to the second module may be configured to cooperate with the robot coupled to the module.
  • the two modules may improve the detection of objects in the vicinity of the modules and the robots, and better minimize the effect of any collision between the robot(s) and the sensed object.
  • control unit is configured to control said plurality of sensors, based on said exchanged information.
  • the exchanged information may allow the module to determine a relative position of the second module. This may, in turn, allow the module to combine information from the second plurality of sensors (from the second module) to the information obtained from the plurality of sensors.
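A minimal sketch of such a combination is given below, assuming the exchanged information includes the second module's relative position and heading so that its detections can be transformed into this module's frame; the 2-D rigid-transform model and the function names are assumptions introduced for illustration.

```python
import math

def to_own_frame(detection_xy, peer_position_xy, peer_heading_rad):
    """Transform a detection reported by a second module (in its own frame)
    into this module's frame, given the peer's relative position and heading."""
    x, y = detection_xy
    cos_h, sin_h = math.cos(peer_heading_rad), math.sin(peer_heading_rad)
    return (peer_position_xy[0] + cos_h * x - sin_h * y,
            peer_position_xy[1] + sin_h * x + cos_h * y)

def merge_detections(own, peer, peer_position_xy, peer_heading_rad):
    """Combine local detections with those exchanged by the second module."""
    merged = list(own)
    merged.extend(to_own_frame(d, peer_position_xy, peer_heading_rad)
                  for d in peer)
    return merged

# Example: peer module 1 m to the side, rotated 90 degrees.
print(merge_detections([(0.5, 0.0)], [(0.2, 0.1)], (1.0, 0.0), math.pi / 2))
```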
  • said interface unit is configured to communicate with an extended reality, XR, interface to exchange at least one of: information for controlling said robot, information relating to the vicinity of said robot, and information relating to a status of at least one of said robot and said module.
  • said XR interface may be any apparatus or system capable of displaying at least one 2D or 3D virtual element of digital information to a user.
  • the at least one virtual element may be superimposed onto a real environment, either captured or viewed live by the user, the at least one virtual element may be superimposed onto a virtual environment, or the at least one virtual element may represent a virtual environment.
  • a human operator of the module and/or the robot may interact with the XR interface, for example via a visual device to be worn or held by the human operator.
  • the module may relay instructions from the XR interface (and from the human operator) to the robot.
  • the module may improve the human operator's awareness of the robot's situation, and improve the safe control of the robot.
  • the status can be an alert for a low battery, or an error related to the functioning of an element of the module and/or the robot.
  • the module may improve the human operator's awareness of the robot's and/or the module's situation and allow the user to perform any action required to minimize the effect of any collision with a sensed object or to ensure the correct functioning of the robot and/or the module.
  • the human operator may not need to be in the vicinity of the module and/or robot.
  • the human operator may be in a location that is remote from the module and robot.
  • said interface unit allows a remote control (or tele-operation) of the robot by communicating with the XR interface.
  • said control unit is configured to determine a risk category associated with said end effector, and to control at least one of said robot and said module based on said risk category.
  • the risk category may be determined based on an intended operation of the end effector (such as holding an object, welding, cutting, polishing, painting, inspecting, sensing), or one or more operational parameters of the end effector (e.g. a speed at which the end effector operates, a force that the end effector is to generate, etc.).
  • the control unit may determine the risk category based on information related to the end effector (e.g. information on at least one of the intended operation and the operational parameter(s)).
  • the control unit may obtain such information from the robot or from an external device, or it may be predetermined information stored in a memory.
  • control unit may determine how to control at least one of the robot and the module, for example by determining a threshold distance between the detected object and the end effector, whereby the threshold distance is greater for a higher risk category than a lower risk category, and controlling at least one of the robot and the module if the detected object is within the threshold distance from the end effector.
  • control unit may determine whether to preferentially control the robot or the module, based on the risk category.
  • when the end effector is associated with a lower risk category, the control unit may determine to preferentially control the module; when the end effector is associated with a high-risk category (e.g. the end effector comprises a blade), the control unit may determine to preferentially control the robot.
  • the control unit may determine that controlling one of the robot and module may be faster than the other, or may require less energy, or may be safer (e.g. retracting the blade to avoid any contact with an operator).
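As a hedged illustration of this reasoning, the sketch below maps hypothetical risk categories to threshold distances (greater for higher risk) and to a preferred subject of control; the category names, distance values and preference rules are invented for the example and are not taken from the publication.

```python
# Hypothetical mapping of end-effector risk categories to threshold distances
# and to the preferred subject of control.  All values are illustrative.
RISK_POLICY = {
    "low":    {"threshold_m": 0.3, "prefer": "module"},   # e.g. camera, suction cup
    "medium": {"threshold_m": 0.8, "prefer": "module"},
    "high":   {"threshold_m": 1.5, "prefer": "robot"},    # e.g. blade: retract the arm
}

def choose_control(risk_category: str, object_distance_m: float):
    """Return which of the robot and the module should preferentially be
    controlled, or None if the sensed object is outside the risk-dependent
    threshold distance from the end effector."""
    policy = RISK_POLICY[risk_category]
    if object_distance_m > policy["threshold_m"]:
        return None
    return policy["prefer"]

print(choose_control("high", 1.0))   # -> 'robot'
print(choose_control("low", 1.0))    # -> None
```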
  • said control unit is configured to exert control so that only one of said robot and said module moves at a given time.
  • the module and the robot may be prevented from moving at the same time, as it may be difficult to quickly determine how to control both the robot and the module whilst avoiding or minimizing the effect of any collision.
  • said control unit is configured to prevent said module from moving when said robotic arm is not in a retracted position.
  • said control unit is configured to: determine a first format to be used in communication with said robot and a second format to be used in communication with an external device.
  • said control unit is also configured to at least one of: convert first data received from said robot from said first format into said second format and cause said interface unit to transmit converted first data to said external device, and convert second data received from said external device from said second format into said first format and cause said interface unit to transmit converted second data to said robot.
  • the robot and the external device may be configured to use different data formats to express the same data, such that the robot and the external device would not be able to interpret the data obtained from each other.
  • the module may act as a data exchange facilitator (or translator), to adapt the data from the robot in a way that can be correctly interpreted by the external device, or vice-versa.
  • Each of the first data format and the second data format may be a digital data format or an analog data format.
  • a digital data format used for digital communication (e.g. with the external device and/or with a robot comprising a controller or other digital processing means), may define at least one of an order in which pieces of data are to be arranged, an expected size (e.g. in terms of bits, bytes etc.) of the piece(s) of data, a range of values of a parameter corresponding to a piece of data (e.g. a 4-bit piece of data may be used to define a range of 16 values, which may correspond to values of a parameter ranging from 0 to 15, or from -7 to 8).
  • An analog data format may define at least one of a correspondence between an amplitude of a signal transmitted and a value of a parameter, and a frequency at which the signal may change value.
  • the control unit may obtain information indicating the first data format from the robot or from a memory storing information (e.g. in a database or library) indicating one or more data formats each associated with a type of robot.
  • control unit may obtain information indicating the second data format from the external device or from the memory.
  • the information indicating a data format from the external device may be based on a user input, the user selecting a data format that the control unit should use for communication with the robot and/or with the external device.
  • control unit may convert data in the first format into data in the second format, for example by rearranging pieces of data into an order defined by the second format, by adjusting the size either by reducing (e.g. by truncating, rounding, etc.) or by extending (e.g. by zeropadding), by adding or removing an offset to a value indicated by a piece of data, etc.
  • when converting data from an analog format to a digital format, the control unit may use a predetermined conversion rule to obtain a piece of data corresponding to a received signal's amplitude (or vice-versa when converting from a digital format to an analog format).
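The sketch below illustrates the kind of digital conversion described above, assuming purely hypothetical first and second formats (field order, field sizes and units); it is not the format of any particular robot or external device.

```python
import struct

# Hypothetical layouts: the robot reports (x, y, z) end-effector coordinates in
# millimetres as little-endian 16-bit integers, while the external device
# expects metres as big-endian 32-bit floats in (z, x, y) order.  Both layouts
# are assumptions used only to show the reordering, resizing and rescaling steps.
FIRST_FORMAT = "<3h"    # robot side
SECOND_FORMAT = ">3f"   # external-device side

def robot_to_external(payload: bytes) -> bytes:
    x_mm, y_mm, z_mm = struct.unpack(FIRST_FORMAT, payload)
    # Reorder the fields and rescale the values for the second format.
    return struct.pack(SECOND_FORMAT, z_mm / 1000.0, x_mm / 1000.0, y_mm / 1000.0)

def external_to_robot(payload: bytes) -> bytes:
    z_m, x_m, y_m = struct.unpack(SECOND_FORMAT, payload)
    # Convert back, rounding to the robot's integer resolution.
    return struct.pack(FIRST_FORMAT, round(x_m * 1000), round(y_m * 1000), round(z_m * 1000))

raw = struct.pack(FIRST_FORMAT, 120, -45, 300)
print(struct.unpack(SECOND_FORMAT, robot_to_external(raw)))
```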
  • said interface unit comprises means for mechanically and communicatively coupling with a plurality of different connectors of robots.
  • robots with different connectors may be placed on the module and controlled by the module.
  • said interface unit is configured to removably couple said robot with said module.
  • the mechanical coupling may use non-permanent coupling means such as magnets, screws, bolts, or any combination of such means.
  • an outline of said module has a first shape in a substantially horizontal first plane.
  • said first shape is a substantially regular polygon.
  • the module may prevent certain movements of the robot as it would otherwise collide with the module. For example, if the robot is placed on the top surface of the module, the top surface becomes a physical obstacle for a movement of the robot. In other words, the module causes a limit to a space in which the robot can operate. Therefore, a restricted space around the module needs to be defined to avoid collision between the robot and the module.
  • the restricted space around the module as a limit to the space in which the robot can operate (in other words of a range of positions in which the robot can operate) in the first plane without colliding with the module, may be more easily determined.
  • the plurality of sensors may be placed on the sides of the outline to each be oriented in different directions, with a substantially constant interval angle between the directions of adjacent sensors. Accordingly, each sensor may be used to detect objects for a same angular range around their respective direction. Accordingly, the accuracy of the object detection may be substantially the same all around the module.
  • references to a polygon as used herein should be understood to be references to a substantially convex polygon, i.e. having no interior angle above 180°.
  • defining a shape of an outline of the module indicates that no element of the module protrudes substantially outwards from the given shape.
  • said outline of said module has a second shape in a second plane substantially parallel to and distinct from said first plane.
  • said second shape is a substantially regular polygon different from said first shape.
  • an angular range of each sensor placed at the second plane may be substantially constant.
  • a shape with a greater number of sides may be used for the one of the first plane and second plane where sensors are placed, thus reducing the angular range corresponding to each sensor and improving the accuracy in detecting objects.
  • a shape with fewer sides may be used for a plane where the module may be in contact with another module, thus facilitating modules to be placed next to one another.
  • a number of sides of one of said first shape and said second shape is greater than a number of sides of the other one of said first shape and said second shape.
  • a number of sides of one of said first shape and said second shape is a multiple of a number of sides of the other one of said first shape and said second shape.
  • an outer shape of said module comprises a taper portion, said taper portion being narrower towards a top of said module.
  • the center of gravity of the module may be lowered, thus improving the stability of the module.
  • the visibility of visual feedback means on the taper portion may be improved, as they are less likely to be entirely blocked from an observer positioned higher than the top of the module (e.g. a human operator or a camera facing downwards from a wall or a ceiling), even if objects or other modules are present in the vicinity of the module.
  • said taper portion is formed by a plurality of side surfaces, each of said side surfaces being substantially flat.
  • the manufacturing, assembly and/or transport of the module may be facilitated.
  • the volume occupied by the module and any restriction the module may cause to a reach of the robot may be more easily determined.
  • said taper portion is comprised between said first plane and said second plane.
  • said outline of said module defines a geometric surface area in one of said first plane and second plane that is larger than a geometric surface area defined by said module in the other of said first plane and second plane.
  • said one of said first plane and second plane is farther from the robot than said other of said first plane and second plane.
  • the module defines a smaller geometric area in a plane nearer the robot, and thus any restriction the module may cause to a reach of the robot may be reduced.
  • said module comprises at least one first side surface and at least one second side surface, wherein each said first side surface is inclined at a first angle and each said second side surface is inclined at a second angle that is different from the first angle.
  • an angle of elevation or depression (i.e. relative to a horizontal plane) of said at least one first side surface is different than an angle of elevation or depression of said at least one second side surface.
  • sensors may be placed on one of the first side surface and on one of the second side surface so as to be aimed with different angles relative to a horizontal plane.
  • first surface(s) may be facing upwards and the second surface(s) may be facing substantially sideways, so a sensor on a first side surface may sense a space that is above the module, whereas a sensor on the second side surface may sense a space that is nearer to the floor.
  • said at least one first side surface is substantially vertical.
  • the placement of two of the same modules may be facilitated, by bringing the corresponding substantially vertical surfaces in contact (or in close proximity) with each other.
  • said at least one substantially vertical first side surface comprise communication means for communicatively coupling with communication means on a substantially vertical side surface of another module. Accordingly, the communicative coupling between two of the same modules may be easily done by bringing their respective vertical side surface in contact or close proximity with each other.
  • said module further comprises one or more handles for positioning the module.
  • said one or more handles are retractable.
  • each of said one or more handles may be movable between an extended position and a retracted position.
  • each of said one or more handles is received in a respective opening formed at a housing of said module.
  • said one or more handles do not protrude substantially from an outline of said housing.
  • the handles do not cause an obstruction to a movement of the robot or to the detection of objects by the plurality of sensors.
  • each of said one or more handles is movable between a retracted position and an extended position via at least one of a rotation along a respective rotation axis and a translation along a respective translation axis.
  • said one or more handles are mechanically coupled to haptic means for notifying a user holding said one or more handles.
  • each of said one or more handles is located on a respective side surface of the module.
  • a user may easily grip two handles to move the module.
  • each of said one or more handles is located on a respective vertical side surface of the module, wherein a side surface inclined upwards is interposed between adjacent vertical side surfaces.
  • the handles are located on substantially vertical side surfaces which are separated from each other by another side surface which is inclined upwards (i.e. which has a strictly positive angle of elevation).
  • said control unit is configured to classify said object into one of an object to be processed by said end effector and an object with which the collision is to be minimized.
  • the control unit may obtain from the robot (via the interface unit) or from an external device controlling the robot, information on an operation of the robot. Based on this information, the control unit may determine an object to be processed by the end effector. Alternatively, the control unit may itself generate data and/or signals for controlling the robot to process the object.
  • interferences to the robot's operation may be reduced (for example by avoiding considering a contact with the object to be processed as a collision to be avoided and controlling the robot and/or the module accordingly).
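A minimal sketch of such a classification is given below, assuming the information identifying the object to be processed includes an expected position and colour; the dictionary keys, tolerance value and matching rule are illustrative assumptions rather than the publication's method.

```python
import math

def classify_object(sensed, target_info, tolerance_m=0.15):
    """Classify a sensed object as either the object to be processed by the
    end effector or an object with which a collision is to be minimized,
    by comparing it against information obtained from the robot or from an
    external device."""
    close_enough = math.dist(sensed["position"],
                             target_info["expected_position"]) <= tolerance_m
    same_colour = sensed.get("colour") == target_info.get("colour")
    return "process_target" if (close_enough and same_colour) else "obstacle"

target = {"expected_position": (1.0, 0.2), "colour": "red"}
print(classify_object({"position": (1.05, 0.18), "colour": "red"}, target))   # process_target
print(classify_object({"position": (0.3, 1.4), "colour": "blue"}, target))    # obstacle
```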
  • said interface unit is further configured to obtain data related to said robot, from at least one of said robot and an external device.
  • a system comprising said module according to the first example aspect as described herein, and said robot.
  • said system comprises an extended reality, XR, interface communicatively coupled to at least one of said module and said robot.
  • a method for controlling a robot comprising at least one robotic arm with an end effector, said robot being mechanically and communicatively coupled to a module comprising a plurality of sensors, the method comprising: obtaining from said plurality of sensors information on a space in a vicinity of said robot and said module, said information being indicative of the presence of any object; and controlling at least one of said robot and said module, to minimize an effect of a collision of said robot and an object indicated by said information.
  • said module may perform the steps of obtaining said information on said space, and of controlling said at least one of said robot and said module.
  • said method comprises controlling at least one of said robot and said module, to minimize an effect of a collision of said module and an object indicated by said information.
  • said controlling at least one of said robot and said module is based on one or more physical parameters of at least one of said robot and said module.
  • said method further comprises triggering a notification to alert an operator of said module of an expected collision.
  • said method further comprises obtaining data related to said robot and including said data in said notification.
  • said method further comprises controlling an automated movement of said module.
  • said method further comprises communicating with a second module, to exchange information related to a sensed space in the vicinity of said modules (i.e. said module and said second module).
  • said method further comprises communicating with an extended reality, XR, interface to exchange at least one of information for controlling said robot, information relating to the vicinity of said robot, and information relating to a status of at least one of said robot and said module.
  • said method further comprises determining a risk category associated with said end effector, and controlling at least one of said robot and said module based on said risk category.
  • said method further comprises determining a first format to be used in communication with said robot and a second format to be used in communication with an external device, and at least one of: (i) converting first data received from said robot from said first format into said second format and causing converted first data to be transmitted to said external device, and (ii) converting second data received from said external device from said second format into said first format and causing converted second data to be transmitted to said robot.
  • said method further comprises classifying said object into one of an object to be processed by said end effector and an object with which the collision is to be minimized.
  • said method further comprises obtaining data related to said robot, from at least one of said robot and an external device.
  • a computer program comprising instructions which, when executed by one or more processors, cause said one or more processors to carry out the method according to the third example aspect described herein.
  • Figure 1 shows a schematic view illustrating an example of a system in example embodiments
  • Figure 2 shows a schematic view illustrating an example of a module in example embodiments
  • Figure 3 shows a schematic diagram illustrating an example of a module in example embodiments
  • Figure 4 shows a schematic diagram illustrating an example of a general kind of programmable processing apparatus that may be used to implement a control unit in example embodiments;
  • Figure 5 shows a schematic view illustrating an example of sensors on a module sensing a space in a vicinity of the robot and the module, in example embodiments;
  • Figures 6a and 6b show schematic views illustrating an example of a control of the robot to minimize an effect of collision, in example embodiments
  • Figure 7 shows a schematic view illustrating a module and robot in an environment with objects, in example embodiments
  • Figure 8 shows a schematic diagram illustrating a module and possible communication links with a robot, a second module, an external device and an XR interface, in example embodiments;
  • Figure 9 shows a schematic view illustrating an example of geometric elements of an outline of the module, in example embodiments.
  • Figures 10a to 10c show schematic views illustrating examples of handles of the module, in example embodiments
  • Figure 11 shows a schematic view illustrating an example of a system with a module, a robot and an XR interface, in example embodiments;
  • Figure 12 shows processing operations performed in example embodiments.

Detailed Description

  • the system comprises a module 10 and a robot 20.
  • the module 10 comprises a plurality of sensors 12, an interface unit 110 and a control unit (not shown on Figure 1).
  • the robot 20 comprises a robotic arm 22 with an end effector 24.
  • the robot 20 also includes a base 26, and the interface unit 110 is located on a top surface of the module 10 and is mechanically coupled to the robot 20 via the base 26 of the robot.
  • the interface unit 110 may be located elsewhere on the module 10 (e.g. a side surface), or it may be provided in a recess on the module 10.
  • the robot 20 need not have a base 26 (e.g. the interface unit 110 may couple with the robotic arm 22 directly). In some cases, the robot 20 may have other elements such as other robotic arms.
  • the robot's base 26 is fastened to the interface unit 110 by means of bolts and corresponding threaded through-holes provided on the interface unit 110.
  • any other affixing means permanent or non-permanent may be used instead, such as those described herein.
  • the top surface of the module 10 has a substantially octagonal shape, and a bottom surface of the module 10 has a substantially square shape.
  • the module 10 has eight side faces (four triangular and four trapezoidal) joining the top surface and the bottom surface. Between the top surface and the bottom surface (defining planes P1 and P2, respectively), the module 10 defines a taper which is narrower at the top surface than the bottom surface.
  • Each of these sides includes a respective sensor 12, although Figure 1 only shows four of these sensors 12.
  • each sensor 12 is facing a respective direction away from the module 10 (and the robot), and can therefore detect a space around the module 10 and the robot 20 for any object that should not collide with the robot.
  • the module 10 has a tapered outer shape, which is narrower towards the top.
  • a cross section along a horizontal plane of the module 10 has a smaller area towards the top of the module 10 than towards the bottom of the module 10.
  • more components of the module 10 may be placed towards the bottom (i.e. towards plane P2), thus lowering the center of gravity of the module 10, which in turn improves its stability.
  • each sensor 12 comprises an RGB camera (i.e. a camera capturing visible light), and processing means for identifying object(s) on a captured image and for estimating a distance from the sensor 12 to each identified object. Based on this distance, the sensor 12 may determine whether each object is within a predetermined space in a vicinity of the robot 20 and the module 10.
  • the sensors 12 may comprise any other suitable type of sensor (such as an IR camera, ultrasound sensor etc.), and instead of each sensor 12 having processing means, a centralized processing (for example the control unit 100) may obtain images or other data related to the sensed space from each sensor 12 to detect the presence of an object in the sensed space.
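Purely as an illustration of per-sensor processing of this kind, the sketch below estimates a distance from an object's apparent size using a pinhole-camera model and checks whether the object lies within a predetermined space; the known object height, focal length and space limit are assumed values, not taken from the publication.

```python
def estimate_distance_m(known_height_m, focal_length_px, pixel_height):
    """Pinhole-camera estimate of distance from the apparent size of a detected
    object.  Assumes the object's real height and the camera's focal length
    (expressed in pixels) are known; both are illustrative assumptions."""
    return known_height_m * focal_length_px / pixel_height

def within_sensed_space(distance_m, space_limit_m=2.0):
    """Check whether the estimated distance places the object inside a
    predetermined space in the vicinity of the robot and the module."""
    return distance_m <= space_limit_m

d = estimate_distance_m(known_height_m=1.7, focal_length_px=800, pixel_height=400)
print(d, within_sensed_space(d))
```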
  • FIG 2 shows an example of the module 10 comprising handles 13, loudspeakers 14, LED light strips 15, an automated guided vehicle 17 and wheels 18.
  • the module 10 shown on Figure 2 comprises four handles 13 and four loudspeakers 14 (one on each trapezoidal side of the module 10), and four LED light strips 15, although only a subset of these are visible on Figure 2 due to the perspective.
  • the module 10 may comprise any number of these elements, as the example shown on Figure 2 would be understood to be purely illustrative.
  • the module 10 may comprise any subset of the handles 13, the loudspeakers 14, the LED light strips 15, the automated guided vehicle 17 and the wheels 18, as each of these are optional and may be separately included in the module 10.
  • the handles 13 are connected to haptic feedback means (e.g. an electric motor or linear actuator coupled to a weight to controllably generate vibrations, not shown on Figure 2), to provide haptic feedback to a user using the handles 13 for positioning the module 10.
  • the loudspeaker 14 may generate an audio signal to alert a user near the module 10, and the LED light strip 15 may emit light as a visual signal to alert the user.
  • the LED light strip 15 may be configured to emit light of a single color, or may control the color of the emitted light (e.g. by using multiple LEDs each emitting light at a different wavelength).
  • the haptic feedback means, the loudspeakers 14 and the LED light strips 15 are coupled to a notification unit of the module 10, which will be described later.
  • the automated guided vehicle 17 supports the module 10 (and the robot 20 placed on the module 10) to move the module 10, based on a predetermined trajectory, based on guiding elements present in the environment of the module 10, and/or based on commands received from a remote control.
  • the wheels 18 may be driven by a motor coupled to the automated guided vehicle 17, or may freely rotate as a user moves the module 10 (e.g. while holding the handles 13).
  • the module 10 comprises a control unit 100, an interface unit 110, a plurality of sensors 120, a notification unit 130, and an automated guided vehicle, AGV, 140.
  • the interface unit 110 comprises means for mechanically coupling the robot 20 to the module 10, and means for communicatively coupling with the robot 20 (i.e. forming an electrical and/or electronic connection to the robot). Accordingly, the module 10 can communicate with the robot 20 via the interface unit 110, as indicated by the arrow. The communication may be used for at least one of controlling the robot 20 and obtaining information from the robot 20 (e.g. a status or a configuration of the robotic arm 22 and/or end effector 24).
  • the interface unit 110 may also communicate with one or more devices other than the robot 20.
  • the interface unit 110 can communicate with an external device, and with another module, as will be explained in more detail below.
  • the plurality of sensors 120 correspond to the sensors 12 shown on Figures 1 and 2, and are configured to sense a space in a vicinity of the module 10 and of the robot 20 placed on the module 10.
  • the notification unit 130 generates, upon receiving an instruction from the control unit 100, a notification to a user.
  • the notification may be generated as a visual notification, an audio notification, a haptic notification, or an electronic notification (for example by transmitting the notification to an external device via the interface unit 110).
  • the notification unit 130 is coupled to audio, visual and/or haptic means, such as the vibrating means coupled to the handles 13 shown on Figure 2, the loudspeakers 14 and the LED strip lights 15 shown on Figure 2.
  • the AGV 140 corresponds to the AGV 17 shown on Figure 2.
  • the notification unit 130 may be omitted (for example when the module 10 operates in an environment where no user would need to be notified), and similarly the AGV 140 may be omitted (for example in the case of a module that is static or that can only move by being dragged by users).
  • the control unit 100 controls various elements of the module 10, and may be implemented as a general kind of programmable processing apparatus, as will be described below.
  • control unit 100 controls the interface unit 110 to transmit data to the robot, to obtain data from the robot, or both.
  • the control unit 100 also controls the sensors 120, either by causing each sensor 120 to start or stop sensing the space, or by obtaining data related to the space from each sensor 120 (the obtained data indicating the presence of any object in the space).
  • the control unit 100 also controls the notification unit 130 to trigger the generation and output of the notification.
  • the control unit 100 also controls the AGV 140, for example based on instructions from a user received via the interface unit 110.
  • Referring to Figure 4, an example of a general kind of programmable processing apparatus 70 that may be used to implement the control unit 100 is shown.
  • the programmable processing apparatus 70 comprises one or more processors 71, one or more input/output communication modules 72, one or more working memories 73, and one or more instruction stores 74 storing computer-readable instructions which can be executed by one or more processors 71 to perform the processing operations as described hereinafter.
  • An instruction store 74 is a non-transitory storage medium, which may comprise a non-volatile memory, for example in the form of a read-only-memory (ROM), a flash memory, a magnetic computer storage device (for example a hard disk) or an optical disk, which is pre-loaded with the computer-readable instructions.
  • an instruction store 74 may comprise writeable memory, such as random access memory (RAM) and the computer-readable instructions can be input thereto from a computer program product, such as a non-transitory computer-readable storage medium 75 (for example an optical disk such as a CD-ROM, DVD-ROM, etc.) or a computer-readable signal 76 carrying the computer-readable instructions.
  • the combination 77 of hardware components shown in Figure 4 and the computer-readable instructions are configured to implement the functionality of the control unit 100.
  • operations caused when one or more processors 71 executes instructions stored in an instruction store 74 will be described generally as operations performed by the control unit 100, by the module 10, or by elements of the module 10 such as the interface unit 110.
  • Figure 5 shows a top view of the module 10 and the robot 20 shown on Figure 1. For illustrative purpose, only a part of the robot 20 is shown on Figure 5.
  • the module 10 comprises eight sensors 12, each located on a respective side surface of the module 10 and each sensor 12 faces a respective direction away from the module 10 and the robot 20.
  • since the top surface of the module 10 defines a regular octagon, the direction faced by each sensor 12 is at an angle of 45° from that of adjacent sensors 12.
  • Each sensor 12 has a predetermined field of view, indicated by the angle θ on Figure 5. Accordingly, each sensor 12 senses a respective part of the space extending away from the module 10 and the robot 20. Although not required, the parts of the space sensed by adjacent sensors 12 usually overlap as shown on Figure 5.
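The short sketch below illustrates the angular-coverage reasoning for the eight sensors: with directions spaced 45° apart, any field of view wider than 45° makes adjacent sensed regions overlap as in Figure 5; the 60° field-of-view value is an assumption for the example.

```python
# Illustrative check of the angular coverage of the eight sensors on a
# regular octagonal top surface.
NUM_SENSORS = 8
SPACING_DEG = 360 / NUM_SENSORS        # 45 degrees between adjacent sensor directions
FIELD_OF_VIEW_DEG = 60                 # assumed per-sensor field of view

directions_deg = [i * SPACING_DEG for i in range(NUM_SENSORS)]
overlap_deg = FIELD_OF_VIEW_DEG - SPACING_DEG

print("sensor directions (deg):", directions_deg)
if overlap_deg > 0:
    print(f"adjacent fields of view overlap by {overlap_deg:.0f} deg")
else:
    print("adjacent fields of view do not overlap")
```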
  • each sensor 12 may have a different field of view. Additionally, as would be apparent from the present disclosure, each sensor 12 may be of a different type.
  • the sensors 12 may be sensing a two-dimensional space (for example if the robotic arm operates substantially along a plane), or a three dimensional space.
  • sensors 12 on the triangular inclined side surfaces of the module 10 sense a space that extends substantially above the module 10, in the vicinity of the robot 20 which is placed on top of the module 10, whereas sensors 12 on the trapezoidal vertical side surfaces sense a space that extends substantially to the side of the module 10. Accordingly, together, the sensors 12 can sense a space in the vicinity of both the module 10 and the robot 20.
  • control unit 100 may, via the interface unit 110, control the robot 20 to change its configuration into a retracted configuration as shown on Figure 6b, namely by bringing the end effector 24 closer to the module 10.
  • a distance between the sensed object and the robot 20 may be increased, thus avoiding a collision with the object, or at least minimizing any effect of the collision.
  • the retracted configuration may be such that no part of the robot 20 extends sideways beyond the side surfaces of the module 10. Accordingly, the module 10 may act as a barrier preventing a collision between the object and the robot.
  • With reference to Figure 7, an example of a module and robot in an environment with objects will be described.
  • Figure 7 illustrates the projection S of the space sensed by the sensors 12.
  • Objects 32 and 33 are examples of static objects, whereas object 34 is an example of a moving object, with a trajectory of the object 34 indicated by the arrow 35.
  • the robot 20 is programmed to process object 32, without coming into contact with any of objects 33 and 34.
  • the end effector 24 comprises a camera and a suction cup.
  • the camera is for inspecting object 32.
  • the suction cup is for grasping object 32 (if object 32 passes the inspection) and for moving object 32 to another predetermined location.
  • the robotic arm may move while keeping the end effector 24 within a predetermined capturing range of the end effector 24's camera, to perform the inspection. Then, the robotic arm may move the end effector 24's suction cup to contact the object 32 at a specific location on the object 32 (e.g. a location that is predetermined, or that is determined based on the orientation, shape, etc. of object 32 as determined during the inspection).
  • the sensors 12 of the module 10 may detect the presence of the object 32, and may cause the control unit 100 to control the robot or the module 10 to avoid this contact. This, however, would prevent the robot 20 from performing the desired operation.
  • control unit 100 obtains from the robot 20 (via the interface unit 110) or from an external device controlling the robot, information identifying the object 32 which is to be processed by the end effector 24.
  • This information may include, for example, at least one of a predicted location, shape, size, color, reflectance, orientation, etc. of the object 32 to be processed.
  • the control unit 100 may determine that the object sensed by the sensors 12 corresponds to the object 32 to be processed by the end effector 24. In other words, the control unit 100 may classify the sensed object as an object 32 to be processed by the end effector 24. In this case, as the contact with the object 32 is desired, the control unit 100 may not perform (or may stop performing) any of the processing described herein for minimizing an effect of a collision of the robot 20 with the object 32.
  • the sensors 12 of the module 10 may detect the presence of an object 33 and/or the object 34. Based on the information identifying the object 32 which is to be processed by the end effector 24, the control unit 100 may determine that the object sensed by the sensors 12 does not correspond to the object 32 to be processed by the end effector 24.
  • the control unit 100 may classify the sensed object as an object with which the collision (of the robot 20, and possibly also of the module 10) is to be minimized.
  • control unit 100 may determine a space around the object 33 to be avoided (for example a space including the object 33 to be avoided and a space around this object 33 defined as a threshold distance).
  • the control unit 100 may use the determined space to be avoided to modify the operation of the robot 20 to proceed with the processing of object 32 whilst avoiding the space around the object 33.
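As an illustration, the sketch below models the space to be avoided as the obstacle inflated by a threshold distance and checks whether a planned trajectory stays clear of it; the circular model, the waypoint representation and the numeric values are assumptions introduced for the example.

```python
import math

def avoidance_zone(obstacle_xy, obstacle_radius_m, threshold_m):
    """Space to be avoided around a sensed obstacle: the obstacle itself
    inflated by a threshold distance.  Modelling the zone as a circle is an
    illustrative simplification."""
    return (obstacle_xy, obstacle_radius_m + threshold_m)

def trajectory_is_clear(waypoints, zone):
    """Check whether a planned end-effector trajectory (a list of (x, y)
    waypoints) stays outside the avoidance zone."""
    (cx, cy), radius = zone
    return all(math.hypot(x - cx, y - cy) > radius for x, y in waypoints)

zone = avoidance_zone(obstacle_xy=(1.0, 0.5), obstacle_radius_m=0.2, threshold_m=0.3)
path = [(0.0, 0.0), (0.5, 0.2), (1.2, 0.4)]
print(trajectory_is_clear(path, zone))   # last waypoint is inside the zone -> False
```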
  • control unit 100 may control the robot 20 to interrupt an operation of the robotic arm, of the end effector 24, or to retract the robot 20 (for example as explained with reference to Figures 6a and 6b above).
  • the control may depend on the type of end effector 24.
  • the end effector 24 comprises a camera and a suction cup, which may be associated with a lower risk category.
  • various end effectors may have different inherent risk to the safety of human operators, to the environment or the robot 20 or module 10 itself.
  • control unit 100 is configured to obtain information indicating a risk category associated with the end effector 24.
  • control unit 100 may determine a size of the sensed space, or determine an allowable threshold distance between the end effector 24 and any detected object. The threshold distance is set greater for a higher risk category than a lower risk category. Accordingly, the control unit 100 controls the robot 20 and/or the module 10 if the detected object is within the determined threshold distance from the end effector 24.
  • Table 1 provides examples of risk categories that may be associated with various end effectors.
  • the end effector 24 shown on Figure 7 may include an end effector associated with a high risk category, such as a rotating blade for making a predetermined cut in the object 32.
  • the control unit 100 may control the robot 20 as soon as the sensed object enters a space in a vicinity of the robot 20 and the module 10 that is greater than the space indicated by the projection S.
  • the module 10 may, via the interface unit 110, establish communication links with the robot 20, a second module 10', an external device 40 and an XR interface 50.
  • the second module 10' may be substantially similar to the module 10.
  • the communication link between modules 10 and 10' may be a wireless communication link allowing modules that are near each other to communicate (e.g. using Bluetooth, near-field communication, etc.).
  • the external device 40 may be a device controlling the robot 20, either via the module 10 or via a separate communication link between the external device 40 and the robot 20.
  • the control unit 100 may be configured to override the control from the external device 40, to minimize the effect of a collision between the robot 20 and the detected object.
  • the XR interface 50 comprises a visual device for showing to a user an extended reality content, which may be used to control the robot 20.
  • the module 10 may exchange with the external device 40 and/or the XR interface 50, at least one of: (i) information related to the end effector 24, (ii) information on an operation of the robot, (iii) data related to said robot, (iv) information identifying an object 32 to be processed by the end effector 24, (v) information indicating a risk category associated with the end effector 24, and (vi) information regarding a data format to be used for communication with the robot 20 or with the external device 40 (for example, the external device 40 and the module 10 may be communicating using a predetermined data format, and the external device 40 may indicate another data format into which data from the robot 20 should be converted before being forwarded to the external device 40).
  • the module 10 may provide to the external device 40 data obtained from the robot 20 (in the format in which it is obtained or converted to another data format for the external device 40), and/or a notification triggered by the control unit 100, which may include information related to the robot.
  • the external device 40 and/or the XR interface 50 may provide to the module 10 a set of instructions for controlling a movement of the robot, or the end effector 24.
  • although the module 10 has been described as having an octagonal top surface and a square bottom surface, this is non-limiting, as other shapes may be used instead.
  • An outline of the module 10 in a first substantially horizontal plane Pl may have a first shape that is substantially polygonal having any number of sides.
  • Figure 9 shows the outline having a hexagonal shape with 6 sides.
  • An outline of the module 10 in a second plane P2 that is substantially parallel to plane Pl (and lower than plane Pl) may have a second shape that is also substantially polygonal having any number of sides.
  • Figure 9 shows the outline having a dodecagonal shape with 12 sides.
  • although the first shape and the second shape may have the same number of sides, it is preferable to have a second shape in the lower plane P2 that is larger (i.e. has a larger area) than the first shape in plane Pl.
  • This allows the module 10 to define a taper portion between the two planes Pl and P2, with a narrower top.
  • elements in the module 10 are placed lower, thus lowering the center of gravity of the module 10. This improves the stability of the module 10 against tipping over, in particular as the robotic arm coupled to the module 10 moves.
  • planes Pl and P2 need not correspond to the top surface and the bottom surface of the module 10 (i.e. the module 10 may extend above plane Pl and/or below plane P2).
  • side surfaces can be defined linking vertices of the first shape and vertices of the second shape. If the second shape has a number of sides that is a multiple of the number of sides of the first shape, the shape of side surfaces can be made uniform, and the overall geometric shape of the module 10 simplified, thus facilitating the manufacture and assembly of the module 10.
  • the module 10 may cause a restriction to the movement of the robot 20, as the robotic arm 22 cannot move into the space occupied by the module 10. Accordingly, having a simpler overall geometric shape of the module 10 helps simplify the determination of the space that the robotic arm 22 should be prevented from moving into, to avoid a collision of the robotic arm 22 with the module 10.
  • referring to Figures 10a to 10c, examples of handles of the module 10 will now be described.
  • Figures 10a to 10c only show a single handle 13, and only show the handle 13 in the extended position (without showing any recess formed in the module 10 for the handle 13); however, it would be understood that any number of handles 13 may be provided on the module 10, each handle 13 being provided with a respective recess formed in the module 10.
  • the handles 13 of the module 10 shown on Figure 2 may be retractable, i.e. they can be moved out of the housing of the module 10 into an extended position, and moved back into the housing of the module 10 when the handles 13 are not in use, so that, when retracted, the handles 13 do not protrude substantially from the housing of the module 10, reducing the risk that the robot 20 collides with a handle 13.
  • the module 10 would typically be positioned when the robot 20 is not moving, and the module 10 would typically not be moved by a human operator when the robot 20 is moving. In other words, the handles 13 are not in use when the robot 20 moves, and vice-versa.
  • the handles 13 may be moved from a retracted position to an extended position in various ways.
  • the handle 13 is rotatable along an axis xl that is parallel to the side of the top surface of the module 10 where the handle 13 is located. In an extended position, the handle 13 extends horizontally outward from the module 10. In a retracted position, the handle 13 moves into a corresponding recess formed on the trapezoidal side surface 19a of the module 10, to be flush with the trapezoidal side surface 19a.
  • the handle 13 may be slidably moved along an axis yl between the extended position and the retracted position, where a recess corresponding to the handle 13 is formed underneath and parallel to the top surface of the module 10, such that, in the retracted position, the handle 13 is flush with the top surface of the module 10.
  • the handle 13 may be rotatable along a vertical axis zl.
  • in an extended position, the handle 13 is located on the trapezoidal side surface 19a and a part 13a of the handle 13 contacts the side surface 19a.
  • in the retracted position, the handle 13 moves into a recess formed at the top of the triangular side surface 19b, underneath and parallel to the top surface of the module 10, such that the part 13a of the handle 13 is flush with the triangular side surface 19b.
  • a combination of movements may be used to move the handle 13 between the retracted and extended position.
  • the handle 13 may be retracted into a recess formed in the module 10, parallel to the trapezoidal side surface 19a.
  • the handle 13 may be moved into an extended position by first sliding the handle 13 upwards, along axis z2, then rotating the handle 13 sideways along axis xl.
  • the XR interface 50 may exchange various information with the module 10.
  • the XR interface 50 is to be used for controlling the robot 20.
  • the system comprises module 10 and robot 20.
  • the system comprises an XR interface 50 including a visual device 52 and input means 54.
  • the visual device 52 is to be worn by a user 60, and shows to the user 60 an XR content, such as an augmented reality content, a mixed reality content or a virtual reality content.
  • the input means 54 is to be held or worn by the user 60 and is used to input commands for controlling the robot 20.
  • the visual device 52 displays an XR content that includes a planned trajectory 56 of the end effector 24, and a cursor 57 corresponding to the input means 54.
  • the planned trajectory 56 and cursor 57 are examples of virtual elements of information that the XR content superimposes onto a real environment, as an example of a reality extension.
  • the user 60 may point the cursor 57 to a point of the planned trajectory 56 of the end effector 24, and may move the point of the planned trajectory 56 (e.g. by effecting a click-and-drag input) to a new location, as shown by the arrow 58.
  • the trajectory 56 of the end effector 24 may change to pass through the point newly designated by the user 60, by following the modified trajectory 56'.
  • the information processing apparatus obtains, from a plurality of sensors 12 of a module 10 that is mechanically and communicatively coupled to the robot 20, information on a space in a vicinity of the robot 20 and the module 10, the information being indicative of the presence of any object.
  • the information processing apparatus controls at least one of the robot 20 and the module 10, to minimize an effect of a collision of the robot 20 and an object indicated by the information.
  • the example aspects described herein overcome limitations, specifically rooted in computer technology, relating to the control of robots having at least one robotic arm.
  • the safe control of the robot can be improved without requiring a cage to be used, thus improving possible cooperation with the robot (and in particular, with a cobot) and flexibility in the setting in which the robot may be used.
  • the example aspects described herein improve computers and computer processing/functionality, and also improve the field(s) of at least robot control, data processing and data storage.
  • Software embodiments of the examples presented herein may be provided as a computer program, or software, such as one or more programs having instructions or sequences of instructions, included or stored in an article of manufacture such as a machine-accessible or machine-readable medium, an instruction store, or computer-readable storage device, each of which can be non-transitory, in one example embodiment.
  • the program or instructions on the non-transitory machine-accessible medium, machine-readable medium, instruction store, or computer-readable storage device may be used to program a computer system or other electronic device.
  • the machine- or computer-readable medium, instruction store, and storage device may include, but are not limited to, floppy diskettes, optical disks, and magneto-optical disks or other types of media/machine-readable medium/instruction store/storage device suitable for storing or transmitting electronic instructions.
  • the techniques described herein are not limited to any particular software configuration. They may find applicability in any computing or processing environment.
  • the term "computer-readable" shall include any medium that is capable of storing, encoding, or transmitting instructions or a sequence of instructions for execution by the machine, computer, or computer processor and that causes the machine/computer/computer processor to perform any one of the methods described herein.
  • Such expressions are merely a shorthand way of stating that the execution of the software by a processing system causes the processor to perform an action to produce a result.
  • Some embodiments may also be implemented by the preparation of application-specific integrated circuits, field-programmable gate arrays, or by interconnecting an appropriate network of conventional component circuits.
  • the computer program product may be a storage medium or media, instruction store(s), or storage device(s), having instructions stored thereon or therein which can be used to control, or cause, a computer or computer processor to perform any of the procedures of the example embodiments described herein.
  • the storage medium/instruction store/storage device may include, by example and without limitation, an optical disc, a ROM, a RAM, an EPROM, an EEPROM, a DRAM, a VRAM, a flash memory, a flash card, a magnetic card, an optical card, nano systems, a molecular memory integrated circuit, a RAID, remote data storage/archive/warehousing, and/or any other type of device suitable for storing instructions and/or data.
  • some implementations include software for controlling both the hardware of the system and for enabling the system or microprocessor to interact with a human user or other mechanism utilizing the results of the example embodiments described herein.
  • Such software may include without limitation device drivers, operating systems, and user applications.
  • Such computer-readable media or storage device(s) further include software for performing example aspects of the invention, as described above.
  • in some example embodiments herein, a module includes software, although in other example embodiments herein, a module includes hardware, or a combination of hardware and software.

Abstract

A module for a robot comprising at least one robotic arm with an end effector, the module comprising: an interface unit configured to mechanically and communicatively couple with said robot; a plurality of sensors configured to sense a space in a vicinity of said robot and said module for detecting the presence of any object; and a control unit configured to control at least one of said robot and said module, to minimize an effect of a collision of said robot and an object sensed by said plurality of sensors. A system, a method, and a computer program are also provided.

Description

MODULE FOR ROBOTIC ARM
Technical field
Example aspects herein relate to robotics, and in particular to a module for a robot, a system, a method for controlling a robot, and a computer program.
Background
Robots are used in various industries to process products, and more increasingly in environments where the presence of objects (whether inanimate objects, automated devices or persons) that may interfere with the operation of the robot is not controllable. This is particularly the case for processes involving a collaboration between a human operator and a robot (in which case, the robot is also referred to as a collaborative robot, or cobot).
Collisions between the robot and objects in its environment can be avoided or minimized by using "cages", i.e. virtual or physical enclosures around the robot. Any unauthorised ingress into the cage causes an immediate stop of the robot.
While they provide a level of safety, these cages limit cooperation and flexibility of the operation of the robot.
It would therefore be advantageous to improve the safe control of the robot without using a cage.
Summary of the invention
According to a first example aspect herein, there is provided a module for a robot.
Preferably said robot comprises at least one robotic arm, preferably with an end effector.
Preferably the module comprises an interface unit configured to mechanically and communicatively couple with said robot.
Preferably the module comprises a plurality of sensors configured to sense a space in a vicinity of said robot and said module for detecting the presence of any object.
Preferably, the module comprises a control unit configured to control at least one of said robot and said module, to minimize an effect of a collision of said robot and an object sensed by said plurality of sensors.

The robot may be any suitable robot having at least one robotic arm. Each robotic arm may have one or more segments and at least an end effector for a range of processes, where the process may be at least one of picking and placing objects (e.g. pick-and-placing), assembling products, additive manufacturing (e.g. 3D printing), and visual detecting (e.g. detecting a set of points in an environment using imaging/sensing means and using the detected set of points to identify/optimise a trajectory of the end effector in the environment, or to determine how the end effector should contact an object corresponding to the set of points).
The mechanical coupling keeps the part of the robot coupled to the interface unit in a substantially static position relative to the module, whilst allowing the movement of the arm(s) of the robot. In some cases, the robot may have a base from which the robotic arm(s) extends and relative to which the arm(s) moves. In such cases, the interface unit may be configured to at least mechanically couple (the module) with the base of the robot.
The mechanical coupling may be a permanent coupling, for example by using permanent affixing means such as welding, soldering, gluing or crimping, or a semi-permanent coupling, for example by using detachable affixing means such as fasteners (e.g. bolts, screws, etc.), taping, hook-and-loop fasteners, magnetic coupling, vacuum suction, friction, etc., allowing the robot to be removably coupled to the module.
Preferably, the module is configured to support the robot, which may be placed substantially on (i.e. on an upper surface of) the module.
The communicative coupling may be established via one or more suitable communication links. Each communication link may be, for example, a wireless communication link, for example Wi-Fi, a cellular telephone data link such as LTE/5G, Bluetooth or Bluetooth Low Energy (BLE), or a wired communication link, for example DSL, fibre-optic cable or Ethernet, to an interface of the robot, for example at or near a base of the robot. Each communication link need not be permanent.
The plurality of sensors may comprise one or more cameras (e.g. IR, RGB/visible or multispectral cameras), infrared, ultrasonic sensors, LIDARs or other suitable sensors. Preferably, the sensors comprise a plurality of RGB cameras, to offer an improved range over ultrasonic sensors and avoid reflection related issues that LIDARs experience in industrial environments. The space that is sensed by the plurality of sensors may be in a vicinity of the robot and the module. The space is sensed to detect the presence of an object (e.g. a static or moving object or obstacle, a human operator, etc.) with which any collision should be minimized.
The space may be a two-dimensional space, for example a surface defining a limit that may be reached by the robotic arm(s) (e.g. a cylindrical or otherwise curved surface that substantially surrounds the robot and/or module), or a surface defining the space in which the robotic arm(s) is operating (e.g. a substantially planar surface extending away from the robot). Alternatively, the space may be a three-dimensional space, for example a volume comprising at least one of the robot and the module and a volume of all positions that may be reached by the robotic arm(s), or a volume in which the robotic arm is operating. It would be understood that the space need not surround the robot and/or the module, and that it may depend on the configuration in which the module and the robot are placed. It would be understood that in cases where the robot comprises more than one robotic arm, a separate space may be defined for each robotic arm.
The object that is sensed is an object that is likely to collide with the robot (e.g. with a part of the robotic arm such as the end effector, or with an object held by the end effector), due to a movement of the robot, the sensed object, or both. The collision may be understood to be any undesired or unexpected contact between at least the robot and an object (instead of, for example, the end effector contacting an object to manipulate it), and it may be, in addition to the robot, an undesired/unexpected contact between the module and the object.
The control unit may be configured to control the robot (e.g. a part of the robotic arm, such as the end effector, or any segment of the robotic arm), to control the module (e.g. to control a movement of the module), or both, by causing a movement of the robot and/or the module, or by causing an interruption or another modification to an ongoing movement of the robot and/or module (e.g. a change in at least one of a trajectory and speed of an ongoing movement). The control unit may generate one or more control signals for controlling the robot or for controlling means for moving the module.
Thus, preferably, the control unit is configured to control a movement of at least one of the robot and the module. The control of the robot and module may be based on the sensed space, for example the control may cause a new movement or a modification to an ongoing movement, where the new movement or the modification is determined based on the sensed space in the vicinity of the robot and module (e.g. to avoid a collision with another detected obstacle).
Alternatively, the control unit may cause a pre-defined movement or modification to an ongoing movement. For example, the control unit may be configured to interrupt all movement of the robot, or to retract the robotic arm(s) (i.e. move parts of the robotic arm so as to reduce a distance between the module and the parts of the robotic arm). Similarly, the control may be configured to, upon sensing an object, cause the module to move away from the sensed object (e.g. by a predetermined distance or until the object is no longer sensed).
Accordingly, by controlling at least one of the robot and the module, the control unit can minimize the effect of any collision between the robot and the object. Minimizing the effect of any collision should be understood to encompass an avoidance of any collision, and alternatively a minimization of a safety risk or damage to at least one of the object (which may be a human operator), any part of the robot, an object held by the end effector, and the environment. In other words, minimizing an effect of a collision may be understood to be minimizing any collision or an effect of a collision.
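By way of non-limiting illustration only, the following sketch shows one possible pre-defined reaction policy of the kind described above (interrupting the robot, retracting the robotic arm, or moving the module away from the sensed object). The names used (Reaction, choose_reaction, module_can_move) are assumptions introduced for this sketch and do not form part of the disclosed module.

```python
# Minimal sketch only; names and the selection rule are illustrative assumptions.
from enum import Enum, auto
from typing import Optional


class Reaction(Enum):
    STOP_ROBOT = auto()        # interrupt all movement of the robot
    RETRACT_ARM = auto()       # move parts of the robotic arm closer to the module
    MOVE_MODULE_AWAY = auto()  # drive the module away from the sensed object


def choose_reaction(object_detected: bool, module_can_move: bool) -> Optional[Reaction]:
    """Return a pre-defined reaction when an object is sensed, else None."""
    if not object_detected:
        return None
    # Prefer moving the module away (e.g. by a predetermined distance or until the
    # object is no longer sensed) when the module is free to move; otherwise fall
    # back to retracting the robotic arm.
    return Reaction.MOVE_MODULE_AWAY if module_can_move else Reaction.RETRACT_ARM


# Example: choose_reaction(object_detected=True, module_can_move=False) -> Reaction.RETRACT_ARM
```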
Accordingly, with the module, the safe control of the robot can be improved without requiring a cage to be used, thus improving possible cooperation with the robot (and in particular, with a cobot) and flexibility in the setting in which the robot may be used.
Preferably, said control unit is configured to control at least one of said robot and said module based on one or more physical parameters of at least one of said robot and said module.
The physical parameters of the robot and/or the module may be at least one of a dimension (e.g. a length, height, width, depth), a weight, mass/inertia, a position/orientation, a space or volume occupied, a configuration of the robot (e.g. one or more angles between parts of the robot, a rotation, an orientation of the end effector) etc. These may be parameters of the module, a part of the robot, the entire robot or both the module and the robot. Each parameter can be used to determine whether a collision is likely to occur, or to determine how the module or the robot should be controlled to avoid or minimize the effect of a collision. Based on these parameters, the control unit can therefore determine whether to move the robot or the module.
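Purely as a non-limiting illustration of how such physical parameters might be used, the sketch below checks whether a sensed object lies within the space that the robot and the module may occupy, and picks which of the two to control. All names, the heuristic and the numeric value are assumptions made for this sketch only.

```python
# Minimal sketch only; parameter names, the heuristic and the 100 kg figure are
# illustrative assumptions, not values taken from the disclosure.
from dataclasses import dataclass


@dataclass
class PhysicalParams:
    arm_reach_m: float          # maximum radius reachable by the robotic arm
    module_half_width_m: float  # half of the module footprint
    safety_margin_m: float      # extra clearance around robot and module


def collision_likely(object_distance_m: float, p: PhysicalParams) -> bool:
    """True if the sensed object is inside the space the robot/module may occupy."""
    return object_distance_m <= p.arm_reach_m + p.module_half_width_m + p.safety_margin_m


def target_to_control(module_mass_kg: float, robot_moving: bool) -> str:
    # Moving a heavy module is typically slower than stopping or retracting the
    # arm, so prefer acting on the robot when it is already in motion or the
    # module is heavy.
    return "robot" if robot_moving or module_mass_kg > 100.0 else "module"
```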
Preferably, the module further comprises a notification unit configured to trigger a notification, to alert an operator of said module of an expected collision. Preferably said notification unit comprises at least one of audio, visual and haptic means to notify said operator.
In the present disclosure, the terms "operator", "human operator" and "user" are interchangeable.
The notification may include information related to the expected collision, for example by using different notifications (e.g. different audio and/or visual signals, or haptic signals) each associated with different categories of collisions or safety risk.
By alerting the operator, it is possible to further reduce any safety risk or damage to the robot, the module and/or the environment.
Preferably, the notification unit is configured to also trigger a notification to indicate a status of the robot and/or the module to the operator. The status can include an alert for a low battery, an error related to the functioning of an element of the module and/or the robot, or information assisting the operator to control the module.
Preferably, said notification unit is further configured to obtain data related to said robot and to include said data in said notification.
The notification unit may obtain the data related to said robot from the control unit, or from the robot via the interface unit.
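By way of non-limiting illustration, a notification of the kind described above could be assembled as in the sketch below, where the signal patterns per category and the robot data fields are assumptions made for this example only.

```python
# Minimal sketch only; categories, signal patterns and field names are assumed.
def build_notification(category: str, robot_data: dict) -> dict:
    patterns = {
        "expected_collision": {"audio": "beep", "led": "red_flash", "haptic": True},
        "low_battery":        {"audio": None,   "led": "amber",     "haptic": False},
        "status":             {"audio": None,   "led": "green",     "haptic": False},
    }
    return {
        "category": category,
        "signals": patterns.get(category, patterns["status"]),
        "robot": robot_data,  # e.g. end effector status obtained via the interface unit
    }


# Example: build_notification("expected_collision", {"end_effector": "gripper", "speed_m_s": 0.2})
```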
Preferably, said control unit is further configured to control an automated movement of the said module.
For example, the control unit may generate one or more signals to control means for moving the module (e.g. wheels coupled to one or more motors) that are either a part of the module or supporting the module (i.e. the module may be placed on the means for moving the module). The automated movement may be, for example, a movement based on a predetermined trajectory of the module, a movement guided by elements present in the environment of the module (e.g. one or more markers indicating a position or trajectory to be followed), or it may be a movement that is remote-controlled by a human operator.

Preferably, said module further comprises an automated guided vehicle (AGV).
Preferably, said control unit is configured to control said automated guided vehicle.
Preferably, said interface unit is further configured to communicate with a second module, to exchange information related to a sensed space in the vicinity of said module and said second module.
The second module may be a module substantially the same as the module according to the first example aspect described herein (i.e. it comprises at least a second plurality of sensors, and may comprise an interface unit for coupling with a second robot, a control unit, and further elements, as described herein). The second module may be in the vicinity of the module. Any second robot coupled to the second module may be configured to cooperate with the robot coupled to the module.
By exchanging the information related to the sensed space, the two modules may improve the detection of objects in the vicinity of the modules and the robots, and better minimize the effect of any collision between the robot(s) and the sensed object.
Preferably, said control unit is configured to control said plurality of sensors, based on said exchanged information.
For example, the exchanged information may allow the module to determine a relative position of the second module. This may, in turn, allow the module to combine information from the second plurality of sensors (of the second module) with the information obtained from the plurality of sensors.
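As a purely illustrative sketch of how such a combination could work in two dimensions, the code below transforms detections reported by the second module into the first module's frame of reference, using the relative position and heading derived from the exchanged information. The function names and the (x, y, theta) convention are assumptions for this sketch.

```python
# Minimal sketch only; a 2D rigid transform with assumed conventions.
import math


def to_local_frame(detection_xy, other_pose):
    """Transform a detection (x, y) from the second module's frame into this module's frame.

    other_pose = (dx, dy, theta): position and heading of the second module
    relative to this module, derived from the exchanged information.
    """
    dx, dy, theta = other_pose
    x, y = detection_xy
    return (x * math.cos(theta) - y * math.sin(theta) + dx,
            x * math.sin(theta) + y * math.cos(theta) + dy)


def merge_detections(own, remote, other_pose):
    """Combine this module's detections with re-expressed remote detections."""
    return list(own) + [to_local_frame(d, other_pose) for d in remote]
```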
Preferably, said interface unit is configured to communicate with an extended reality, XR, interface to exchange at least one of: information for controlling said robot, information relating to the vicinity of said robot, and information relating to a status of at least one of said robot and said module.
By way of non-limiting example, said XR interface may be any apparatus or system capable of displaying at least one 2D or 3D virtual element of digital information to a user. The at least one virtual element may be superimposed onto a real environment, either captured or viewed live by the user, the at least one virtual element may be superimposed onto a virtual environment, or the at least one virtual element may represent a virtual environment.
A human operator of the module and/or the robot may interact with the XR interface, for example via a visual device to be worn or held by the human operator. By exchanging information for controlling said robot, the module may relay instructions from the XR interface (and from the human operator) to the robot.
By exchanging information relating to the vicinity of said robot, the module may improve the human operator's awareness of the robot's situation, and improve the safe control of the robot.
As explained above, the status can be an alert for a low battery, or an error related to the functioning of an element of the module and/or the robot.
Accordingly, by exchanging information relating to a status of at least one of said robot and said module, the module may improve the human operator's awareness of the robot's and/or the module's situation and allow the user to perform any action required to minimize the effect of any collision with a sensed object or to ensure the correct functioning of the robot and/or the module.
Also, by communicating with the XR device, it is possible that the human operator may not need to be in the vicinity of the module and/or robot. For example, the human operator may be in a location that is remote from the module and robot. Accordingly, said interface unit allows a remote control (or tele-operation) of the robot by communicating with the XR interface.
Preferably, said control unit is configured to determine a risk category associated with said end effector, and to control at least one of said robot and said module based on said risk category.
The risk category may be determined based on an intended operation of the end effector (such as holding an object, welding, cutting, polishing, painting, inspecting, sensing), or on one or more operational parameters of the end effector (e.g. a speed at which the end effector operates, a force that the end effector is to generate, etc.).
The control unit may determine the risk category based on information related to the end effector (e.g. information on at least one of the intended operation and the operational parameter(s)). The control unit may obtain such information from the robot or from an external device, or it may be predetermined information stored in a memory.
Based on the risk category, the control unit may determine how to control at least one of the robot and the module, for example by determining a threshold distance between the detected object and the end effector, whereby the threshold distance is greater for a higher risk category than a lower risk category, and controlling at least one of the robot and the module if the detected object is within the threshold distance from the end effector.
Preferably, the control unit may determine whether to preferentially control the robot or the module, based on the risk category.
For example, when the end effector is associated with a low-risk category (e.g. the end effector is a suction cup or is an end effector for sensing/inspecting), the control unit may determine to preferentially control the module; when the end effector is associated with a high-risk category (e.g. the end effector comprises a blade), the control unit may determine to preferentially control the robot. The control unit may determine that controlling one of the robot and the module may be faster than the other, or may require less energy, or may be safer (e.g. retracting the blade to avoid any contact with an operator).
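By way of non-limiting illustration only, the sketch below maps a risk category to an allowable threshold distance (greater for a higher risk category) and to a preferred control target. The category names and distance values are examples invented for this sketch, not values specified by the present disclosure.

```python
# Minimal sketch only; categories, distances and the preference rule are assumed.
THRESHOLD_BY_RISK_M = {"low": 0.3, "medium": 0.8, "high": 1.5}  # greater for higher risk


def needs_action(object_distance_m: float, risk: str) -> bool:
    """True if the detected object is within the threshold distance for this risk."""
    return object_distance_m <= THRESHOLD_BY_RISK_M[risk]


def preferred_target(risk: str) -> str:
    # e.g. a suction cup or inspection camera -> preferentially move the module;
    # a blade or other high-risk tool -> preferentially act on the robot.
    return "module" if risk == "low" else "robot"


# Example: needs_action(0.6, "high") -> True; preferred_target("high") -> "robot"
```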
Preferably, said control unit is configured to exert control so that only one of said robot and said module moves at a given time.
Accordingly, the module and the robot may be prevented from moving at the same time, as it may be difficult to quickly determine how to control both the robot and the module whilst avoiding or minimizing the effect of any collision.
Preferably, said control unit is configured to move said module only when said robotic arm is in a retracted position.
Preferably, said control unit is configured to prevent said module from moving when said robotic arm is not in a retracted position.
Preferably, said control unit is configured to: determine a first format to be used in communication with said robot and a second format to be used in communication with an external device. Preferably, said control unit is also configured to at least one of: convert first data received from said robot from said first format into said second format and cause said interface unit to transmit converted first data to said external device, and convert second data received from said external device from said second format into said first format and cause said interface unit to transmit converted second data to said robot.
The robot and the external device may be configured to use different data formats to express the same data, such that the robot and the external device would not be able to interpret the data obtained from each other. Accordingly, the module may act as a data exchange facilitator (or translator), to adapt the data from the robot in a way that can be correctly interpreted by the external device, or vice-versa.
Each of the first data format and the second data format may be for a digital data format, or an analog data format.
A digital data format, used for digital communication (e.g. with the external device and/or with a robot comprising a controller or other digital processing means), may define at least one of an order in which pieces of data are to be arranged, an expected size (e.g. in terms of bits, bytes etc.) of the piece(s) of data, a range of values of a parameter corresponding to a piece of data (e.g. a 4-bit piece of data may be used to define a range of 16 values, which may correspond to values of a parameter ranging from 0 to 15, or from -7 to 8).
An analog data format may define at least one of a correspondence between an amplitude of a signal transmitted and a value of a parameter, and a frequency at which the signal may change value.
The control unit may obtain information indicating the first data format from the robot or from a memory storing information (e.g. in a database or library) indicating one or more data formats each associated with a type of robot.
Similarly, the control unit may obtain information indicating the second data format from the external device or from the memory.
The information indicating a data format from the external device may be based on a user input, the user selecting a data format that the control unit should use for communication with the robot and/or with the external device.
Thus, when converting from a digital format to another digital format, the control unit may convert data in the first format into data in the second format, for example by rearranging pieces of data into an order defined by the second format, by adjusting the size either by reducing (e.g. by truncating, rounding, etc.) or by extending (e.g. by zero-padding), by adding or removing an offset to a value indicated by a piece of data, etc.
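Purely as a non-limiting illustration of such a digital-to-digital conversion, the sketch below rearranges fields into the order expected by a second format, reduces the size of one value by rounding and applies an offset to another. The field layouts are invented for this sketch and do not correspond to any particular robot protocol.

```python
# Minimal sketch only; both formats and the conversion rules are assumed.
FIRST_FORMAT = ["joint_angle", "speed", "status"]    # order used by the robot
SECOND_FORMAT = ["status", "joint_angle", "speed"]   # order expected by the external device


def convert(first_data: dict) -> list:
    """Convert a record received in the first format into the second format."""
    converted = []
    for field in SECOND_FORMAT:                      # rearrange into the second order
        value = first_data[field]
        if field == "joint_angle":
            value = round(value)                     # reduce size, e.g. by rounding
        if field == "speed":
            value = value + 7                        # apply an offset defined by the second format
        converted.append(value)
    return converted


# Example: convert({"joint_angle": 12.34, "speed": -3, "status": 1}) -> [1, 12, 4]
```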
Alternatively, when converting data from an analog format to a digital format, the control unit may use a predetermined conversion rule to obtain a piece of data corresponding to a received signal's amplitude (or vice-versa when converting from a digital format to an analog format).

Preferably, said interface unit comprises means for mechanically and communicatively coupling with a plurality of different connectors of robots.
Accordingly, robots with different connectors may be placed on the module and controlled by the module.
Preferably, said interface unit is configured to removably couple said robot with said module.
For example, the mechanical coupling may be a non-permanent coupling means such as magnets, screws, bolts, or any combination of means.
Accordingly, different robots may be interchangeably coupled to the module, for an improved flexibility in control of the robot.
Preferably, an outline of said module has a first shape in a substantially horizontal first plane. Preferably said first shape is a substantially regular polygon.
As the robot is coupled to (e.g. placed on) the module, the module may prevent certain movements of the robot as it would otherwise collide with the module. For example, if the robot is placed on the top surface of the module, the top surface becomes a physical obstacle for a movement of the robot. In other words, the module causes a limit to a space in which the robot can operate. Therefore, a restricted space around the module needs to be defined to avoid collision between the robot and the module.
Accordingly, by using an outline that is substantially a regular polygon, the restricted space around the module, as a limit to the space in which the robot can operate (in other words of a range of positions in which the robot can operate) in the first plane without colliding with the module, may be more easily determined.
Separately, by using a substantially regular polygonal shape, the plurality of sensors may be placed on the sides of the outline to each be oriented in different directions, with a substantially constant interval angle between the directions of adjacent sensors. Accordingly, each sensor may be used to detect objects for a same angular range around their respective direction. Accordingly, the accuracy of the object detection may be substantially the same all around the module.
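As a purely illustrative sketch of this geometric property, the code below computes the outward-facing direction of one sensor per side of a substantially regular N-sided outline; adjacent directions differ by a constant 360/N degrees, so each sensor can cover the same angular range. The function name and the starting direction are assumptions for this sketch.

```python
# Minimal sketch only; one outward-facing sensor per side of a regular polygon.
def sensor_directions(num_sides: int, first_direction_deg: float = 0.0) -> list:
    """Outward-facing direction (in degrees) of the sensor on each side."""
    step = 360.0 / num_sides  # constant angular interval between adjacent sensors
    return [(first_direction_deg + i * step) % 360.0 for i in range(num_sides)]


# Example: an octagonal outline gives directions 45 degrees apart.
# sensor_directions(8) -> [0.0, 45.0, 90.0, 135.0, 180.0, 225.0, 270.0, 315.0]
```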
For simplicity, references to a polygon as used herein should be understood to be references to a substantially convex polygon, i.e. having no interior angle that is above 180°. In the present disclosure, defining a shape of an outline of the module indicates that no element of the module protrudes substantially outwards from the given shape.
Preferably, said outline of said module has a second shape in a second plane substantially parallel to and distinct from said first plane. Preferably, said second shape is a substantially regular polygon different from said first shape.
Accordingly, as with the first plane, an angular range of each sensor placed at the second plane may be substantially constant.
Separately, a shape with a greater number of sides may be used for the one of the first plane and second plane where sensors are placed, thus reducing the angular range corresponding to each sensor and improving the accuracy in detecting objects. On the other hand, a shape with fewer sides may be used for a plane where the module may be in contact with another module, thus facilitating modules to be placed next to one another.
Preferably, a number of sides of one of said first shape and said second shape is greater than a number of sides of the other one of said first shape and said second shape.
Preferably, a number of sides of one of said first shape and said second shape is a multiple of a number of sides of the other one of said first shape and said second shape.
Preferably, an outer shape of said module comprises a taper portion, said taper portion being narrower towards a top of said module.
Accordingly, the center of gravity of the module may be lowered, thus improving the stability of the module.
Additionally, the visibility of visual feedback means on the taper position may be improved, as they are less likely to be entirely blocked from an observer positioned higher than the top of the module (e.g. a human operator or a camera facing downwards from a wall or a ceiling) even if objects or other modules are present in the vicinity of the module.
Preferably, said taper portion is formed by a plurality of side surfaces, each of said side surfaces being substantially flat.
Accordingly, the manufacturing, assembly and/or transport of the module may be facilitated.
Additionally, the volume occupied by the module and any restriction the module may cause to a reach of the robot may be more easily determined.

Preferably, said taper portion is comprised between said first plane and said second plane.
Preferably said outline of said module defines a geometric surface area in one of said first plane and second plane that is larger than a geometric surface area defined by said module in the other of said first plane and second plane.
Preferably, said one of said first plane and second plane is farther from the robot than said other of said first plane and second plane.
Accordingly, the module defines a smaller geometric area in a plane nearer the robot, and thus any restriction the module may cause to a reach of the robot may be reduced.
Preferably, said module comprises at least one first side surface and at least one second side surface, wherein each said first side surface is inclined at a first angle and each said second side surface is inclined at a second angle that is different from the first angle.
In other words, an angle of elevation or depression (i.e. relative to a horizontal plane) of said at least one first side surface is different than an angle of elevation or depression of said at least one second side surface.
Accordingly, sensors may be placed on one of the first side surface and on one of the second side surface so as to be aimed with different angles relative to a horizontal plane.
For example, the first surface(s) may be facing upwards and the second surface(s) may be facing substantially sideways, so a sensor on a first side surface may sense a space that is above the module, whereas a sensor on the second side surface may sense a space that is nearer to the floor.
Preferably, said at least one first side surface is substantially vertical.
Accordingly, the placement of two of the same modules may be facilitated, by bringing the corresponding substantially vertical surfaces in contact (or in close proximity) with each other.
Preferably, said at least one substantially vertical first side surface comprises communication means for communicatively coupling with communication means on a substantially vertical side surface of another module. Accordingly, the communicative coupling between two of the same modules may be easily done by bringing their respective vertical side surfaces in contact or close proximity with each other.
Preferably, said module further comprises one or more handles for positioning the module.
Preferably, said one or more handles are retractable.
For example, each of said one or more handles may be movable between an extended position and a retracted position.
Preferably, when retracted or when in the retracted position, each of said one or more handles is received in a respective opening formed at a housing of said module.
Preferably, when retracted or when in the retracted position, said one or more handles do not protrude substantially from an outline of said housing.
Accordingly, the handles do not cause an obstruction to a movement of the robot or to the detection of object by the plurality of sensors.
Preferably, each of said one or more handles is movable between a retracted position and an extended position via at least one of a rotation along a respective rotation axis and a translation along a respective translation axis.
Preferably, said one or more handles are mechanically coupled to haptic means for notifying a user holding said one or more handles.
Preferably, each of said one or more handles is located on a respective side surface of the module.
Accordingly, a user may easily grip two handles to move the module.
Preferably, each of said one or more handles is located on a respective vertical side surface of the module, wherein a side surface inclined upwards is interposed between adjacent vertical side surfaces.
In other words, the handles are located on substantially vertical side surfaces which are separated from each other by another side surface which is inclined upwards (i.e. which has a strictly positive angle of elevation).
Accordingly, the user gripping two handles faces the side surface that is inclined upwards, thus providing a space in front of the user. Therefore, this would improve the user's safety and comfort.

Preferably, said control unit is configured to classify said object into one of an object to be processed by said end effector and an object with which the collision is to be minimized.
The control unit may obtain from the robot (via the interface unit) or from an external device controlling the robot, information on an operation of the robot. Based on this information, the control unit may determine an object to be processed by the end effector. Alternatively, the control unit may be generating data and/or signals for controlling the robot to process the object.
Accordingly, by classifying this object differently than other objects that may be detected by the plurality of sensors, interferences to the robot's operation may be reduced (for example by avoiding considering a contact with the object to be processed as a collision to be avoided and controlling the robot and/or the module accordingly).
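By way of non-limiting illustration, the classification described above could be as simple as the sketch below, which matches a sensed object against the object identified in the information on the robot's operation. The identifier-based matching and the labels are assumptions made for this sketch.

```python
# Minimal sketch only; identifiers and labels are illustrative assumptions.
def classify(sensed_object_id: str, operation_info: dict) -> str:
    """Return 'to_process' for the object the end effector is working on, else 'avoid'."""
    if sensed_object_id == operation_info.get("target_object_id"):
        return "to_process"   # contact with this object is not treated as a collision
    return "avoid"            # control the robot and/or module to minimize any collision


# Example: classify("object-32", {"target_object_id": "object-32"}) -> "to_process"
#          classify("object-33", {"target_object_id": "object-32"}) -> "avoid"
```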
Preferably, said interface unit is further configured to obtain data related to said robot, from at least one of said robot and an external device.
According to a second example aspect herein, there is provided a system comprising said module according to the first example aspect as described herein, and said robot.
Optionally, said system comprises an extended reality, XR, interface communicatively coupled to at least one of said module and said robot.
According to a third example aspect herein, there is provided a method for controlling a robot comprising at least one robotic arm with an end effector, said robot being mechanically and communicatively coupled to a module comprising a plurality of sensors, the method comprising: obtaining from said plurality of sensors information on a space in a vicinity of said robot and said module, said information being indicative of the presence of any object; and controlling at least one of said robot and said module, to minimize an effect of a collision of said robot and an object indicated by said information.
For example, said module may perform the steps of obtaining said information on said space, and of controlling said at least one of said robot and said module.
Preferably, said method comprises controlling at least one of said robot and said module, to minimize an effect of a collision of said module and an object indicated by said information. Preferably, said controlling at least one of said robot and said module is based on one or more physical parameters of at least one of said robot and said module.
Preferably, said method further comprises triggering a notification to alert an operator of said module of an expected collision.
Preferably, said method further comprises obtaining data related to said robot and including said data in said notification.
Preferably, said method further comprises controlling an automated movement of said module.
Preferably, said method further comprises communicating with a second module, to exchange information related to a sensed space in the vicinity of said modules (i.e. said module and said second module).
Preferably, said method further comprises communicating with an extended reality, XR, interface to exchange at least one of information for controlling said robot, information relating to the vicinity of said robot, and information relating to a status of at least one of said robot and said module.
Preferably, said method further comprises determining a risk category associated with said end effector, and controlling at least one of said robot and said module based on said risk category.
Preferably, said method further comprises determining a first format to be used in communication with said robot and a second format to be used in communication with an external device, and at least one of: (i) converting first data received from said robot from said first format into said second format and causing converted first data to be transmitted to said external device, and (ii) converting second data received from said external device from said second format into said first format and causing converted second data to be transmitted to said robot.
Preferably, said method further comprises classifying said object into one of an object to be processed by said end effector and an object with which the collision is to be minimized.
Preferably, said method further comprises obtaining data related to said robot, from at least one of said robot and an external device.
According to a fourth example aspect herein, there is provided a computer program comprising instructions which, when executed by one or more processors, cause said one or more processors to carry out the method according to the third example aspect described herein.
Brief Description of the Drawings
Embodiments of the present invention, which are presented for better understanding the inventive concepts, but which are not to be seen as limiting the invention, will now be described with reference to the figures in which:
Figure 1 shows a schematic view illustrating an example of a system in example embodiments;
Figure 2 shows a schematic view illustrating an example of a module in example embodiments;
Figure 3 shows a schematic diagram illustrating an example of a module in example embodiments;
Figure 4 shows a schematic diagram illustrating an example of a general kind of programmable processing apparatus that may be used to implement a control unit in example embodiments;
Figure 5 shows a schematic view illustrating an example of sensors on a module sensing a space in a vicinity of the robot and the module, in example embodiments;
Figures 6a and 6b show schematic views illustrating an example of a control of the robot to minimize an effect of collision, in example embodiments;
Figure 7 shows a schematic view illustrating a module and robot in an environment with objects, in example embodiments;
Figure 8 shows a schematic diagram illustrating a module and possible communication links with a robot, a second module, an external device and an XR interface, in example embodiments;
Figure 9 shows a schematic view illustrating an example of geometric elements of an outline of the module, in example embodiments;
Figures 10a to 10c show schematic views illustrating examples of handles of the module, in example embodiments;
Figure 11 shows a schematic view illustrating an example of a system with a module, a robot and an XR interface, in example embodiments;
Figure 12 shows processing operations performed in example embodiments.
Detailed Description
Although example embodiments will be described below, it will be evident that various modifications may be made to these example embodiments without departing from the broader spirit and scope of the present disclosure. Accordingly, the following description and the accompanying drawings are to be regarded as illustrative rather than restrictive.
In the following description and in the accompanying figures, numerous details are set forth in order to provide an understanding of various example embodiments. However, it will be evident to those skilled in the art that embodiments may be practiced without these details.
Referring to Figure 1, an example of a system in example embodiments will now be described.
The system comprises a module 10 and a robot 20. The module 10 comprises a plurality of sensors 12, an interface unit 110 and a control unit (not shown on Figure 1). The robot 20 comprises a robotic arm 22 with an end effector 24.
In the example shown on Figure 1, the robot 20 also includes a base 26, and the interface unit 110 is located on a top surface of the module 10 and is mechanically coupled to the robot 20 via the base 26 of the robot. However, this is non-limiting as the interface unit 110 may be located elsewhere on the module 10 (e.g. a side surface), or it may be provided in a recess on the module 10. The robot 20 need not have a base 26 (e.g. the interface unit 110 may couple with the robotic arm 22 directly). In some cases, the robot 20 may have other elements such as other robotic arms.
In the example shown on Figure 1, the robot's base 26 is fastened to the interface unit 110 by means of bolts and corresponding threaded through-holes provided on the interface unit 110. However, it would be understood that any other affixing means (permanent or non-permanent) may be used instead, such as those described herein.
By way of non-limiting example, on Figure 1, the top surface of the module 10 has a substantially octagonal shape, and a bottom surface of the module 10 has a substantially square shape. The module 10 has eight side faces (four triangular and four trapezoidal) joining the top surface and the bottom surface. Between the top surface and the bottom surface (defining planes Pl and P2, respectively), the module 10 defines a taper which is narrower at the top surface than the bottom surface.
Each of these sides includes a respective sensor 12, although Figure 1 only shows four of these sensors 12.
Accordingly, each sensor 12 is facing a respective direction away from the module 10 (and the robot), and can therefore detect a space around the module 10 and the robot 20 for any object that should not collide with the robot.
Additionally, the module 10 has a tapered outer shape, which is narrower towards the top. In other words, a cross section along a horizontal plane of the module 10 has a smaller area towards the top of the module 10 than towards the bottom of the module 10.
Accordingly, more components of the module 10 may be placed towards the bottom (i.e. towards plane P2), thus lowering the center of gravity of the module 10, which in turn improves its stability.
In the example shown on Figure 1, each sensor 12 comprises an RGB camera (i.e. a camera capturing visible light), and processing means for identifying object(s) on a captured image and for estimating a distance from the sensor 12 to each identified object. Based on this distance, the sensor 12 may determine whether each object is within a predetermined space in a vicinity of the robot 20 and the module 10.
However, it would be understood that this is non-limiting, as the sensors 12 may comprise any other suitable type of sensor (such as an IR camera, ultrasound sensor etc.), and instead of each sensor 12 having processing means, a centralized processing (for example the control unit 100) may obtain images or other data related to the sensed space from each sensor 12 to detect the presence of an object in the sensed space.
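Purely as a non-limiting illustration of the per-sensor (or centralized) processing just described, the sketch below takes detections that already carry an estimated distance and keeps only those lying within a predetermined space around the robot and the module. The detection structure is an assumption; no specific vision library is implied.

```python
# Minimal sketch only; the Detection structure and the radius check are assumed.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str
    distance_m: float   # estimated distance from the sensor to the identified object


def objects_in_space(detections: list, space_radius_m: float) -> list:
    """Keep only the detections lying within the predetermined sensed space."""
    return [d for d in detections if d.distance_m <= space_radius_m]


# Example: objects_in_space([Detection("person", 1.2), Detection("cart", 4.0)], 2.5)
#          -> [Detection(label='person', distance_m=1.2)]
```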
Referring now to Figure 2, an example of a module in example embodiments will now be described.
For brevity, elements of the module 10 that have already been described in connection with Figure 1 above (and identified with the same reference numbers on the Figures) will not be referred to here.
Figure 2 shows an example of the module 10 comprising handles 13, loudspeakers 14, LED light strips 15, an automated guided vehicle 17 and wheels 18. By way of non-limiting example, the module 10 shown on Figure 2 comprises four handles 13 and four loudspeakers 14 (on each trapezoidal side of the module 10), and four LED light strips 15, although only a subset of these are visible on Figure 2 due to the perspective. However, it would be understood that the module 10 may comprise any number of these elements, as the example shown on Figure 2 would be understood to be purely illustrative. Furthermore, it would be understood that the module 10 may comprise any subset of the handles 13, the loudspeakers 14, the LED light strips 15, the automated guided vehicle 17 and the wheels 18, as each of these are optional and may be separately included in the module 10.
In the example shown on Figure 2, the handles 13 are connected to haptic feedback means (e.g. an electric motor or linear actuator coupled to a weight to controllably generate vibrations, not shown on Figure 2), to provide haptic feedback to a user using the handles 13 for positioning the module 10. The loudspeaker 14 may generate an audio signal to alert a user near the module 10, and the LED light strip 15 may emit light as a visual signal to alert the user. The LED light strip 15 may be configured to emit light of a single color, or may control the color of the emitted light (e.g. by using multiple LEDs each emitting light at a different wavelength).
The haptic feedback means, the loudspeakers 14 and the LED light strips 15 are coupled to a notification unit of the module 10, which will be described later.
The automated guided vehicle 17 supports the module 10 (and the robot 20 placed on the module 10) to move the module 10, based on a predetermined trajectory, based on guiding elements present in the environment of the module 10, and/or based on commands received from a remote control.
The wheels 18 may be driven by a motor coupled to the automated guided vehicle 17, or may freely rotate as a user moves the module 10 (e.g. while holding the handles 13).
Referring now to Figure 3, an example of a module in example embodiments will now be described.
The module 10 comprises a control unit 100, an interface unit 110, a plurality of sensors 120, a notification unit 130, and an automated guided vehicle, AGV, 140.
The interface unit 110 comprises means for mechanically coupling the robot 20 to the module 10, and means for communicatively coupling with the robot 20 (i.e. forming an electrical and/or electronic connection to the robot). Accordingly, the module 10 can communicate with the robot 20 via the interface unit 110, as indicated by the arrow. The communication may be used for at least one of controlling the robot 20 and obtaining information from the robot 20 (e.g. a status or a configuration of the robotic arm 22 and/or end effector 24).
The interface unit 110 may also communicate with one or more devices other than the robot 20. For example, the interface unit 110 can communicate with an external device, and with another module, as will be explained in more detail below.
The plurality of sensors 120 corresponds to the sensors 12 shown on Figures 1 and 2, and are configured to sense a space in a vicinity of the module 10 and of the robot 20 placed on the module 10.
The notification unit 130 generates, upon receiving an instruction from the control unit 100, a notification to a user. The notification may be generated as a visual notification, an audio notification, a haptic notification, or an electronic notification (for example by transmitting the notification to an external device via the interface unit 110). Specifically, the notification unit 130 is coupled to audio, visual and/or haptic means, such as the vibrating means coupled to the handles 13 shown on Figure 2, the loudspeakers 14 and the LED strip lights 15 shown on Figure 2.
The AGV 140 corresponds to the AGV 17 shown on Figure 2.
However, it would be understood this is not limiting, as the notification unit 130 may be omitted (for example when the module 10 operates in an environment where no user would need to be notified), and similarly the AGV 140 may be omitted (for example in the case of a module that is static or that can only move by being dragged by users).
The control unit 100 controls various elements of the module 10, and may be implemented as a general kind of programmable processing apparatus, as will be described below.
In the example of Figure 3, the control unit 100 controls the interface unit 110 to transmit data to the robot, to obtain data from the robot, or both. The control unit 100 also controls the sensors 120, either by causing each sensor 120 to start or stop sensing the space, or by obtaining data related to the space from each sensor 120 (the obtained data indicating the presence of any object in the space). The control unit 100 also controls the notification unit 130 to trigger the generation and output of the notification. The control unit 100 also controls the AGV 140, for example based on instructions from a user received via the interface unit 110.
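As a minimal, hedged sketch of this coordination (all object and method names below are assumptions introduced for illustration), the control unit 100 could run a simple supervision cycle that polls the sensors, decides whether a reaction is needed, and drives the interface unit, notification unit and AGV accordingly:

```python
from typing import Callable, Optional, Sequence


def supervision_step(
    sensors: Sequence,                      # objects exposing read() -> iterable of detections
    interface,                              # exposes command_robot(name)
    notifier,                               # exposes alert(message, detail=...)
    agv: Optional[object],                  # exposes stop(), or None if the module has no AGV
    is_target: Callable[[object], bool],    # True if a detection is the object to be processed
) -> bool:
    """One illustrative control cycle of the control unit 100 (all names are assumptions)."""
    for sensor in sensors:
        for detection in sensor.read():
            if is_target(detection):
                continue                    # contact with the object to be processed is desired
            notifier.alert("expected collision", detail=detection)
            interface.command_robot("retract")   # e.g. move to a retracted configuration
            if agv is not None:
                agv.stop()                       # halt any automated movement of the module
            return True                          # a protective reaction was taken
    return False                                 # nothing to react to in this cycle
```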
Referring now to Figure 4, an example of a general kind of programmable processing apparatus 70 that may be used to implement the control unit 100 is shown.
The programmable processing apparatus 70 comprises one or more processors 71, one or more input/output communication modules 72, one or more working memories 73, and one or more instruction stores 74 storing computer-readable instructions which can be executed by one or more processors 71 to perform the processing operations as described hereinafter.
An instruction store 74 is a non-transitory storage medium, which may comprise a non-volatile memory, for example in the form of a read-only memory (ROM), a flash memory, a magnetic computer storage device (for example a hard disk) or an optical disk, which is pre-loaded with the computer-readable instructions. Alternatively, an instruction store 74 may comprise a writeable memory, such as a random access memory (RAM), and the computer-readable instructions can be input thereto from a computer program product, such as a non-transitory computer-readable storage medium 75 (for example an optical disk such as a CD-ROM, DVD-ROM, etc.) or a computer-readable signal 76 carrying the computer-readable instructions. The combination 77 of hardware components shown in Figure 4 and the computer-readable instructions are configured to implement the functionality of the control unit 100.
In the description herein, operations caused when one or more processors 71 executes instructions stored in an instruction store 74 will be described generally as operations performed by the control unit 100, by the module 10, or by elements of the module 10 such as the interface unit 110.
Referring now to Figure 5, an example of the space in the vicinity of the robot 20 and the module 10 will now be described.
Figure 5 shows a top view of the module 10 and the robot 20 shown on Figure 1. For illustrative purposes, only a part of the robot 20 is shown on Figure 5.
For example, the module 10 comprises eight sensors 12, each located on a respective side surface of the module 10 and each facing a respective direction away from the module 10 and the robot 20. As the top surface of the module 10 defines a regular octagon, the direction faced by each sensor 12 is at an angle of 45° from that of the adjacent sensors 12.
Each sensor 12 has a predetermined field of view, indicated by the angle θ on Figure 5. Accordingly, each sensor 12 senses a respective part of the space extending away from the module 10 and the robot 20. Although not required, the parts of the space sensed by adjacent sensors 12 usually overlap as shown on Figure 5.
Although Figure 5 shows the same field of view for each sensor 12, this is purely for simplicity as it should be understood that each sensor 12 may have a different field of view. Additionally, as would be apparent from the present disclosure, each sensor 12 may be of a different type.
The sensors 12 may be sensing a two-dimensional space (for example if the robotic arm operates substantially along a plane), or a three-dimensional space. In the example shown on Figure 2, sensors 12 on the triangular inclined side surfaces of the module 10 sense a space that extends substantially above the module 10, in the vicinity of the robot 20 which is placed on top of the module 10, whereas sensors 12 on the trapezoidal vertical side surfaces sense a space that extends substantially to the side of the module 10. Accordingly, together, the sensors 12 can sense a space in the vicinity of both the module 10 and the robot 20.
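As a rough illustration of the geometry described above (the specific angles and the helper function below are assumptions for the octagonal layout of Figure 5, not limitations), one can check whether the fields of view of adjacent sensors overlap given their angular spacing:

```python
def adjacent_fovs_overlap(num_sensors: int, fov_deg: float) -> bool:
    """Return True if evenly spaced outward-facing sensors have overlapping fields of view.

    With num_sensors sensors at equal angular spacing, adjacent sensing directions
    differ by 360/num_sensors degrees; their fields of view overlap when each field
    of view is wider than that spacing.
    """
    spacing_deg = 360.0 / num_sensors
    return fov_deg > spacing_deg


# For the octagonal example of Figure 5: eight sensors spaced 45 degrees apart.
assert adjacent_fovs_overlap(8, 60.0) is True    # 60-degree fields of view overlap
assert adjacent_fovs_overlap(8, 40.0) is False   # 40-degree fields of view leave gaps
```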
Referring now to Figures 6a and 6b, an example of a control of the robot 20 to minimize an effect of collision will now be described.
When an object is detected by the sensors 12 in the vicinity of the robot 20, and the robot 20 is in an extended configuration as shown on Figure 6a (i.e. the end effector 24 is at a distance away from the module 10), the control unit 100 may, via the interface unit 110, control the robot 20 to change its configuration into a retracted configuration as shown on Figure 6b, namely by bringing the end effector 24 closer to the module 10.
Accordingly, a distance between the sensed object and the robot 20 may be increased, thus avoiding a collision with the object or minimizing any effect of the collision.
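A minimal sketch of this protective retraction is given below; the distance test, the command payload and the function names are assumptions of the sketch, and a real robot would be commanded through its own motion interface.

```python
from typing import Sequence, Tuple


def retract_if_object_near(
    detected_points: Sequence[Tuple[float, float, float]],  # object points reported by the sensors 12
    robot_base: Tuple[float, float, float],                  # position of the robot 20 on the module 10
    safety_distance: float,                                  # distance below which retraction is triggered
    send_command,                                            # callable forwarding commands via the interface unit
) -> bool:
    """Command a retracted configuration (as in Figure 6b) when an object comes too close."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    if any(dist(p, robot_base) < safety_distance for p in detected_points):
        # Bring the end effector 24 back towards the module 10 so that the
        # distance to the sensed object increases.
        send_command({"name": "move_to_configuration", "configuration": "retracted"})
        return True
    return False
```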
The retracted configuration may be such that no part of the robot 20 extends sideways beyond the side surfaces of the module 10. Accordingly, the module 10 may act as a barrier preventing a collision between the object and the robot.

Referring now to Figure 7, an example of a module and robot in an environment with objects will now be described.
For simplicity, Figure 7 illustrates the projection S of the space sensed by the sensors 12.
In the environment where the module 10 and the robot 20 operate, a plurality of objects may be present, some of which may be moving. Objects 32 and 33 are examples of static objects, whereas object 34 is an example of a moving object, with a trajectory of the object 34 indicated by the arrow 35.
In the example shown on Figure 7, the robot 20 is programmed to process object 32, without coming into contact with any of objects 33 and 34.
Specifically, by way of a non-limiting example, the end effector 24 comprises a camera and a suction cup. The camera is for inspecting object 32. The suction cup is for grasping object 32 (if object 32 passes the inspection) and for moving object 32 to another predetermined location. To this effect, the robotic arm may move while keeping the end effector 24 at a predetermined capturing range of the camera of the end effector 24, to perform the inspection. Then, the robotic arm may move the suction cup of the end effector 24 to contact the object 32 at a specific location on the object 32 (e.g. a location that is predetermined, or that is determined based on the orientation, shape, etc. of object 32 as determined during the inspection).
As the robotic arm moves towards the object 32 for the inspection or for contacting the object 32, the sensors 12 of the module 10 may detect the presence of the object 32, and may cause the control unit 100 to control the robot 20 or the module 10 to avoid this contact. This, however, would prevent the robot 20 from performing the desired operation.
Accordingly, the control unit 100 obtains from the robot 20 (via the interface unit 110) or from an external device controlling the robot, information identifying the object 32 which is to be processed by the end effector 24.
This information may include, for example, at least one of a predicted location, shape, size, color, reflectance, orientation, etc. of the object 32 to be processed.
Based on this information, the control unit 100 may determine that the object sensed by the sensors 12 corresponds to the object 32 to be processed by the end effector 24. In other words, the control unit 100 may classify the sensed object as an object 32 to be processed by the end effector 24. In this case, as the contact with the object 32 is desired, the control unit 100 may not perform (or may stop performing) any of the processing described herein for minimizing an effect of a collision of the robot 20 with the object 32.
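As one hedged illustration of this classification step (the matching criteria, tolerances and field names below are assumptions introduced for the sketch), the control unit 100 could compare a detection against the information identifying the object 32:

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class TargetInfo:
    """Information identifying the object 32 to be processed (fields are illustrative)."""
    predicted_location: Tuple[float, float, float]
    approx_size: float                 # characteristic dimension in metres (assumption)
    color: Optional[str] = None


def is_object_to_process(
    detection_location: Tuple[float, float, float],
    detection_size: float,
    detection_color: Optional[str],
    target: TargetInfo,
    position_tolerance: float = 0.05,   # metres; assumed tolerance
    size_tolerance: float = 0.2,        # relative; assumed tolerance
) -> bool:
    """Return True if the sensed object matches the object 32 to be processed."""
    dx, dy, dz = (a - b for a, b in zip(detection_location, target.predicted_location))
    close_enough = (dx * dx + dy * dy + dz * dz) ** 0.5 <= position_tolerance
    similar_size = abs(detection_size - target.approx_size) <= size_tolerance * target.approx_size
    color_ok = target.color is None or detection_color == target.color
    return close_enough and similar_size and color_ok
```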
On the other hand, as the robotic arm moves towards the object 32 for the inspection or for contacting the object 32, the sensors 12 of the module 10 may detect the presence of an object 33 and/or the object 34. Based on the information identifying the object 32 which is to be processed by the end effector 24, the control unit 100 may determine that the object sensed by the sensors 12 does not correspond to the object 32 to be processed by the end effector 24.
Accordingly, the control unit 100 may classify the sensed object as an object with which the collision (of the robot 20, and possibly also of the module 10) is to be minimized.
In case the sensed object is one of the objects 33, the control unit 100 may determine a space around the object 33 to be avoided (for example a space including the object 33 to be avoided and a space around this object 33 defined by a threshold distance).
The control unit 100 may use the determined space to be avoided to modify the operation of the robot 20 to proceed with the processing of object 32 whilst avoiding the space around the object 33.
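A simple sketch of such a space to be avoided is given below; the axis-aligned bounding-box representation, the class name and the numerical values are assumptions, as the disclosure does not prescribe any particular representation.

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass(frozen=True)
class KeepOutBox:
    """Axis-aligned keep-out volume around a static object such as object 33."""
    minimum: Tuple[float, float, float]
    maximum: Tuple[float, float, float]

    @classmethod
    def around(cls, obj_min, obj_max, threshold: float) -> "KeepOutBox":
        # Inflate the object's bounding box by the threshold distance in every direction.
        return cls(
            minimum=tuple(v - threshold for v in obj_min),
            maximum=tuple(v + threshold for v in obj_max),
        )

    def contains(self, point: Tuple[float, float, float]) -> bool:
        # True if a candidate end-effector position lies inside the space to be avoided.
        return all(lo <= p <= hi for p, lo, hi in zip(point, self.minimum, self.maximum))


# Example: a 10 cm cube at the origin with a 15 cm threshold distance around it.
zone = KeepOutBox.around((0.0, 0.0, 0.0), (0.1, 0.1, 0.1), threshold=0.15)
print(zone.contains((0.2, 0.05, 0.05)))   # True: the trajectory should avoid this point
```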
In case the sensed object is the moving object 34, the control unit 100 may control the robot 20 to interrupt an operation of the robotic arm, of the end effector 24, or to retract the robot 20 (for example as explained with reference to Figures 6a and 6b above). The control may depend on the type of end effector 24.
In the example provided above, the end effector 24 comprises a camera and a suction cup, which may be associated with a lower risk category. However, it would be understood that various end effectors may present different inherent risks to the safety of human operators, to the environment, or to the robot 20 or the module 10 itself.
Accordingly, in another example, the control unit 100 is configured to obtain information indicating a risk category associated with the end effector 24.
This information may be obtained, for example, from the robot 20 (which may receive the information from the end effector 24 when it is installed), or from an external device. Based on the risk category, the control unit 100 may determine a size of the sensed space, or determine an allowable threshold distance between the end effector 24 and any detected object. The threshold distance is set greater for a higher risk category than for a lower risk category. Accordingly, the control unit 100 controls the robot 20 and/or the module 10 if the detected object is within the determined threshold distance from the end effector 24.
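As a hedged illustration, the mapping from risk category to threshold distance could be implemented as a simple lookup; the category labels and distances below are assumptions of this sketch and are not values taken from Table 1.

```python
# Illustrative, assumed threshold distances (in metres) per risk category;
# higher-risk end effectors get a larger threshold.
THRESHOLD_BY_RISK = {
    "low": 0.3,      # e.g. camera, suction cup
    "medium": 0.8,   # e.g. gripper with moderate clamping force
    "high": 1.5,     # e.g. rotating blade or other cutting tool
}


def must_react(risk_category: str, distance_to_object: float) -> bool:
    """Return True if the detected object is within the threshold for this end effector."""
    threshold = THRESHOLD_BY_RISK.get(risk_category, max(THRESHOLD_BY_RISK.values()))
    return distance_to_object <= threshold


print(must_react("high", 1.2))   # True: a high-risk end effector triggers a reaction earlier
print(must_react("low", 1.2))    # False: the same distance is acceptable for a low-risk tool
```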
By way of non-limiting example, Table 1 below provides examples of risk categories that may be associated with various end effectors.
Table 1 - risk categories for various end effectors
Accordingly, in another example, the end effector 24 shown on Figure 7 may include an end effector associated with a high risk category, such as a rotating blade for making a predetermined cut in the object 32.
In this case, the control unit 100 may control the robot 20 as soon as the sensed object enters a space in a vicinity of the robot 20 and the module 10 that is greater than the space indicated by the projection S.
Referring now to Figure 8, an example of communications of the module will now be described.
As shown on Figure 8, the module 10 may, via the interface unit 110, establish communication links with the robot 20, a second module 10', an external device 40 and an XR interface 50.
The second module 10' may be substantially similar to the module 10. The communication link between the modules 10 and 10' may be a wireless communication link allowing modules that are near each other to communicate (e.g. using Bluetooth, near-field communication, etc.).
The external device 40 may be a device controlling the robot 20, either via the module 10 or via a separate communication link between the external device 40 and the robot 20. In case the external device 40 controls the robot 20 via a separate communication link, the control unit 100 may be configured to override the control from the external device 40, to minimize the effect of a collision between the robot 20 and the detected object.
The XR interface 50 comprises a visual device for showing to a user an extended reality content, which may be used to control the robot 20.
The module 10 may exchange with the external device 40 and/or the XR interface 50, at least one of: (i) information related to the end effector 24, (ii) information on an operation of the robot, (iii) data related to said robot, (iv) information identifying an object 32 to be processed by the end effector 24, (v) information indicating a risk category associated with the end effector 24, and (vi) information regarding a data format to be used for communication with the robot 20 or with the external device 40 (for example, the external device 40 and the module 10 may be communicating using a predetermined data format, and the external device 40 may indicate another data format into which data from the robot 20 should be converted before being forwarded to the external device 40).
In addition, the module 10 may provide to the external device 40 data obtained from the robot 20 (in the format in which it is obtained or converted to another data format for the external device 40), and/or a notification triggered by the control unit 100, which may include information related to the robot.
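As a hedged sketch of such forwarding with format conversion (the "json" and "csv" formats, the dictionary keys and the converter function are assumptions, not formats mandated by the disclosure), the module 10 could translate robot data from the format agreed with the robot 20 into the format requested by the external device 40 before forwarding it:

```python
import json
from typing import Any, Dict


def convert_robot_data(robot_payload: Dict[str, Any], target_format: str) -> str:
    """Convert data received from the robot into the format requested by the external device.

    The source payload is assumed to be a simple key/value dictionary; "json" and
    "csv" are illustrative target formats only.
    """
    if target_format == "json":
        return json.dumps(robot_payload, sort_keys=True)
    if target_format == "csv":
        keys = sorted(robot_payload)
        header = ",".join(keys)
        values = ",".join(str(robot_payload[k]) for k in keys)
        return f"{header}\n{values}"
    raise ValueError(f"unsupported target format: {target_format}")


# Example: forwarding an arm status report to an external device that expects CSV.
status = {"joint_1_deg": 12.5, "joint_2_deg": -40.0, "end_effector": "suction_cup"}
print(convert_robot_data(status, "csv"))
```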
The external device 40 and/or the XR interface 50 may provide to the module 10 a set of instructions for controlling a movement of the robot, or the end effector 24.
Referring now to Figure 9, geometric elements of an outline of the module will be described.
Although the module 10 has been described as having an octagonal top surface and a square bottom surface, this is non-limiting, as other shapes may be used instead.
An outline of the module 10 in a first substantially horizontal plane P1 (i.e. a cross-section of the module 10 in plane P1) may have a first shape that is substantially polygonal having any number of sides. By way of non-limiting example, Figure 9 shows the outline having a hexagonal shape with 6 sides.
An outline of the module 10 in a second plane P2 that is substantially parallel to plane P1 (and lower than plane P1) may have a second shape that is also substantially polygonal having any number of sides. By way of non-limiting example, Figure 9 shows the outline having a dodecagonal shape with 12 sides.
Although the first shape and the second shape may have the same number of sides, it is preferable to have a second shape in the lower plane P2 that is larger (i.e. has a larger area) than the first shape in plane P1. This allows the module 10 to define a tapered portion between the two planes P1 and P2, with a narrower top. As a result, elements in the module 10 are placed lower, thus lowering the center of gravity of the module 10. This improves the stability of the module 10 against tipping over, in particular as the robotic arm coupled to the module 10 moves.
It would be understood that the planes P1 and P2 need not correspond to the top surface and the bottom surface of the module 10 (i.e. the module 10 may extend above plane P1 and/or below plane P2).
Between the two planes P1 and P2, side surfaces can be defined linking vertices of the first shape and vertices of the second shape. If the second shape has a number of sides that is a multiple of the number of sides of the first shape, the shapes of the side surfaces can be made uniform, and the overall geometric shape of the module 10 is simplified, thus facilitating the manufacture and assembly of the module 10.
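A small geometric sketch (assuming regular polygons centred on a common vertical axis, with illustrative dimensions) shows why choosing the lower polygon with a number of sides that is a multiple of the upper polygon's sides yields a uniform, repeating set of side surfaces:

```python
import math
from typing import List, Tuple


def regular_polygon(num_sides: int, circumradius: float, z: float) -> List[Tuple[float, float, float]]:
    """Vertices of a regular polygon in a horizontal plane at height z (illustrative only)."""
    return [
        (
            circumradius * math.cos(2.0 * math.pi * i / num_sides),
            circumradius * math.sin(2.0 * math.pi * i / num_sides),
            z,
        )
        for i in range(num_sides)
    ]


# Upper outline in plane P1: hexagon; lower outline in plane P2: larger dodecagon.
upper = regular_polygon(6, circumradius=0.4, z=0.8)
lower = regular_polygon(12, circumradius=0.7, z=0.0)

# Because 12 is a multiple of 6, every upper vertex sits above one lower vertex,
# so the tapered side surfaces between P1 and P2 repeat with a uniform shape.
print(len(lower) % len(upper) == 0)   # True
```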
Additionally, it would be understood that the module 10 may restrict the movement of the robot 20, as the robotic arm 22 cannot move into the space occupied by the module 10. Accordingly, a simpler overall geometric shape of the module 10 helps simplify the determination of the space that the robotic arm 22 should be prevented from moving into, to avoid a collision of the robotic arm 22 with the module 10.
Referring now to Figures 10a to 10c, examples of handles of the module 10 will now be described. For simplicity, Figures 10a to 10c only show a single handle 13, only show the handle 13 in the extended position, and do not show any recess formed in the module 10 for the handle 13; however, it would be understood that any number of handles 13 may be provided on the module 10, each handle 13 being provided with a respective recess formed in the module 10.
As explained above, it is preferable that no element of the module 10 protrudes from the outer shape of the module 10, as this helps avoid collisions between the module 10 and the robotic arm 22 and allows similar modules to be placed next to each other. Accordingly, the handles 13 of the module 10 shown on Figure 2 may be retractable, i.e. they can be moved out of the housing of the module 10 into an extended position, and moved back into the housing of the module 10 when the handles 13 are not in use. When retracted, the handles 13 do not protrude substantially from the housing of the module 10, which reduces the risk that the robot 20 collides with a handle 13. It would be understood that the module 10 would typically be positioned when the robot 20 is not moving, and the module 10 would typically not be moved by a human operator when the robot 20 is moving. In other words, the handles 13 are not in use when the robot 20 moves, and vice-versa.
As shown on Figures 10a to 10c, the handles 13 may be moved from a retracted position to an extended position in various ways.
For example, in the example shown on Figure 10a, the handle 13 is rotatable about an axis x1 that is parallel to the side of the top surface of the module 10 where the handle 13 is located. In an extended position, the handle 13 extends horizontally outward from the module 10. In a retracted position, the handle 13 moves into a corresponding recess formed on the trapezoidal side surface 19a of the module 10, so as to be flush with the trapezoidal side surface 19a.
Alternatively, as shown on Figure 10b, the handle 13 may be slidably moved along an axis y1 between the extended position and the retracted position, where a recess corresponding to the handle 13 is formed underneath and parallel to the top surface of the module 10, such that, in the retracted position, the handle 13 is flush with the top surface of the module 10.
As another example, still shown on Figure 10b, the handle 13 may be rotatable about a vertical axis z1. In an extended position, the handle 13 is located on the trapezoidal side surface 19a and a part 13a of the handle 13 contacts the side surface 19a. In the retracted position, the handle 13 moves into a recess formed at the top of the triangular side surface 19b, underneath and parallel to the top surface of the module 10, such that the part 13a of the handle 13 is flush with the triangular side surface 19b.
As shown on Figure 10c, a combination of movements may be used to move the handle 13 between the retracted and extended position.
For example, the handle 13 may be retracted into a recess formed in the module 10, parallel to the trapezoidal side surface 19a. The handle 13 may be moved into an extended position by first sliding the handle 13 upwards, along axis z2, then rotating the handle 13 sideways about axis x1.
Referring now to Figure 11, an example of a system with a module, a robot and an XR interface (which is an example of the XR interface 50 shown on Figure 8) will now be described.
As explained above, the XR interface 50 may exchange various information with the module 10. In the example shown on Figure 11, the XR interface 50 is to be used for controlling the robot 20.
Specifically, the system comprises the module 10 and the robot 20. In addition, the system comprises an XR interface 50 including a visual device 52 and input means 54. The visual device 52 is to be worn by a user 60, and shows to the user 60 an XR content, such as an augmented reality content, a mixed reality content or a virtual reality content. The input means 54 is to be held or worn by the user 60 and is used to input commands for controlling the robot 20.
In the present example, as shown on the top part, the visual device 52 displays an XR content that includes a planned trajectory 56 of the end effector 24, and a cursor 57 corresponding to the input means 54. The planned trajectory 56 and the cursor 57 are examples of virtual elements of information that the XR content superimposes onto a real environment, as an example of a reality extension. By moving the input means 54, the user 60 may point the cursor 57 to a point of the planned trajectory 56 of the end effector 24, and may move the point of the planned trajectory 56 (e.g. by effecting a click-and-drag input) to a new location, as shown by the arrow 58. As a result, the trajectory 56 of the end effector 24 may change to pass through the point newly designated by the user 60, by following the modified trajectory 56'.
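As a hedged sketch of this interaction (the waypoint-list representation of the trajectory and the drag-handling function are assumptions of the sketch), moving one point of the planned trajectory 56 could amount to replacing the waypoint closest to the cursor 57 with the newly designated location:

```python
from typing import List, Tuple

Point = Tuple[float, float, float]


def drag_waypoint(trajectory: List[Point], cursor: Point, new_location: Point) -> List[Point]:
    """Return a modified trajectory in which the waypoint nearest the cursor is moved.

    This mirrors the click-and-drag interaction of Figure 11: the user points the
    cursor at a point of the planned trajectory and drags it to a new location.
    """
    def sq_dist(p: Point, q: Point) -> float:
        return sum((a - b) ** 2 for a, b in zip(p, q))

    nearest_index = min(range(len(trajectory)), key=lambda i: sq_dist(trajectory[i], cursor))
    modified = list(trajectory)
    modified[nearest_index] = new_location
    return modified


planned = [(0.0, 0.0, 0.5), (0.2, 0.1, 0.5), (0.4, 0.1, 0.4)]
updated = drag_waypoint(planned, cursor=(0.21, 0.09, 0.5), new_location=(0.25, 0.2, 0.5))
print(updated)   # the middle waypoint has moved; the end effector will pass through it
```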
In summary, it will be appreciated from the description above that certain example embodiments perform processing operations to effect a method as shown in Figure 12 that is performed by an information processing apparatus (e.g. the control unit 100).
Referring to Figure 12, at step S82, the information processing apparatus obtains, from a plurality of sensors 12 of a module 10 that is mechanically and communicatively coupled to the robot 20, information on a space in a vicinity of the robot 20 and the module 10, the information being indicative of the presence of any object.

At step S84, the information processing apparatus controls at least one of the robot 20 and the module 10, to minimize an effect of a collision of the robot 20 and an object indicated by the information.
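The two steps of Figure 12 can be rendered as a brief illustrative sketch (the function and parameter names are assumptions, not part of the claimed method):

```python
def control_method(sensors, controller) -> None:
    """Illustrative rendering of steps S82 and S84 performed by the information processing apparatus."""
    # S82: obtain information on the space in the vicinity of the robot and the module.
    space_info = [detection for sensor in sensors for detection in sensor.read()]

    # S84: control the robot and/or the module to minimize the effect of a collision
    # with any object indicated by that information.
    if space_info:
        controller.minimize_collision_effect(space_info)
```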
Modifications and variations
Many modifications and variations can be made to the example embodiments described above.
The example aspects described here avoid limitations, specifically rooted in computer technology, relating to the control of robots having at least one robotic arm. By virtue of the example aspects described herein, the safe control of the robot can be improved without requiring a cage to be used, thus improving possible cooperation with the robot (and in particular, with a cobot) and flexibility in the setting in which the robot may be used. Also, by virtue of the foregoing capabilities of the example aspects described herein, which are rooted in computer technology, the example aspects described herein improve computers and computer processing/functionality, and also improve the field(s) of at least robot control, data processing and data storage.
In the foregoing description, example aspects are described with reference to several example embodiments. Accordingly, the specification should be regarded as illustrative, rather than restrictive. Similarly, the figures illustrated in the drawings, which highlight the functionality and advantages of the example embodiments, are presented for example purposes only. The architecture of the example embodiments is sufficiently flexible and configurable, such that it may be utilized in ways other than those shown in the accompanying figures.
Software embodiments of the examples presented herein may be provided as a computer program, or software, such as one or more programs having instructions or sequences of instructions, included or stored in an article of manufacture such as a machine-accessible or machine-readable medium, an instruction store, or computer-readable storage device, each of which can be non-transitory, in one example embodiment. The program or instructions on the non-transitory machine-accessible medium, machine-readable medium, instruction store, or computer-readable storage device, may be used to program a computer system or other electronic device. The machine- or computer-readable medium, instruction store, and storage device may include, but are not limited to, floppy diskettes, optical disks, and magneto-optical disks or other types of media/machine-readable medium/instruction store/storage device suitable for storing or transmitting electronic instructions. The techniques described herein are not limited to any particular software configuration. They may find applicability in any computing or processing environment. The terms "computer-readable", "machine-accessible medium", "machine-readable medium", "instruction store", and "computer-readable storage device" used herein shall include any medium that is capable of storing, encoding, or transmitting instructions or a sequence of instructions for execution by the machine, computer, or computer processor and that causes the machine/computer/computer processor to perform any one of the methods described herein. Furthermore, it is common in the art to speak of software, in one form or another (e.g., program, procedure, process, application, module, unit, logic, and so on), as taking an action or causing a result. Such expressions are merely a shorthand way of stating that the execution of the software by a processing system causes the processor to perform an action to produce a result.
Some embodiments may also be implemented by the preparation of application-specific integrated circuits, field-programmable gate arrays, or by interconnecting an appropriate network of conventional component circuits.
Some embodiments include a computer program product. The computer program product may be a storage medium or media, instruction store(s), or storage device(s), having instructions stored thereon or therein which can be used to control, or cause, a computer or computer processor to perform any of the procedures of the example embodiments described herein. The storage medium/instruction store/storage device may include, by example and without limitation, an optical disc, a ROM, a RAM, an EPROM, an EEPROM, a DRAM, a VRAM, a flash memory, a flash card, a magnetic card, an optical card, nano systems, a molecular memory integrated circuit, a RAID, remote data storage/archive/warehousing, and/or any other type of device suitable for storing instructions and/or data.
Stored on any one of the computer-readable medium or media, instruction store(s), or storage device(s), some implementations include software for controlling both the hardware of the system and for enabling the system or microprocessor to interact with a human user or other mechanism utilizing the results of the example embodiments described herein. Such software may include without limitation device drivers, operating systems, and user applications. Ultimately, such computer-readable media or storage device(s) further include software for performing example aspects of the invention, as described above.
Included in the programming and/or software of the system are software modules for implementing the procedures described herein. In some example embodiments herein, a module includes software, although in other example embodiments herein, a module includes hardware, or a combination of hardware and software.
While various example embodiments of the present invention have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant art(s) that various changes in form and detail can be made therein. Thus, the present invention should not be limited by any of the above described example embodiments, but should be defined only in accordance with the following claims and their equivalents.
Further, the purpose of the Abstract is to enable the Patent Office and the public generally, and especially the scientists, engineers and practitioners in the art who are not familiar with patent or legal terms or phraseology, to determine quickly from a cursory inspection the nature and essence of the technical disclosure of the application. The Abstract is not intended to be limiting as to the scope of the example embodiments presented herein in any way. It is also to be understood that any procedures recited in the claims need not be performed in the order presented.
While this specification contains many specific embodiment details, these should not be construed as limitations on the scope of any inventions or of what may be claimed, but rather as descriptions of features specific to particular embodiments described herein. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Having now described some illustrative embodiments, it is apparent that the foregoing is illustrative and not limiting, having been presented by way of example. In particular, although many of the examples presented herein involve specific combinations of apparatus or software elements, those elements may be combined in other ways to accomplish the same objectives. Acts, elements and features discussed only in connection with one embodiment are not intended to be excluded from a similar role in other embodiments.
The apparatuses described herein may be embodied in other specific forms without departing from the characteristics thereof. The foregoing embodiments are illustrative rather than limiting of the described systems and methods. Scope of the apparatuses described herein is thus indicated by the appended claims, rather than the foregoing description, and changes that come within the meaning and range of equivalence of the claims are embraced therein.

Claims

1. A module (10) for a robot (20) comprising at least one robotic arm (22) with an end effector (24), the module (10) comprising: an interface unit (110) configured to mechanically and communicatively couple with said robot (20); a plurality of sensors (12) configured to sense a space (S) in a vicinity of said robot (20) and said module (10) for detecting the presence of any object (32; 33; 34); and a control unit (100) configured to control at least one of said robot (20) and said module (10), to minimize an effect of a collision of said robot (20) and an object (33; 34) sensed by said plurality of sensors (12).
2. The module (10) according to claim 1, further comprising a notification unit (130) configured to trigger a notification, to alert an operator (60) of said module (10) of an expected collision, wherein said notification unit (130) comprises at least one of audio (14), visual (15) and haptic means to notify said operator (60).
3. The module (10) according to claim 1 or claim 2, wherein said module (10) further comprises an automated guided vehicle, AGV (17; 140).
4. The module (10) according to claim 3, wherein said control unit (100) is configured to control said automated guided vehicle, AGV (17; 140).
5. The module (10) according to any one of claims 1 to 4, wherein said interface unit (110) is further configured to communicate with a second module (10'), to exchange information related to a sensed space in the vicinity of said module (10) and said second module (10').
6. The module (10) according to any one of claims 1 to 5, wherein an outline of said module has a first shape in a first substantially horizontal plane (P1; P2), said first shape being a substantially regular polygon.
7. The module (10) according to claim 6, wherein said outline of said module has a second shape in a second plane (P2; P1) substantially parallel and distinct from said first plane, said second shape being a substantially regular polygon different from said first shape.
8. A system comprising a module (10) according to any one of claims 1 to 7, a robot (20), and an XR interface (52, 54) communicatively coupled to at least one of said module (10) and said robot (20).
9. A method for controlling a robot (20) comprising at least one robotic arm (22) with an end effector (24), said robot being mechanically and communicatively coupled to a module (10) comprising a plurality of sensors (12), the method comprising: obtaining (S82) from said plurality of sensors (12) information on a space (S) in a vicinity of said robot (20) and said module (10), said information being indicative of the presence of any object (32; 33; 34); and controlling (S84) at least one of said robot (20) and said module (10), to minimize an effect of a collision of said robot (20) and an object (33; 34) indicated by said information.
10. The method according to claim 9, comprising controlling at least one of said robot (20) and said module (10), to minimize an effect of a collision of said module (10) and an object (33; 34) indicated by said information.
11. The method according to claim 9 or claim 10, wherein said controlling at least one of said robot (20) and said module (10) is based on one or more physical parameters of at least one of said robot (20) and said module (10).
12. The method according to any one of claims 9 to 11, further comprising triggering a notification to alert an operator (60) of said module (10) of an expected collision.
13. The method according to claim 12, further comprising obtaining data related to said robot (20) and including said data in said notification.
14. The method according to any one of claims 9 to 13, further comprising controlling an automated movement of said module (10).
15. The method according to any one of claims 9 to 14, further comprising communicating with a second module (10'), to exchange information related to a sensed space in the vicinity of said module (10) and said second module (10').
16. The method according to any one of claims 9 to 15, further comprising communicating with an extended reality, XR, interface (52, 54) to exchange at least one of information for controlling said robot (20), information relating to the vicinity of said robot (20), and information relating to a status of at least one of said robot (20) and said module (10).
17. The method according to any one of claims 9 to 16, further comprising determining a risk category associated with said end effector (24), and controlling at least one of said robot (20) and said module (10) based on said risk category.
18. The method according to any one of claims 9 to 17, further comprising determining a first format to be used in communication with said robot (20) and a second format to be used in communication with an external device (40), and at least one of:
(i) converting first data received from said robot (20) from said first format into said second format and causing converted first data to be transmitted to said external device (40), and
(ii) converting second data received from said external device (40) from said second format into said first format and causing converted second data to be transmitted to said robot (20).
19. The method according to any one of claims 9 to 18, further comprising classifying said object (32; 33; 34) into one of an object (32) to be processed by said end effector (24) and an object (33; 34) with which the collision is to be minimized.
20. A computer program comprising instructions which, when executed by one or more processors (71), cause said one or more processors to carry out the method according to any one of claims 9 to 19.
PCT/EP2024/059085 2023-04-05 2024-04-03 Module for robotic arm Pending WO2024208916A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IT102023000006729A IT202300006729A1 (en) 2023-04-05 2023-04-05 Robotic arm module
IT102023000006729 2023-04-05

Publications (1)

Publication Number Publication Date
WO2024208916A1 true WO2024208916A1 (en) 2024-10-10

Family

ID=86732899

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2024/059085 Pending WO2024208916A1 (en) 2023-04-05 2024-04-03 Module for robotic arm

Country Status (2)

Country Link
IT (1) IT202300006729A1 (en)
WO (1) WO2024208916A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140350725A1 (en) * 2012-01-25 2014-11-27 Adept Technology, Inc. Autonomous mobile robot for handling job assignments in a physical environment inhabited by stationary and non-stationary obstacles
WO2014102018A1 (en) * 2012-12-28 2014-07-03 Abb Technology Ltd Method and apparatus for reduction of co-worker's injury
CN109079738A (en) * 2018-08-24 2018-12-25 北京秘塔网络科技有限公司 A kind of adaptive AGV robot and adaptive navigation method
US20210031385A1 (en) * 2019-07-31 2021-02-04 X Development Llc Mobile Robot Sensor Configuration
US20230050932A1 (en) * 2021-08-16 2023-02-16 Hexagon Technology Center Gmbh Autonomous measuring robot system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHEN T M ET AL: "MULTISENSOR BASED AUTONOMOUS MOBILE ROBOT THROUGH INTERNET CONTROL", PROCEEDINGS OF THE IECON '97 : 23RD. INTERNATIONAL CONFERENCE ON INDUSTRIAL ELECTRONICS, CONTROL, AND INSTRUMENTATION. NEW ORLEANS, NOV. 9 - 14, 1997; [PROCEEDINGS OF IEEE IECON: INTERNATIONAL CONFERENCE ON INDUSTRIAL ELECTRONICS, CONTROL, AND INSTRU, 9 November 1997 (1997-11-09), pages 1248 - 1253, XP000790704, ISBN: 978-0-7803-3933-0 *
CHOPRA A ET AL: "The Nomad 200 and the Nomad SuperScout: Reverse engineered and resurrected", COMPUTER AND ROBOT VISION, 2006. THE 3RD CANADIAN CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 7 June 2006 (2006-06-07), pages 55 - 55, XP010919366, ISBN: 978-0-7695-2542-6, DOI: 10.1109/CRV.2006.76 *

Also Published As

Publication number Publication date
IT202300006729A1 (en) 2024-10-05

Similar Documents

Publication Publication Date Title
US10378353B2 (en) Robot automated mining
EP3377722B1 (en) Automatically scanning and representing an environment with collision avoidance
JP7284953B2 (en) Robotic system with advanced scanning mechanism
US9987744B2 (en) Generating a grasp pose for grasping of an object by a grasping end effector of a robot
CN110640730B (en) Method and system for generating three-dimensional model for robot scene
US11090815B2 (en) Robot, control device, and robot system
WO2017087521A1 (en) Three-dimensional visual servoing for robot positioning
JP6697204B1 (en) Robot system control method, non-transitory computer-readable recording medium, and robot system control device
KR20220012921A (en) Robot configuration with 3D lidar
US10349035B2 (en) Automatically scanning and representing an environment having a plurality of features
CN114728417A (en) Robotic autonomous object learning triggered by teleoperators
US12421054B2 (en) Robotic multi-surface gripper assemblies and methods for operating the same
CN114728413A (en) Method and system for controlling graphical user interface of remote robot
CN112654470A (en) Robot cleaner and control method thereof
JP2013184279A (en) Information processing apparatus, and information processing method
JP2016099257A (en) Information processing apparatus and information processing method
KR20170012455A (en) Real-time determination of object metrics for trajectory planning
US20230256606A1 (en) Robot System with Object Detecting Sensors
CN106687821A (en) Device for detection of obstacles in a horizontal plane and detection method implementing such a device
JP6328796B2 (en) Manipulator control method, system, and manipulator
WO2024208916A1 (en) Module for robotic arm
US11407117B1 (en) Robot centered augmented reality system
US12097627B2 (en) Control apparatus for robotic system, control method for robotic system, computer-readable storage medium storing a computer control program, and robotic system
TW202201946A (en) Camera system and robot system for simplifying operation of unmanned aerial vehicle carrying camera device
WO2024208919A1 (en) Extended reality interface for robotic arm

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24716185

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2024716185

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2024716185

Country of ref document: EP

Effective date: 20251105
