
WO2025233972A1 - A route planning method for the operation of autonomous mobile objects and a system thereof - Google Patents

A route planning method for the operation of autonomous mobile objects and a system thereof

Info

Publication number
WO2025233972A1
Authority
WO
WIPO (PCT)
Prior art keywords
autonomous mobile
virtual
mobile object
obstacles
triangulated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IN2025/050729
Other languages
French (fr)
Inventor
Mohanasankar SIVAPRAKASAM
Akash Shanthi Murugesan
Aswathaman Govindaraju
Ragu Babu
Shyam Ayyasamy
Manojkumar LAKSHMANAN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Indian Institute of Technology Madras
Original Assignee
Indian Institute of Technology Madras
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Indian Institute of Technology Madras filed Critical Indian Institute of Technology Madras
Publication of WO2025233972A1 publication Critical patent/WO2025233972A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0108Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G08G1/0112Measuring and analyzing of parameters relative to traffic conditions based on the source of data from the vehicle, e.g. floating car data [FCD]
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666Avoiding collision or forbidden zones
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001Planning or execution of driving tasks
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125Traffic data processing
    • G08G1/0133Traffic data processing for classifying traffic situation
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G08G1/0145Measuring and analyzing of parameters relative to traffic conditions for specific applications for active traffic flow control
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/04Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2754/00Output or target parameters relating to objects
    • B60W2754/10Spatial relation or speed relative to objects
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40298Manipulator on vehicle, wheels, mobile
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40476Collision, planning for collision free path

Definitions

  • the present disclosure relates to the field of autonomous mobile objects. Particularly, the present disclosure relates to operation of autonomous mobile objects in an operating environment and a system thereof.
  • Obstacle detection and avoidance are critical for autonomous mobile objects to navigate safely and efficiently. Effective obstacle avoidance enhances safety, prevents damage, and ensures smooth operation in various applications, from industrial automation to household robotics.
  • Precision and accuracy are fundamental for autonomous mobile objects to perform tasks such as manipulation, assembly, and navigation with high effectiveness. Precision, often referred to as repeatability, is the ability of autonomous mobile objects to consistently reach the same position. Accuracy, on the other hand, is the degree to which the actual position of the autonomous mobile objects matches the desired position. Factors such as manufacturing tolerances, sensor accuracy, and environmental conditions can affect these attributes. High precision and accuracy are crucial for applications like robotic surgery, where even minor deviations can have significant consequences.
  • Therefore, it is necessary to establish an appropriate system and method that determines preferred collision-free routes with optimal computation and reduction in time and solves the problems associated with the existing systems.
  • the present disclosure discloses a method of operation of autonomous mobile objects in an operating environment.
  • the method comprises receiving information related to an autonomous mobile object and one or more obstacles in an operating environment from one or more sources.
  • the information related to the autonomous mobile object comprises a start state and a goal state of the autonomous mobile object.
  • the method comprises generating one or more virtual geometric casings around the one or more obstacles based on the information related to the one or more obstacles.
  • the method comprises generating a virtual triangulated three-dimensional (3D) enclosure around the one or more obstacles, based on one or more coordinates associated with each of the one or more virtual geometric casings.
  • the method comprises generating a plurality of routes from the start state to the goal state based on the virtual triangulated 3D enclosure, to operate the autonomous mobile object in the operating environment.
  • the method for generating the plurality of routes further comprises constructing a plurality of candidate nodes on a surface of the virtual triangulated 3D enclosure, identifying at least one entry node and at least one exit node from the plurality of candidate nodes based on the start state and the goal state, and generating the plurality of routes traversing the at least one entry node and the at least one exit node based on one or more path retrieval techniques, wherein the plurality of routes lies on or outside the virtual triangulated 3D enclosure.
  • the one or more path retrieval techniques include at least one of: a graphical technique, a plane fitting technique, and a hybrid technique.
  • the method comprises determining position and orientation of the autonomous mobile object, to operate the autonomous mobile object in the operating environment.
  • the method comprises generating an optimal route from the plurality of routes by minimizing a pre-defined cost function.
  • the present disclosure discloses a system for operation of autonomous mobile objects in an operating environment.
  • the system comprises a processor and a memory.
  • the processor is configured to receive information related to an autonomous mobile object and one or more obstacles in an operating environment from one or more sources.
  • the information related to the autonomous mobile object comprises a start state and a goal state of the autonomous mobile object.
  • the processor is configured to generate one or more virtual geometric casings around the one or more obstacles based on the information related to the one or more obstacles.
  • the processor is configured to generate a virtual triangulated three-dimensional (3D) enclosure around the one or more obstacles, based on one or more coordinates associated with each of the one or more virtual geometric casings.
  • the processor is configured to generate a plurality of routes from the start state to the goal state based on the virtual triangulated 3D enclosure, to operate the autonomous mobile object in the operating environment.
  • the processor is configured to generate the plurality of routes by, constructing a plurality of candidate nodes on a surface of the virtual triangulated 3D enclosure, identifying at least one entry node and at least one exit node from the plurality of candidate nodes based on the start state and goal state, and generating the plurality of routes traversing the at least one entry node and the at least one exit node based on one or more path retrieval techniques wherein the plurality of routes lies on or outside the virtual triangulated 3D enclosure.
  • the one or more path retrieval techniques include at least one of: a graphical technique, a plane fitting technique, a hybrid technique.
  • the processor is configured to determine a position and an orientation of the autonomous mobile object, to operate the autonomous mobile object in the operating environment.
  • the processor is configured to generate an optimal route from the plurality of routes by minimizing a pre-defined cost function.
  • the present disclosure determines optimal and collision-free routes for the autonomous mobile objects (for instance, robots) in an obstacle-prone image-guided surgical and interventional environment.
  • the present disclosure enables route generation in such a way as to not move the autonomous mobile objects very much away from the region of interest. This helps in reducing motion execution time considerably, and impact on the overall interventional or operating theatre time.
  • Figure 1 illustrates an exemplary architecture of the system for operation of autonomous mobile objects in an operating environment, in accordance with some embodiments of the present disclosure
  • Figure 2 illustrates virtual geometric casings around fixtures with a virtual triangulated 3D enclosure, in accordance with some embodiments of the present disclosure
  • Figure 3 illustrates a route generated from a start state to a goal state of an autonomous mobile object, in accordance with some embodiments of the present disclosure
  • Figure 4A illustrates a virtual graph search using virtual graphical technique on surface of a virtual triangulated 3D enclosure, in accordance with some embodiments of the present disclosure
  • Figure 4B illustrates an exemplary construction of the virtual graph on the surface of virtual triangulated 3D enclosure, in accordance with some embodiments of the present disclosure
  • Figure 4C illustrates a virtual path search using virtual plane fitting technique in a virtual triangulated 3D enclosure, in accordance with some embodiments of the present disclosure
  • Figure 5 shows an exemplary flow chart illustrating method steps for operation of autonomous mobile objects in an operating environment, in accordance with some embodiments of the present disclosure
  • Figure 6 shows an exemplary flowchart illustrating a method of route planning for operation of autonomous mobile object using virtual triangulated enclosures, in accordance with some embodiments of the present disclosure
  • Figure 7 illustrates a block diagram of an exemplary computer system for operation of autonomous mobile objects in an operating environment, in accordance with some embodiments of the present disclosure.
  • the present disclosure discloses a method and a system of operation of autonomous mobile objects in an operating environment. The method involves receiving information related to an autonomous mobile object and one or more obstacles in an operating environment from one or more sources wherein the information related to the autonomous mobile object comprises a start state and a goal state of the autonomous mobile object.
  • the method comprises generating one or more virtual geometric casings around the one or more obstacles based on the information related to the one or more obstacles. Then, the method involves generating a virtual triangulated three-dimensional (3D) enclosure around the one or more obstacles, based on one or more coordinates associated with each of the one or more virtual geometric casings. Thereafter, the method involves generating a plurality of routes 316 from the start state to the goal state based on the virtual triangulated 3D enclosure, to operate the autonomous mobile object in the operating environment.
  • FIG. 1 illustrates an exemplary architecture of the system for operation of autonomous mobile objects in an operating environment, in accordance with some embodiments of the present disclosure.
  • Autonomous mobile objects are intelligent machines that navigate and perform tasks assigned by a user independently.
  • the architecture 100 comprises a system 102, an autonomous mobile object 104, and one or more sources 106.
  • the system 102 is configured to operate the autonomous mobile object 104 in the operating environment.
  • the operating environment is where the autonomous mobile object 104 performs tasks/operation.
  • the operating environment refers to the physical and operational conditions surrounding the robot.
  • the operating environment may include, the layout of the workspace, presence of obstacles, and any dynamic elements like moving objects or people, and the like.
  • the autonomous mobile object 104 may include, but not limited to, a robot, a mobile robot with a wheeled or actuated base integrated with a dual or multi-articulated arm or a standalone articulated arm, forklifts, guided vehicles, a surgical robot, industrial robot, military robot, humanoid robot, flying robot, service robot, aquatic robot and the like.
  • the system 102 comprises a processor 108, a memory 110, and an I/O interface 112.
  • the memory 110 may be communicatively coupled to the processor 108.
  • the memory 110 stores instructions executable by the processor 108.
  • the processor 108 may comprise at least one data processor for executing program components for executing user or system-generated requests.
  • the processor 108 may be configured to receive information from one or more sources 106.
  • the autonomous mobile object 104 may communicate with the system 102 via a communication network (not shown in Figure 1).
  • the communication network may be at least one of wired communication network and a wireless communication network.
  • the system 102 may be integrated with the autonomous mobile object 104.
  • Figure 2 illustrates virtual geometric casings around one or more obstacles 202 with virtual triangulated 3D enclosure, in accordance with some embodiments of the present disclosure.
  • the processor 108 is configured to receive information related to the autonomous mobile object 104 and one or more obstacles 202 in the operating environment, from one or more sources 106.
  • the one or more obstacles 202 are objects within the environment that the autonomous mobile object 104 must navigate around to perform tasks efficiently.
  • the information may include, but not limited to, operation plan, pose and movement of components of the autonomous mobile object 104, configuration of components of the autonomous mobile object 104, kinematics data and other data that is required for movement, interaction and finding the optimum way to move the component of the autonomous mobile object 104 in the operating environment as per the requirements of user.
  • the processor 108 may store the information received from the one or more sources 106 in the memory 110. While the present description defines behavior for a single virtual triangulated enclosure, it should be noted that multiple virtual triangulated enclosures may be instantiated in practice if the plurality of obstacles 202 are separated by certain distance, or depending on properties of plurality of obstacles 202 such as, but not limited to, dimensions of each of the one or more obstacles 202, shape of each of the one or more obstacles 202, size of each of the one or more obstacles 202, and the like.
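Purely as an illustration of how several enclosures might be instantiated for well-separated obstacles, the following sketch groups obstacle casings by a distance threshold; the single-linkage rule, the threshold value, and the function names are assumptions and are not prescribed by the present disclosure.

```python
# Hypothetical sketch: group obstacle casings into clusters so that each
# cluster receives its own virtual triangulated enclosure. The distance
# threshold and the single-linkage rule are illustrative assumptions.
import numpy as np

def group_obstacles(centers, separation=0.5):
    """Single-linkage grouping of obstacle casing centers (N x 3 array)."""
    centers = np.asarray(centers, dtype=float)
    n = len(centers)
    labels = list(range(n))          # each obstacle starts in its own group

    def find(i):
        while labels[i] != i:
            labels[i] = labels[labels[i]]
            i = labels[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(centers[i] - centers[j]) <= separation:
                labels[find(i)] = find(j)   # merge the two groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Example: two nearby casings share an enclosure, a distant one gets its own.
print(group_obstacles([[0, 0, 0], [0.3, 0, 0], [5, 5, 0]], separation=0.5))
```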
  • the processor 108 is configured to generate one or more virtual geometric casings 204 around the one or more obstacles 202 based on the information related to the one or more obstacles 202.
  • the one or more obstacles 202 needs to be avoided to prevent collisions that could damage both the autonomous mobile object 104 and the objects, ensuring operational efficiency of the autonomous mobile object 104.
  • the processor 108 may construct one or more virtual geometric casings 204 around the one or more obstacles 202 to avoid collision with the one or more obstacles 202, as shown in Figure 2.
  • the one or more obstacles 202 may be anatomical fixtures in the operating environment.
  • the one or more obstacles 202 may be furniture in the context of cleaning robot.
  • the processor 108 is configured to generate a virtual triangulated three-dimensional (3D) enclosure 206 around the one or more obstacles 202, based on one or more coordinates associated with each of the one or more virtual geometric casings 204.
  • the processor 108 may generate the virtual triangulated 3D enclosure 206 around the one or more obstacles 202, as shown in Figure 2, to construct a route for the autonomous mobile object 104.
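A minimal sketch of one possible realization of the casings 204 and the enclosure 206 is given below, assuming axis-aligned bounding boxes as casings and a convex hull over the casing corners as the triangulated enclosure; the disclosure does not limit the casings or the enclosure to these particular primitives.

```python
# Hypothetical sketch: axis-aligned bounding boxes stand in for the virtual
# geometric casings 204, and a convex hull of all casing corners stands in
# for the virtual triangulated 3D enclosure 206.
import numpy as np
from scipy.spatial import ConvexHull

def casing_corners(points, margin=0.05):
    """Return the 8 corners of an inflated axis-aligned box around an obstacle point cloud."""
    points = np.asarray(points, dtype=float)
    lo, hi = points.min(axis=0) - margin, points.max(axis=0) + margin
    return np.array([[x, y, z] for x in (lo[0], hi[0])
                               for y in (lo[1], hi[1])
                               for z in (lo[2], hi[2])])

def triangulated_enclosure(obstacle_point_clouds, margin=0.05):
    """Build one triangulated enclosure (vertices, triangle indices) around all casings."""
    corners = np.vstack([casing_corners(p, margin) for p in obstacle_point_clouds])
    hull = ConvexHull(corners)
    return corners, hull.simplices      # hull.simplices: (T, 3) triangle vertex indices

# Example with two box-like obstacles sampled as point clouds.
obst_a = np.random.rand(50, 3)
obst_b = np.random.rand(50, 3) + [2.0, 0.0, 0.0]
vertices, triangles = triangulated_enclosure([obst_a, obst_b])
print(len(triangles), "triangles on the enclosure surface")
```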
  • the route is the path the autonomous mobile object 104 follows to navigate and to perform the operation in the operating environment.
  • Figure 3 illustrates the route generated for the autonomous mobile object 104, in accordance with some embodiments of the present disclosure.
  • the processor 108 is configured to receive information related to the autonomous mobile object 104 and one or more obstacles 202 in the operating environment, from one or more sources 106.
  • the information related to the autonomous mobile object 104 may comprise a start state 302 and a goal state 304 of the autonomous mobile object 104.
  • the start state 302 is where the autonomous mobile object 104 is positioned and oriented before moving to the goal state 304.
  • the goal state 304 is where the autonomous mobile object 104 has moved and is oriented to perform the operation.
  • the goal state 304 is, but not limited to, aligning the end-effector of the autonomous mobile object 104 with the goal in the operating environment.
  • the processor 108 may be configured to perform the kinematic analysis for the goal state 304 to determine the feasibility of reaching the goal state 304 without considering the one or more obstacles 202 in the operating environment.
  • the processor 108 may construct one or more virtual geometric casings 204 around the one or more obstacles 202 to avoid collision with the one or more obstacles 202, as shown in Figure 2. Further, the processor 108 may generate the virtual triangulated three-dimensional (3D) enclosure 206 around the one or more obstacles 202 to construct a route for the autonomous mobile object 104, as shown in Figure 3.
  • the processor 108 may identify a collision zone based on the operation that is performed by the autonomous mobile object 104 and one or more components of the autonomous mobile object 104.
  • the collision zone is defined as any region within the operating environment where the physical presence of two or more entities such as, but not limited to, one or more components of the autonomous mobile object itself, other autonomous mobile objects, or one or more obstacles in the operating environment, and the like, may overlap.
  • the collision zones are constraints during route planning and motion execution to ensure safe, collision-free operation.
  • the autonomous mobile object 104 may receive the information of the collision zone from the one or more sources 106 or the one or more components of the autonomous mobile object 104.
  • the one or more components may include, but not limited to, Stereoscopic Camera, Computed Tomography (CT) data, C-Arm data, mono/stereo camera, Infrared (IR) cameras, Depth Sensors, Proximity Sensors or user inputs and the like.
  • C-Arm is a medical imaging device named for its distinctive C-shaped arm that connects the X-ray source and detector. It provides real-time, high-resolution X-ray images, allowing medical professionals to view internal structures during diagnostic procedures and surgeries.
  • CT data refers to the detailed internal images of the body obtained through a Computed Tomography (CT) scan. Mono cameras capture 2D images from a single viewpoint, while stereo cameras use two lenses to create 3D images, enhancing depth perception in surgery.
  • IR cameras utilize infrared light to identify the position of IR markers aiding in precise minimally invasive procedures.
  • Depth sensors create 3D maps for real-time navigation and accurate overlaying of virtual models onto the patient's anatomy.
  • Proximity sensors detect nearby one or more obstacles 202 to prevent accidental damage to tissues, ensuring the safety and accuracy of autonomous mobile object 104.
  • the autonomous mobile object 104 receives the information related to the collision zone by utilizing the one or more components such as Stereoscopic Camera, Computed Tomography (CT) data, C-Arm data, mono/stereo camera, IR cameras, Depth Sensors, Proximity Sensors.
  • the kinematics data of the operating environment may include the position and orientation of one or more obstacles 202 and end effector of the autonomous mobile object 104.
  • the end effector is a device attached to the end of autonomous mobile object 104, designed to interact with the operating environment.
  • the end effector can be a gripper, tool, or any other device that allows the autonomous mobile object 104 to perform tasks.
  • This kinematics data may include the coordinates of the one or more obstacles 202, the start state, and the end state within the operating environment, the speed at which the end effector of the autonomous mobile object 104 moves, the changes in velocity of the end effector of the autonomous mobile object 104, and the angles and directions in which the end effector of the autonomous mobile object 104 is oriented.
  • Comprehensive kinematics data is essential for planning and executing precise movements, ensuring the autonomous mobile object 104 can accurately move from the start state to end state without colliding with the one or more obstacles 202 within the operating environment.
  • the processor 108 may generate a valid goal pose configuration and end effector pose of the autonomous mobile object 104 for particular goal state.
  • Such goal pose configuration and end effector pose is configured by considering collision zone with the one or more obstacles 202 within the operating environment.
  • a valid goal pose configuration of the autonomous mobile object 104 is a specific arrangement of components and end effector that allows the autonomous mobile object 104 to achieve a desired position and orientation within the operating environment. This configuration must ensure that the autonomous mobile object 104 can physically reach the goal state 304 without exceeding the limits of the autonomous mobile object 104, avoid any collisions with one or more obstacles 202 in the operating environment, maintain balance and stability, and position the end effector accurately to perform the intended task effectively. Ensuring these criteria are met is crucial for the autonomous mobile object 104 to operate safely and efficiently.
  • the system 102 utilizes the virtual triangulated 3D enclosure 206 to effectively find the route and can drive the autonomous mobile object 104 from the start state 302 to the goal state 304 without collision.
  • the system 102 may be configured to generate the one or more virtual geometric casings 204 around one or more obstacles having protruding components with minimal threshold.
  • the system 102 may generate the virtual triangulated 3D enclosure 206 around the one or more obstacles 202 based on one or more coordinates associated with each of the one or more virtual geometric casings 204.
  • the generated virtual triangulated 3D enclosure 206 may dynamically change with the addition or removal of each protruding fixture or obstacle.
  • the system 102 may generate a plurality of routes 316 from the start state 302 to the goal state 304 based on the virtual triangulated 3D enclosure 206 to operate the autonomous mobile object 104 in the operating environment. Based on utilizing the virtual triangulated 3D enclosure 206, collision-free routes or paths between the start state and the goal state are generated.
  • the processor 108 of the system 102 may be configured to construct a plurality of candidate nodes 310 on a surface of the virtual triangulated 3D enclosure 206. The plurality of candidate nodes 310 on the virtual triangulated 3D enclosure 206 are the nodes identified by the user as potential nodes in the operation.
  • the processor 108 of the system 102 may be configured to identify at least one entry node 306 and at least one exit node 308 from the plurality of candidate nodes 310 based on the start state 302 and goal state 304.
  • the at least one entry node 306 and the at least one exit node 308 are identified by extending a vector from the start state 302 and the goal state 304, respectively, by a magnitude (Scale-k) along the current fixture axis, until the vector intersects with the virtual triangulated 3D enclosure 206.
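The following sketch illustrates one way this ray-casting step could be realized, using a Moller-Trumbore ray-triangle test to find where a vector extended from a state first meets the enclosure surface; the helper names and the choice of intersection test are assumptions for illustration only.

```python
# Hypothetical sketch of locating an entry node 306: cast a ray from the start
# state 302 along a chosen axis and return the first intersection with a
# triangle of the enclosure surface.
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Return distance t along the ray to the triangle, or None if it misses."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:
        return None
    inv = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) * inv
    if v < 0 or u + v > 1:
        return None
    t = e2.dot(q) * inv
    return t if t > eps else None

def find_node(state, axis, vertices, triangles):
    """First intersection point of the ray (state, axis) with the enclosure surface."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    hits = []
    for tri in triangles:
        t = ray_triangle(np.asarray(state, float), axis, *vertices[tri])
        if t is not None:
            hits.append(t)
    return None if not hits else np.asarray(state, float) + min(hits) * axis
```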
  • the system 102 may generate the plurality of routes 316 traversing the at least one entry node 306 and the at least one exit node 308.
  • the processor 108 may generate a plurality of routes 316 traversing the at least one entry node 306 and the at least one exit node 308 based on one or more path retrieval techniques, wherein the plurality of routes 316 lies on or outside the virtual triangulated 3D enclosure 206.
  • the plurality of routes 316 lies on or outside the virtual triangulated enclosure because the generated plurality of routes 316 are the shortest routes that ensure collision avoidance with the plurality of obstacles 202.
  • the system 102 utilizes the virtual triangulated 3D enclosure 206 to generate the collision-free plurality of routes 316 between the start state 302 and the goal state 304.
  • the four states can be derived based on the Cartesian positions of the start state and the goal state relative to the virtual triangulated 3D enclosure 206, as illustrated in Table 1.
  • the complete route between the start state 302 and the goal state 304 may comprise a sub-route inside the virtual triangulated 3D enclosure 206, on the surface of the triangulated 3D enclosure 206, and outside the triangulated 3D enclosure 206.
  • the system 102 may determine the specific points where the route enters and exits the virtual triangulated 3D enclosure 206. Said system 102 may calculate a route that traverses between the entry and exit nodes on the surface of the virtual triangulated 3D enclosure 206.
  • the route is constructed by the system 102 using the path retrieval techniques.
  • the one or more path retrieval techniques may include at least one of: a graphical technique, a plane fitting technique, a hybrid technique.
  • The hybrid technique is a combination of the graphical and plane fitting techniques.
  • FIG. 4A illustrates a virtual graph path search using virtual graphical technique on virtual triangulated 3D enclosure, in accordance with some embodiments of the present disclosure.
  • the system 102 may find the optimal route by using the graphical technique.
  • a graph 312 is constructed around the virtual triangulated 3D enclosure 206.
  • the graph 312 is selectively constructed on the preferred regions, preferably on the uppermost sections of the virtual triangulated 3D enclosure 206 to reduce computation time and to avoid exploring unwanted parts of the virtual triangulated 3D enclosure 206.
  • the virtual triangulated 3D enclosure 206 may comprise multiple 2D simplices or triangles. Each triangle's edges are divided into 'm' segments to construct the graph, as shown in Figure 4B. Each segment acts as a graph edge, linking the nodes together.
  • the nodes along each edge are interconnected with the nodes from the other edges as depicted in Figure 4B.
  • the system 102 may execute the graph search algorithm on the graph to determine the shortest possible route within the graph 312 connecting the nodes nearest to both entry and exit nodes.
  • the above formulas may identify the number of nodes, number of edges, minimum number of nodes linked to a single node, and maximum number of nodes linked to a single node, for the single triangle and number of triangles (T) based on the number of segments (m).
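As a hedged illustration of the graphical technique (the disclosure's exact formulas and node-connection pattern are not reproduced here), the sketch below subdivides each triangle edge into m segments, interconnects the resulting nodes within each triangle, and runs a weighted shortest-path search between the nodes nearest the entry and exit nodes.

```python
# Hypothetical sketch of the graphical technique on the enclosure surface.
# Connecting all subdivision nodes within each triangle is an assumption.
import numpy as np
import networkx as nx

def build_surface_graph(vertices, triangles, m=4):
    g = nx.Graph()
    for tri in triangles:
        a, b, c = (vertices[i] for i in tri)
        nodes = []
        for p, q in ((a, b), (b, c), (c, a)):            # subdivide each edge into m segments
            for k in range(m + 1):
                nodes.append(tuple(np.round(p + (q - p) * k / m, 6)))
        for i, u in enumerate(nodes):                     # interconnect nodes of this triangle
            for v in nodes[i + 1:]:
                if u != v:
                    g.add_edge(u, v, weight=float(np.linalg.norm(np.subtract(u, v))))
    return g

def surface_route(graph, entry, exit_):
    """Shortest path between the graph nodes nearest to the entry and exit nodes."""
    nearest = lambda p: min(graph.nodes, key=lambda n: np.linalg.norm(np.subtract(n, p)))
    return nx.shortest_path(graph, nearest(entry), nearest(exit_), weight="weight")
```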
  • the plane fitting technique is used to construct the shortest possible route.
  • Figure 4C illustrates virtual path search using virtual plane fitting technique in virtual triangulated 3D enclosure 206, in accordance with some embodiments of the present disclosure.
  • the system 102 may find the optimum route using the virtual plane fitting technique.
  • the at least one entry node 306 and at least one exit node 308 are determined as shown in Figure 4C by the system 102.
  • when the start state 302 and the goal state 304 are inside the enclosure, as disclosed in case 1 of Table 1, the system 102 may identify a corresponding entry node and exit node on the triangulated 3D enclosure's surface.
  • said entry node and exit node are identified by extending a vector from the start state or goal state by a magnitude (Scale-k) along the current fixture/end effector axis, until the vector intersects with the virtual triangulated 3D enclosure 206.
  • at least one entry node 306 and at least one exit node 308 are identified. Based on the selected axis, there may be multiple entry and exit nodes.
  • the system 102 may generate the route connecting the at least one entry node 306 and at least one exit node 308.
  • the route is generated using a unique plane 314 with a plane normal vector (PNormal).
  • Said plane 314 is constructed using the at least one entry node 306 and the at least one exit node 308, such that the line segment between the entry node and the exit node lies on the plane 314.
  • Said plane 314 cuts the virtual triangulated 3D enclosure’s 206 surface, resulting in intersection points (interpoints) which form the basis for route construction.
  • the plane’s normal vector (PNormal) remains unconstrained.
  • by rotating the plane's normal vector (PNormal), distinct planes 314 are generated, which produce different routes between the at least one entry node 306 and the at least one exit node 308.
  • the system 102 determines the shortest possible route nearest to at least one entry node 306 and at least one exit node 308.
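The plane fitting step might be sketched as follows: every candidate plane contains the entry-exit segment, each plane is intersected with the enclosure's triangle edges to obtain interpoints, and the shortest resulting polyline is retained. The sampling of plane orientations and the ordering of interpoints are illustrative assumptions.

```python
# Hypothetical sketch of the plane fitting technique on the enclosure surface.
import numpy as np

def plane_interpoints(entry, normal, vertices, triangles, eps=1e-9):
    """Points where triangle edges cross the plane through `entry` with normal `normal`."""
    pts = []
    for tri in triangles:
        for i in range(3):
            p, q = vertices[tri[i]], vertices[tri[(i + 1) % 3]]
            dp, dq = np.dot(p - entry, normal), np.dot(q - entry, normal)
            if dp * dq < -eps:                      # edge endpoints lie on opposite sides
                t = dp / (dp - dq)
                pts.append(p + t * (q - p))
    return pts

def plane_fitting_route(entry, exit_, vertices, triangles, samples=18):
    entry, exit_ = np.asarray(entry, float), np.asarray(exit_, float)
    seg = exit_ - entry
    seg /= np.linalg.norm(seg)
    ref = np.array([1.0, 0.0, 0.0]) if abs(seg[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(seg, ref)
    u /= np.linalg.norm(u)
    v = np.cross(seg, u)
    best, best_len = None, np.inf
    for ang in np.linspace(0, np.pi, samples, endpoint=False):
        normal = np.cos(ang) * u + np.sin(ang) * v   # normal stays perpendicular to the segment
        pts = plane_interpoints(entry, normal, vertices, triangles)
        pts = sorted(pts, key=lambda p: np.dot(p - entry, seg))   # order by progress toward exit
        route = [entry, *pts, exit_]
        length = sum(np.linalg.norm(b - a) for a, b in zip(route, route[1:]))
        if length < best_len:
            best, best_len = route, length
    return best, best_len
```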
  • a hybrid technique which is combination of the graphical technique with the plane fitting technique may be used to achieve an optimal route length.
  • the system 102 may use hybrid technique to achieve a sub-optimal route length and to reduce computation cost.
  • the processor 108 is configured to determine position and orientation of the autonomous mobile object 104, to operate the autonomous mobile object 104 in the operating environment.
  • the processor 108 may perform a trajectory profiling to construct the final trajectory of the autonomous mobile object 104.
  • Trajectory profiling for the autonomous mobile object 104 involves planning a route that specifies the position, velocity, and acceleration of the autonomous mobile object 104 over time to ensure smooth and efficient movement. This helps the autonomous mobile object 104 navigate through the operating environment while avoiding one or more obstacles 202 and optimizing performance.
  • the trajectory is generated such that the trajectory consistently lies on or outside the virtual triangulated 3D enclosure 206.
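One common way to realize such trajectory profiling is a trapezoidal velocity profile along the selected route, as in the sketch below; the velocity and acceleration limits shown are placeholders, not values taken from the disclosure.

```python
# Hypothetical sketch of trajectory profiling: time-parameterize the selected
# route with a trapezoidal (or triangular) velocity profile.
import numpy as np

def trapezoidal_profile(length, v_max=0.25, a_max=0.5, dt=0.01):
    """Return (times, distances) covering `length` metres with a trapezoidal speed profile."""
    t_acc = v_max / a_max
    d_acc = 0.5 * a_max * t_acc ** 2
    if 2 * d_acc > length:                      # triangular profile: never reaches v_max
        t_acc = np.sqrt(length / a_max)
        v_max = a_max * t_acc
        d_acc = length / 2
    t_cruise = (length - 2 * d_acc) / v_max
    t_total = 2 * t_acc + t_cruise
    times = np.arange(0, t_total + dt, dt)
    dist = np.empty_like(times)
    for i, t in enumerate(times):
        if t < t_acc:                           # acceleration phase
            dist[i] = 0.5 * a_max * t ** 2
        elif t < t_acc + t_cruise:              # cruise phase
            dist[i] = d_acc + v_max * (t - t_acc)
        else:                                   # deceleration phase
            td = t_total - t
            dist[i] = length - 0.5 * a_max * td ** 2
    return times, np.clip(dist, 0, length)
```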
  • the processor 108 may be configured to generate an optimal route from the plurality of routes 316 by minimizing a pre-defined cost function.
  • the processor 108 may generate the plurality of routes 316 using the path retrieval technique by considering kinematic feasibility and collision-free reachability. In an embodiment, the processor 108 may generate the plurality of routes 316 by identifying the most feasible and least feasible routes. In an embodiment, the ranking corresponding to each route may be computed based on factors associated with the kinematics and collision zone in the route. In some embodiments, the ranking corresponding to each route may vary dynamically based on the information received by the processor 108 in real-time.
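A possible form of the pre-defined cost function is sketched below, combining route length with an obstacle-clearance penalty; the specific terms and weights are assumptions, since the disclosure leaves the cost function open.

```python
# Hypothetical sketch of ranking the plurality of routes 316 with a
# pre-defined cost function (length plus a clearance penalty).
import numpy as np

def route_cost(route, obstacle_centers, w_length=1.0, w_clearance=0.5):
    route = np.asarray(route, float)
    length = np.sum(np.linalg.norm(np.diff(route, axis=0), axis=1))
    # smallest distance from any waypoint to its nearest obstacle center
    dists = np.linalg.norm(route[:, None, :] - np.asarray(obstacle_centers)[None, :, :], axis=2)
    clearance = dists.min()
    return w_length * length + w_clearance / max(clearance, 1e-6)

def rank_routes(routes, obstacle_centers):
    """Return routes sorted from lowest to highest cost (best first)."""
    return sorted(routes, key=lambda r: route_cost(r, obstacle_centers))
```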
  • the processor 108 may customize the ranking of routes based on the requirement of the user. Further, in some embodiments, the processor 108 may provide a list of the preferred routes of the autonomous mobile object 104 and the corresponding ranking of the preferred routes to the user. In another embodiment, the system 102 may allow the user to make decisions regarding selection of the route of the autonomous mobile object 104 based on the route ranking, the collision zone and the one or more suggestions.
  • Figure 5 shows an exemplary flowchart illustrating method steps for operation of autonomous mobile objects in an operating environment, in accordance with some embodiments of the present disclosure.
  • the method 500 may comprise one or more steps.
  • the method 500 may be described in the general context of computer executable instructions.
  • computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions or implement particular abstract data types.
  • the system 102 is configured to receive information related to an autonomous mobile object 104 and one or more obstacles 202 in an operating environment, from one or more sources 106.
  • the system 102 may receive the information related to an autonomous mobile object 104 and one or more obstacles 202 in an operating environment, from one or more sources 106.
  • the information related to the autonomous mobile object 104 comprises the start state 302 and the goal state 304 of the autonomous mobile object 104.
  • the system 102 is configured to generate one or more virtual geometric casings 204 around the one or more obstacles 202 based on the information related to the one or more obstacles 202.
  • the system 102 may generate one or more virtual geometric casings 204 around the one or more obstacles 202 based on the information related to the one or more obstacles 202.
  • the system 102 is configured to generate the virtual triangulated three- dimensional (3D) enclosure 206 around the one or more obstacles 202 based on one or more coordinates associated with each of the one or more virtual geometric casings 204.
  • the system 102 may generate the virtual triangulated three-dimensional (3D) enclosure 206 around the one or more obstacles 202 based on one or more coordinates associated with each of the one or more virtual geometric casings 204.
  • the system 102 is configured to generate the plurality of routes 316 from the start state 302 to the goal state 304 based on the virtual triangulated 3D enclosure 206.
  • the system 102 may generate the plurality of routes 316 from the start state 302 to the goal state 304 based on the virtual triangulated 3D enclosure 206 to operate the autonomous mobile object 104 in the operating environment.
  • Figure 6 shows an exemplary flowchart illustrating a method of operation of autonomous mobile object 104 using virtual triangulated enclosures in accordance with some embodiments of the present disclosure.
  • the system 102 may receive an operation plan by choosing the protruding target tip and corresponding trajectory, may calculate end-effector goal position of the autonomous mobile object 104, may obtain current pose of autonomous mobile object 104, and obtain protruded pose of the successfully placed fixtures.
  • the system 102 may receive the plan and related data from the one or more sources 106 associated with the system 102 or from the user.
  • the system 102 may perform initial kinematic analysis for the computed goal state to determine the feasibility of reaching the goal without considering the obstacles in the region of interest.
  • system 102 may receive the data from the one or more sources 106 related to the collisions zone.
  • the system 102 may generate a valid goal pose configuration for the given goal state 304 and an end effector pose of the autonomous mobile object 104, which are free of any collisions.
  • the system 102 may construct a geometric casing around the one or more obstacles 202 and may construct a virtual triangulated 3D enclosure 206 based on casing information as disclosed at block 610A, 610B and Figure 6. This serves as a primary modality to retrieve autonomous mobile object 104 tool-paths efficiently.
  • system 102 may construct the route of the autonomous mobile object 104 based on the path retrieval techniques.
  • Said path retrieval techniques include a graphical technique, a plane fitting technique, or a hybrid technique, as disclosed in Figures 4A to 4C. It is to be construed that various techniques or combinations of techniques may be employed for path retrieval, and path retrieval is not solely limited to the mentioned techniques.
  • system 102 may construct the nodes on virtual triangulated 3D enclosure 206.
  • system 102 may construct a graph around the virtual triangulated 3D enclosure 206 by using graphical technique.
  • system 102 may generate the virtual plane 314 intersecting the generated nodes, which may indicate the at least one entry node 306 and the at least one exit node 308 in the triangulated 3D enclosure.
  • system 102 may rotate the plane 314 about the line segment between the at least one entry node 306 and the at least one exit node 308 to generate the plurality of routes 316, and may check the plurality of routes 316 for kinematic feasibility and collision-free reachability at blocks 612E and 612F.
  • system 102 performs a trajectory profiling to construct the final trajectory of the autonomous mobile object 104. The trajectory is generated such that it consistently lies on or outside the virtual triangulated 3D enclosure 206.
  • once the initial route is generated using the aforementioned methods, any detection of unmodeled one or more obstacles that may intersect with the generated route is based on real-time data of the operating environment.
  • another system is configured to dynamically reroute the route, subsequently rejoining the initial route, to ensure collision-free navigation. This capability is facilitated by the plurality of alternative routes available within the virtual triangulated 3D enclosure 206.
  • the system 102 is configured to dynamically reroute the route to ensure collision-free navigation. This capability is facilitated by the plurality of alternative routes available within the virtual triangulated 3D enclosure 206.
  • the autonomous mobile object 104 is provided with the precomputed and collision-free plurality of routes 316.
  • the user selects the most optimal route based on predefined criteria, which may include, but are not limited to, minimal distance, traversal time, obstacle avoidance, and the like. This selected route serves as the primary trajectory for the operation. If one or more unexpected, unmodeled obstacles are detected while traversing the current route, the autonomous mobile object 104 engages a path-switching mechanism. This mechanism involves real-time detection of the issue, safe halting at the current position, and transition to an alternative route selected from the set of precomputed plurality of routes 316.
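The path-switching mechanism could be sketched as follows, where a sensing callback stands in for real-time detection of unmodeled obstacles; the callback, the function names, and the first-feasible-route policy are illustrative assumptions rather than the disclosed implementation.

```python
# Hypothetical sketch of the path-switching mechanism over precomputed routes.
def execute_with_switching(routes, move_to, segment_blocked):
    """routes: list of waypoint lists, best-ranked first.
    move_to(waypoint): commands the autonomous mobile object to a waypoint.
    segment_blocked(a, b): True if real-time sensing reports an obstacle between a and b."""
    for route in routes:
        blocked = False
        for a, b in zip(route, route[1:]):
            if segment_blocked(a, b):        # real-time detection of an unmodeled obstacle
                blocked = True               # safe halt at the current waypoint
                break
            move_to(b)
        if not blocked:
            return True                      # goal state reached on this route
    return False                             # no precomputed route remained feasible
```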
  • multiple autonomous mobile objects can leverage the virtual triangulated 3D enclosure to compute individual trajectories for each autonomous mobile object, ensuring that every planned route remains collision-free with respect to the other autonomous mobile objects.
  • each autonomous mobile object of the multiple autonomous mobile objects can independently plan its route while accounting for the positions and predicted movements of other mobile objects within the environment, thereby ensuring mutual collision avoidance and coordinated route planning.
  • multiple autonomous mobile objects are deployed to execute an operation by distributing the overall workload of the operation into smaller sub-tasks, each subtask assigned to an individual autonomous mobile object.
  • the system 102 ensures that subtasks are spatially or temporally distinct to prevent overlap and avoid task interference among the multiple autonomous mobile objects.
  • individual autonomous mobile object may be equipped with a system for route planning capable of generating collision-free trajectories that account not only for one or more obstacles in the operating environment but also for the predicted trajectories of peer autonomous mobile objects.
  • the system 102 may utilize centralized or decentralized multi-agent route planning techniques to avoid collisions among the multiple autonomous mobile objects.
  • the multiple autonomous mobile objects are configured to communicate their position, status, and intention with one another or with a central coordination unit. This communication enables synchronization of execution of the operation and coordination in shared workspaces, ensuring that dependencies between sub-tasks are respected and managed dynamically.
  • a priority-based or reservation-based conflict resolution strategy is implemented to handle dynamic interactions between multiple autonomous mobile objects.
  • the system 102 may assign temporary priorities or time slots to ensure safe and efficient traversal through shared areas.
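A reservation-based strategy of this kind might be sketched as below, where each autonomous mobile object requests a time slot for a shared area and is granted it only if no overlapping reservation exists; the slot representation and the first-come-first-served policy are assumptions for illustration.

```python
# Hypothetical sketch of a reservation-based conflict resolution strategy.
class ReservationTable:
    def __init__(self):
        self._slots = {}                     # area id -> list of (start, end, agent)

    def request(self, area, start, end, agent):
        """Grant the slot if it does not overlap an existing reservation for this area."""
        for s, e, _ in self._slots.get(area, []):
            if start < e and s < end:        # time intervals overlap
                return False
        self._slots.setdefault(area, []).append((start, end, agent))
        return True

# Example: robot A reserves the shared corridor first, robot B must wait.
table = ReservationTable()
print(table.request("corridor", 0.0, 5.0, "robot_A"))   # True
print(table.request("corridor", 3.0, 8.0, "robot_B"))   # False, overlapping slot
print(table.request("corridor", 5.0, 8.0, "robot_B"))   # True
```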
  • the multiple autonomous mobile objects are programmed to operate with an understanding of the global operation plan, allowing them to support the overall objective without contradicting or disrupting the efforts of other autonomous mobile objects. This includes avoiding actions that would interfere with the sub-task execution of others and, where applicable, assisting or compensating for delays or failures within the team.
  • the system supports adaptive re-planning, allowing multiple autonomous mobile objects to respond to unforeseen changes in the operation environment or flow of the operation.
  • autonomous mobile object may adjust its plan in real time, coordinating with other autonomous mobile objects to maintain continuity of the operation.
  • FIG. 7 illustrates a block diagram of an exemplary computer system 700 for implementing embodiments consistent with the present disclosure.
  • the computer system 700 may be used to implement the system 102 of the present disclosure.
  • the computer system 700 may be used for operation of autonomous mobile objects in an operating environment.
  • the computer system 700 may comprise a Central Processing Unit 702 (also referred as “CPU” or “processor”).
  • the processor 702 may comprise at least one data processor.
  • the processor 702 may include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc.
  • the processor 702 may be disposed in communication with one or more input/output (I/O) devices (not shown) via I/O interface 701.
  • the I/O interface 701 may employ communication protocols/methods such as, without limitation, audio, analog, digital, monoaural, RCA, stereo, IEEE (Institute of Electrical and Electronics Engineers)-1394, serial bus, universal serial bus (USB), infrared, PS/2, BNC, coaxial, component, composite, digital visual interface (DVI), high-definition multimedia interface (HDMI), Radio Frequency (RF) antennas, S-Video, VGA, IEEE 802.11a/b/g/n/x, Bluetooth, cellular (e.g., code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMax, or the like), etc.
  • the computer system 700 may communicate with one or more I/O devices.
  • the input device 710 may be an antenna, keyboard, mouse, joystick, (infrared) remote control, camera, card reader, fax machine, dongle, biometric reader, microphone, touch screen, touchpad, trackball, stylus, scanner, storage device, transceiver, video device/source, etc.
  • the output device 711 may be a printer, fax machine, video display (e.g., cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), plasma, Plasma display panel (PDP), Organic light-emitting diode display (OLED) or the like), audio speaker, etc.
  • the computer system 700 may be connected to one or more sources 106 through a communication network 709.
  • the processor 702 may be disposed in communication with the communication network 709 via a network interface 703.
  • the network interface 703 may communicate with the communication network 709.
  • the network interface 703 may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc.
  • the communication network 709 may include, without limitation, a direct interconnection, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, etc.
  • connection protocols include, but are not limited to, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc.
  • the communication network 709 includes, but is not limited to, a direct interconnection, an e-commerce network, a peer to peer (P2P) network, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, Wi-Fi, and such.
  • the first network and the second network may either be a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/internet Protocol (TCP/IP), Wireless Application Protocol (WAP), etc., to communicate with each other.
  • the first network and the second network may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, etc.
  • the processor 702 may be disposed in communication with a memory 705 (e.g., RAM, ROM, etc. not shown in Figure 7) via a storage interface 704.
  • the storage interface 704 may connect to memory 705 including, without limitation, memory drives, removable disc drives, etc., employing connection protocols such as serial advanced technology attachment (SATA), Integrated Drive Electronics (IDE), IEEE- 1394, Universal Serial Bus (USB), fiber channel, Small Computer Systems Interface (SCSI), etc.
  • the memory drives may further include a drum, magnetic disc drive, magneto-optical drive, optical drive, Redundant Array of Independent Discs (RAID), solid-state memory devices, solid-state drives, etc.
  • the memory 705 may store a collection of program or database components, including, without limitation, user interface 706, an operating system 707, web browser 708 etc.
  • computer system 700 may store user/application data, such as, the data, variables, records, etc., as described in this disclosure.
  • databases may be implemented as fault-tolerant, relational, scalable, secure databases such as Oracle® or Sybase®.
  • the operating system 707 may facilitate resource management and operation of the computer system 700.
  • Examples of operating systems include, without limitation, APPLE MACINTOSH® OS X, UNIX®, UNIX-like system distributions (e.g., BERKELEY SOFTWARE DISTRIBUTION™ (BSD), FREEBSD™, NETBSD™, OPENBSD™, etc.), LINUX DISTRIBUTIONS™ (e.g., RED HAT™, UBUNTU™, KUBUNTU™, etc.), IBM™ OS/2, MICROSOFT™ WINDOWS™ (XP™, VISTA™/7/8/10, etc.), APPLE® IOS™, GOOGLE® ANDROID™, BLACKBERRY® OS, or the like.
  • the computer system 700 may implement the web browser 708 stored program component.
  • the web browser 708 may be a hypertext viewing application, for example MICROSOFT® INTERNET EXPLORER™, GOOGLE® CHROME™, MOZILLA® FIREFOX™, APPLE® SAFARI™, etc. Secure web browsing may be provided using Secure Hypertext Transport Protocol (HTTPS), Secure Sockets Layer (SSL), Transport Layer Security (TLS), etc.
  • Web browsers 708 may utilize facilities such as AJAX™, DHTML™, ADOBE® FLASH™, JAVASCRIPT™, JAVA™, Application Programming Interfaces (APIs), etc.
  • the computer system 700 may implement a mail server (not shown in Figure) stored program component.
  • the mail server may be an Internet mail server such as Microsoft Exchange, or the like.
  • the mail server may utilize facilities such as ASP™, ACTIVEX™, ANSI™ C++/C#, MICROSOFT® .NET™, CGI SCRIPTS™, JAVA™, JAVASCRIPT™, PERL™, PHP™, PYTHON™, WEB OBJECTS™, etc.
  • the mail server may utilize communication protocols such as Internet Message Access Protocol (IMAP), Messaging Application Programming Interface (MAPI), MICROSOFT® exchange, Post Office Protocol (POP), Simple Mail Transfer Protocol (SMTP), or the like.
  • the computer system 700 may implement a mail client stored program component.
  • the mail client (not shown in Figure) may be a mail viewing application, such as APPLE® MAIL™, MICROSOFT® ENTOURAGE™, MICROSOFT® OUTLOOK™, MOZILLA® THUNDERBIRD™, etc.
  • a computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored.
  • a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein.
  • the term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory.
  • Examples of a computer-readable storage medium include Random Access Memory (RAM), Read-Only Memory (ROM), volatile memory, non-volatile memory, hard drives, Compact Disc Read-Only Memories (CD ROMs), Digital Video Discs (DVDs), flash drives, disks, and any other known physical storage media.
  • the present disclosure determines optimal routes for the autonomous mobile object for safe navigation.
  • the present disclosure enables generation of collision-free routes for the autonomous mobile object in an obstacle-prone image-guided operational and interventional environment.
  • the present invention reduces the randomness and drastic deviations in movements of the autonomous mobile object that are caused due to the usage of other pre-existing operational environments.
  • the present disclosure provides a route for the autonomous mobile object that avoids obstacles, if any exist, without compromising on route repeatability.
  • the present disclosure provides a combination of the plane fitting and graph construction methods that aids in optimal computation and reduces planning time per plan.
  • the routes generated by the present disclosure do not generate motions that move the autonomous mobile object far away from the region of interest. This helps in reducing motion execution time considerably, and the impact on the overall interventional or operating time.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Robotics (AREA)
  • Human Computer Interaction (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The present disclosure discloses a method and a system of operation of autonomous mobile objects. The method involves receiving information related to an autonomous mobile object and one or more obstacles (202) in an operating environment from one or more sources (106). The information comprises a start state (302) and a goal state (304) of the autonomous mobile object (104). Then, the method comprises generating one or more virtual geometric casings (204) around the one or more obstacles (202) based on the information related to the one or more obstacles (202). The method involves generating a virtual triangulated three-dimensional (3D) enclosure (206) around the one or more obstacles (202), based on one or more coordinates associated with each of the one or more virtual geometric casings (204). Thereafter, the method involves generating a plurality of routes (316) from the start state (302) to the goal state (304) based on the virtual triangulated 3D enclosure (206).

Description

TITLE: “A ROUTE PLANNING METHOD FOR THE OPERATION OF AUTONOMOUS MOBILE OBJECTS AND A SYSTEM THEREOF”
TECHNICAL FIELD
[001] The present disclosure relates to the field of autonomous mobile objects. Particularly, the present disclosure relates to the operation of autonomous mobile objects in an operating environment and a system thereof.
BACKGROUND
[002] Navigating an autonomous mobile object or manipulating the autonomous mobile object within an operating environment presents a formidable challenge. Based on the interventional plan, which includes, but is not limited to, entry points, target points, axis constraints, and other relevant parameters made by the user, the autonomous mobile object has to autonomously move from place to place to align its end-effector relative to the operating environment. This facilitates the user in executing a particular operation. Miscomputed route planning for an autonomous mobile object results in additional interventional time and induces the risk of collisions with obstacles, which in turn significantly affects the efficiency of the outcomes and workflow of the operation.
[003] Obstacle detection and avoidance are critical for autonomous mobile objects to navigate safely and efficiently. Effective obstacle avoidance enhances safety, prevents damage, and ensures smooth operation in various applications, from industrial automation to household robotics. Precision and accuracy are fundamental for autonomous mobile objects to perform tasks such as manipulation, assembly, and navigation with high effectiveness. Precision, often referred to as repeatability, is the ability of autonomous mobile objects to consistently reach the same position. Accuracy, on the other hand, is the degree to which the actual position of the autonomous mobile objects matches the desired position. Factors such as manufacturing tolerances, sensor accuracy, and environmental conditions can affect these attributes. High precision and accuracy are crucial for applications like robotic surgery, where even minor deviations can have significant consequences.
[004] Therefore, it is necessary to establish an appropriate system and method that determines preferred collision-free routes with optimal computation and reduction in time, and solves the problems associated with the existing systems.
[005] The information disclosed in this background of the disclosure section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
SUMMARY
[006] The present disclosure overcomes one or more shortcomings of the prior art and provides additional advantages. Embodiments and aspects of the disclosure described in detail herein are considered a part of the claimed disclosure.
[007] In an embodiment, the present disclosure discloses a method of operation of autonomous mobile objects in an operating environment. The method comprises receiving information related to an autonomous mobile object and one or more obstacles in an operating environment from one or more sources. The information related to the autonomous mobile object comprises a start state and a goal state of the autonomous mobile object. The method comprises generating one or more virtual geometric casings around the one or more obstacles based on the information related to the one or more obstacles. The method comprises generating a virtual triangulated three-dimensional (3D) enclosure around the one or more obstacles, based on one or more coordinates associated with each of the one or more virtual geometric casings. The method comprises generating a plurality of routes from the start state to the goal state based on the virtual triangulated 3D enclosure, to operate the autonomous mobile object in the operating environment.
[008] In an embodiment, the method for generating the plurality of routes further comprises constructing a plurality of candidate nodes on a surface of the virtual triangulated 3D enclosure, identifying at least one entry node and at least one exit node from the plurality of candidate nodes based on the start state and the goal state, and generating the plurality of routes traversing the at least one entry node and the at least one exit node based on one or more path retrieval techniques, wherein the plurality of routes lies on or outside the virtual triangulated 3D enclosure.
[009] In an embodiment, the one or more path retrieval techniques include at least one of: a graphical technique, a plane fitting technique, or a hybrid technique.
[0010] In an embodiment, the method comprises determining position and orientation of the autonomous mobile object, to operate the autonomous mobile object in the operating environment.
[0011] In an embodiment, the method comprises generating an optimal route from the plurality of routes by minimizing a pre-defined cost function.
[0012] In an embodiment, the present disclosure discloses a system for operation of autonomous mobile objects in an operating environment. The system comprises a processor and a memory. The processor is configured to receive information related to an autonomous mobile object and one or more obstacles in an operating environment from one or more sources. The information related to the autonomous mobile object comprises a start state and a goal state of the autonomous mobile object. Further, the processor is configured to generate one or more virtual geometric casings around the one or more obstacles based on the information related to the one or more obstacles. Further, the processor is configured to generate a virtual triangulated three-dimensional (3D) enclosure around the one or more obstacles, based on one or more coordinates associated with each of the one or more virtual geometric casings. Further, the processor is configured to generate a plurality of routes from the start state to the goal state based on the virtual triangulated 3D enclosure, to operate the autonomous mobile object in the operating environment.
[0013] In an embodiment, the processor is configured to generate the plurality of routes by constructing a plurality of candidate nodes on a surface of the virtual triangulated 3D enclosure, identifying at least one entry node and at least one exit node from the plurality of candidate nodes based on the start state and the goal state, and generating the plurality of routes traversing the at least one entry node and the at least one exit node based on one or more path retrieval techniques, wherein the plurality of routes lies on or outside the virtual triangulated 3D enclosure.
[0014] In an embodiment, the one or more path retrieval techniques include at least one of: a graphical technique, a plane fitting technique, or a hybrid technique.
[0015] In an embodiment, the processor is configured to determine a position and an orientation of the autonomous mobile object, to operate the autonomous mobile object in the operating environment.
[0016] In an embodiment, the processor is configured to generate an optimal route from the plurality of routes by minimizing a pre-defined cost function.
[0017] The present disclosure determines optimal and collision-free routes for the autonomous mobile objects (for instance, robots) in an obstacle-prone, image-guided surgical and interventional environment. The present disclosure enables route generation in such a way that the autonomous mobile objects are not moved far from the region of interest. This helps in reducing motion execution time considerably, as well as the impact on the overall interventional or operating theatre time.
[0018] The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
[0019] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles. In the figures, the left-most digit(s) of a reference number identify the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of device or system and/or methods in accordance with embodiments of the present subject matter are now described, by way of example only, and with reference to the accompanying figures, in which:
[0020] Figure 1 illustrates an exemplary architecture of the system for operation of autonomous mobile objects in an operating environment, in accordance with some embodiments of the present disclosure;
[0021] Figure 2 illustrates virtual geometric casings around fixtures with a virtual triangulated 3D enclosure, in accordance with some embodiments of the present disclosure;
[0022] Figure 3 illustrates a route generated from a start state to a goal state of an autonomous mobile object, in accordance with some embodiments of the present disclosure;
[0023] Figure 4A illustrates a virtual graph search using virtual graphical technique on surface of a virtual triangulated 3D enclosure, in accordance with some embodiments of the present disclosure;
[0024] Figure 4B illustrates an exemplary construction of the virtual graph on the surface of virtual triangulated 3D enclosure, in accordance with some embodiments of the present disclosure;
[0025] Figure 4C illustrates a virtual path search using virtual plane fitting technique in a virtual triangulated 3D enclosure, in accordance with some embodiments of the present disclosure;
[0026] Figure 5 shows an exemplary flow chart illustrating method steps for operation of autonomous mobile objects in an operating environment, in accordance with some embodiments of the present disclosure;
[0027] Figure 6 shows an exemplary flowchart illustrating a method of route planning for operation of autonomous mobile object using virtual triangulated enclosures, in accordance with some embodiments of the present disclosure;
[0028] Figure 7 illustrates a block diagram of an exemplary computer system for operation of autonomous mobile objects in an operating environment, in accordance with some embodiments of the present disclosure.
[0029] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be represented in a computer readable medium and executed by a computer or processor, whether or not such computer or processor is explicitly shown.
DESCRIPTION OF THE DISCLOSURE
[0030] In the present document, the word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment or implementation of the present subject matter described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments.
[0031] While the disclosure is susceptible to various modifications and alternative forms, specific embodiment thereof has been shown by way of example in the drawings and will be described in detail below. It should be understood, however that it is not intended to limit the disclosure to the particular forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure.
[0032] The terms “comprises”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by “comprises... a” does not, without more constraints, preclude the existence of other elements or additional elements in the device, system or apparatus.
[0033] The terms "an embodiment", "embodiment", "embodiments", "the embodiment", "the embodiments", "one or more embodiments", "some embodiments", and "one embodiment" mean "one or more (but not all) embodiments of the invention(s)" unless expressly specified otherwise.
[0034] The terms "including", "comprising", “having” and variations thereof mean "including but not limited to", unless expressly specified otherwise.
[0040] The terms like “at least one” and “one or more” may be used interchangeably or in combination throughout the description.
[0041] The terms like “route” and “path” may be used interchangeably or in combination throughout the description.
[0042] Obstacle detection and avoidance are critical for autonomous mobile objects to navigate safely and efficiently. Autonomous mobile objects need to adjust to the presence of dynamic obstacles like moving objects or people. High precision and accuracy are crucial for applications like robotic surgery, where even minor deviations can have significant consequences. Hence, there is a need for advanced techniques to assist autonomous mobile objects by generating optimal routes and adapting to dynamic environments.
[0043] The present disclosure discloses a method and a system of operation of autonomous mobile objects in an operating environment. The method involves receiving information related to an autonomous mobile object and one or more obstacles in an operating environment from one or more sources, wherein the information related to the autonomous mobile object comprises a start state and a goal state of the autonomous mobile object. Then, the method comprises generating one or more virtual geometric casings around the one or more obstacles based on the information related to the one or more obstacles. Then, the method involves generating a virtual triangulated three-dimensional (3D) enclosure around the one or more obstacles, based on one or more coordinates associated with each of the one or more virtual geometric casings. Thereafter, the method involves generating a plurality of routes 316 from the start state to the goal state based on the virtual triangulated 3D enclosure, to operate the autonomous mobile object in the operating environment.
[0044] Figure 1 illustrates an exemplary architecture of the system for operation of autonomous mobile objects in an operating environment, in accordance with some embodiments of the present disclosure. Autonomous mobile objects are intelligent machines that independently navigate and perform tasks assigned by a user. In an embodiment, the architecture 100 comprises a system 102, an autonomous mobile object 104, and one or more sources 106. The system 102 is configured to operate the autonomous mobile object 104 in the operating environment. The operating environment is where the autonomous mobile object 104 performs tasks or operations. The operating environment refers to the physical and operational conditions surrounding the robot. As an example, but not limited to, the operating environment may include the layout of the workspace, the presence of obstacles, and any dynamic elements like moving objects or people, and the like. The autonomous mobile object 104 may include, but is not limited to, a robot, a mobile robot with a wheeled or actuated base integrated with a dual or multi-articulated arm or a standalone articulated arm, forklifts, guided vehicles, a surgical robot, an industrial robot, a military robot, a humanoid robot, a flying robot, a service robot, an aquatic robot, and the like. In an embodiment, the system 102 comprises a processor 108, a memory 110, and an I/O interface 112. In some embodiments, the memory 110 may be communicatively coupled to the processor 108. The memory 110 stores instructions executable by the processor 108. The processor 108 may comprise at least one data processor for executing program components for executing user or system-generated requests. The processor 108 may be configured to receive information from the one or more sources 106.
[0045] In an embodiment, the autonomous mobile object 104 may communicate with the system 102 via a communication network (not shown in Figure 1). The communication network may be at least one of a wired communication network and a wireless communication network. In another embodiment, the system 102 may be integrated with the autonomous mobile object 104.
[0046] Figure 2 illustrates virtual geometric casings around one or more obstacles 202 with a virtual triangulated 3D enclosure, in accordance with some embodiments of the present disclosure. In an embodiment, the processor 108 is configured to receive information related to the autonomous mobile object 104 and one or more obstacles 202 in the operating environment, from one or more sources 106. The one or more obstacles 202 are objects within the environment that the autonomous mobile object 104 must navigate around to perform tasks efficiently. The information may include, but is not limited to, the operation plan, the pose and movement of components of the autonomous mobile object 104, the configuration of components of the autonomous mobile object 104, kinematics data, and other data that is required for movement, interaction, and finding the optimum way to move the components of the autonomous mobile object 104 in the operating environment as per the requirements of the user. The processor 108 may store the information received from the one or more sources 106 in the memory 110. While the present description defines behavior for a single virtual triangulated enclosure, it should be noted that multiple virtual triangulated enclosures may be instantiated in practice if the plurality of obstacles 202 are separated by a certain distance, or depending on properties of the plurality of obstacles 202 such as, but not limited to, the dimensions of each of the one or more obstacles 202, the shape of each of the one or more obstacles 202, the size of each of the one or more obstacles 202, and the like.
[0047] In an embodiment, the processor 108 is configured to generate one or more virtual geometric casings 204 around the one or more obstacles 202 based on the information related to the one or more obstacles 202. The one or more obstacles 202 need to be avoided to prevent collisions that could damage both the autonomous mobile object 104 and the objects, ensuring operational efficiency of the autonomous mobile object 104. The processor 108 may construct one or more geometric casings 204 around the one or more obstacles 202 to avoid collisions with the one or more obstacles 202, as shown in Figure 2. In an example, the one or more obstacles 202 may be anatomical fixtures in the operating environment. In another example, the one or more obstacles 202 may be furniture in the context of a cleaning robot. These examples should not be considered as limiting.
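By way of a non-limiting illustration, the casing generation described above may be sketched as follows. The sketch assumes that each obstacle 202 is available as a set of sampled 3D surface points (for example, from segmented imaging data or a depth sensor) and uses an axis-aligned box with a configurable safety margin as the casing geometry; the function name, the margin value, and the use of NumPy are illustrative assumptions rather than requirements of the present disclosure.

```python
import numpy as np

def make_box_casing(obstacle_points, margin=0.05):
    """Return the 8 corner coordinates of an axis-aligned box casing.

    obstacle_points : (N, 3) array of points sampled on an obstacle surface.
    margin          : safety clearance added on every side (illustrative units).
    """
    lo = obstacle_points.min(axis=0) - margin
    hi = obstacle_points.max(axis=0) + margin
    # Enumerate the 8 corners of the inflated bounding box.
    return np.array([[x, y, z]
                     for x in (lo[0], hi[0])
                     for y in (lo[1], hi[1])
                     for z in (lo[2], hi[2])])

# One casing per detected obstacle point cloud (toy data for illustration).
obstacles = [np.random.rand(100, 3), np.random.rand(80, 3) + 0.5]
casings = [make_box_casing(pts) for pts in obstacles]
```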
[0048] In an embodiment, the processor 108 is configured to generate a virtual triangulated three-dimensional (3D) enclosure 206 around the one or more obstacles 202, based on one or more coordinates associated with each of the one or more virtual geometric casings 204. The processor 108 may generate the virtual triangulated three-dimensional (3D) enclosure 206 around the one or more obstacles 202 as shown in Figure 2 to construct a route for the autonomous mobile object 104. The route is the path the autonomous mobile object 104 follows to navigate and to perform the operation in the operating environment.
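One possible way to realise the virtual triangulated 3D enclosure 206 from the casing coordinates is to compute the convex hull of all casing corners, since a 3D convex hull is returned as a set of triangular facets. The following sketch assumes SciPy's ConvexHull is available; other triangulation strategies (for example, non-convex or per-cluster enclosures) would equally fall within the scope of the description.

```python
import numpy as np
from scipy.spatial import ConvexHull

def triangulated_enclosure(casings):
    """Build a triangulated enclosure from the corners of all virtual casings.

    casings : list of (8, 3) corner arrays, one per virtual geometric casing.
    Returns (vertices, triangles); each triangle is a row of 3 vertex indices.
    """
    vertices = np.vstack(casings)      # pool the coordinates of every casing
    hull = ConvexHull(vertices)        # a 3D convex hull is already triangulated
    return vertices, hull.simplices

# Two toy box casings standing in for obstacle casings.
def box(lo, hi):
    return np.array([[x, y, z] for x in (lo[0], hi[0])
                     for y in (lo[1], hi[1]) for z in (lo[2], hi[2])])

casings = [box((0, 0, 0), (0.2, 0.2, 0.3)), box((0.4, 0.1, 0), (0.6, 0.3, 0.25))]
vertices, triangles = triangulated_enclosure(casings)
print(len(triangles), "triangular facets enclose the obstacles")
```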
[0049] Figure 3 illustrates the route generated for the autonomous mobile object 104, in accordance with some embodiments of the present disclosure. Figure 3 should be read in conjunction with Figure 2. In an embodiment, the processor 108 is configured to receive information related to the autonomous mobile object 104 and one or more obstacles 202 in the operating environment, from one or more sources 106. The information related to the autonomous mobile object 104 may comprise a start state 302 and a goal state 304 of the autonomous mobile object 104. Herein, the start state 302 is where the autonomous mobile object 104 is positioned and oriented before moving to the goal state 304, while the goal state 304 is where the autonomous mobile object 104 is positioned and oriented to perform the operation. The goal state 304 may include, but is not limited to, aligning the end-effector of the autonomous mobile object 104 with the goal in the operating environment.
[0050] In an embodiment, the processor 108 may be configured to perform the kinematic analysis for the goal state 304 to determine the feasibility of reaching the goal state 304 without considering the one or more obstacles 202 in the operating environment. The processor 108 may construct one or more geometric casings 204 around the one or more obstacles 202 to avoid collisions with the one or more obstacles 202, as shown in Figure 2. Further, the processor 108 may generate a virtual triangulated three-dimensional (3D) enclosure 206 around the one or more obstacles 202 to construct a route for the autonomous mobile object 104, as shown in Figure 3.
[0051] In an embodiment, the processor 108 may identify a collision zone based on the operation that is performed by the autonomous mobile object 104 and one or more components of the autonomous mobile object 104. Herein, the collision zone is defined as any region within the operating environment where the physical presence of two or more entities such as, but not limited to, one or more components of the autonomous mobile object itself, other autonomous mobile objects, or one or more obstacles in the operating environment, and the like, may overlap. The collision zones are constraints during route planning and motion execution to ensure safe, collision-free operation. In an example operating environment, such as surgery, the autonomous mobile object 104 may receive the information of the collision zone from the one or more sources 106 or the one or more components of the autonomous mobile object 104. The one or more components may include, but are not limited to, a Stereoscopic Camera, Computed Tomography (CT) data, C-Arm data, a mono/stereo camera, Infrared (IR) cameras, Depth Sensors, Proximity Sensors, or user inputs, and the like. A C-Arm is a medical imaging device named for its distinctive C-shaped arm that connects the X-ray source and detector. It provides real-time, high-resolution X-ray images, allowing medical professionals to view internal structures during diagnostic procedures and surgeries. CT data refers to the detailed internal images of the body obtained through a Computed Tomography (CT) scan. Mono cameras capture 2D images from a single viewpoint, while stereo cameras use two lenses to create 3D images, enhancing depth perception in surgery. IR cameras utilize infrared light to identify the position of IR markers, aiding in precise minimally invasive procedures. Depth sensors create 3D maps for real-time navigation and accurate overlaying of virtual models onto the patient's anatomy. Proximity sensors detect the nearby one or more obstacles 202 to prevent accidental damage to tissues, ensuring the safety and accuracy of the autonomous mobile object 104. In this example, the autonomous mobile object 104 receives the information related to the collision zone by utilizing the one or more components such as the Stereoscopic Camera, Computed Tomography (CT) data, C-Arm data, mono/stereo camera, IR cameras, Depth Sensors, and Proximity Sensors. This example should not be considered as limiting; other scenarios or implementations may exist and fall within the scope of the present description.
[0052] In an embodiment, the kinematics data of the operating environment may include the position and orientation of the one or more obstacles 202 and the end effector of the autonomous mobile object 104. The end effector is a device attached to the end of the autonomous mobile object 104, designed to interact with the operating environment. The end effector can be a gripper, tool, or any other device that allows the autonomous mobile object 104 to perform tasks. This kinematics data may include the coordinates of the one or more obstacles 202, the start state, and the goal state within the operating environment, the speed at which the end effector of the autonomous mobile object 104 moves, the changes in velocity of the end effector of the autonomous mobile object 104, and the angles and directions in which the end effector of the autonomous mobile object 104 is oriented. Comprehensive kinematics data is essential for planning and executing precise movements, ensuring the autonomous mobile object 104 can accurately move from the start state to the goal state without colliding with the one or more obstacles 202 within the operating environment.
[0053] In an embodiment, the processor 108 may generate a valid goal pose configuration and end effector pose of the autonomous mobile object 104 for a particular goal state. Such a goal pose configuration and end effector pose are determined by considering the collision zone with respect to the one or more obstacles 202 within the operating environment. A valid goal pose configuration of the autonomous mobile object 104 is a specific arrangement of components and the end effector that allows the autonomous mobile object 104 to achieve a desired position and orientation within the operating environment. This configuration must ensure that the autonomous mobile object 104 can physically reach the goal state 304 without exceeding the limits of the autonomous mobile object 104, avoid any collisions with the one or more obstacles 202 in the operating environment, maintain balance and stability, and position the end effector accurately to perform the intended task effectively. Ensuring these criteria are met is crucial for the autonomous mobile object 104 to operate safely and efficiently.
[0054] In an embodiment, the system 102 utilizes the virtual triangulated 3D enclosure 206 to effectively find the route and can drive the autonomous mobile object 104 from the start state 302 to the goal state 304 without collision. As shown in Figure 3, the system 102 may be configured to generate the one or more virtual geometric casings 204 around the one or more obstacles 202 having protruding components with a minimal threshold. The system 102 may generate the virtual triangulated 3D enclosure 206 around the one or more obstacles 202 based on one or more coordinates associated with each of the one or more virtual geometric casings 204.
[0055] In an embodiment, the generated virtual triangulated 3D enclosure 206 may dynamically change with the addition or removal of each protruding fixture or obstacle.
[0056] In an embodiment, the system 102 may generate a plurality of routes 316 from the start state 302 to the goal state 304 based on the virtual triangulated 3D enclosure 206 to operate the autonomous mobile object 104 in the operating environment. By utilizing the virtual triangulated 3D enclosure 206, collision-free routes or paths between the start state and the goal state are generated.
[0057] In an embodiment, the processor 108 of the system 102 may be configured to construct a plurality of candidate nodes 310 on a surface of the virtual triangulated 3D enclosure 206. The plurality of candidate nodes 310 on the virtual triangulated 3D enclosure 206 are the nodes identified by the user as potential nodes in the operation.
[0058] In an embodiment, the processor 108 of the system 102 may be configured to identify at least one entry node 306 and at least one exit node 308 from the plurality of candidate nodes 310 based on the start state 302 and the goal state 304. The at least one entry node 306 and the at least one exit node 308 are identified by extending a vector from the start state 302 and the goal state 304, respectively, by a magnitude (Scale-k) along the current fixture axis, until the vector intersects with the virtual triangulated 3D enclosure 206. Upon identifying the at least one entry node 306 and the at least one exit node 308, the system 102 may generate the plurality of routes 316 traversing the at least one entry node 306 and the at least one exit node 308.
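The entry-node and exit-node search may, for instance, be implemented as a ray cast from the start state 302 or goal state 304 along the chosen axis against every triangle of the enclosure, keeping the nearest intersection. The sketch below uses the Möller–Trumbore ray–triangle test; the function names and the choice of NumPy are assumptions made for illustration.

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore test: distance t along the ray to the triangle, or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:                      # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) * inv
    if v < 0 or u + v > 1:
        return None
    t = e2.dot(q) * inv
    return t if t > eps else None

def surface_node(state, axis, vertices, triangles):
    """Extend a vector from a start/goal state along 'axis' and return the first
    intersection with the triangulated enclosure (the entry or exit node)."""
    state = np.asarray(state, dtype=float)
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    hits = []
    for tri in triangles:
        t = ray_triangle(state, axis, *vertices[tri])
        if t is not None:
            hits.append(t)
    return state + min(hits) * axis if hits else None
```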
[0059] In an embodiment, the processor 108 may generate a plurality of routes 316 traversing the at least one entry node 306 and the at least one exit node 308 based on one or more path retrieval techniques, wherein the plurality of routes 316 lies on or outside the virtual triangulated 3D enclosure 206. The plurality of routes 316 lies on or outside the virtual triangulated enclosure because the generated plurality of routes 316 are the shortest routes that ensure collision avoidance with the one or more obstacles 202.
[0060] In an embodiment, the system 102 utilizes the virtual triangulated 3D enclosure 206 to generate the collision-free plurality of routes 316 between the start state 302 and the goal state 304. The four cases can be derived based on the Cartesian positions of the start state and the goal state relative to the virtual triangulated 3D enclosure 206, as illustrated in Table 1.
[0061] From the above table, the complete route between the start state 302 and the goal state 304 may comprise a sub-route inside the virtual triangulated 3D enclosure 206, on the surface of the triangulated 3D enclosure 206, and outside the triangulated 3D enclosure 206. In all four cases, whether the start state 302 is inside the virtual triangulated 3D enclosure 206 and the goal state 304 is outside, or vice versa, or if both are either inside or outside the virtual triangulated 3D enclosure 206, the route between them necessitates crossing the surface of the virtual triangulated 3D enclosure 206.
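Classifying the start state 302 and the goal state 304 as inside or outside the enclosure, as needed to distinguish the cases of Table 1, may be sketched with the facet half-space equations that SciPy's ConvexHull already provides for a convex triangulated enclosure; the tolerance and the function name are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def is_inside(point, hull, tol=1e-9):
    """True if 'point' lies inside (or on) a convex triangulated enclosure.

    hull.equations holds one row [nx, ny, nz, d] per facet; a point is inside
    the convex enclosure when n . p + d <= 0 for every facet.
    """
    return bool(np.all(hull.equations[:, :3] @ np.asarray(point, dtype=float)
                       + hull.equations[:, 3] <= tol))

# Example: derive the Table 1 case from the start and goal classification.
hull = ConvexHull(np.random.rand(30, 3))          # stand-in enclosure
start_inside = is_inside([0.5, 0.5, 0.5], hull)
goal_inside = is_inside([2.0, 2.0, 2.0], hull)
print("start inside:", start_inside, "| goal inside:", goal_inside)
```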
[0062] The system 102 may determine the specific points where the route enters and exits the virtual triangulated 3D enclosure 206. Said system 102 may calculate a route that traverses between the entry and exit nodes on the surface of the virtual triangulated 3D enclosure 206.
[0063] After generation of the virtual triangulated 3D enclosure 206 by the system 102, the route is constructed by the system 102 using the path retrieval techniques.
[0064] In an embodiment, the one or more path retrieval techniques may include at least one of: a graphical technique, a plane fitting technique, or a hybrid technique. The hybrid technique is a combination of the graphical and plane fitting techniques.
[0065] Figure 4A illustrates a virtual graph path search using the virtual graphical technique on the virtual triangulated 3D enclosure, in accordance with some embodiments of the present disclosure. The system 102 may find the optimal route by using the graphical technique. In the graphical technique, a graph 312 is constructed around the virtual triangulated 3D enclosure 206. The graph 312 is selectively constructed on the preferred regions, preferably on the uppermost sections of the virtual triangulated 3D enclosure 206, to reduce computation time and to avoid exploring unwanted parts of the virtual triangulated 3D enclosure 206.
[0066] As shown in Figure 4A, the virtual triangulated 3D enclosure 206 may comprise multiple 2D simplices or triangles. Each triangle’s edges are divided into ‘m’ segments to construct the graph, as shown in Figure 4B. Each segment acts as a graph edge, linking the nodes together.
[0067] The nodes along each edge are interconnected with the nodes from the other edges as depicted in Figure 4B. The system 102 may execute the graph search algorithm on the graph to determine the shortest possible route within the graph 312 connecting the nodes nearest to both entry and exit nodes.
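A non-limiting sketch of the graphical technique follows: every edge of every triangle is subdivided into m segments, the subdivision points of a triangle are interconnected as weighted graph edges, and a shortest-path search is run between the graph nodes nearest to the entry and exit nodes. NetworkX and NumPy are assumed, and the connectivity pattern shown (all node pairs within a triangle) is one possible interpretation of the interconnection described above.

```python
import numpy as np
import networkx as nx

def surface_graph(vertices, triangles, m=4):
    """Graph whose nodes subdivide each triangle edge into m segments and whose
    edges connect the subdivision nodes belonging to the same triangle."""
    G = nx.Graph()
    key = lambda p: tuple(np.round(p, 6))        # hashable node id for a 3D point
    for tri in triangles:
        a, b, c = vertices[tri]
        nodes = [p + (q - p) * k / m
                 for p, q in ((a, b), (b, c), (c, a))
                 for k in range(m + 1)]          # m segments -> m + 1 points per edge
        for i in range(len(nodes)):
            for j in range(i + 1, len(nodes)):
                G.add_edge(key(nodes[i]), key(nodes[j]),
                           weight=float(np.linalg.norm(nodes[i] - nodes[j])))
    return G

def shortest_surface_route(G, entry, exit_):
    """Dijkstra search between the graph nodes nearest the entry and exit nodes."""
    nearest = lambda p: min(G.nodes, key=lambda n: np.linalg.norm(np.array(n) - p))
    path = nx.shortest_path(G, nearest(entry), nearest(exit_), weight="weight")
    return np.array(path)
```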
[0068] The comparison of graph characteristics based on the number of segments (m) is as follows:
[0069] The above formulas may identify the number of nodes, number of edges, minimum number of nodes linked to a single node, and maximum number of nodes linked to a single node, for the single triangle and number of triangles (T) based on the number of segments (m).
[0070] In another embodiment, the plane fitting technique is used to construct the shortest possible route. Figure 4C illustrates a virtual path search using the virtual plane fitting technique in the virtual triangulated 3D enclosure 206, in accordance with some embodiments of the present disclosure. The system 102 may find the optimum route by using the virtual plane fitting technique. In the virtual plane fitting technique, the at least one entry node 306 and at least one exit node 308 are determined by the system 102, as shown in Figure 4C. For example, for a particular plan in which the start state 302 and the goal state 304 are inside the enclosure, as disclosed in case 1 of Table 1, the system 102 may identify a corresponding entry node and exit node on the surface of the triangulated 3D enclosure 206. In an embodiment, said entry node and exit node are identified by extending a vector from the start state or goal state by a magnitude (Scale-k) along the current fixture/end effector axis, until the vector intersects with the virtual triangulated 3D enclosure 206. In a similar manner, for the remaining three cases of Table 1, at least one entry node 306 and at least one exit node 308 are identified. Based on the selected axis, there may be multiple entry and exit nodes.
[0071] Upon identifying the at least one entry node 306 and at least one exit node 308, the system 102 may generate the route connecting the at least one entry node 306 and at least one exit node 308. The route is generated using a unique plane 314 with a plane normal vector (PNormal). Said plane 314 is constructed using the at least one entry node 306 and at least one exit node 308, such that the line segment between the entry node and the exit node lies on the plane 314. Said plane 314 cuts the surface of the virtual triangulated 3D enclosure 206, resulting in intersection points (interpoints) which form the basis for route construction.
[0072] While the triangulated 3D enclosure 206, the start state 302, and the goal state 304 define the at least one entry node 306 and at least one exit node 308, the plane’s normal vector (PNormal) remains unconstrained. By rotation of the plane’s normal vector (PNormal), distinct planes 314 are generated, which produces different routes between the at least one entry node 306 and the at least one exit node 308. Based on the length of the different routes, the routes are classified from most feasible to least feasible. From the different routes, the system 102 determines the shortest possible route connecting the at least one entry node 306 and the at least one exit node 308.
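The plane fitting step may be sketched as follows: a candidate normal perpendicular to the entry–exit segment is generated for a given rotation angle, and the plane so defined is intersected with every triangle edge of the enclosure to collect the interpoints. Ordering the interpoints along the intersection curve and ranking the resulting routes by length are further steps not shown here; the reference directions and the angle parameter are illustrative assumptions.

```python
import numpy as np

def plane_interpoints(entry, exit_, angle, vertices, triangles):
    """Interpoints where one candidate plane cuts the enclosure surface.

    The plane contains the entry-exit segment; its normal (PNormal) is obtained
    by rotating a reference direction about that segment by 'angle' radians."""
    entry, exit_ = np.asarray(entry, float), np.asarray(exit_, float)
    d = exit_ - entry
    d /= np.linalg.norm(d)
    # Two directions perpendicular to the entry-exit segment.
    ref = np.array([0.0, 0.0, 1.0]) if abs(d[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(d, ref); u /= np.linalg.norm(u)
    v = np.cross(d, u)
    normal = np.cos(angle) * u + np.sin(angle) * v     # always perpendicular to d
    pts = []
    for tri in triangles:
        corners = vertices[tri]
        for i in range(3):                             # test each triangle edge
            p, q = corners[i], corners[(i + 1) % 3]
            dp, dq = np.dot(p - entry, normal), np.dot(q - entry, normal)
            if dp * dq < 0:                            # edge crosses the plane
                t = dp / (dp - dq)
                pts.append(p + t * (q - p))
    return np.array(pts)
```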
[0073] In another embodiment, a hybrid technique, which is a combination of the graphical technique and the plane fitting technique, may be used to balance route length and computation. The system 102 may use the hybrid technique to achieve a sub-optimal route length while reducing computation cost.
[0074] In the hybrid technique, a less dense graph is constructed using an optimum number of segments per triangle edge, and a route is retrieved from it. Subsequently, a plane 314 is fitted using the nodes of this graph path. Said plane 314 is intersected with the enclosure to produce the final sub-optimal route between the at least one entry node 306 and the at least one exit node 308. The integration of the graph search and enclosure plane intersection methods results in a sub-optimal route while minimizing computational overhead.
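For the hybrid technique, the plane fitted through the nodes of the coarse graph route may, for example, be obtained by a least-squares fit, whose normal is the singular vector associated with the smallest singular value of the centred node cloud; the fitted plane can then be intersected with the enclosure in the same way as in the plane fitting sketch above. This is an assumed realisation, not the only possible one.

```python
import numpy as np

def fit_plane(path_nodes):
    """Least-squares plane through the nodes of a coarse graph route.

    Returns (point_on_plane, unit_normal)."""
    nodes = np.asarray(path_nodes, dtype=float)
    centroid = nodes.mean(axis=0)
    _, _, vt = np.linalg.svd(nodes - centroid)
    return centroid, vt[-1]            # last right singular vector = plane normal

# Usage: nodes of a coarse route retrieved from a low-density graph.
coarse_route = np.array([[0.0, 0.0, 0.30], [0.1, 0.05, 0.35],
                         [0.3, 0.10, 0.32], [0.5, 0.20, 0.28]])
origin, normal = fit_plane(coarse_route)
```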
[0075] In an embodiment, the processor 108 is configured to determine position and orientation of the autonomous mobile object 104, to operate the autonomous mobile object 104 in the operating environment. The processor 108 may perform a trajectory profiling to construct the final trajectory of the autonomous mobile object 104. Trajectory profiling for the autonomous mobile object 104 involves planning a route that specifies the position, velocity, and acceleration of the autonomous mobile object 104 over time to ensure smooth and efficient movement. This helps the autonomous mobile object 104 navigate through the operating environment while avoiding one or more obstacles 202 and optimizing performance. The trajectory is generated such that the trajectory consistently lies on or outside the virtual triangulated 3D enclosure 206.
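Trajectory profiling may, for illustration, be realised as a trapezoidal speed profile along the selected route: the sketch below returns time stamps, arc-length positions, and speeds, from which 3D poses along the route can subsequently be interpolated. The limits v_max and a_max and the time step are assumed example values, not parameters defined by the present disclosure.

```python
import numpy as np

def trapezoidal_profile(route, v_max=0.1, a_max=0.05, dt=0.01):
    """Time-parameterise a polyline route with a trapezoidal speed profile.

    route : (N, 3) way-points of the selected route.
    Returns (t, s, v): time stamps, arc-length positions and speeds."""
    length = np.linalg.norm(np.diff(route, axis=0), axis=1).sum()
    t_acc = v_max / a_max                       # time to reach cruise speed
    d_acc = 0.5 * a_max * t_acc ** 2            # distance covered while accelerating
    if 2 * d_acc > length:                      # short route: triangular profile
        t_acc = np.sqrt(length / a_max)
        v_max = a_max * t_acc
        d_acc = length / 2
    t_total = 2 * t_acc + (length - 2 * d_acc) / v_max
    t = np.arange(0.0, t_total, dt)
    v = np.minimum.reduce([a_max * t,                 # acceleration ramp
                           np.full_like(t, v_max),    # cruise phase
                           a_max * (t_total - t)])    # deceleration ramp
    v = np.clip(v, 0.0, None)
    s = np.concatenate([[0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * dt)])
    return t, s, v
```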
[0076] In an embodiment, the processor 108 may be configured to generate an optimal route from the plurality of routes 316 by minimizing a pre-defined cost function.
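The pre-defined cost function is not fixed by the disclosure; as an assumed example, the sketch below scores each candidate route by its length plus a clearance penalty with respect to the obstacle casing centres, and selects the route with the minimum cost. The weights and the clearance term are illustrative.

```python
import numpy as np

def route_cost(route, obstacle_centres, w_len=1.0, w_clear=0.5):
    """Example cost: weighted route length plus a penalty for small clearance."""
    length = np.linalg.norm(np.diff(route, axis=0), axis=1).sum()
    clearance = min(np.linalg.norm(p - c) for p in route for c in obstacle_centres)
    return w_len * length + w_clear / max(clearance, 1e-6)

def optimal_route(routes, obstacle_centres):
    """Pick, from the plurality of routes, the one minimising the cost function."""
    return min(routes, key=lambda r: route_cost(np.asarray(r), obstacle_centres))
```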
[0077] In an embodiment, the processor 108 may generate the plurality of routes 316 using the path retrieval techniques by considering kinematic feasibility and collision-free reachability. In an embodiment, the processor 108 may generate the plurality of routes 316 by identifying the most feasible and least feasible routes. In an embodiment, the ranking corresponding to each route may be computed based on factors associated with the kinematics and the collision zone in the route. In some embodiments, the ranking corresponding to each route may vary dynamically based on the information received by the processor 108 in real-time.
[0078] In an embodiment, the processor 108 may customize the ranking of routes based on the requirements of the user. Further, in some embodiments, the processor 108 may provide a list of the preferred routes of the autonomous mobile object 104 and the corresponding ranking of the preferred routes to the user. In another embodiment, the system 102 may allow the user to make decisions regarding selection of the route of the autonomous mobile object 104 based on the route ranking, the collision zone, and the one or more suggestions.
[0079] Figure 5 illustrates a flowchart illustrating method steps for operation of autonomous mobile objects in an operating environment, in accordance with some embodiments of the present disclosure. As illustrated in Figure 5, the method 500 may comprise one or more steps. The method 500 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions or implement particular abstract data types.
[0080] The order in which the method 500 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method. Additionally, individual blocks may be deleted from the methods without departing from the scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof.
[0081] At step 502, the system 102 is configured to receive information related to an autonomous mobile object 104 and one or more obstacles 202 in an operating environment, from one or more sources 106. In an embodiment, the system 102 may receive the information related to the autonomous mobile object 104 and the one or more obstacles 202 in the operating environment from the one or more sources 106. The information related to the autonomous mobile object 104 comprises the start state 302 and the goal state 304 of the autonomous mobile object 104.
[0082] At step 504, the system 102 is configured to generate one or more virtual geometric casings 204 around the one or more obstacles 202 based on the information related to the one or more obstacles 202. In an embodiment, the system 102 may generate one or more virtual geometric casings 204 around the one or more obstacles 202 based on the information related to the one or more obstacles 202.
[0083] At step 506, the system 102 is configured to generate the virtual triangulated three-dimensional (3D) enclosure 206 around the one or more obstacles 202 based on one or more coordinates associated with each of the one or more virtual geometric casings 204. In an embodiment, the system 102 may generate the virtual triangulated three-dimensional (3D) enclosure 206 around the one or more obstacles 202 based on one or more coordinates associated with each of the one or more virtual geometric casings 204.
[0084] At step 508, the system 102 is configured to generate the plurality of routes 316 from the start state 302 to the goal state 304 based on the virtual triangulated 3D enclosure 206. In an embodiment, the system 102 may generate the plurality of routes 316 from the start state 302 to the goal state 304 based on the virtual triangulated 3D enclosure 206 to operate the autonomous mobile object 104 in the operating environment.
[0085] Figure 6 shows an exemplary flowchart illustrating a method of operation of the autonomous mobile object 104 using virtual triangulated enclosures, in accordance with some embodiments of the present disclosure. As shown in Figure 6, at block 602 the system 102 may receive an operation plan by choosing the protruding target tip and corresponding trajectory, may calculate the end-effector goal position of the autonomous mobile object 104, may obtain the current pose of the autonomous mobile object 104, and may obtain the protruded pose of the successfully placed fixtures.
[0086] In an embodiment, the system 102 may receive the plan and related data from the one or more sources 106 associated with the system 102 or from the user. At block 604, the system 102 may perform an initial kinematic analysis for the computed goal state to determine the feasibility of reaching the goal without considering the obstacles in the region of interest. At block 606, the system 102 may receive the data related to the collision zone from the one or more sources 106.
[0087] At block 608, the system 102 may generate a valid goal pose configuration for a given goal state 304 and an end effector pose of the autonomous mobile object 104, which are free of any collisions. At block 610, after successful completion of the goal pose computation, the system 102 may construct a geometric casing around the one or more obstacles 202 and may construct a virtual triangulated 3D enclosure 206 based on the casing information, as disclosed at blocks 610A and 610B and Figure 6. This serves as a primary modality to retrieve tool-paths of the autonomous mobile object 104 efficiently.
[0088] At block 612, the system 102 may construct the route of the autonomous mobile object 104 based on the path retrieval techniques. Said path retrieval techniques include a graphical technique, a plane fitting technique, or a hybrid technique, as disclosed in Figure 4A and Figure 4B. It is to be construed that various techniques or combinations of techniques may be employed for path retrieval, and path retrieval is not solely limited to the mentioned techniques.
[0089] At block 612A, the system 102 may construct the nodes on the virtual triangulated 3D enclosure 206. At block 612B, upon the node construction, the system 102 may construct a graph around the virtual triangulated 3D enclosure 206 by using the graphical technique. At block 612C, the system 102 may generate the virtual plane 314 intersecting the generated nodes, which may indicate the at least one entry node 306 and the at least one exit node 308 in the triangulated 3D enclosure 206. At block 612D, the system 102 may rotate the plane 314 about the line segment between the at least one entry node 306 and the at least one exit node 308 to generate the plurality of routes 316, and may check the plurality of routes 316 for kinematic feasibility and collision-free reachability at blocks 612E and 612F. At block 614, the system 102 performs a trajectory profiling to construct the final trajectory of the autonomous mobile object 104. The trajectory is generated such that it consistently lies on or outside the virtual triangulated 3D enclosure 206.
[0090] In an embodiment, once the initial route is generated using the aforementioned methods, one or more unmodeled obstacles that may intersect with the generated route may be detected based on real-time data of the operating environment. In such cases, another system is configured to dynamically reroute and subsequently rejoin the initial route to ensure collision-free navigation. This capability is facilitated by the plurality of alternative routes available within the virtual triangulated 3D enclosure 206.
[0091] In an embodiment, once the initial route is generated using the aforementioned methods, one or more unmodeled obstacles that may intersect with the generated route may be detected based on real-time data of the operating environment. In such cases, the system 102 is configured to dynamically reroute to ensure collision-free navigation. This capability is facilitated by the plurality of alternative routes available within the virtual triangulated 3D enclosure 206.
[0092] In an embodiment, at the start of the operation of the autonomous mobile object 104, the autonomous mobile object 104 is provided with the precomputed and collision-free plurality of routes 316. At runtime, the user selects the most optimal route based on predefined criteria that may include, but are not limited to, minimal distance, traversal time, obstacle avoidance, and the like. This selected route serves as the primary trajectory for the operation. If one or more unexpected, unmodeled obstacles are detected while traversing the current route, the autonomous mobile object 104 engages a path-switching mechanism. This mechanism involves real-time detection of the issue, safe halting at the current position, and transition to an alternative route selected from the set of precomputed plurality of routes 316.
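A minimal sketch of the path-switching mechanism follows, assuming the precomputed alternative routes and the detected obstacle position are available as arrays: the object halts, discards alternatives that pass too close to the detected obstacle, and resumes on the clear alternative starting nearest to its halt position. The clearance radius and the selection rule are assumptions for illustration.

```python
import numpy as np

def switch_route(current_route, halt_index, alternatives, obstacle_point, radius=0.05):
    """Select a precomputed alternative route after an unmodeled obstacle is detected.

    Returns the chosen alternative route, or None if no alternative is clear
    (in which case the object remains safely halted)."""
    obstacle_point = np.asarray(obstacle_point, dtype=float)
    halt = np.asarray(current_route)[halt_index]

    def is_clear(route):
        d = np.linalg.norm(np.asarray(route) - obstacle_point, axis=1)
        return bool(np.min(d) > radius)

    clear = [r for r in alternatives if is_clear(r)]
    if not clear:
        return None
    # Resume on the clear alternative whose first way-point is nearest the halt pose.
    return min(clear, key=lambda r: np.linalg.norm(np.asarray(r)[0] - halt))
```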
[0093] In an embodiment, multiple autonomous mobile objects can leverage the virtual triangulated 3D enclosure 206 to compute individual trajectories for each autonomous mobile object, ensuring that every planned route remains collision-free with respect to the other autonomous mobile objects. By leveraging the constraints imposed by the virtual triangulated 3D enclosure 206, each of the multiple autonomous mobile objects can independently plan its routes while accounting for the positions and predicted movements of the other mobile objects within the environment, thereby ensuring mutual collision avoidance and coordinated route planning.
[0094] In an embodiment, multiple autonomous mobile objects are deployed to execute an operation by distributing the overall workload of the operation into smaller sub-tasks, each sub-task assigned to an individual autonomous mobile object. The system 102 ensures that sub-tasks are spatially or temporally distinct to prevent overlap and avoid task interference among the multiple autonomous mobile objects. In an embodiment, each individual autonomous mobile object may be equipped with a system for route planning capable of generating collision-free trajectories that account not only for one or more obstacles in the operating environment but also for the predicted trajectories of peer autonomous mobile objects. The system 102 may utilize centralized or decentralized multi-agent route planning techniques to avoid collisions among the multiple autonomous mobile objects.
[0095] In an embodiment, the multiple autonomous mobile objects are configured to communicate their position, status, and intention with one another or with a central coordination unit. This communication enables synchronization of execution of the operation and coordination in shared workspaces, ensuring that dependencies between sub-tasks are respected and managed dynamically.
[0096] In an embodiment, a priority-based or reservation-based conflict resolution strategy is implemented to handle dynamic interactions between multiple autonomous mobile objects. When conflicting trajectories are detected, the system 102 may assign temporary priorities or time slots to ensure safe and efficient traversal through shared areas.
[0097] In one embodiment, the multiple autonomous mobile objects are programmed to operate with an understanding of the global operation plan, allowing them to support the overall objective without contradicting or disrupting the efforts of other autonomous mobile objects. This includes avoiding actions that would interfere with the sub-task execution of others and, where applicable, assisting or compensating for delays or failures within the team.
[0098] In a further embodiment, the system supports adaptive re-planning, allowing multiple autonomous mobile objects to respond to unforeseen changes in the operating environment or the flow of the operation. Upon detecting an obstruction or failure, an autonomous mobile object may adjust its plan in real time, coordinating with other autonomous mobile objects to maintain continuity of the operation.
COMPUTER SYSTEM
[0099] Figure 7 illustrates a block diagram of an exemplary computer system 700 for implementing embodiments consistent with the present disclosure. In an embodiment, the computer system 700 may be used to implement the system 102 of the present disclosure. Thus, the computer system 700 may be used for operation of autonomous mobile objects in an operating environment. The computer system 700 may comprise a Central Processing Unit 702 (also referred as “CPU” or “processor”). The processor 702 may comprise at least one data processor. The processor 702 may include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc.
[00100] The processor 702 may be disposed in communication with one or more input/output (I/O) devices (not shown) via I/O interface 701. The I/O interface 701 may employ communication protocols/methods such as, without limitation, audio, analog, digital, monoaural, RCA, stereo, IEEE (Institute of Electrical and Electronics Engineers)-1394, serial bus, universal serial bus (USB), infrared, PS/2, BNC, coaxial, component, composite, digital visual interface (DVI), high-definition multimedia interface (HDMI), Radio Frequency (RF) antennas, S-Video, VGA, IEEE 802.11a/b/g/n/x, Bluetooth, cellular (e.g., code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMax, or the like), etc.
[00101] Using the I/O interface 701, the computer system 700 may communicate with one or more I/O devices. For example, the input device 710 may be an antenna, keyboard, mouse, joystick, (infrared) remote control, camera, card reader, fax machine, dongle, biometric reader, microphone, touch screen, touchpad, trackball, stylus, scanner, storage device, transceiver, video device/source, etc. The output device 711 may be a printer, fax machine, video display (e.g., cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), plasma, Plasma display panel (PDP), Organic light-emitting diode display (OLED) or the like), audio speaker, etc.
[00102] The computer system 700 may be connected to one or more sources 106 through a communication network 709. The processor 702 may be disposed in communication with the communication network 709 via a network interface 703. The network interface 703 may communicate with the communication network 709. The network interface 703 may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc. The communication network 709 may include, without limitation, a direct interconnection, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, etc.
[00103] The communication network 709 includes, but is not limited to, a direct interconnection, an e-commerce network, a peer to peer (P2P) network, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, Wi-Fi, and such. The first network and the second network may either be a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/internet Protocol (TCP/IP), Wireless Application Protocol (WAP), etc., to communicate with each other. Further, the first network and the second network may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, etc.
[00104] In some embodiments, the processor 702 may be disposed in communication with a memory 705 (e.g., RAM, ROM, etc. not shown in Figure 7) via a storage interface 704. The storage interface 704 may connect to memory 705 including, without limitation, memory drives, removable disc drives, etc., employing connection protocols such as serial advanced technology attachment (SATA), Integrated Drive Electronics (IDE), IEEE-1394, Universal Serial Bus (USB), fiber channel, Small Computer Systems Interface (SCSI), etc. The memory drives may further include a drum, magnetic disc drive, magneto-optical drive, optical drive, Redundant Array of Independent Discs (RAID), solid-state memory devices, solid-state drives, etc.
[00105] The memory 705 may store a collection of program or database components, including, without limitation, user interface 706, an operating system 707, web browser 708 etc. In some embodiments, computer system 700 may store user/application data, such as, the data, variables, records, etc., as described in this disclosure. Such databases may be implemented as fault-tolerant, relational, scalable, secure databases such as Oracle® or Sybase®.
[00106] The operating system 707 may facilitate resource management and operation of the computer system 700. Examples of operating systems include, without limitation, APPLE MACINTOSH OS® X, UNIX®, UNIX-like system distributions (E.G., BERKELEY SOFTWARE DISTRIBUTION™ (BSD), FREEBSD™, NETBSD™, OPENBSD™, etc.), LINUX DISTRIBUTIONS™ (E.G., RED HAT™, UBUNTU™, KUBUNTU™, etc.), IBM™ OS/2, MICROSOFT™ WINDOWS™ (XP™, VISTA™/7/8, 10 etc.), APPLE® IOS™, GOOGLE® ANDROID™, BLACKBERRY® OS, or the like.
[00107] In some embodiments, the computer system 700 may implement the web browser 708 stored program component. The web browser 708 may be a hypertext viewing application, for example MICROSOFT® INTERNET EXPLORER™, GOOGLE® CHROME™, MOZILLA® FIREFOX™, APPLE® SAFARI™, etc. Secure web browsing may be provided using Secure Hypertext Transport Protocol (HTTPS), Secure Sockets Layer (SSL), Transport Layer Security (TLS), etc. Web browsers 708 may utilize facilities such as AJAX™, DHTML™, ADOBE® FLASH™, JAVASCRIPT™, JAVA™, Application Programming Interfaces (APIs), etc. In some embodiments, the computer system 700 may implement a mail server (not shown in Figure) stored program component. The mail server may be an Internet mail server such as Microsoft Exchange, or the like. The mail server may utilize facilities such as ASP™, ACTIVEX™, ANSI™ C++/C#, MICROSOFT® .NET™, CGI SCRIPTS™, JAVA™, JAVASCRIPT™, PERL™, PHP™, PYTHON™, WEB OBJECTS™, etc. The mail server may utilize communication protocols such as Internet Message Access Protocol (IMAP), Messaging Application Programming Interface (MAPI), MICROSOFT® exchange, Post Office Protocol (POP), Simple Mail Transfer Protocol (SMTP), or the like. In some embodiments, the computer system 700 may implement a mail client stored program component. The mail client (not shown in Figure) may be a mail viewing application, such as APPLE® MAIL™, MICROSOFT® ENTOURAGE™, MICROSOFT® OUTLOOK™, MOZILLA® THUNDERBIRD™, etc.
[00108] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include Random Access Memory (RAM), Read-Only Memory (ROM), volatile memory, non-volatile memory, hard drives, Compact Disc Read-Only Memory (CD-ROMs), Digital Video Discs (DVDs), flash drives, disks, and any other known physical storage media.
[00109] The terms "an embodiment", "embodiment", "embodiments", "the embodiment", "the embodiments", "one or more embodiments", "some embodiments", and "one embodiment" mean "one or more (but not all) embodiments of the invention(s)" unless expressly specified otherwise.
[00110] The terms "including", "comprising", “having” and variations thereof mean "including but not limited to", unless expressly specified otherwise.
[00111] The enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise. The terms "a", "an" and "the" mean "one or more", unless expressly specified otherwise.
[00112] A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention.
[00113] When a single device or article is described herein, it will be readily apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be readily apparent that a single device/article may be used in place of the more than one device or article, or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the invention need not include the device itself.

[00114] The illustrated operations of Figure 5 show certain events occurring in a certain order. In alternative embodiments, certain operations may be performed in a different order, modified, or removed. Moreover, steps may be added to the above-described logic and still conform to the described embodiments. Further, operations described herein may occur sequentially, or certain operations may be processed in parallel. Yet further, operations may be performed by a single processing unit or by distributed processing units.
[00115] Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
[00116] While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope being indicated by the following claims.
ADVANTAGES OF THE PRESENT DISCLOSURE
[00117] The present disclosure determines optimal routes for the autonomous mobile object for safe navigation.
[00118] The present disclosure enables the generation of collision-free routes for the autonomous mobile object in an obstacle-prone, image-guided operational and interventional environment.
[00119] The present disclosure reduces the randomness and drastic deviations in the movements of the autonomous mobile object that are caused by the use of other pre-existing operational environments.
[00120] The present disclosure provides a route for the autonomous mobile object that avoids obstacles, where any exist, without compromising route repeatability.
[00121] In the present disclosure, the combination of the plane fitting and the graph construction methods aids in optimal computation and in reducing the planning time per plan.

[00122] The routes generated by the present disclosure do not produce motions that move the autonomous mobile object far away from the region of interest. This considerably reduces the motion execution time and the impact on the overall interventional or operating time.

Referral Numerals:

Claims

We Claim:
1. A method of operation of autonomous mobile objects in an operating environment, the method comprising:
receiving information related to an autonomous mobile object (104) and one or more obstacles (202) in an operating environment, from one or more sources (106), wherein the information related to the autonomous mobile object (104) comprises a start state (302) and a goal state (304) of the autonomous mobile object (104);
generating one or more virtual geometric casings (204) around the one or more obstacles (202) based on the information related to the one or more obstacles (202);
generating a virtual triangulated three-dimensional (3D) enclosure (206) around the one or more obstacles (202), based on one or more coordinates associated with each of the one or more virtual geometric casings (204); and
generating a plurality of routes (316) from the start state to the goal state based on the virtual triangulated 3D enclosure (206), to operate the autonomous mobile object (104) in the operating environment.
2. The method as claimed in claim 1, wherein generating the plurality of routes (316) comprises:
constructing a plurality of candidate nodes on a surface of the virtual triangulated 3D enclosure (206);
identifying at least one entry node (306) and at least one exit node (308) from the plurality of candidate nodes based on the start state and the goal state; and
generating the plurality of routes (316) traversing the at least one entry node (306) and the at least one exit node (308) based on one or more path retrieval techniques, wherein the plurality of routes (316) lies on or outside the virtual triangulated 3D enclosure (206).
3. The method as claimed in claim 2, wherein the one or more path retrieval techniques include at least one of: a graphical technique, a plane fitting technique, and a hybrid technique.
4. The method as claimed in claim 1, further comprising determining a position and an orientation of the autonomous mobile object (104), to operate the autonomous mobile object (104) in the operating environment.
5. The method as claimed in claim 1, further comprising generating an optimal route from the plurality of routes (316) by minimizing a pre-defined cost function.
6. A system for operation of autonomous mobile objects in an operating environment, the system comprising:
a processor (108); and
a memory, wherein the memory stores processor executable instructions, which, on execution, cause the processor (108) to:
receive information related to an autonomous mobile object (104) and one or more obstacles (202) in an operating environment, from one or more sources (106), wherein the information related to the autonomous mobile object (104) comprises a start state (302) and a goal state (304) of the autonomous mobile object (104);
generate one or more virtual geometric casings (204) around the one or more obstacles (202) based on the information related to the one or more obstacles (202);
generate a virtual triangulated three-dimensional (3D) enclosure (206) around the one or more obstacles (202), based on one or more coordinates associated with each of the one or more virtual geometric casings (204); and
generate a plurality of routes (316) from the start state to the goal state based on the virtual triangulated 3D enclosure (206), to operate the autonomous mobile object (104) in the operating environment.
7. The system as claimed in claim 6, wherein the processor (108) is configured to generate the plurality of routes (316) by:
constructing a plurality of candidate nodes on a surface of the virtual triangulated 3D enclosure (206);
identifying at least one entry node (306) and at least one exit node (308) from the plurality of candidate nodes based on the start state (302) and the goal state (304); and
generating the plurality of routes (316) traversing the at least one entry node (306) and the at least one exit node (308) based on one or more path retrieval techniques, wherein the plurality of routes (316) lies on or outside the virtual triangulated 3D enclosure (206).
8. The system as claimed in claim 7, wherein the one or more path retrieval techniques include at least one of: a graphical technique, a plane fitting technique, and a hybrid technique.
9. The system as claimed in claim 6, wherein the processor (108) is further configured to determine a position and an orientation of the autonomous mobile object (104), to operate the autonomous mobile object (104) in the operating environment.
10. The system as claimed in claim 6, wherein the processor (108) is further configured to generate an optimal route from the plurality of routes (316) by minimizing a pre-defined cost function.
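By way of illustration only, and not as part of the claims, the following sketch shows one possible realization of the casing and enclosure steps recited in claims 1 and 6: axis-aligned bounding boxes serve as the virtual geometric casings, and a convex hull over their corner coordinates serves as the virtual triangulated 3D enclosure. The box-with-margin casing, the convex-hull triangulation, and the NumPy/SciPy calls are assumptions of this sketch; the disclosure does not prescribe them.

```python
# Minimal sketch (assumed realization, not the claimed implementation).
import numpy as np
from scipy.spatial import ConvexHull

def bounding_casing(obstacle_points, margin=0.05):
    """Return the eight corners of an axis-aligned box (one simple form of
    'virtual geometric casing') inflated by a safety margin."""
    lo = obstacle_points.min(axis=0) - margin
    hi = obstacle_points.max(axis=0) + margin
    return np.array([[x, y, z] for x in (lo[0], hi[0])
                               for y in (lo[1], hi[1])
                               for z in (lo[2], hi[2])])

def triangulated_enclosure(casings):
    """Build a single triangulated 3D enclosure over the corner coordinates of
    all casings; hull.simplices lists the triangles of its surface."""
    return ConvexHull(np.vstack(casings))

# Two synthetic point-cloud obstacles in the operating environment.
rng = np.random.default_rng(0)
obstacles = [rng.random((50, 3)), rng.random((30, 3)) + 1.5]
casings = [bounding_casing(o) for o in obstacles]
enclosure = triangulated_enclosure(casings)
print(enclosure.simplices.shape)   # (number_of_triangles, 3) vertex indices
```

Any other watertight triangulated surface that encloses all casings could be substituted for the convex hull used here.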
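The node construction and route generation of claims 2 and 7 could, under similar assumptions, be sketched as a graph search over the enclosure surface: the hull vertices act as candidate nodes, the triangle edges as graph edges, the vertices nearest the start and goal states as entry and exit nodes, and Dijkstra's algorithm (via networkx, assumed here) as one example of a "graphical technique".

```python
# Minimal sketch of the claim-2 route generation (assumptions noted above).
import numpy as np
import networkx as nx
from scipy.spatial import ConvexHull

# Stand-in for the claim-1 enclosure; in practice it would be built from the
# virtual geometric casings as in the previous sketch.
enclosure = ConvexHull(np.random.default_rng(1).random((40, 3)))

def surface_graph(hull):
    """Candidate nodes are the hull vertices; edges follow triangle sides and
    are weighted by Euclidean length, so routes stay on the enclosure surface."""
    g = nx.Graph()
    pts = hull.points
    for a, b, c in hull.simplices:
        for i, j in ((a, b), (b, c), (c, a)):
            g.add_edge(int(i), int(j),
                       weight=float(np.linalg.norm(pts[i] - pts[j])))
    return g

def nearest_surface_node(hull, state):
    """Entry/exit node: the candidate node closest to the start or goal state."""
    verts = hull.points[hull.vertices]
    return int(hull.vertices[np.argmin(np.linalg.norm(verts - state, axis=1))])

graph = surface_graph(enclosure)
entry_node = nearest_surface_node(enclosure, np.array([0.0, 0.0, 0.0]))  # near start
exit_node = nearest_surface_node(enclosure, np.array([1.0, 1.0, 1.0]))   # near goal
route = nx.shortest_path(graph, entry_node, exit_node, weight="weight")  # Dijkstra
print(route)
```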
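Claims 3 and 8 also list a plane fitting technique among the path retrieval techniques. One hypothetical reading, used only for this sketch, fits a least-squares plane through the start state, the goal state, and the centroid of the obstacle region, and then restricts the candidate nodes to those lying close to that plane; the SVD-based fit and the 0.2 distance threshold are assumptions, not taken from the disclosure.

```python
# Minimal sketch of one possible plane fitting technique (assumed reading).
import numpy as np

def fit_plane(points):
    """Least-squares plane through the points: returns (centroid, unit normal),
    the normal being the singular vector of the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def distance_to_plane(p, centroid, normal):
    return abs(float(np.dot(p - centroid, normal)))

# Plane through the start state, goal state, and obstacle-region centroid.
start = np.array([0.0, 0.0, 0.0])
goal = np.array([2.0, 2.0, 2.0])
region_centroid = np.array([1.0, 0.5, 1.0])
centroid, normal = fit_plane(np.vstack([start, goal, region_centroid]))

# Keep only candidate nodes close to the plane, biasing routes toward the
# slice of the enclosure that contains the start and goal.
candidates = np.random.default_rng(2).random((100, 3)) * 2.0
near_plane = candidates[
    np.array([distance_to_plane(p, centroid, normal) for p in candidates]) < 0.2]
print(len(near_plane))
```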
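Claims 4 and 9 recite determining a position and an orientation of the autonomous mobile object. A minimal sketch, assuming the pose is available as a 4x4 homogeneous transform from a tracking source (an assumption of this sketch only), is:

```python
# Minimal pose-extraction sketch (assumed 4x4 homogeneous transform input).
import numpy as np
from scipy.spatial.transform import Rotation

def pose_from_transform(T):
    """Split a 4x4 homogeneous transform into a position vector and an
    (x, y, z, w) quaternion describing the orientation."""
    position = T[:3, 3]
    orientation = Rotation.from_matrix(T[:3, :3]).as_quat()
    return position, orientation

position, orientation = pose_from_transform(np.eye(4))
print(position, orientation)
```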
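Claims 5 and 10 recite generating an optimal route by minimizing a pre-defined cost function. The disclosure leaves the cost function open; the sketch below assumes total Euclidean path length purely for illustration.

```python
# Minimal sketch of selecting the minimum-cost route (path length assumed as cost).
import numpy as np

def route_length(route):
    """Pre-defined cost used here: total Euclidean length of the waypoints."""
    return float(np.sum(np.linalg.norm(np.diff(route, axis=0), axis=1)))

candidate_routes = [
    np.array([[0.0, 0.0, 0.0], [1.0, 1.5, 0.5], [2.0, 2.0, 2.0]]),
    np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 1.8], [2.0, 2.0, 2.0]]),
]
optimal_route = min(candidate_routes, key=route_length)
print(route_length(optimal_route))
```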
PCT/IN2025/050729 2024-05-10 2025-05-09 A route planning method for the operation of autonomous mobile objects and a system thereof Pending WO2025233972A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202441037213 2024-05-10
IN202441037213 2024-05-10

Publications (1)

Publication Number Publication Date
WO2025233972A1 (en) 2025-11-13

Family

ID=97674855

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2025/050729 Pending WO2025233972A1 (en) 2024-05-10 2025-05-09 A route planning method for the operation of autonomous mobile objects and a system thereof

Country Status (1)

Country Link
WO (1) WO2025233972A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230288221A1 (en) * 2020-08-17 2023-09-14 Murata Machinery, Ltd. Autonomous travel route planning method, autonomous traveling method, and program
US20220135068A1 (en) * 2020-10-31 2022-05-05 Han Hu Method and system for motion planning for an autonmous vehicle
US20230347923A1 (en) * 2022-04-27 2023-11-02 Tmrw Foundation Ip S. À R.L. Location-based autonomous navigation using a virtual world system
