
CN119319559A - Task movement planning method and device for intelligent agent and intelligent agent - Google Patents


Info

Publication number
CN119319559A
Authority
CN
China
Prior art keywords
task
target
agent
interacted
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410924251.2A
Other languages
Chinese (zh)
Inventor
焦子元
牛艺达
李志天
刘航欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing General Artificial Intelligence Research Institute
Original Assignee
Beijing General Artificial Intelligence Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing General Artificial Intelligence Research Institute filed Critical Beijing General Artificial Intelligence Research Institute
Priority to CN202410924251.2A priority Critical patent/CN119319559A/en
Publication of CN119319559A publication Critical patent/CN119319559A/en
Pending legal-status Critical Current


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/08Programme-controlled manipulators characterised by modular constructions
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/163Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a task motion planning method and device for an agent, and an agent, and belongs to the field of agents. The task motion planning method of the agent comprises: acquiring a target kinematic model between the agent and each object to be interacted with in a task scene; generating a task sequence based on a target task corresponding to the agent, wherein the task sequence comprises at least one subsequence arranged in time order, the subsequence comprises target interaction object information, action information and first pose information between an execution mechanism and the object to be interacted with, and the target interaction object information is used to indicate the target object to be interacted with among the objects to be interacted with; and generating the task motion plan of the agent based on the task sequence and the target kinematic model. The task motion planning method effectively copes with task-rich scenarios, ensures the task completion rate of the agent, improves the flexibility of the agent control process, and improves the user experience.

Description

Task movement planning method and device for intelligent agent and intelligent agent
Technical Field
The application belongs to the field of intelligent agents, and particularly relates to a task movement planning method and device for an intelligent agent and the intelligent agent.
Background
It is becoming increasingly important for agents to operate autonomously in task-rich environments and to assist users in everyday tasks. In the related art, the challenges of high contact-mode complexity between the end effector and the object and of high state-space dimensionality in sequential mobile manipulation planning are mainly addressed in a hierarchical manner. However, this approach over-simplifies the motion planning problem and can only handle simple operations; it cannot effectively address the practical challenges of task-rich scenarios, which limits the deployment of agents in real environments, reduces planning flexibility and task completion rate, and degrades the user experience.
Disclosure of Invention
The present application is directed to solving at least one of the technical problems existing in the related art. Therefore, the application provides a task movement planning method and device for an agent, and an agent, which effectively cope with task-rich scenarios, ensure the task completion rate of the agent, improve the flexibility of the agent control process, and improve the user experience.
In a first aspect, the present application provides a task movement planning method for an agent, where the method includes:
Acquiring a target kinematic model between the intelligent body and each object to be interacted in a task scene, wherein the target kinematic model is constructed based on the kinematic information of an executing mechanism of the intelligent body, the kinematic information of the object to be interacted and the kinematic information of a moving mechanism corresponding to the intelligent body;
generating a task sequence based on a target task corresponding to the intelligent agent, wherein the task sequence comprises at least one subsequence arranged based on time sequence, and the subsequence comprises target interaction object information, action information and first pose information between the executing mechanism and the object to be interacted;
and generating the task motion plan of the intelligent body based on the task sequence and the target kinematics model.
According to the task motion planning method of the agent, complex actions are simplified into a task sequence comprising action information, target interaction object information and pose information. A complex task can be decomposed into a plurality of such subsequences, and the task motion plan for the agent to execute the complex task is generated based on the task sequence and the target kinematic model, so that the agent is controlled to execute the task based on the task motion plan. This effectively copes with task-rich scenarios, ensures the task completion rate of the agent, improves the flexibility of the agent control process, and improves the user experience.
According to the task motion planning method of the intelligent agent, the motion information comprises one of a placing motion, a picking motion and a moving motion.
According to the task motion planning method of the agent, the task motion planning of the agent is generated based on the task sequence and the target kinematic model, and the task motion planning method comprises the following steps:
And generating a task motion plan of the intelligent body based on at least two of the degrees of freedom corresponding to the moving mechanism of the intelligent body, the degrees of freedom corresponding to the executing mechanism of the intelligent body and the degrees of freedom corresponding to the target object to be interacted, which is included in a first target subsequence in the at least one subsequence, and the first target subsequence.
According to the task motion planning method of an agent of the present application, the generating task motion planning of the agent based on at least two of the degrees of freedom corresponding to the moving mechanism of the agent, the degrees of freedom corresponding to the executing mechanism of the agent, and the degrees of freedom corresponding to the target object to be interacted included in the first target sub-sequence in the at least one sub-sequence, and the first target sub-sequence includes:
Under the condition that the action information included in the first target subsequence is a pick-up action, determining second pose information of the agent when the first target subsequence is executed, based on the first target subsequence, the degrees of freedom corresponding to the moving mechanism of the agent, and the degrees of freedom corresponding to the execution mechanism of the agent;
and generating a task motion plan of the intelligent agent based on the second pose information.
According to the task motion planning method of an agent of the present application, the generating task motion planning of the agent based on at least two of the degrees of freedom corresponding to the moving mechanism of the agent, the degrees of freedom corresponding to the executing mechanism of the agent, and the degrees of freedom corresponding to the target object to be interacted included in the first target sub-sequence in the at least one sub-sequence, and the first target sub-sequence includes:
Under the condition that the action information included in the first target subsequence is a placement action, determining third pose information of the execution mechanism when the first target subsequence is executed, based on the first target subsequence, the degrees of freedom corresponding to the moving mechanism of the agent, the degrees of freedom corresponding to the execution mechanism of the agent, and the degrees of freedom corresponding to the target object to be interacted with;
and generating the task motion plan of the intelligent agent based on the third pose information.
According to the task motion planning method of the intelligent agent, based on the task sequence and the target kinematics model, the task motion planning of the intelligent agent is generated, and the task motion planning method comprises the following steps:
After the execution of the agent is controlled to complete a second target subsequence in the at least one subsequence arranged based on time sequence, under the condition that the pose of a target object to be interacted included in the second target subsequence is changed, updating the task sequence and the target kinematic model based on the changed pose;
And generating the task motion plan of the intelligent agent based on the updated task sequence and the updated target kinematics model.
In a second aspect, the present application provides an agent's task movement planning apparatus, the apparatus comprising:
the first processing module is used for acquiring a target kinematic model between the intelligent body and each object to be interacted in a task scene, wherein the target kinematic model is constructed based on the kinematic information of an executing mechanism of the intelligent body, the kinematic information of the object to be interacted and the kinematic information of a moving mechanism corresponding to the intelligent body;
the second processing module is used for generating a task sequence based on a target task corresponding to the agent, wherein the task sequence comprises at least one subsequence arranged in time order, and the subsequence comprises target interaction object information, action information and first pose information between the execution mechanism and the object to be interacted with;
and the third processing module is used for generating the task motion plan of the intelligent body based on the task sequence and the target kinematics model.
According to the task motion planning device for the agent, complex actions are simplified into a task sequence comprising action information, target interaction object information and pose information. A complex task can be decomposed into a plurality of such subsequences, and the task motion plan for the agent to execute the complex task is generated based on the task sequence and the target kinematic model, so that the agent is controlled to execute the task based on the task motion plan. This effectively copes with task-rich scenarios, ensures the task completion rate of the agent, improves the flexibility of the agent control process, and improves the user experience.
In a third aspect, the present application provides an agent comprising:
A sensor;
An actuator;
A moving mechanism, wherein the actuator is arranged on the moving mechanism; and
the task movement planning device for an agent according to the second aspect, wherein the task movement planning device for an agent is electrically connected to the sensor, the actuator, and the moving mechanism, respectively.
In a fourth aspect, the present application provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a task movement planning method for an agent as described in the first aspect above.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements a method for task movement planning of an agent according to the first aspect described above.
The above technical solutions in the embodiments of the present application have at least one of the following technical effects:
By simplifying complex actions into a task sequence comprising action information, target interaction object information and pose information, a complex task can be decomposed into a plurality of subsequences, and a task motion plan for the agent to execute the complex task is generated based on the task sequence and a target kinematic model, so that the agent is controlled to execute the task based on the task motion plan. This effectively copes with task-rich scenarios, ensures the task completion rate of the agent, improves the flexibility of the agent control process, and improves the user experience.
Furthermore, when it is determined that, while the agent is executing the task sequence corresponding to the target task, the result of an already-completed subsequence has been changed externally (for example, by a person), the task sequence can be updated based on the changed pose of the target object to be interacted with. Because the task sequence only contains action information, target interaction object information and the final relative pose between the execution mechanism and the object to be interacted with, and sets no intermediate parameters forcing intermediate poses of the moving mechanism or the execution mechanism, the flexibility of executing the target task is improved.
Further, by determining the degrees of freedom corresponding to the moving mechanism of the agent, the degrees of freedom corresponding to the execution mechanism of the agent and the degrees of freedom corresponding to the object to be interacted with, the trajectory of the action corresponding to each subsequence is effectively determined, so that the agent is controlled, based on the action information and the trajectory corresponding to each subsequence, to complete the target task. In this way a variety of subsequence actions and complex scenes can be handled, the task completion rate is improved, and the user experience is improved.
Additional aspects and advantages of the application will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
FIG. 1 is a schematic flow chart of a task movement planning method of an agent according to an embodiment of the present application;
FIG. 2 is a first schematic diagram of a task movement planning method of an agent according to an embodiment of the present application;
FIG. 3 is a second schematic diagram of a task movement planning method of an agent according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a task movement planning device for an agent according to an embodiment of the present application;
FIG. 5 is a third schematic diagram of a task movement planning method of an agent according to an embodiment of the present application;
FIG. 6 is a fourth schematic diagram of a task movement planning method of an agent according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a task movement planning system for an agent according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms first, second and the like in the description and in the claims, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, as appropriate, such that embodiments of the present application may be implemented in sequences other than those illustrated or described herein, and that the objects identified by "first," "second," etc. are generally of a type, and are not limited to the number of objects, such as the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/", generally means that the associated object is an "or" relationship.
The task motion planning method for the intelligent agent, the task motion planning device for the intelligent agent, the intelligent agent and the readable storage medium provided by the embodiment of the application are described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
The task motion planning method of the intelligent agent can be applied to the terminal, and can be specifically executed by hardware or software in the terminal.
The execution main body of the task motion planning method of the intelligent body provided by the embodiment of the application can be the intelligent body or electronic equipment in communication connection with the intelligent body or a functional module or a functional entity capable of realizing the task motion planning method of the intelligent body in the electronic equipment.
As shown in FIG. 1, the task movement planning method of the agent comprises a step 110, a step 120 and a step 130.
It should be noted that the task movement planning method of the agent may be applied to an agent control scenario, where the agent may be a robot or other structural body capable of implementing the same function, and the robot may include, but is not limited to, a home agent, a production agent, a service agent, a mobile mechanical arm, an aerial work agent, and other agents.
As shown in fig. 3, the agent may include an executing mechanism 310 and a moving mechanism 320, where the executing mechanism 310 is disposed on the moving mechanism 320, and the moving mechanism 320 drives the executing mechanism 310 to move.
The actuator 310 is used for performing operations such as grabbing, pushing, pulling, placing, etc., and may be a mechanical arm, etc.
The moving mechanism 320 may be a roller or track, or the like.
Step 110, acquiring a target kinematic model between an intelligent agent and each object to be interacted in a task scene, wherein the target kinematic model is constructed based on the kinematic information of an executing mechanism of the intelligent agent, the kinematic information of the object to be interacted and the kinematic information of a moving mechanism corresponding to the intelligent agent;
in this step, the task scene is the scene in which the agent is located.
Task scenes include, but are not limited to, mall, library, house, etc.
The object to be interacted is an object which can interact with the intelligent agent in the task scene.
Objects to be interacted with include, but are not limited to, objects such as cups, cabinets, bottles, chairs, and the like.
The actuator kinematics information is used to characterize the relevant information of the actuator, including but not limited to, the structure, size, attributes, and movement characteristics of the actuator.
The kinematic information of the object to be interacted is used for representing relevant information of the object to be interacted, including but not limited to information such as structure, size, shape, attribute and the like of the object to be interacted.
The kinematic information of the moving mechanism is used for representing the related information of the moving mechanism, including but not limited to the information of the structure, the size attribute, the relative relation with the external environment, the moving characteristics and the like of the moving mechanism.
The moving mechanism kinematic information can reflect the movement capability of the moving mechanism.
In the actual implementation process, the plane motion of the moving mechanism can be simulated through a three-degree-of-freedom motion chain.
The target kinematic model is a model for characterizing a serial kinematic relationship between a moving mechanism, an actuator, and an object to be interacted with.
In some embodiments, the objects to be interacted with may be one or more, and the categories of the plurality of objects to be interacted with may not be the same.
In some embodiments, step 110 may further include, when there are multiple objects to be interacted, constructing a target kinematic model corresponding to the first object to be interacted based on the kinematic information of the object to be interacted, the kinematic information of the actuator and the kinematic information of the moving mechanism, and then fusing the target kinematic models corresponding to the objects to be interacted to obtain a final target kinematic model.
The target first object to be interacted is any object in the plurality of objects to be interacted.
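As an illustration of this per-object construction and fusion, the following Python sketch builds one virtual kinematic chain per object to be interacted with and keeps all chains together as one fused model. The class names, fields and numeric values are assumptions for illustration, not the disclosed implementation.

```python
from dataclasses import dataclass

@dataclass
class KinematicInfo:
    """Kinematic information of one element: name, degrees of freedom, joint limits."""
    name: str
    dof: int
    joint_limits: list  # one (lower, upper) pair per degree of freedom

@dataclass
class VirtualKinematicChain:
    """Serial chain: moving mechanism -> actuator -> one object to be interacted with."""
    base: KinematicInfo
    arm: KinematicInfo
    obj: KinematicInfo

    @property
    def total_dof(self) -> int:
        return self.base.dof + self.arm.dof + self.obj.dof

def build_target_kinematic_model(base, arm, objects):
    """Build one chain per object, then keep them together as the fused target model."""
    return {o.name: VirtualKinematicChain(base, arm, o) for o in objects}

# Planar base (3 DOF), 6-axis arm, a hinged cabinet door (1 DOF) and a rigid bottle (0 DOF).
base = KinematicInfo("mobile_base", 3, [(-5.0, 5.0), (-5.0, 5.0), (-3.14, 3.14)])
arm = KinematicInfo("arm", 6, [(-3.14, 3.14)] * 6)
door = KinematicInfo("cabinet_door", 1, [(0.0, 1.57)])
bottle = KinematicInfo("bottle", 0, [])
model = build_target_kinematic_model(base, arm, [door, bottle])
print(model["cabinet_door"].total_dof)  # 10
```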
Step 120, generating a task sequence based on a target task corresponding to the agent;
in this step, the target task is a task that the target agent needs to execute.
The task sequence includes at least one subsequence arranged based on a time sequence.
The sub-sequence comprises target interaction object information, action information and first pose information between the executing mechanism and the object to be interacted.
The first pose information is used for representing a final relative pose between the actuator and the object to be interacted.
The target interaction object information is used for representing a target object to be interacted in the objects to be interacted.
In some embodiments, the motion information may include one of a placement motion, a pickup motion, and a movement motion.
In this embodiment, the placement action means that the actuator places the target object to be interacted with at a desired position, and may be denoted place.
The pick-up action means that the actuator grasps the target object to be interacted with, and may be denoted pick.
The movement action means that the moving mechanism moves to a desired position, and may be denoted goto.
In the actual execution process, the action information corresponding to each sub-sequence can be determined based on the actual condition of the target task.
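The place/pick/goto notation and the subsequence fields described above could be encoded, for example, as the following sketch; the type names and example values are assumptions for illustration only.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    PICK = "pick"    # the actuator grasps the target object
    PLACE = "place"  # the actuator places the attached object at a desired pose
    GOTO = "goto"    # the moving mechanism moves to a desired position

@dataclass
class SubSequence:
    """One element of the task sequence: action, target interaction object, final relative pose."""
    action: Action
    target_object: str      # target interaction object information
    relative_pose: tuple    # first pose information: final actuator/object relative pose

# A task sequence is simply a time-ordered list of subsequences.
task_sequence = [
    SubSequence(Action.GOTO, "cabinet", (2.0, 0.0, 0.0)),
    SubSequence(Action.PICK, "handle", (0.0, 0.0, 0.05)),
]
print([s.action.value for s in task_sequence])
```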
In the actual execution process, the pick-up action moves the virtual kinematic chain to the target object to be interacted with, and extends the chain by adding a virtual attachment joint between the target object to be interacted with and the actuator.
The pick-up action includes moving the actuator to perform a task of interacting with the environment, such as picking up a target object to be interacted with or grasping the target object to be interacted with.
The placement action moves the target object to be interacted with, which has been connected to the current virtual kinematic chain, to a desired pose.
The target object to be interacted needs to be integrated into a virtual kinematic chain, and kinematic constraint is applied to the target kinematic model.
After the target object to be interacted reaches the required pose, the virtual kinematic chain at the virtual attachment joint can be disconnected, so that the executing mechanism is separated from the target object to be interacted.
It will be appreciated that the placement and pick-up actions can represent a wide range of mobile manipulation tasks, and also help simplify the subsequent planning domain by eliminating unnecessary operations and intermediate state predicates.
According to the task movement planning method of the agent, by determining the actions which can be executed by the agent, the target task is subsequently completed effectively based on the different action information, meeting the user's needs; the actions included in the action information can represent a wide range of manipulation tasks and effectively simplify the subsequent planning domain.
In the actual execution process, the target task can be disassembled to obtain a task sequence corresponding to the target task.
Taking as an example a target task of first taking a bottle onto a tea table and then placing it into a cabinet: in the actual implementation process, the agent needs to first pick up the bottle (i.e. grasp it), then place it on the tea table, then grasp it again and place it in the cabinet, so the corresponding task sequence is {pick bottle pose1, place bottle pose2, pick bottle pose3, place bottle pose4}.
It will be appreciated that pose1 and pose3 characterize the relative pose between the bottle and the actuator when the bottle is grasped, and pose2 and pose4 characterize the relative pose between the bottle and the actuator after the bottle has been placed on the tea table or in the cabinet.
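As an illustrative sketch only (the numeric relative poses below are invented placeholders, not values from the disclosure), such a task sequence could be encoded as plain data:

```python
# Illustrative relative poses (x, y, z); real poses would come from perception or the task specification.
pose1 = (0.00, 0.00, 0.05)   # actuator relative to bottle when grasped
pose2 = (1.20, 0.40, 0.45)   # bottle placed on the tea table
pose3 = (0.00, 0.00, 0.05)   # grasped again
pose4 = (2.50, -0.30, 0.90)  # bottle placed inside the cabinet

task_sequence = [
    ("pick",  "bottle", pose1),
    ("place", "bottle", pose2),
    ("pick",  "bottle", pose3),
    ("place", "bottle", pose4),
]

for action, obj, pose in task_sequence:
    print(action, obj, pose)
```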
And 130, generating task motion planning of the intelligent agent based on the task sequence and the target kinematics model.
In this step, the target kinematic model may be adjusted based on the task sequence to obtain kinematic features of the agent to perform the target task in the task scenario, and a task motion plan for the agent may be generated based on the obtained kinematic features.
It can be understood that the task scene or the executed target task is different, the corresponding task sequence may be different, and the relevant parameters in the target kinematic model can be correspondingly adjusted only by inputting the task sequence into the target kinematic model, so that flexible control of the intelligent agent is realized.
In the actual execution process, the task sequence can characterize the motion constraint and the spatial relationship between the intelligent agent and the target object to be interacted.
In some embodiments, where the target object to be interacted with in the task sequence is a hinge, the target kinematic model needs to be inverted to keep the constructed kinematic chain continuous.
In some embodiments, the state of the virtual kinematic chain may be characterized by a state vector, which may be expressed as:
q = [q_B, q_A, q_o] ∈ C_free ⊂ R^n
wherein q_B = [x_B, y_B, θ_B] is the pose vector of the moving mechanism, q_A is the joint position vector of the actuator, and q_o is the joint position vector of the target object to be interacted with; C_free is the collision-free configuration space of the virtual kinematic chain, and n is the total degree of freedom.
In the actual execution process, motion planning can be performed based on the task sequence and the target kinematic model, i.e. a t-step path {q[1], q[2], ..., q[t]} is obtained, where the subscript [k] denotes the value of the variable at step k; the task motion plan of the agent can be obtained by solving an optimization problem, so as to control the agent to execute the target task based on the task motion plan.
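The following Python sketch shows one way the chain state could be assembled and a path produced between two states. It is a toy stand-in only: linear interpolation replaces the optimization-based planner described above, and no collision or constraint checking is performed.

```python
import numpy as np

def assemble_state(q_base, q_arm, q_obj):
    """Stack moving-mechanism, actuator and object joint vectors into one chain state q in R^n."""
    return np.concatenate([q_base, q_arm, q_obj])

def plan_path(q_start, q_goal, steps=10):
    """Toy stand-in for the optimization-based planner: linearly interpolate between
    the start and goal states of the virtual kinematic chain."""
    return [q_start + (q_goal - q_start) * k / steps for k in range(steps + 1)]

q_start = assemble_state(np.zeros(3), np.zeros(6), np.zeros(1))   # [x_B, y_B, theta_B] | arm | door hinge
q_goal = assemble_state(np.array([1.0, 0.5, 1.57]), np.full(6, 0.3), np.array([1.2]))
path = plan_path(q_start, q_goal)
print(len(path), path[-1])
```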
The inventors found during research and development that, in the related art, addressing the challenges of high contact-mode complexity between the end effector and the object and of high state-space dimensionality in sequential mobile manipulation planning in a hierarchical manner affects the flexibility of planning and the task completion rate, and degrades the user experience; in addition, the planning process of that method is time-consuming and has high labor cost.
According to the method, the target task to be executed by the agent is converted into a task sequence comprising target object information, action information and pose information between the actuator and the object to be interacted with, which effectively simplifies the planning domain for controlling the agent to execute the target task. The task motion plan for executing the target task is generated based on the plurality of subsequences in the task sequence, which effectively reduces the time consumed in the task motion planning process and lowers labor cost. Multiple actions can be executed based on the subsequences, so task-rich scenarios are handled effectively and the agent can complete tasks in complex scenes, guaranteeing the task completion rate. In addition, the action information and pose information in the task sequence set no intermediate parameters forcing intermediate poses of the moving mechanism or the actuator, which improves the flexibility of the task motion planning process and improves the user experience.
According to the task motion planning method for the agent provided by the embodiment of the application, complex actions are simplified into a task sequence comprising action information, target interaction object information and pose information. A complex task can be decomposed into a plurality of such subsequences, and the task motion plan for the agent to execute the complex task is generated based on the task sequence and the target kinematic model, so that the agent is controlled to execute the task based on the task motion plan. This effectively copes with task-rich scenarios, ensures the task completion rate of the agent, improves the flexibility of the agent control process, and improves the user experience.
In some embodiments, step 130 may further comprise:
And generating a task motion plan of the intelligent agent based on at least two of the degrees of freedom corresponding to the moving mechanism of the intelligent agent, the degrees of freedom corresponding to the executing mechanism of the intelligent agent and the degrees of freedom corresponding to the target object to be interacted, which is included in the first target subsequence in at least one subsequence, and the first target subsequence.
In some embodiments, the corresponding degrees of freedom of the movement mechanism may include the ability of the movement mechanism to move in both the horizontal and vertical directions.
The corresponding degrees of freedom of the actuator may include a translational degree of freedom of the actuator and a rotational degree of freedom of the actuator.
Wherein, the translational degree of freedom is the motion capability of the actuator in the horizontal plane and the vertical direction.
The rotational degree of freedom is the rotational capacity of the actuator about an axis corresponding to the horizontal plane and an axis corresponding to the vertical direction.
The target object to be interacted is the object to be interacted indicated in the first target subsequence.
The degree of freedom corresponding to the target object to be interacted is the movement or rotation capability of the target object to be interacted.
In the actual execution process, after the task sequence is obtained, the whole-body trajectory of the actuator, the moving mechanism and the target object to be interacted with can be constructed based on each subsequence in the task sequence; that is, the feasible values of the degrees of freedom of the actuator, the moving mechanism and the target object to be interacted with are obtained while executing the action corresponding to each subsequence, so that the whole-body trajectory of the agent is obtained and the agent is subsequently controlled to execute the target task based on that trajectory.
The whole body trajectory is used for representing the trajectory of each joint, namely the task motion planning of each joint.
In some embodiments, the trajectory may include waypoints for each joint.
It will be appreciated that the whole body trajectory corresponding to each sub-sequence can be obtained for each sub-sequence in the manner described above.
It will be appreciated that different target tasks correspond to different task sequences, and that the whole body trajectories corresponding to different task sequences may be different.
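As a minimal sketch of this per-subsequence trajectory construction (the per-subsequence solver below is a hypothetical placeholder that interpolates joint values; a real implementation would solve the virtual-kinematic-chain planning problem):

```python
def solve_subsequence(sub, q_current, n_dof=10, steps=5):
    """Hypothetical per-subsequence solver: returns joint-space waypoints and the final state."""
    action, obj, goal_pose = sub
    start = q_current if q_current is not None else [0.0] * n_dof
    goal = list(goal_pose) + [0.0] * (n_dof - len(goal_pose))
    waypoints = [[s + (g - s) * k / steps for s, g in zip(start, goal)] for k in range(steps + 1)]
    return waypoints, waypoints[-1]

def whole_body_trajectory(task_sequence):
    """Chain per-subsequence trajectories so each subsequence starts where the previous one ended."""
    trajectory, q_current = [], None
    for sub in task_sequence:
        waypoints, q_current = solve_subsequence(sub, q_current)
        trajectory.extend(waypoints)
    return trajectory

seq = [("pick", "bottle", (0.5, 0.2, 0.0)), ("place", "bottle", (1.2, 0.4, 0.45))]
print(len(whole_body_trajectory(seq)))  # 12 waypoints, 6 per subsequence
```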
According to the task motion planning method for the agent provided by the embodiment of the application, by determining the degrees of freedom corresponding to the moving mechanism of the agent, the degrees of freedom corresponding to the actuator of the agent and the degrees of freedom corresponding to the target object to be interacted with, the trajectory of the action in each subsequence is effectively determined, so that the agent is controlled to complete the target task based on the action information and the trajectory corresponding to each subsequence. In this way a variety of subsequence actions and complex scenes can be handled, the task completion rate is improved, and the user experience is improved.
In some embodiments, generating the task motion plan of the agent based on at least two of a degree of freedom corresponding to the moving mechanism of the agent, a degree of freedom corresponding to the executing mechanism of the agent, and a degree of freedom corresponding to the target object to be interacted included in the first target sub-sequence in the at least one sub-sequence, and the first target sub-sequence may further include:
Under the condition that the action information included in the first target subsequence is a pick-up action, determining second pose information of the agent when the first target subsequence is executed, based on the first target subsequence, the degrees of freedom corresponding to the moving mechanism of the agent, and the degrees of freedom corresponding to the execution mechanism of the agent;
And generating task motion planning of the intelligent agent based on the second pose information.
In this embodiment, the second pose information is pose information of the moving mechanism and the actuator that need to be determined in the case where the agent performs the pickup action.
In the actual execution process, taking an example that the execution mechanism of the intelligent agent needs to reach a position and grasp a bottle, the second pose information of the intelligent agent under the execution of the first target subsequence can be determined by determining three degrees of freedom of the moving mechanism and six degrees of freedom of the execution mechanism.
In some embodiments, the degree of freedom of the actuator may also be determined based on the number of axes of the actuator.
It will be appreciated that the greater the number of axes of the actuator, the greater the degree of freedom of the actuator.
And after the second pose information is determined, generating a task motion plan corresponding to the execution target task of the intelligent agent based on the second pose information so as to control the intelligent agent to complete the target task based on the task motion plan.
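To make the pick-action case concrete, the following is a deliberately simplified sketch (not the disclosed optimization): it picks a base pose that keeps the object within an assumed arm reach and faces it, leaving the six actuator joint values to a separate inverse-kinematics step.

```python
import math

ARM_REACH = 0.8  # assumed maximum reach of the 6-DOF actuator, in metres

def pick_base_pose(object_xy, standoff=0.5):
    """Choose a base pose (x_B, y_B, theta_B) that faces the object from a fixed standoff distance.
    The 6 arm joint values would then be solved separately (e.g. by inverse kinematics)."""
    ox, oy = object_xy
    theta = math.atan2(oy, ox)                      # approach the object from the origin side
    base = (ox - standoff * math.cos(theta),
            oy - standoff * math.sin(theta),
            theta)
    return base, standoff <= ARM_REACH

base_pose, reachable = pick_base_pose((1.5, 0.6))
print(base_pose, reachable)
```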
According to the task motion planning method for the agent, under the condition that the action information is determined to be a pick-up action, the pose information of the agent while executing the pick-up action is effectively determined from the degrees of freedom corresponding to the moving mechanism of the agent and the degrees of freedom corresponding to the actuator of the agent, so that the task motion plan corresponding to the target task is generated based on that pose information. The agent can thus be accurately controlled to execute the corresponding action based on the task motion plan, improving the user experience.
In some embodiments, generating the task motion plan of the agent based on at least two of a degree of freedom corresponding to the moving mechanism of the agent, a degree of freedom corresponding to the executing mechanism of the agent, and a degree of freedom corresponding to the target object to be interacted included in the first target sub-sequence in the at least one sub-sequence, and the first target sub-sequence may further include:
Under the condition that the action information included in the first target subsequence is a placement action, determining third pose information of the agent when the first target subsequence is executed, based on the first target subsequence, the degrees of freedom corresponding to the moving mechanism of the agent, the degrees of freedom corresponding to the execution mechanism of the agent, and the degrees of freedom corresponding to the target object to be interacted with;
And generating task motion planning of the intelligent agent based on the third pose information.
In this embodiment, the third pose information is pose information of the moving mechanism, the executing mechanism and the target object to be interacted, which need to be determined, when the agent performs the placing action.
In the actual execution process, as shown in fig. 2, when the agent opens the cabinet door, the target object to be interacted with is the cabinet door: the moving mechanism of the agent needs to move to the front of the cabinet door, and the actuator then opens the cabinet door. Since the cabinet door is hinged, the third pose information of the agent when executing the first target subsequence needs to be further determined based on the degree of freedom of the cabinet door.
After the third pose information is determined, a task motion plan corresponding to the execution target task of the intelligent agent is generated based on the third pose information, so that a moving mechanism, an executing mechanism and a target object to be interacted of the intelligent agent are controlled to reach the required pose based on the task motion plan in the follow-up process, and the intelligent agent is controlled to complete the target task.
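A geometric sketch of how the door's single degree of freedom could enter this computation is given below; the standoff heuristic and all numbers are illustrative assumptions, not the disclosed method.

```python
import math

def place_poses_for_door(hinge_xy, door_length, target_angle, standoff=0.5):
    """For a placement action on a hinged cabinet door, the door's single degree of freedom (the
    hinge angle) fixes where the handle ends up; the base and actuator poses must follow it."""
    hx, hy = hinge_xy
    handle = (hx + door_length * math.cos(target_angle),
              hy + door_length * math.sin(target_angle))
    base = (handle[0] - standoff, handle[1], target_angle)  # stand back from the handle (heuristic)
    return {"door_angle": target_angle, "handle_xy": handle, "base_pose": base}

print(place_poses_for_door((2.0, 0.0), 0.4, math.radians(90)))
```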
According to the task motion planning method for the agent, under the condition that the action information is determined to be a placement action, the pose information of the agent and of the target object to be interacted with during the placement action is effectively determined from the degrees of freedom corresponding to the moving mechanism of the agent, the degrees of freedom corresponding to the actuator of the agent and the degrees of freedom corresponding to the target object to be interacted with, so that the task motion plan corresponding to the target task is generated based on that pose information. The agent can thus be accurately controlled to execute the corresponding action based on the task motion plan, improving the user experience.
In some embodiments, step 130 may further comprise:
After the control agent performs the execution of a second target subsequence in at least one subsequence arranged based on time sequence, updating a task sequence and a target kinematic model based on the changed pose under the condition that the pose of a target object to be interacted included in the second target subsequence is changed;
and generating the task motion plan of the intelligent agent based on the updated task sequence and the updated target kinematics model.
In this embodiment, the second target subsequence is one of a plurality of subsequences included in the task sequence.
The changed pose is a new pose of the target object to be interacted, which is different from pose information included in the second target subsequence after the agent executes the second target subsequence.
In the actual execution process, external disturbances may occur while the agent executes the subsequences in the task sequence, causing the pose of the target object to be interacted with to change.
In the actual execution process, the pose of the target object to be interacted can be determined to change through a sensor.
For example, the task sequence is a sequence of closing the cabinet door and placing the chair at the designated position, as shown in fig. 5, after the agent performs the sub-sequence corresponding to closing the cabinet door, in the process of performing the sub-sequence corresponding to moving the chair, as shown in fig. 6, the cabinet door is manually opened again, and at this time, the sensor needs to send the condition that the cabinet door is opened to the agent to update the task sequence and perform the subsequent procedure.
For another example, the target task is to take the bottle to the tea table and then put it into the cabinet; the corresponding task sequence requires the agent to perform {pick bottle, place bottle} twice. However, after the agent puts the bottle on the tea table, someone may take the bottle to a table by hand.
At this time, the pose of the bottle, i.e. the target object to be interacted with, has changed, and the task sequence needs to be updated based on the new pose. For the target task, the agent still needs to execute the pick and place operations, but pose3 and pose4 in the two remaining subsequences have changed and need to be updated, so as to obtain a new task sequence.
It can be understood that after the task sequence is changed, the target kinematic model is correspondingly updated, so that a task motion plan of the agent in a new task scene is generated based on the updated task sequence and the updated target kinematic model, and the agent is controlled to execute the target task corresponding to the updated task sequence.
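A minimal sketch of this update step (the data layout and pose labels are assumptions carried over from the bottle example above):

```python
def update_task_sequence(task_sequence, done_index, changed_object, new_poses):
    """Keep already-executed subsequences as they were, and rewrite the pose information of the
    remaining subsequences that involve the externally moved object, using newly observed poses."""
    updated = list(task_sequence[:done_index + 1])
    for i in range(done_index + 1, len(task_sequence)):
        action, obj, pose = task_sequence[i]
        if obj == changed_object and i in new_poses:
            pose = new_poses[i]
        updated.append((action, obj, pose))
    return updated

seq = [("pick", "bottle", "pose1"), ("place", "bottle", "pose2"),
       ("pick", "bottle", "pose3"), ("place", "bottle", "pose4")]
# The bottle was moved by hand after subsequence index 1; update the remaining pick and place poses.
print(update_task_sequence(seq, 1, "bottle", {2: "pose3_new", 3: "pose4_new"}))
```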
According to the task motion planning method of the agent, when it is determined that, while the agent is executing the task sequence corresponding to the target task, the result of an already-completed subsequence has been changed externally, the task sequence can be updated based on the changed pose of the target object to be interacted with. Because the task sequence only contains action information, target interaction object information and the final relative pose between the actuator and the object to be interacted with, and sets no intermediate parameters forcing intermediate poses of the moving mechanism or the actuator, the flexibility of executing the target task is improved.
According to the task motion planning method for the intelligent agent, provided by the embodiment of the application, the execution main body can be the task motion planning device for the intelligent agent. In the embodiment of the application, the task motion planning device of the intelligent agent is taken as an example to execute the task motion planning method of the intelligent agent, and the task motion planning device of the intelligent agent provided by the embodiment of the application is described.
The embodiment of the application also provides a task movement planning device of the intelligent body.
As shown in fig. 4, the task movement planning apparatus of the agent includes a first processing module 410, a second processing module 420, and a third processing module 430.
The first processing module 410 is configured to obtain a target kinematic model between the agent and each object to be interacted in the task scene, where the target kinematic model is constructed based on the kinematic information of the actuator of the agent, the kinematic information of the object to be interacted, and the kinematic information of the moving mechanism corresponding to the agent;
The second processing module 420 is configured to generate a task sequence based on a target task corresponding to the agent, where the task sequence includes at least one subsequence arranged based on a time sequence, and the subsequence includes target interaction object information, action information, and first pose information between the execution mechanism and the object to be interacted;
The third processing module 430 is configured to generate a task motion plan of the agent based on the task sequence and the target kinematic model.
According to the task motion planning device for the agent, complex actions are simplified into a task sequence comprising action information, target interaction object information and pose information. A complex task can be decomposed into a plurality of such subsequences, and the task motion plan for the agent to execute the complex task is generated based on the task sequence and the target kinematic model, so that the agent is controlled to execute the task based on the task motion plan. This effectively copes with task-rich scenarios, ensures the task completion rate of the agent, improves the flexibility of the agent control process, and improves the user experience.
In some embodiments, the third processing module 430 may also be configured to:
And generating a task motion plan of the intelligent agent based on at least two of the degrees of freedom corresponding to the moving mechanism of the intelligent agent, the degrees of freedom corresponding to the executing mechanism of the intelligent agent and the degrees of freedom corresponding to the target object to be interacted, which is included in the first target subsequence in at least one subsequence, and the first target subsequence.
In some embodiments, the third processing module 430 may also be configured to:
Under the condition that the action information included in the first target subsequence is a pick-up action, determining second pose information of the agent when the first target subsequence is executed, based on the first target subsequence, the degrees of freedom corresponding to the moving mechanism of the agent, and the degrees of freedom corresponding to the execution mechanism of the agent;
And generating task motion planning of the intelligent agent based on the second pose information.
In some embodiments, the third processing module 430 may also be configured to:
Under the condition that the action information included in the first target subsequence is a placement action, determining third pose information of the execution mechanism when the first target subsequence is executed, based on the first target subsequence, the degrees of freedom corresponding to the moving mechanism of the agent, the degrees of freedom corresponding to the execution mechanism of the agent, and the degrees of freedom corresponding to the target object to be interacted with;
And generating task motion planning of the intelligent agent based on the third pose information.
In some embodiments, the third processing module 430 may also be configured to:
After the control agent performs the execution of a second target subsequence in at least one subsequence arranged based on time sequence, updating a task sequence and a target kinematic model based on the changed pose under the condition that the pose of a target object to be interacted included in the second target subsequence is changed;
and generating the task motion plan of the intelligent agent based on the updated task sequence and the updated target kinematics model.
The task movement planning device of the agent in the embodiment of the application can be an electronic device, or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, an agent, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not limited in the embodiments of the present application.
The task movement planning device of the agent in the embodiment of the application can be a device with an operating system. The operating system may be an Android operating system, an IOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The task movement planning device for the intelligent agent provided by the embodiment of the application can realize each process realized by the method embodiments of fig. 1 to 3 and fig. 5 to 6, and in order to avoid repetition, the description is omitted.
As shown in fig. 3, the embodiment of the present application further provides an agent, including a sensor, an executing mechanism 310, a moving mechanism 320, and a task motion planning device for an agent according to any of the foregoing embodiments.
The actuator 310 is used for performing operations such as grabbing, pushing and pulling, placing, and the like.
The executing mechanism 310 is disposed on the moving mechanism 320, and the moving mechanism 320 drives the executing mechanism 310 to move.
The device for task movement planning of the intelligent agent is electrically connected with the sensor, the executing mechanism 310 and the moving mechanism 320 respectively.
The device for task movement planning of an agent is used for executing the task movement planning method of the agent according to any embodiment.
According to the agent provided by the embodiment of the application, complex actions are simplified into a task sequence comprising action information, target interaction object information and pose information. A complex task can be decomposed into a plurality of such subsequences, and the task motion plan for the agent to execute the complex task is generated based on the task sequence and the target kinematic model, so that the agent is controlled to execute the task based on the task motion plan. This effectively copes with task-rich scenarios, ensures the task completion rate of the agent, improves the flexibility of the agent control process, and improves the user experience.
In some embodiments, as shown in fig. 8, an electronic device 800 is further provided in the embodiments of the present application, which includes a processor 801, a memory 802, and a computer program stored in the memory 802 and capable of running on the processor 801, where the program when executed by the processor 801 implements the processes of the task motion planning method embodiments of the agent, and the same technical effects can be achieved, and for avoiding repetition, a detailed description is omitted herein.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
As shown in FIG. 7, the application also provides a task movement planning system of the agent, which comprises a task planning module, a movement planning module and a state monitoring module.
In some embodiments, the task planning module disassembles the target task into a sequence of tasks represented by a virtual kinematics chain based on symbolic actions in a planning domain of a planning domain definition language.
For example, the task of opening a cabinet door may be expressed as: Action1: Pick, Object1: Handle, Attach1: the relative pose of the actuator and the door handle, VKC1: from the root coordinate system of the moving mechanism to the coordinate system of the actuator; Action2: Space, Object2: Handle, Attach2: the fixed connection pose of the actuator and the door handle, VKC2: from the root coordinate system of the moving mechanism to the root coordinate system of the cabinet.
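The subsequences above can be pictured with a small Python sketch; the field names (action, obj, attach, vkc), the plan_open_door helper, and the use of "move" as the label of the second symbolic action are illustrative assumptions rather than the notation of the embodiments.

    from dataclasses import dataclass
    from typing import List


    @dataclass
    class SubSequence:
        action: str  # action information: "pick", "place" or "move"
        obj: str     # target interaction object information, e.g. "handle"
        attach: str  # first pose information between the actuator and the object to be interacted with
        vkc: str     # virtual kinematic chain spanned for this sub-task


    def plan_open_door() -> List[SubSequence]:
        """Symbolic decomposition of the door-opening example into a chronological task sequence."""
        return [
            SubSequence(action="pick",
                        obj="handle",
                        attach="relative pose of actuator and door handle",
                        vkc="base root frame -> actuator frame"),
            SubSequence(action="move",
                        obj="handle",
                        attach="fixed grasp pose of actuator and door handle",
                        vkc="base root frame -> cabinet root frame"),
        ]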
After the task sequence is obtained, the motion planning module performs motion planning based on the virtual kinematic chain and, taking into account the execution target of the target task and the agent's own constraints, uses an optimization-based method to obtain the trajectories (i.e., pose information) of all joints of the agent.
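As a rough illustration of the optimization idea (not the solver of the embodiments), the following sketch minimizes a trajectory smoothness cost plus a terminal goal term under joint-limit bounds using scipy; the three-degree-of-freedom model, cost weights, and waypoint count are assumptions.

    import numpy as np
    from scipy.optimize import minimize

    N_STEPS, N_JOINTS = 10, 3                  # waypoints and degrees of freedom (assumed)
    q_start = np.zeros(N_JOINTS)
    q_goal = np.array([1.0, 0.5, -0.3])        # configuration satisfying the execution target


    def cost(x: np.ndarray) -> float:
        """Smoothness cost over the whole trajectory plus a terminal goal term."""
        traj = np.vstack([q_start, x.reshape(N_STEPS, N_JOINTS)])
        smooth = np.sum(np.diff(traj, axis=0) ** 2)      # penalize large joint jumps
        goal = 100.0 * np.sum((traj[-1] - q_goal) ** 2)  # execution target of the task
        return smooth + goal


    x0 = np.tile(q_start, N_STEPS)                       # initial guess: stay at the start pose
    res = minimize(cost, x0, method="L-BFGS-B",
                   bounds=[(-3.14, 3.14)] * (N_STEPS * N_JOINTS))  # joint limits as self constraints
    trajectory = np.vstack([q_start, res.x.reshape(N_STEPS, N_JOINTS)])
    print(trajectory.shape)  # (11, 3): pose information for all joints along the plan

In practice the cost would also encode the first pose information between the actuator and the object to be interacted with, and the bounds would reflect the agent's own constraints along the virtual kinematic chain.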
The state monitoring module is configured to monitor changes in the task environment.
For example, the open or closed state of the cabinet door and the position of the chair correspond to the environment information of the task scene.
Take a target task of closing a cabinet door and then placing a chair at a designated position as an example. After the agent closes the cabinet door, if the cabinet door is manually opened again while the agent is moving the chair, the state monitoring module needs to send the information that the cabinet door has been opened to the task planning module, so that the task planning module can quickly respond to the environmental change, generate a new task planning result, and execute the subsequent procedures.
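A minimal sketch of this monitor-and-replan loop is given below; the interfaces planner.plan, planner.update_models, motion.execute, and monitor.changed_objects are assumed names used only for illustration.

    def execute_with_monitoring(planner, motion, monitor, target_task):
        """Run the task sequence, replanning whenever the state monitor reports an environment change."""
        task_sequence = planner.plan(target_task)
        while task_sequence:
            sub = task_sequence.pop(0)
            motion.execute(sub)                  # move the base and actuator for this subsequence
            changed = monitor.changed_objects()  # e.g. {"cabinet_door": "opened"}
            if changed:
                # the cabinet door was re-opened by hand: rebuild the sequence and kinematic model
                planner.update_models(changed)
                task_sequence = planner.plan(target_task)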
According to the task motion planning system for the agent provided by the embodiment of the application, complex actions are simplified into a task sequence including action information, target interaction object information, and pose information. A complex task can be decomposed into a plurality of subsequences, each including action information, target interaction object information, and pose information, and the task motion plan for the agent to execute the complex task is generated based on the task sequence and the target kinematic model, so that the agent is controlled to execute the task based on the task motion plan. In this way, task-rich scenes are handled effectively, the task completion rate of the agent is ensured, the flexibility of the agent control process is improved, and the user experience is improved.
The embodiment of the application also provides a non-transitory computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, each process of the task motion planning method embodiments of the agent is implemented and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application also provides a computer program product, including a computer program. When executed by a processor, the computer program implements the task motion planning method of the agent described above.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run programs or instructions to implement each process of the task motion planning method embodiments of the agent and to achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-on-chip, a chip system, a system-on-a-chip, or the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in a reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments may be implemented by means of software plus a necessary general hardware platform, and of course may also be implemented by hardware, although in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the related art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. In light of the present application, those of ordinary skill in the art may make many other forms without departing from the spirit of the present application and the scope of the claims, and all such forms fall within the protection of the present application.
In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the spirit and scope of the application as defined by the appended claims and their equivalents.

Claims (10)

1. A task motion planning method for an agent, characterized by comprising:
acquiring a target kinematic model between the agent and each object to be interacted with in a task scene, the target kinematic model being constructed based on actuator kinematic information of the agent, object kinematic information of the objects to be interacted with, and kinematic information of a moving mechanism corresponding to the agent;
generating a task sequence based on a target task corresponding to the agent, the task sequence comprising at least one subsequence arranged in chronological order, the subsequence comprising target interaction object information, action information, and first pose information between the actuator and the object to be interacted with, the target interaction object information being used to characterize a target object to be interacted with among the objects to be interacted with; and
generating a task motion plan of the agent based on the task sequence and the target kinematic model.

2. The task motion planning method for an agent according to claim 1, characterized in that the action information comprises one of: a placing action, a picking action, and a moving action.

3. The task motion planning method for an agent according to claim 1, characterized in that generating the task motion plan of the agent based on the task sequence and the target kinematic model comprises:
generating the task motion plan of the agent based on at least two of the degrees of freedom corresponding to the moving mechanism of the agent, the degrees of freedom corresponding to the actuator of the agent, and the degrees of freedom corresponding to the target object to be interacted with included in a first target subsequence of the at least one subsequence, and based on the first target subsequence.

4. The task motion planning method for an agent according to claim 3, characterized in that generating the task motion plan of the agent based on at least two of the degrees of freedom corresponding to the moving mechanism of the agent, the degrees of freedom corresponding to the actuator of the agent, and the degrees of freedom corresponding to the target object to be interacted with included in the first target subsequence of the at least one subsequence, and based on the first target subsequence, comprises:
in a case where the action information included in the first target subsequence is a picking action, determining second pose information of the agent when executing the first target subsequence based on the first target subsequence, the degrees of freedom corresponding to the moving device of the agent, and the degrees of freedom corresponding to the agent; and
generating the task motion plan of the agent based on the second pose information.

5. The task motion planning method for an agent according to claim 3, characterized in that generating the task motion plan of the agent based on at least two of the degrees of freedom corresponding to the moving mechanism of the agent, the degrees of freedom corresponding to the actuator of the agent, and the degrees of freedom corresponding to the target object to be interacted with included in the first target subsequence of the at least one subsequence, and based on the first target subsequence, comprises:
in a case where the action information included in the first target subsequence is a placing action, determining third pose information of the actuator when executing the first target subsequence based on the first target subsequence, the degrees of freedom corresponding to the moving device of the agent, the degrees of freedom corresponding to the agent, and the degrees of freedom corresponding to the target object to be interacted with; and
generating the task motion plan of the agent based on the third pose information.

6. The task motion planning method for an agent according to any one of claims 1 to 5, characterized in that generating the task motion plan of the agent based on the task sequence and the target kinematic model comprises:
after controlling the agent to finish executing a second target subsequence of the at least one subsequence arranged in chronological order, in a case where the pose of the target object to be interacted with included in the second target subsequence changes, updating the task sequence and the target kinematic model based on the changed pose; and
generating the task motion plan of the agent based on the updated task sequence and the updated target kinematic model.

7. A task motion planning apparatus for an agent, characterized by comprising:
a first processing module, configured to acquire a target kinematic model between the agent and each object to be interacted with in a task scene, the target kinematic model being constructed based on actuator kinematic information of the agent, object kinematic information of the objects to be interacted with, and kinematic information of a moving mechanism corresponding to the agent;
a second processing module, configured to generate a task sequence based on a target task corresponding to the agent, the task sequence comprising at least one subsequence arranged in chronological order, the subsequence comprising target interaction object information, action information, and first pose information between the actuator and the object to be interacted with, the target interaction object information being used to characterize a target object to be interacted with among the objects to be interacted with; and
a third processing module, configured to generate a task motion plan of the agent based on the task sequence and the target kinematic model.

8. An agent, characterized by comprising:
a sensor;
an actuator;
a moving mechanism, the actuator being disposed on the moving mechanism; and
the task motion planning apparatus for an agent according to claim 7, the task motion planning apparatus being electrically connected to the sensor, the actuator, and the moving mechanism, respectively.

9. A non-transitory computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the task motion planning method for an agent according to any one of claims 1 to 6.

10. A computer program product, comprising a computer program, characterized in that the computer program, when executed by a processor, implements the task motion planning method for an agent according to any one of claims 1 to 6.
CN202410924251.2A 2024-07-10 2024-07-10 Task movement planning method and device for intelligent agent and intelligent agent Pending CN119319559A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410924251.2A CN119319559A (en) 2024-07-10 2024-07-10 Task movement planning method and device for intelligent agent and intelligent agent

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410924251.2A CN119319559A (en) 2024-07-10 2024-07-10 Task movement planning method and device for intelligent agent and intelligent agent

Publications (1)

Publication Number Publication Date
CN119319559A true CN119319559A (en) 2025-01-17

Family

ID=94230982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410924251.2A Pending CN119319559A (en) 2024-07-10 2024-07-10 Task movement planning method and device for intelligent agent and intelligent agent

Country Status (1)

Country Link
CN (1) CN119319559A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022085339A1 (en) * 2020-10-19 2022-04-28 オムロン株式会社 Movement planning device, movement planning method, and movement planning program
CN116619374A (en) * 2023-06-02 2023-08-22 北京通用人工智能研究院 Robot control method and device and robot
CN116922403A (en) * 2023-09-19 2023-10-24 上海摩马智能科技有限公司 Visual feedback intelligent track implementation method based on simulation
CN117807317A (en) * 2023-12-29 2024-04-02 Oppo广东移动通信有限公司 Interaction method and device based on intelligent agent

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022085339A1 (en) * 2020-10-19 2022-04-28 オムロン株式会社 Movement planning device, movement planning method, and movement planning program
CN116619374A (en) * 2023-06-02 2023-08-22 北京通用人工智能研究院 Robot control method and device and robot
CN116922403A (en) * 2023-09-19 2023-10-24 上海摩马智能科技有限公司 Visual feedback intelligent track implementation method based on simulation
CN117807317A (en) * 2023-12-29 2024-04-02 Oppo广东移动通信有限公司 Interaction method and device based on intelligent agent

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHITIAN LI: "Dynamic Planning for Sequential Whole-body Mobile Manipulation", ARXIV, 24 May 2024 (2024-05-24), pages 1 - 8 *
GAO Yang: "当代行星机器人学" (Contemporary Planetary Robotics), 30 June 2022, China Astronautic Publishing House, pages 298 - 301 *

Similar Documents

Publication Publication Date Title
Li et al. Survey on mapping human hand motion to robotic hands for teleoperation
Li et al. Okami: Teaching humanoid robots manipulation skills through single video imitation
Delmerico et al. Spatial computing and intuitive interaction: Bringing mixed reality and robotics together
Shen et al. Learning category-level generalizable object manipulation policy via generative adversarial self-imitation learning from demonstrations
CN115686193A (en) A method and system for three-dimensional gesture manipulation of a virtual model in an augmented reality environment
CN114932555A (en) Mechanical arm cooperative operation system and mechanical arm control method
CN118893633B (en) Model training method and device and mechanical arm system
An et al. Dexterous manipulation through imitation learning: A survey
Li et al. Hybrid trajectory replanning-based dynamic obstacle avoidance for physical human-robot interaction
Sun et al. Digital-twin-assisted skill learning for 3C assembly tasks
Jing et al. HumanoidGen: Data Generation for Bimanual Dexterous Manipulation via LLM Reasoning
Ogawara et al. Acquiring hand-action models in task and behavior levels by a learning robot through observing human demonstrations
Li et al. Real-time motion tracking of cognitive Baxter robot based on differential inverse kinematics
CN118893634B (en) Model training method and device and mechanical arm system
CN119319559A (en) Task movement planning method and device for intelligent agent and intelligent agent
Yuan et al. Demograsp: Universal dexterous grasping from a single demonstration
Yu et al. Real-time multitask multihuman–robot interaction based on context awareness
Wan et al. LodeStar: long-horizon dexterity via synthetic data augmentation from human demonstrations
Hübel et al. Codeless, Inclusive, and End-to-End Robotized Manipulations by Leveraging Extended Reality and Digital Twin Technologies
Makris Virtual reality for programming cooperating robots based on human motion mimicking
Liu et al. Manipulating complex robot behavior for autonomous and continuous operations
CN119322515A (en) Control method and device of intelligent agent and intelligent agent
Xiang et al. Vision-Based Non-anthropomorphic Robot Teleoperation Considering Human Arm Configuration
Li et al. Object-Focus Actor for Data-efficient Robot Generalization Dexterous Manipulation
Pang et al. Discrete data-driven position and orientation control for redundant manipulators with Jacobian matrix learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination