
CN119077727A - Robot control method, robot and storage medium - Google Patents

Robot control method, robot and storage medium

Info

Publication number
CN119077727A
Authority
CN
China
Prior art keywords
gate
robot
target gate
target
video information
Prior art date
Legal status
Pending
Application number
CN202411116782.5A
Other languages
Chinese (zh)
Inventor
卢鹰
刘瑜权
Current Assignee
Youdi Robot Wuxi Co ltd
Original Assignee
Youdi Robot Wuxi Co ltd
Priority date
Filing date
Publication date
Application filed by Youdi Robot Wuxi Co ltd filed Critical Youdi Robot Wuxi Co ltd
Priority to CN202411116782.5A
Publication of CN119077727A
Legal status: Pending

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661 Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)

Abstract


The present application discloses a robot control method, a robot and a storage medium, and relates to the field of robot technology. The robot includes a mechanical arm for operating a gate, and the mechanical arm includes an end effector for interacting with the gate. The robot control method is disclosed, including: when the gate is detected on the travel path, obtaining video information of the gate; determining the position of the target gate according to the video information; when the robot moves to the passage of the target gate, controlling the end effector of the mechanical arm to interact with the target gate; responding to the passage instruction of the target gate, controlling the robot to pass the passage of the target gate. The operating status of the gate is determined by real-time detected video stream data, and the target gate that needs to be passed is selected. Based on the target gate, the corresponding control strategy is executed to pass the gate, thereby improving the flexibility of the robot.

Description

Robot control method, robot, and storage medium
Technical Field
The present application relates to the field of robot technology, and in particular, to a control method for a robot, a robot, and a storage medium.
Background
With the continued development of automation and robotics, robots are becoming increasingly popular in a variety of fields including, but not limited to, manufacturing, logistics, service, security monitoring, and the like. In these applications, robots are required to pass through various types of gates, such as revolving doors, sliding doors, security gates, etc., to achieve automatic access and monitoring of a specific area.
However, when facing gates of different types or in different positions, the robot lacks autonomous recognition and adaptation capability and often cannot perform the passing action autonomously and accurately. Current robots therefore suffer from low flexibility.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present application and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The application mainly aims to provide a control method of a robot, the robot and a storage medium, and aims to solve the technical problem of low flexibility of the robot.
In order to achieve the above object, the present application provides a control method of a robot, the method comprising:
when the existence of the gate on the travelling path is detected, acquiring video information of the gate;
determining the position of a target gate according to the video information;
When the robot moves to the passage port of the target gate, controlling the end effector of the mechanical arm to interact with the target gate;
and responding to the passing instruction of the target gate, and controlling the robot to pass through the channel of the target gate.
In one embodiment, the determining the position of the target gate according to the video information includes:
If a gate exists on the travelling path, taking the gate in the video information as the target gate;
If a plurality of gates exist on the travelling path, determining current traffic information and use states of the gates according to the video information;
and taking, from among the plurality of gates and according to the current traffic information and the use state, the usable gate with the least traffic as the target gate.
In an embodiment, the controlling the end effector of the mechanical arm to interact with the target gate when the robot moves to the passage port of the target gate includes:
When the robot moves to the passage port of the target gate, determining the passing control type and the passing control area of the target gate according to the video information;
Determining a moving path of the mechanical arm according to the passing control area, and determining an operation sequence of the mechanical arm according to the passing control type;
and controlling the mechanical arm to move to the traffic control area according to the moving path, and selecting a target end effector to interact with the traffic control area based on the traffic control type.
In an embodiment, the determining the traffic control type and the traffic control area of the target gate according to the video information includes:
And inputting the video information into a pre-trained machine learning model to obtain the traffic control type and the traffic control area of the target gate.
In an embodiment, the acquiring the video information of the gate when the gate is detected to exist on the travel path includes:
Starting a vision system of the robot when the gate is detected to exist on the travelling path;
And when the gate is in the detection view of the vision system, scanning the gate based on the vision system to obtain the video information.
In an embodiment, after the controlling the robot to pass through the channel of the target gate in response to the pass instruction of the target gate, the method further includes:
generating traffic operation data of the robot;
Updating the traffic operation data to a machine learning model to optimize the processing actions of the robot when a gate is detected.
In an embodiment, after the robot moves to the passage port of the target gate and the end effector of the mechanical arm is controlled to interact with the target gate, the method further includes:
in a preset period, when the target gate is in a closed state and/or the robot does not receive the passing instruction, performing negative-sample training on the machine learning model based on passing operation data of interaction between the end effector and the target gate;
and executing the step of acquiring the video information of the gate and the subsequent steps.
In an embodiment, after the robot moves to the passage port of the target gate and the end effector of the mechanical arm is controlled to interact with the target gate, the method further includes:
Determining the opening and closing state of the target gate according to the communication sensor of the robot;
When the target gate is detected to be in an open state, controlling the robot to pass through the target gate;
and feeding back operation result data of the mechanical arm to a control system of the robot so as to enable the control system to optimize control logic and decision algorithm of the mechanical arm.
In addition, in order to achieve the above object, the present application also proposes a robot comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program being configured to implement the steps of the method of controlling a robot as described above.
In addition, in order to achieve the above object, the present application also proposes a storage medium, which is a computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the steps of the method for controlling a robot as described above.
One or more technical schemes provided by the application have at least the following technical effects:
Because an end effector capable of interacting with the gate is added to the mechanical arm of the robot, when a gate exists on the travelling path the real-time video data stream is used to accurately identify the open/closed state, the use state, the operation interface and the like of the gate, and the target gate that needs to be passed is selected accordingly; the end effector is then controlled to interact with the target gate according to its type, and the gate is passed. Based on this, the optimal control strategy of the robot is obtained by analyzing the image data of the gate, and the flexibility of the robot is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a flow chart of a first embodiment of a control method of a robot according to the present application;
FIG. 2 is a flow chart of a second embodiment of the control method of the robot according to the present application;
FIG. 3 is a flow chart of a third embodiment of the control method of the robot according to the present application;
Fig. 4 is a schematic device structure diagram of a hardware operating environment related to a control method of a robot in an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the technical solution of the present application and are not intended to limit the present application.
For a better understanding of the technical solution of the present application, the following detailed description will be given with reference to the drawings and the specific embodiments.
The main solution of the embodiment of the application is that when detecting that the gate exists on the travelling path of the robot, the optimal target gate is selected to enter based on the video information of the gate, and when reaching the channel port, a control strategy is selected based on the gate information in the video information, and the end effector is controlled to interact with the target gate, so that the response mode of the robot after touching the gate when executing the task is improved, and the flexibility of the robot is improved.
In this embodiment, for convenience of description, the following description will be made with the robot as an execution body.
When a robot in the prior art executes tasks, if gates of different types appear on the travelling route, it cannot perform the passing action autonomously and accurately because it lacks autonomous identification and adaptation capability, resulting in low flexibility and low task execution efficiency.
The application provides a solution: when the robot detects that a gate exists on the travelling path, video information of the gate is acquired; the gate that needs to be passed and the way of interacting with it are then selected based on the video information; after the robot reaches the passage port of that gate, the end effector of the mechanical arm is controlled to interact with the gate; the passing instruction of the gate is then responded to so that the subsequent task can be carried out through the gate passage, thereby improving the flexibility of the robot when executing tasks or travelling.
Based on the above, the embodiment of the application provides a control method of a robot, wherein the robot comprises a mechanical arm for operating a gate, and the mechanical arm comprises an end effector for interacting with the gate. The end effector is a device or tool mounted at the end of the robot arm for interaction with the environment. In this embodiment, the end effector is a highly versatile device capable of performing a variety of operations, such as pressing a button, sliding a touch screen, scanning a card, etc.; that is, the end effector includes at least a plunger or lever arm for pressing the pass button of a gate, a smooth touch head for sliding a touch screen, and a storage compartment for storing pass cards.
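For illustration only, the correspondence between a gate's verification method and the end-effector tooling described above could be captured as in the following minimal Python sketch; the enumeration values and dictionary keys are assumptions made for this sketch and are not part of the disclosure.

```python
from enum import Enum, auto

class EndEffectorTool(Enum):
    """Tooling carried by the end effector, as described above."""
    PLUNGER = auto()      # plunger / lever arm that presses a gate's pass button
    TOUCH_HEAD = auto()   # smooth touch head that slides a touch screen
    CARD_READER = auto()  # compartment that presents a stored pass card

# Hypothetical mapping from a gate's verification method to the tool used.
TOOL_FOR_CONTROL_TYPE = {
    "button": EndEffectorTool.PLUNGER,
    "touch_screen": EndEffectorTool.TOUCH_HEAD,
    "card": EndEffectorTool.CARD_READER,
}

if __name__ == "__main__":
    print(TOOL_FOR_CONTROL_TYPE["card"])   # EndEffectorTool.CARD_READER
```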
Referring to fig. 1, fig. 1 is a flowchart illustrating a control method of a robot according to a first embodiment of the present application.
In this embodiment, the control method of the robot includes steps S10 to S40:
Step S10, when the existence of the gate on the travelling path is detected, video information of the gate is acquired.
It should be noted that a gate generally refers to a device for controlling the passage of people or vehicles, which may limit or allow passage by lifting a barrier, rotating a revolving door, or other means. The structure and function of gates may vary from place to place, but the main purpose is to ensure orderly and safe traffic management; gates are mainly installed for the control and management of people flow or traffic flow, safety control, flow management, order maintenance, data collection, space utilization optimization, and the like. The video information refers to a series of continuous images, collected by the robot, of the region where the gate is located, and generally covers the traffic type, traffic area, detection mode, detection area, and the like of the gate.
In this embodiment, when the robot performs a task such as a delivery task, a guiding task, a patrol task, a cargo transportation task in a large warehouse, or another navigation and control task, the robot needs to move from a starting point to a destination, and a gate blocking the way is usually present on the route. The robot can therefore detect whether a gate exists on the travel path by means of magnetic field induction, radio frequency identification (RFID), infrared sensing, video recognition, or the like. To reduce the energy consumption of the robot, the detection range may be set according to practical requirements, for example detecting whether a gate appears within 30 meters of the robot. It can be appreciated that training of the deep learning model used by the robot for these navigation and control tasks can be completed before the robot is put into use, so that gates of different brands and models can be identified; based on the deep learning model, the accuracy and effectiveness of gate identification are improved.
The video information of the gate can be obtained by the vision system of the robot: when the existence of the gate on the travelling path is detected, the vision system of the robot is started, and then, as the robot gradually approaches the gate, that is, once the gate is within the detection view of the vision system, the gate is scanned by the vision system to obtain the video information. Acquiring the video information of the gate allows the robot to determine the type of the gate and the optimal operation strategy for passing it based on the image information, improving the flexibility of the robot. It can be understood that the video information may include information of one gate or of a plurality of gates.
For example, when the robot executes a navigation task and detects gates on the travelling path by means of radio frequency identification and infrared sensing, a gate detected 30 meters ahead during travel causes the robot to start its vision system, and the robot then captures video information of the gate once it is within 15 meters of it.
It should be noted that the above parameters are merely for explanation, and are not meant as limitations of the present application.
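Purely as an illustration of the detection-then-capture flow described above, the following sketch uses the 30-meter and 15-meter example values from this embodiment; the range_sensor and camera interfaces are assumed placeholder drivers, not real APIs.

```python
import time

DETECTION_RANGE_M = 30.0   # gate detection range from the example above
CAPTURE_RANGE_M = 15.0     # distance at which video capture starts

def acquire_gate_video(range_sensor, camera, poll_s=0.2):
    """Return a short video clip of the gate, or None if no gate is detected.

    `range_sensor` and `camera` are assumed driver objects exposing
    distance_to_gate(), power_on() and record_clip(); placeholders only.
    """
    distance = range_sensor.distance_to_gate()           # e.g. RFID / infrared reading
    if distance is None or distance > DETECTION_RANGE_M:
        return None                                      # no gate on the travel path
    camera.power_on()                                    # start the vision system
    while range_sensor.distance_to_gate() > CAPTURE_RANGE_M:
        time.sleep(poll_s)                               # keep approaching the gate
    return camera.record_clip(seconds=3.0)               # video information of the gate
```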
Step S20, determining the position of the target gate according to the video information.
It should be noted that the target gate is the gate that the robot needs to pass through. The number of gates is usually set according to the foot traffic of the place: high-traffic places have at least two gates, while low-traffic places have one. Further, in some places different gates lead to different areas, for example to elevators serving different floors, and the waiting time at different gates also differs. Therefore, after the video information of the gates is obtained, the target gate to pass through needs to be determined from the real-time video information, so as to avoid prolonging the task and lowering task execution efficiency by choosing the wrong gate or queueing at a gate with heavier traffic. The position of the target gate is thus determined from the video information so that the robot can move to the corresponding passage port of the target gate in advance, improving the passing efficiency of the robot.
In this embodiment, the real-time decision engine in the control system of the robot can automatically select a target gate meeting the requirements according to the environmental variables in the video information, such as traffic flow and gate state, so as to avoid selecting gates crowded with people during peak periods. Specifically, when one gate exists on the travelling path, the gate in the video information is used as the target gate; if a plurality of gates exist on the travelling path, the current traffic information and the use state of the plurality of gates can be determined according to the video information, where the current traffic information refers to the flow of people passing through and the use state refers to whether the gate can be passed; then, according to the current traffic information and the use state, the usable gate with the least traffic among the plurality of gates is used as the target gate.
In addition, when there are a plurality of gates on the travel path, a gate may also be selected according to the task executed by the robot and the preset path. For example, if the gate designated for passage in the preset path is gate No. 1, the robot determines the position of gate No. 1 through the video information and uses it as the target gate for passage. Alternatively, when the robot performs a highly repetitive task at the current position, for example the current task is to deliver article A to Room 1 on the 15th floor while the previous task was to deliver article B to Room 3 on the 15th floor, and the historical traffic data show that gate No. 2 was the target gate used for the previous task, gate No. 2 in the video information can be used as the target gate. By determining the position of the target gate, the problem of low task execution efficiency caused by the robot selecting a gate that cannot be passed, a gate with a long waiting time, or a gate whose passage does not lead to the task area is avoided. Based on the above, the flexibility of the robot in passing when a plurality of gates exist on the current travelling path is improved.
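The gate-selection logic of this embodiment can be illustrated by the following sketch, which assumes the perception stack summarizes each gate as a small record with an identifier, a usability flag, and a queue length; these field names are assumptions for the sketch only.

```python
def select_target_gate(gates, preferred_id=None):
    """Pick the target gate from the gates recognised in the video information."""
    if preferred_id is not None:                          # gate fixed by the preset path
        for gate in gates:
            if gate["id"] == preferred_id and gate["usable"]:
                return gate
    usable = [g for g in gates if g["usable"]]            # exclude gates that cannot be passed
    if not usable:
        return None                                       # nothing passable: rescan or wait
    if len(usable) == 1:
        return usable[0]                                  # single gate becomes the target
    return min(usable, key=lambda g: g["queue_length"])   # least-traffic gate among several

# Example: gate 2 wins because gate 1 is busier and gate 3 is out of service.
gates = [
    {"id": 1, "usable": True, "queue_length": 4},
    {"id": 2, "usable": True, "queue_length": 1},
    {"id": 3, "usable": False, "queue_length": 0},
]
assert select_target_gate(gates)["id"] == 2
```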
Step S30, when the robot moves to the passage port of the target gate, controlling the end effector of the mechanical arm to interact with the target gate.
In this embodiment, after the robot selects the target gate and moves to its channel port, the control system of the robot can, based on the information about the target gate in the video information, dynamically plan the motion path of the mechanical arm and the corresponding operation sequence, so that by moving the mechanical arm the end effector performs the relevant interaction in the verification area of the target gate. The motion path is the trajectory along which the robot drives the mechanical arm by means of its servo motors; the operation sequence is the selected operation mode, for example touch-screen operation or use of a card reader, with the end effector placed in the corresponding execution state.
For example, the robot controls the mechanical arm through high-precision servo motors to place the end effector at the card-scanning position and execute the card-scanning action, thereby completing the passing verification of the target gate. In addition, if the target gate only requires pressing a pass button, the robot can control the mechanical arm to press the button with the lever device of the end effector; when a touch screen needs to be swiped, the swipe is performed with the smooth touch head of the end effector.
Accurate control information for the robot is generated from the video information, so that when passing through the gate the robot can complete complex gate operations independently, adapt to various operating environments, and reduce manual intervention. During the control process, the precisely controllable movement of the mechanical arm, combined with the intelligent decision system, ensures operational accuracy and friendliness to the equipment, and improves the safety of both the gate and the robot during passage.
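A minimal sketch of the interaction dispatch described above is given below; the arm driver methods (move_to, press, swipe, present_card) are hypothetical names chosen for the sketch, not an interface defined by the application.

```python
def interact_with_gate(arm, control_type, control_area_pose):
    """Move the arm to the traffic control area and run the matching operation."""
    arm.move_to(control_area_pose)             # servo-driven motion to the control area
    if control_type == "button":
        arm.press()                            # plunger / lever arm presses the pass button
    elif control_type == "touch_screen":
        arm.swipe(direction="right")           # smooth touch head slides the screen
    elif control_type == "card":
        arm.present_card()                     # hold the stored pass card at the reader
    else:
        raise ValueError(f"unknown traffic control type: {control_type}")
```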
Step S40, responding to the passing instruction of the target gate, and controlling the robot to pass through the channel of the target gate.
As an alternative embodiment, the passing instruction of the target gate may be a voice prompt announcing that the gate is opening, a passable indication in the status display area of the gate, a gate-opening command broadcast to the robot by the gate via RFID (radio frequency identification), or the like. After the end effector of the robot's mechanical arm interacts with the target gate, the state of the target gate can be captured in real time by the vision system, while the passing parameters of the target gate are detected in real time by sensors. The robot then confirms the gate's response through its sensors and passes through the channel of the target gate after receiving the gate-opening signal. The robot can further confirm the open state of the gate through the captured video information.
In another alternative embodiment, in addition to passing through the target gate in response to its passing instruction, the opening/closing state of the target gate may be determined by the robot's communication sensors, for example by detecting with an infrared sensor whether an obstacle (i.e., the gate barrier) exists in front of the robot; when the target gate is detected to be in the open state, the robot is controlled to pass through it. After passing through the target gate, the operation result data of the mechanical arm can be fed back to the control system of the robot, so that the control system optimizes the control logic and decision algorithm of the mechanical arm.
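The confirmation step described in the two embodiments above can be illustrated as a simple polling loop; the gate_sensor interface that merges the RFID, status-display and infrared cues is an assumption of the sketch.

```python
import time

def wait_for_pass_permission(gate_sensor, timeout_s=10.0, poll_s=0.2):
    """Poll the robot's sensors until the target gate may be passed.

    Returns True once the gate reports open / pass granted, False if the
    timeout elapses; the sensor method names are placeholders.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if gate_sensor.pass_instruction_received() or gate_sensor.barrier_is_open():
            return True
        time.sleep(poll_s)
    return False
```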
In addition, the robot is further provided with a machine learning model; the operation of the robot is optimized and adjusted through this machine learning model, and the current operation of the robot is evaluated according to the response of the target gate to the interaction, so as to improve the accuracy and efficiency of subsequent operations. For example, when the robot controls the mechanical arm to move to the target gate and perform the interaction, if the interaction time of the whole process exceeds 20 seconds while the normal interaction time is usually about 5-10 seconds, the movement path or the control strategy of the mechanical arm can be optimized through the machine learning model, thereby improving the subsequent decision algorithm and mechanical-arm control logic. By continuously optimizing the control logic of the robot, the flexibility of the robot is improved.
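As an illustration of the evaluation described above, the following sketch labels a single interaction using the 5-10 second and 20 second figures quoted in this embodiment; the label names are hypothetical.

```python
NORMAL_INTERACTION_S = (5.0, 10.0)   # typical interaction time quoted above
SLOW_INTERACTION_S = 20.0            # duration that triggers re-optimization

def label_interaction(duration_s, succeeded):
    """Label one gate interaction for the learning pipeline (labels are illustrative)."""
    if not succeeded:
        return "negative_sample"         # candidate for negative-sample training
    if duration_s > SLOW_INTERACTION_S:
        return "optimize_arm_path"       # movement path / control strategy should be re-planned
    if NORMAL_INTERACTION_S[0] <= duration_s <= NORMAL_INTERACTION_S[1]:
        return "positive_sample"         # normal run, reinforce the current strategy
    return "review"                      # unusually fast or mildly slow run

assert label_interaction(23.0, succeeded=True) == "optimize_arm_path"
```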
This embodiment provides a control method of a robot. When a sensor detects that a gate exists on the navigation travelling path, the vision system is started and video information of the gate is acquired. Based on the video information, the target gate to be passed is selected from the single gate or multiple gates, improving passing efficiency. Based on a flexible control strategy, the mechanical arm and the end effector are controlled to interact with the target gate, improving interaction accuracy. After the interaction is finished, the state of the target gate is detected in real time, the passing instruction of the target gate is responded to, and the robot passes through the channel of the target gate. Finally, the control strategy of the robot is optimized based on the state of the target gate, further improving the flexibility of the robot when it needs to pass through gates.
In the second embodiment of the present application, the same or similar content as that of the first embodiment may be referred to the description above, and will not be repeated. On this basis, referring to fig. 2, step S30 further includes steps S31 to S33:
Step S31, when the robot moves to the passage port of the target gate, determining the traffic control type and the traffic control area of the target gate according to the video information.
In this embodiment, the traffic control type refers to the type of control required to open the target gate, for example pressing a gate-opening button, sliding a touch screen, scanning an identity card, verifying an RFID tag, or voice interaction. The traffic control area refers to the area the robot must act on when executing the interaction; for example, if the traffic control type of the target gate is identity-card verification, the corresponding traffic control area is the card-reader area of the target gate, and if the traffic control type is button-switch control, the corresponding traffic control area is the area of the touch button.
As an alternative implementation, the video information can be analyzed through a machine learning model: the video information is input into a pre-trained machine learning model, which analyzes it to obtain the type and model parameters of the target gate, and the corresponding traffic control type and traffic control area are determined based on those parameters. The information storage module of the robot can periodically update information on gates in use on the market, including the height and size of their traffic control areas. Therefore, after the robot moves to the passage port, the control type and traffic control area required by the target gate are obtained by analyzing the video information, so that the robot can generate the corresponding path for moving the mechanical arm and select the corresponding operation sequence for passing verification, improving the flexibility of the robot in autonomously passing through gates.
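A minimal sketch of this inference step is shown below; the model object and its output format (a control-type label plus a rough region for the control area) are assumptions for the sketch, since the application does not prescribe a particular network or output encoding.

```python
def infer_gate_controls(model, frames):
    """Run the pre-trained model on video frames to get the control type and area."""
    prediction = model.predict(frames)       # single forward pass over the clip
    control_type = prediction["type"]        # e.g. "button", "touch_screen", "card"
    control_area = prediction["region"]      # e.g. (x, y, z, width, height) in meters
    return control_type, control_area
```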
Step S32, determining a moving path of the mechanical arm according to the traffic control area, and determining an operation sequence of the mechanical arm according to the traffic control type.
In this embodiment, after the traffic control area is obtained, the movement path of the mechanical arm is determined based on the height, size, and the like of that area, so that the end effector of the mechanical arm can interact with the target gate. Meanwhile, after the traffic control type is obtained, the verification mode, namely the operation sequence executed by the mechanical arm, can be determined based on that type; for example, the press lever is selected to execute the operation sequence of pressing the pass button in the traffic control area.
Step S33, controlling the mechanical arm to move to the traffic control area according to the moving path, and selecting a target end effector to interact with the traffic control area based on the traffic control type.
In this embodiment, after the movement path is obtained, the mechanical arm is controlled to move based on the movement path, and meanwhile, a suitable end effector is selected to interact based on the control type, so that the interaction efficiency of the robot and the target gate is improved.
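The three steps S31 to S33 can be summarized in one sketch, shown below; every attribute assumed on the robot object (planner, skills, arm) is a placeholder name, not part of the disclosure.

```python
def execute_pass_verification(robot, model, frames):
    """Sketch of steps S31-S33 under the stated assumptions.

    S31: infer the traffic control type and area from the video information.
    S32: derive the arm's moving path from the area and the operation
         sequence from the type.
    S33: move the arm along the path and run the sequence with the chosen
         target end effector.
    """
    prediction = model.predict(frames)                    # S31
    control_type = prediction["type"]
    control_area = prediction["region"]

    path = robot.planner.plan_to(control_area)            # S32: moving path
    sequence = robot.skills.sequence_for(control_type)    # S32: operation sequence

    robot.arm.follow(path)                                # S33: move to the control area
    for action in sequence:                               # S33: interact via the end effector
        robot.arm.execute(action)
```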
The embodiment provides a control method of a robot, which can reduce the possibility of manual intervention and misoperation by acquiring a traffic control type and a traffic control area through video information. Meanwhile, the moving path and the operation sequence of the mechanical arm are determined according to the traffic control type and the traffic control area, so that the robot can be more flexibly adapted to different traffic control requirements. By selecting the target end effector to interact with the traffic control area, the adaptability and universality of the robot can be improved, so that the robot can cope with various different types of traffic control equipment. Based on the method, the efficiency and the accuracy of the robot in the interaction process with the target gate are improved, the requirement of manual intervention is reduced, and the degree of automation and the flexibility of operation of the system are improved.
In the third embodiment of the present application, the same or similar contents as those of the first embodiment can be referred to the description above, and the description is omitted. On this basis, referring to fig. 3, step S40 further includes steps S50 to S60:
Step S50, generating traffic operation data of the robot;
Step S60, updating the traffic operation data to a machine learning model so as to optimize the processing action of the robot when the gate is detected.
In this embodiment, once the robot has passed through the target gate, it has completed the passage autonomously. At this point, traffic operation data of the robot for the entire passing process can be generated, including the data controlling the movement of the mechanical arm, the data of the selected operation sequence, the time data of the process, and the like. These data are then updated into the machine learning model, and feedback learning improves the operating efficiency and success rate of the robot, thereby reducing the possibility of failure.
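A sketch of how such traffic operation data could be recorded for later model updates is given below; the JSON-lines format and field names are illustrative assumptions only.

```python
import json
import time

def record_pass_operation(arm_trajectory, operation_sequence, duration_s,
                          log_path="pass_operations.jsonl"):
    """Append one passage's operation data for later model updates."""
    record = {
        "timestamp": time.time(),
        "arm_trajectory": arm_trajectory,          # data controlling arm movement
        "operation_sequence": operation_sequence,  # the selected operation sequence
        "duration_s": duration_s,                  # time data for the process
        "label": "positive",                       # successful autonomous passage
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```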
Further, in addition to positive-feedback optimization, negative-sample learning can be performed based on the operation data of failed passes, so as to improve the effectiveness and efficiency of subsequent operations. Based on this, in another alternative embodiment, after step S30, the method further comprises:
in a preset period, when the target gate is in a closed state and/or the robot does not receive the passing instruction, performing negative-sample training on the machine learning model based on the passing operation data of the interaction between the end effector and the target gate, and then executing the step of acquiring the video information of the gate and the subsequent steps, so that the robot interacts with the target gate again or selects a new target gate for interaction. It can be understood that when the pass fails, either the interaction currently executed by the robot with the target gate was an invalid action, or a gate that cannot be passed was identified as the target gate. Based on this, the passing operation data from that interaction needs to be used as a negative-sample training set, so that the same erroneous processing action is avoided the next time the same scene appears, and the flexibility of the robot is improved.
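The negative-sample branch described above might be organized as in the following sketch; the gate states and the model.update() call are placeholders rather than a real training interface.

```python
def handle_pass_timeout(model, pass_record, gate_state):
    """Decide what to do once the preset period after the interaction has elapsed.

    If the target gate is still closed or no passing instruction arrived, the
    stored operation data is labelled as a negative sample and used to update
    the model, and the caller is told to re-run video acquisition.
    """
    if gate_state in ("open", "pass_granted"):
        return "proceed"                     # the gate responded; pass through normally
    pass_record["label"] = "negative"        # the interaction was an invalid action
    model.update([pass_record])              # negative-sample training step
    return "reacquire_video"                 # repeat the video-acquisition step and onwards
```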
This embodiment provides a control method of a robot in which, after the robot successfully passes through a target gate, or fails to do so, the actual operation data are collected and used as positive-sample or negative-sample data to train the machine learning model, so as to improve the robot's response and operation when encountering gates and to improve the robot's flexibility, the effectiveness of the control process, and the accuracy of decisions when passing through gates.
The application provides a robot which comprises at least one processor and a memory in communication connection with the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the control method of the robot in any embodiment.
Referring now to fig. 4, a schematic diagram of a robot suitable for use in implementing embodiments of the present application is shown. The robot shown in fig. 4 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiment of the present application.
As shown in fig. 4, the robot may include a processing device 1001 (e.g., a central processing unit, a graphics processor, etc.), which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage device 1003 into a random access memory (RAM) 1004. Various programs and data required for robot operation are also stored in the RAM 1004. The processing device 1001, the ROM 1002, and the RAM 1004 are connected to each other by a bus 1005. An input/output (I/O) interface 1006 is also connected to the bus. In general, the following may be connected to the I/O interface 1006: an input device 1007 such as a touch screen, a touch pad, a keyboard, a mouse, an image sensor, a microphone, an accelerometer, or a gyroscope; an output device 1008 including a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 1003 including a magnetic tape, a hard disk, etc.; and a communication device 1009. The communication device 1009 may allow the robot to communicate with other devices wirelessly or by wire to exchange data. Although the figure shows a robot having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through a communication device, or installed from the storage device 1003, or installed from the ROM 1002. The above-described functions defined in the method of the disclosed embodiment of the application are performed when the computer program is executed by the processing device 1001.
The robot provided by the application adopts the control method of the robot in the embodiment, and can solve the technical problem of low flexibility of the robot. Compared with the prior art, the beneficial effects of the robot provided by the application are the same as those of the control method of the robot provided by the embodiment, and other technical features of the robot are the same as those disclosed by the method of the previous embodiment, and are not repeated here.
It is to be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the description of the above embodiments, particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
The present application provides a computer-readable storage medium having computer-readable program instructions (i.e., a computer program) stored thereon for performing the control method of the robot in the above-described embodiments.
The computer-readable storage medium provided by the present application may be, for example, a USB flash drive, but is not limited thereto; it may be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system or device, or any combination of the foregoing. More specific examples of a computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this embodiment, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system or device. Program code embodied on a computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to electrical wiring, optical fiber cable, radio frequency (RF), or any suitable combination of the foregoing.
The computer readable storage medium may be included in the robot or may exist alone without being assembled into the robot.
The computer-readable storage medium carries one or more programs that, when executed by a robot, cause the robot to:
when the existence of the gate on the travelling path is detected, acquiring video information of the gate;
determining the position of a target gate according to the video information;
When the robot moves to the passage port of the target gate, controlling the end effector of the mechanical arm to interact with the target gate;
and responding to the passing instruction of the target gate, and controlling the robot to pass through the channel of the target gate.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present application may be implemented in software or in hardware. In some cases, the name of a module does not constitute a limitation of the module itself.
The readable storage medium provided by the application is a computer readable storage medium, and the computer readable storage medium stores computer readable program instructions (namely computer programs) for executing the control method of the robot, so that the technical problem of low flexibility of the robot can be solved. Compared with the prior art, the beneficial effects of the computer readable storage medium provided by the application are the same as those of the control method of the robot provided by the above embodiment, and are not described in detail herein.
The foregoing description is only a partial embodiment of the present application, and is not intended to limit the scope of the present application, and all the equivalent structural changes made by the description and the accompanying drawings under the technical concept of the present application, or the direct/indirect application in other related technical fields are included in the scope of the present application.

Claims (10)

1. A control method for a robot, characterized in that the robot comprises a mechanical arm for operating a gate, the mechanical arm comprises an end effector for interacting with the gate, and the method comprises: when detecting that the gate exists on the travel path, obtaining video information of the gate; determining the position of the target gate according to the video information; when the robot moves to the passageway of the target gate, controlling the end effector of the mechanical arm to interact with the target gate; and, in response to the passage instruction of the target gate, controlling the robot to pass through the passage of the target gate.

2. The method according to claim 1, characterized in that determining the position of the target gate according to the video information comprises: if there is one gate on the travel path, using the gate in the video information as the target gate; if there are multiple gates on the travel path, determining the current passage information and usage status of the multiple gates according to the video information; and, according to the current passage information and the usage status, using the gate with the least number of passers-by among the multiple gates as the target gate.

3. The method according to claim 1, characterized in that, when the robot moves to the passageway of the target gate, controlling the end effector of the mechanical arm to interact with the target gate comprises: when the robot moves to the passageway of the target gate, determining the access control type and access control area of the target gate according to the video information; determining a moving path of the mechanical arm according to the access control area, and determining an operation sequence of the mechanical arm according to the access control type; and controlling the mechanical arm to move to the access control area according to the moving path, and selecting a target end effector based on the access control type to interact with the access control area.

4. The method according to claim 3, characterized in that determining the access control type and access control area of the target gate according to the video information comprises: inputting the video information into a pre-trained machine learning model to obtain the access control type and the access control area of the target gate.

5. The method according to claim 1, characterized in that, when the gate is detected to exist on the travel path, obtaining the video information of the gate comprises: when the gate is detected on the travel path, activating the vision system of the robot; and, when the gate is in the detection field of view of the vision system, scanning the gate based on the vision system to obtain the video information.

6. The method according to claim 1, characterized in that, after responding to the passage instruction of the target gate and controlling the robot to pass through the passage of the target gate, the method further comprises: generating passage operation data of the robot; and updating the passage operation data to a machine learning model to optimize the processing action of the robot when a gate is detected.

7. The method according to claim 6, characterized in that, after controlling the end effector of the mechanical arm to interact with the target gate when the robot moves to the passageway of the target gate, the method further comprises: within a preset period, when the target gate is in a closed state and/or the robot does not receive the passage instruction, performing negative-sample training on the machine learning model based on the passage operation data of the interaction between the end effector and the target gate; and executing the step of obtaining the video information of the gate and subsequent steps.

8. The method according to claim 1, characterized in that, after controlling the end effector of the mechanical arm to interact with the target gate when the robot moves to the passageway of the target gate, the method further comprises: determining the opening and closing state of the target gate according to the communication sensor of the robot; when it is detected that the target gate is in an open state, controlling the robot to pass through the target gate; and feeding back the operation result data of the mechanical arm to the control system of the robot so that the control system optimizes the control logic and decision-making algorithm of the mechanical arm.

9. A robot, characterized in that the robot comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program is configured to implement the steps of the robot control method according to any one of claims 1 to 8.

10. A storage medium, characterized in that the storage medium is a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the robot control method according to any one of claims 1 to 8 are implemented.
Application CN202411116782.5A, filed 2024-08-14 (priority date 2024-08-14): Robot control method, robot and storage medium. Status: Pending. Publication: CN119077727A (en).

Priority Applications (1)

Application Number: CN202411116782.5A; Priority Date: 2024-08-14; Filing Date: 2024-08-14; Title: Robot control method, robot and storage medium; Publication: CN119077727A (en)

Publications (1)

Publication Number: CN119077727A; Publication Date: 2024-12-06

Family

ID=93700094

Country Status (1)

Country: CN; Link: CN (1) CN119077727A (en)

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination