
CN111736607A - Robot motion guiding method and system based on foot motion and terminal - Google Patents

Robot motion guiding method and system based on foot motion and terminal

Info

Publication number
CN111736607A
CN111736607A
Authority
CN
China
Prior art keywords
foot
motion
detected
position information
robot
Prior art date
Legal status
Granted
Application number
CN202010600315.5A
Other languages
Chinese (zh)
Other versions
CN111736607B (en)
Inventor
韩磊 (Han Lei)
Current Assignee
Shanghai Black Eye Intelligent Technology Co ltd
Original Assignee
Shanghai Black Eye Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Black Eye Intelligent Technology Co ltd filed Critical Shanghai Black Eye Intelligent Technology Co ltd
Priority to CN202010600315.5A priority Critical patent/CN111736607B/en
Publication of CN111736607A publication Critical patent/CN111736607A/en
Application granted granted Critical
Publication of CN111736607B publication Critical patent/CN111736607B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a robot motion guiding method, system, and terminal based on foot motion, comprising the following steps: acquiring and recording an image of the foot to be detected; obtaining foot position information; outputting position information in a world coordinate system; obtaining, with this position information as a constraint condition, the detection frames determined to belong to the same foot to be detected; when the motion of the detection frame exceeds a threshold range, recording the position information of the detection frames; obtaining the motion trajectory of the foot to be detected; and obtaining, through a classifier, a motion instruction for guiding the robot, which is fed back to the robot. By guiding the robot to a user-designated position through the motion trajectory of a foot action, the guidance becomes more convenient and rapid, and the user experience is greatly improved.

Description

Robot motion guiding method and system based on foot motion and terminal
Technical Field
The invention relates to the field of artificial intelligence, and in particular to a robot motion guiding method, system, and terminal based on foot motion.
Background
With the improvement of quality of life, robots are in wide use, but most of them move mainly by avoiding obstacles. If a user wants to guide a robot to a designated position, the robot must be operated through a remote control device, which wastes a great deal of time and energy; moreover, the remote control device requires maintenance and is prone to failure, in which case the guidance cannot be performed at all, so the user experience is poor.
Disclosure of Invention
In view of the above disadvantages of the prior art, an object of the present invention is to provide a robot motion guiding method, system, and terminal based on foot motion. They solve the problems in the prior art that a user who wants to guide a robot to a designated position must operate it through a remote control device, which wastes a great deal of time and energy, and that the remote control device requires maintenance and is prone to failure, so that the guidance cannot be performed and the user experience is poor.
To achieve the above and other related objects, the present invention provides a robot motion guiding method based on foot motion, comprising: acquiring and recording an image of the foot to be detected; obtaining, based on a target detection algorithm, foot position information that locates the position of the foot to be detected in the image; inputting the foot position information into a camera model to output the position information of the foot to be detected in a world coordinate system; obtaining, with the position information in the world coordinate system as a constraint condition, the detection frames determined to belong to the same foot to be detected; when the motion of the detection frame is detected to exceed a threshold range, continuously recording the position information of a plurality of detection frames; obtaining the motion trajectory of the foot to be detected from the continuously recorded position information of the plurality of detection frames; inputting the motion trajectory into a classifier to obtain a motion instruction for guiding the robot; and feeding back the guidance motion instruction to the robot, so as to guide the robot to move to the position of the foot to be detected.
In an embodiment of the present invention, the foot position information includes: the position information of the target detection frame corresponding to the foot to be detected.
In an embodiment of the present invention, the foot position information is input into a camera model to output the position information of the foot to be detected in the world coordinate system as follows: the position information of the lower frame of the target detection frame corresponding to the foot to be detected is input into the camera model, so as to output the position information of the foot to be detected in the world coordinate system.
In an embodiment of the present invention, the detection frame determining the same foot to be detected is obtained with the position information in the world coordinate system as a constraint condition as follows: under the condition that the foot position information in the world coordinate system remains unchanged, the position information of the detection frame determining the same foot to be detected is obtained.
In an embodiment of the present invention, when the motion of the detection frame is detected to exceed the threshold range, the position information of a plurality of detection frames is continuously recorded as follows: when the motion displacement of the center position of the detection frame is detected to exceed a preset threshold, the center position information of a plurality of detection frames is continuously recorded starting from the current frame, with each frame corresponding to the center position information of one detection frame.
In an embodiment of the present invention, the motion trajectory of the foot to be detected is obtained from the continuously recorded position information of the plurality of detection frames as follows: the motion trajectory of the foot to be detected within the time range of the plurality of frames is calculated from the continuously recorded center position information of the plurality of detection frames corresponding to those frames.
In an embodiment of the present invention, the guiding robot motion instruction includes: an instruction that the robot needs to be guided or an instruction that the robot does not need to be guided.
In an embodiment of the invention, the number of recorded detection frame center positions is related to the frame rate used by the target detection algorithm.
To achieve the above and other related objects, the present invention provides a robot motion guidance system based on foot motion, comprising: an acquisition module, used for acquiring and recording the image of the foot to be detected; a target detection module, connected to the acquisition module and used for obtaining, based on a target detection algorithm, foot position information that locates the position of the foot to be detected in the image; a world coordinate position module, connected to the target detection module and used for inputting the foot position information into a camera model to output the position information of the foot to be detected in a world coordinate system; a constraint module, connected to the world coordinate position module and used for obtaining, with the position information in the world coordinate system as a constraint condition, the detection frames determined to belong to the same foot to be detected; a detection frame position recording module, connected to the constraint module and used for continuously recording the position information of a plurality of detection frames once the motion of the detection frame is detected to exceed the threshold range; a motion trajectory module, connected to the detection frame position recording module and used for obtaining the motion trajectory of the foot to be detected from the continuously recorded position information of the plurality of detection frames; a motion instruction generation module, connected to the motion trajectory module and used for inputting the motion trajectory into the classifier to obtain a motion instruction for guiding the robot; and a motion guiding module, connected to the motion instruction generation module and used for feeding back the guidance motion instruction to the robot, so as to guide the robot to move to the position of the foot to be detected.
To achieve the above and other related objects, the present invention provides a robot motion guidance terminal based on foot motion, comprising: a memory for storing a computer program; and a processor for running the computer program to perform the robot motion guiding method based on foot motion.
As described above, the robot motion guiding method, system, and terminal of the present invention have the following beneficial effects: by guiding the robot to the user-designated position through the motion trajectory of a foot action performed at that position, the guidance becomes more convenient and rapid, and the user experience is greatly improved.
Drawings
Fig. 1 is a flowchart illustrating a robot motion guiding method based on foot motion according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a robot motion guidance system based on foot motion according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a robot motion guidance terminal based on foot motion according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It is noted that in the following description, reference is made to the accompanying drawings, which illustrate several embodiments of the present invention. It is to be understood that other embodiments may be utilized and that mechanical, structural, electrical, and operational changes may be made without departing from the spirit and scope of the present invention. The following detailed description is not to be taken in a limiting sense, and the scope of embodiments of the present invention is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. Spatially relative terms, such as "upper," "lower," "left," "right," "below," "above," and "over," may be used herein to facilitate describing the relationship of one element or feature to another element or feature as illustrated in the figures.
Throughout the specification, when a part is referred to as being "connected" to another part, this includes not only a case of being "directly connected" but also a case of being "indirectly connected" with another element interposed therebetween. In addition, when a certain part is referred to as "including" a certain component, unless otherwise stated, other components are not excluded, but it means that other components may be included.
The terms first, second, third, etc. are used herein to describe various elements, components, regions, layers and/or sections, but are not limited thereto. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the scope of the present invention.
Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, operations, elements, components, items, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, or operations is inherently mutually exclusive in some way.
The invention provides a robot motion guiding method based on foot motion, which solves the problems in the prior art that a user must control a robot through a remote control device to reach a designated position, wasting a great deal of time and energy, and that the remote control device requires maintenance and is prone to failure, so that the guidance cannot be performed and the user experience is poor.
The following detailed description of the embodiments of the present invention will be made with reference to fig. 1 so that those skilled in the art to which the present invention pertains can easily carry out the embodiments. The present invention may be embodied in many different forms and is not limited to the embodiments described herein.
As shown in fig. 1, a flowchart of a robot motion guiding method based on foot motion in an embodiment is presented; the method performs the following steps:
step S11: and acquiring and recording an image of the foot to be detected.
Optionally, the RGB color image of the foot to be detected is collected and recorded.
Optionally, any acquisition device capable of acquiring and recording the RGB color images of the foot to be detected is used for acquisition, which is not limited in the present invention.
Preferably, the acquisition device adopts one or more of a monocular RGB camera, a binocular RGB camera and a depth camera.
Optionally, the image is a static image or a dynamic image.
Optionally, a complete image of the foot to be detected is collected and recorded; an image recording only part of the foot to be detected is not usable.
Step S12: and obtaining foot position information for positioning the foot position to be detected in the image based on a target detection algorithm.
Optionally, the image of the foot to be detected is calculated by using a target detection algorithm, so as to obtain foot position information for positioning the position of the foot to be detected in the image.
Optionally, the target detection algorithm is a target position detection algorithm. Specifically, the position of the foot to be detected is represented by a target detection box (bounding box). The target detection frame refers to the parameters of a rectangular frame that encloses the foot to be detected in the image, where the rectangular frame consists of an upper frame, a lower frame, and two side frames.
Optionally, the foot position information includes: the position information of the target detection frame corresponding to the foot to be detected.
Optionally, the network used by the target detection algorithm includes: one or more of an R-CNN network, SPP-Net, Fast R-CNN, and Faster R-CNN.
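As a minimal illustration of this step, the following sketch uses a pretrained Faster R-CNN from torchvision as a stand-in for the foot detector; the pretrained weights, the score threshold, and the function name are assumptions for illustration, since the patent does not specify a trained model.

    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    # A generic pretrained Faster R-CNN stands in for the patent's foot
    # detector; a real system would be fine-tuned on labeled foot images.
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    def detect_foot_boxes(image_tensor, score_threshold=0.7):
        """Return [x1, y1, x2, y2] boxes for detections above a score.

        image_tensor: float tensor of shape (3, H, W), values in [0, 1].
        The 0.7 threshold is an assumed placeholder, not from the patent.
        """
        with torch.no_grad():
            output = model([image_tensor])[0]  # list in, list of dicts out
        keep = output["scores"] > score_threshold
        # Each row gives the upper, lower, and side frame coordinates of a box.
        return output["boxes"][keep]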
Step S13: and inputting the foot position information into a camera model so as to output the position information of the foot to be detected in the world coordinate system.
Optionally, the distance between the camera used by the camera model and the foot is kept within 2 meters; if the camera is perpendicular to the horizontal plane, the camera model can be applied directly.
Optionally, the position information of the lower frame of the target detection frame corresponding to the foot to be detected is input into the camera model, so as to output the position information of the foot to be detected in the world coordinate system. This applies when the foot to be detected is on a horizontal plane, so that the lower frame corresponds to the horizontal plane on which the foot is located.
Optionally, the camera model includes: a linear model or a non-linear model.
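A minimal sketch of a linear (pinhole) camera model for this step is given below. It back-projects the midpoint of the detection frame's lower frame onto the ground plane, taken here as z = 0 in world coordinates; the intrinsic matrix K and the pose (R, t) are assumed to come from a prior calibration, which the patent does not detail.

    import numpy as np

    def lower_frame_to_world(box, K, R, t):
        """Back-project the midpoint of a box's lower frame onto the ground.

        box: [x1, y1, x2, y2] pixel coordinates; the lower frame is the
        bottom edge. K: 3x3 intrinsic matrix; R, t: world-to-camera rotation
        and translation (x_cam = R @ X_world + t). This is a generic pinhole
        sketch; the patent's calibration values are not specified.
        """
        x1, y1, x2, y2 = box
        pixel = np.array([(x1 + x2) / 2.0, y2, 1.0])  # bottom-edge midpoint
        ray_cam = np.linalg.inv(K) @ pixel            # viewing ray, camera frame
        ray_world = R.T @ ray_cam                     # rotate ray to world frame
        cam_center = -R.T @ t                         # camera center, world frame
        s = -cam_center[2] / ray_world[2]             # scale so that z = 0
        return cam_center + s * ray_world             # world point on the ground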
Step S14: and obtaining a detection frame for determining the same foot to be detected by taking the position information under the world coordinate system as a constraint condition.
Optionally, under the condition that the foot position information in the world coordinate system remains unchanged, the position information of the detection frame determining the same foot to be detected is obtained, which ensures that it is the same foot that moves.
If the sole or the heel of the foot to be detected leaves the ground, a plane-tapping or tiptoe-standing action is formed while the lower frame of the target detection frame remains unchanged, so the position of the foot to be detected in the world coordinate system does not change. The size and center point of the target detection frame do change, however; therefore, to restrict recognition to the raising and lowering movements of the same foot, the position information in the world coordinate system is obtained from the lower frame of the target detection frame, and the motion trajectory of the foot to be detected is generated under the condition that this world-coordinate position information remains unchanged.
Optionally, the motion trajectory (taken through the center position of the foot to be detected) is guaranteed to correspond to the motion of the foot to be detected in the various cases where the same foot does not leave the plane, which is enforced by keeping the foot position information in the world coordinate system unchanged.
Step S15: and when the motion of the detection frame is detected to exceed the threshold range, continuously recording the position information of a plurality of detection frames.
Optionally, when the motion displacement of the center position of the detection frame is detected to exceed a preset threshold, the center position information of a plurality of detection frames is continuously recorded starting from the current frame, with each frame corresponding to the center position information of one detection frame.
Optionally, the motion displacement includes: horizontal displacement or vertical displacement.
Optionally, the threshold is set according to specific situations, and is not limited in the present invention.
Optionally, the recorded center positions of the detection frame correspond to a number of different consecutive frames, and that number is related to the frame rate used by the target detection algorithm.
For example, if the guidance action is defined as standing on tiptoe 4 or more times, the entire action takes about 2000 ms; at a target-detection frame rate of 10 FPS, 20 frames of images are obtained. Therefore, when the motion displacement of the detection frame center position is detected to exceed the preset threshold, the center position information of 20 consecutive detection frames is stored, starting from the current frame.
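A small sketch of this trigger-and-record logic, using the numbers from the example above (20 frames at 10 FPS) and an assumed placeholder displacement threshold, might look as follows:

    DISPLACEMENT_THRESHOLD = 5.0   # pixels; assumed value, set per deployment
    FRAMES_TO_RECORD = 20          # about 2000 ms at a 10 FPS detection rate

    class CenterRecorder:
        """Records detection frame centers once motion exceeds a threshold."""

        def __init__(self):
            self.last_center = None
            self.recording = []    # one center per frame while triggered
            self.active = False

        def update(self, center):
            if self.active:
                self.recording.append(center)
                if len(self.recording) >= FRAMES_TO_RECORD:
                    self.active = False
                    return list(self.recording)  # full trajectory window
            elif self.last_center is not None:
                dx = center[0] - self.last_center[0]
                dy = center[1] - self.last_center[1]
                if (dx * dx + dy * dy) ** 0.5 > DISPLACEMENT_THRESHOLD:
                    self.active = True
                    self.recording = [center]    # start from the current frame
            self.last_center = center
            return None                          # no complete window yet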
Step S16: and obtaining the motion trail of the foot to be detected according to the continuously recorded position information of the plurality of detection frames.
Optionally, the motion trajectory of the foot to be detected within the time range of the plurality of frames is calculated from the continuously recorded center position information of the plurality of detection frames corresponding to those frames.
Optionally, the motion trajectory is mapped to the corresponding motion of the foot to be detected.
Optionally, the motion of the foot to be detected includes: lifting or lowering one or more of the other ends of the forefoot or heel (standing on the tiptoe and tapping the plane) while the other end is at the horizontal plane. The motion trail corresponding to the lifting motion of the foot part to be detected is a trail of which the displacement in the vertical direction of the central point of the detection frame is a positive value, and the motion trail corresponding to the lowering motion of the foot part to be detected is a trail of which the displacement in the vertical direction of the central point of the detection frame is a negative value.
The vertical direction here is the positive y-axis direction of the two-dimensional coordinate system in which the detection frame lies; if a different convention is adopted, the direction differs and the definitions of positive and negative values differ accordingly, which is not limited by the present invention.
Optionally, when the tiptoe-standing action of the foot to be detected begins, the corresponding trajectory is one in which, while the vertical displacement of the detection frame center is changing, the horizontal displacement is positive;
when the plane-tapping action of the foot to be detected begins (tapping in the present invention means that the heel stays still on the horizontal plane while the sole of the foot moves), the corresponding trajectory is one in which, while the vertical displacement of the detection frame center is changing, the horizontal displacement is negative.
It should be noted that the vertical direction here is the positive y-axis direction of the two-dimensional coordinate system in which the detection frame lies, and the horizontal direction is the positive x-axis direction. If a different convention is adopted, the directions and the definitions of positive and negative values differ accordingly, which is not limited by the present invention.
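Under the axis conventions just described, the recorded trajectory can be reduced to per-frame displacement signs; the sketch below shows one way to do this for the vertical component (the function name and the "hold" label are illustrative assumptions):

    def displacement_signs(centers):
        """Classify per-frame center motion as raise (+y) or lower (-y).

        centers: list of (x, y) center positions in a coordinate system
        where the y axis points up, matching the convention above.
        """
        labels = []
        for (x0, y0), (x1, y1) in zip(centers, centers[1:]):
            dy = y1 - y0
            labels.append("raise" if dy > 0 else "lower" if dy < 0 else "hold")
        return labels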
Step S17: and inputting the motion trail into a classifier to obtain a motion instruction of the guiding robot.
Optionally, the classifier is trained on each kind of motion trajectory, taking the motion trajectory as input and the guiding robot motion instruction as output. The motion trajectory is either a guidance motion trajectory or a non-guidance motion trajectory.
It should be noted that the guidance motion trajectories are set in advance according to requirements and are not limited by the present invention. Any input that is not a preset guidance motion trajectory is treated directly as a non-guidance motion trajectory.
Optionally, the motion instruction is either an instruction that the robot needs to be guided or an instruction that it does not. Specifically, if the input to the classifier is a guidance motion trajectory, a motion instruction that the robot needs to be guided is output; if the input is a non-guidance motion trajectory, a motion instruction that the robot does not need to be guided is output.
Optionally, the classifier is trained on motion trajectories in combination with a hidden Markov model, which handles time-series data well and achieves better results.
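As one possible realization of such a classifier (the patent does not name a library or model structure), the sketch below fits one GaussianHMM from hmmlearn per trajectory class and assigns a new trajectory to the class with the highest log-likelihood; the class labels, state count, and iteration count are assumptions:

    import numpy as np
    from hmmlearn import hmm

    # Assumed setup: one GaussianHMM per trajectory class, e.g. "guide" and
    # "non-guide". Each training trajectory is an array of shape (T, 2)
    # of detection frame center positions.

    def train_models(trajectories_by_class, n_states=4):
        models = {}
        for label, trajs in trajectories_by_class.items():
            X = np.concatenate(trajs)           # stack all sequences
            lengths = [len(t) for t in trajs]   # sequence boundaries
            model = hmm.GaussianHMM(n_components=n_states, n_iter=100)
            model.fit(X, lengths)
            models[label] = model
        return models

    def classify(models, trajectory):
        # Return the class whose HMM scores the (T, 2) trajectory highest.
        return max(models, key=lambda label: models[label].score(trajectory))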
Step S18: and feeding back a motion instruction of the guiding robot to be guided to the robot so as to guide the robot to move to the position of the foot to be detected.
Optionally, the guidance motion instruction output by the classifier is fed back to the robot, guiding the robot to move to the position of the foot to be detected.
Optionally, the guidance motion instruction output by the classifier is fed back to the motion module of the robot, guiding the robot to move to the position of the foot to be detected; in this way the robot can be guided to move to any designated position.
Based on a principle similar to that of the above embodiment, the present invention provides a robot motion guidance system based on foot motion.
A specific embodiment is provided below in conjunction with the accompanying figures:
fig. 2 is a schematic structural diagram showing a robot motion guidance system based on foot motion in an embodiment of the present invention.
The system comprises:
the acquisition module 21 is used for acquiring and recording the images of the feet to be detected;
the target detection module 22 is connected with the acquisition module 21 and is used for acquiring foot position information used for positioning the foot position to be detected in the image based on a target detection algorithm;
a world coordinate position module 23, connected to the target detection module 22, for inputting the foot position information into a camera model to output the position information of the foot to be detected in a world coordinate system;
the constraint module 24 is connected with the world coordinate position module 23 and is used for obtaining a detection frame for determining the same foot to be detected by taking the position information under the world coordinate system as a constraint condition;
the detection frame position recording module 25 is connected with the constraint module 24 and is used for starting to continuously record the position information of a plurality of detection frames when the detection frame movement is detected to exceed the threshold range;
the motion track module 26 is connected to the detection frame position recording module 25, and is configured to obtain a motion track of the foot to be detected according to the continuously recorded position information of the plurality of detection frames;
the motion instruction generating module 27 is connected to the motion trajectory module 26, and is configured to input the motion trajectory into the classifier, so as to obtain a motion instruction for guiding the robot;
and the guiding motion module 28, connected to the motion instruction generating module 27 and used for feeding back the guidance motion instruction to the robot, so as to guide the robot to move to the position of the foot to be detected.
Optionally, the collecting module 21 collects and records an RGB color image of the foot to be detected.
Optionally, the acquisition module 21 includes: any acquisition device capable of acquiring and recording the RGB color images of the foot to be detected is not limited in the present invention.
Preferably, the acquisition device adopts one or more of a monocular RGB camera, a binocular RGB camera and a depth camera.
Optionally, the image is a static image or a dynamic image.
Optionally, the acquisition module 21 acquires and records a complete image of the foot to be detected; an image recording only part of the foot to be detected is not usable.
Optionally, the target detection module 22 calculates the image of the foot to be detected by using a target detection algorithm, so as to obtain foot position information for positioning the position of the foot to be detected in the image.
Optionally, the target detection algorithm is a target position detection algorithm. Specifically, the position of the foot to be detected is represented by a target detection box (bounding box). The target detection frame refers to the parameters of a rectangular frame that encloses the foot to be detected in the image, where the rectangular frame consists of an upper frame, a lower frame, and two side frames.
Optionally, the foot position information includes: and position information of the target detection frame corresponding to the foot to be detected.
Optionally, the network used by the target detection algorithm includes: one or more of an R-CNN network, SPP-Net, Fast R-CNN, and Faster R-CNN.
Optionally, the distance between the camera used by the camera model and the foot is kept within 2 meters; if the camera is perpendicular to the horizontal plane, the camera model can be applied directly.
Optionally, the world coordinate position module 23 inputs the position information of the lower frame of the target detection frame corresponding to the foot to be detected into the camera model, so as to output the position information of the foot to be detected in the world coordinate system. This applies when the foot to be detected is on a horizontal plane, so that the lower frame corresponds to the horizontal plane on which the foot is located.
Optionally, the camera model includes: a linear model or a non-linear model.
Optionally, under the condition that the foot position information in the world coordinate system remains unchanged, the constraint module 24 obtains the position information of the detection frame determining the same foot to be detected, which ensures that it is the same foot that moves.
If the sole or the heel of the foot to be detected leaves the ground, a plane-tapping or tiptoe-standing action is formed while the lower frame of the target detection frame remains unchanged, so the position of the foot to be detected in the world coordinate system does not change. The size and center point of the target detection frame do change, however; therefore, to restrict recognition to the raising and lowering movements of the same foot, the position information in the world coordinate system is obtained from the lower frame of the target detection frame, and the motion trajectory of the foot to be detected is generated under the condition that this world-coordinate position information remains unchanged.
Optionally, the motion trajectory (taken through the center position of the foot to be detected) is guaranteed to correspond to the motion of the foot to be detected in the various cases where the same foot does not leave the plane, which is enforced by keeping the foot position information in the world coordinate system unchanged.
Optionally, when the detection frame position recording module 25 detects that the motion displacement of the detection frame center position exceeds a preset threshold, it continuously records the center position information of a plurality of detection frames starting from the current frame, with each frame corresponding to the center position information of one detection frame.
Optionally, the motion displacement includes: horizontal displacement or vertical displacement.
Optionally, the threshold is set according to specific situations, and is not limited in the present invention.
Optionally, the recorded center positions of the detection frame correspond to a number of different consecutive frames, and that number is related to the frame rate used by the target detection algorithm.
Optionally, the motion trajectory module 26 calculates the motion trajectory of the foot to be detected within the time range of the plurality of frames from the continuously recorded center position information of the plurality of detection frames corresponding to those frames.
Optionally, the motion trajectory module 26 maps the motion trajectory to the corresponding motion of the foot to be detected.
Optionally, the motion of the foot to be detected includes: raising or lowering, one or more times, one end of the foot (the forefoot or the heel) while the other end rests on the horizontal plane. The motion trajectory corresponding to a raising motion is one in which the vertical displacement of the detection frame center point is positive, and the motion trajectory corresponding to a lowering motion is one in which the vertical displacement of the center point is negative.
The vertical direction here is the positive y-axis direction of the two-dimensional coordinate system in which the detection frame lies; if a different convention is adopted, the direction differs and the definitions of positive and negative values differ accordingly, which is not limited by the present invention.
Optionally, when the tiptoe-standing action of the foot to be detected begins, the corresponding trajectory is one in which, while the vertical displacement of the detection frame center is changing, the horizontal displacement is positive;
when the plane-tapping action of the foot to be detected begins (tapping in the present invention means that the heel stays still on the horizontal plane while the sole of the foot moves), the corresponding trajectory is one in which, while the vertical displacement of the detection frame center is changing, the horizontal displacement is negative.
It should be noted that the vertical direction here is the positive y-axis direction of the two-dimensional coordinate system in which the detection frame lies, and the horizontal direction is the positive x-axis direction. If a different convention is adopted, the directions and the definitions of positive and negative values differ accordingly, which is not limited by the present invention.
Optionally, the classifier is trained on each kind of motion trajectory, taking the motion trajectory as input and the guiding robot motion instruction as output. The motion trajectory is either a guidance motion trajectory or a non-guidance motion trajectory.
It should be noted that the guidance motion trajectories are set in advance according to requirements and are not limited by the present invention. Any input that is not a preset guidance motion trajectory is treated directly as a non-guidance motion trajectory.
Optionally, the guiding robot motion instruction includes: an instruction that the robot needs to be guided or an instruction that it does not. Specifically, if the input to the classifier is a guidance motion trajectory, a motion instruction that the robot needs to be guided is output; if the input is a non-guidance motion trajectory, a motion instruction that the robot does not need to be guided is output.
Optionally, the classifier is trained on motion trajectories in combination with a hidden Markov model, which handles time-series data well and achieves better results.
Optionally, the guiding motion module 28 feeds back the guidance motion instruction output by the classifier to the robot, guiding the robot to move to the position of the foot to be detected.
Optionally, the guiding motion module 28 feeds back the guidance motion instruction output by the classifier to the motion module of the robot, guiding the robot to move to the position of the foot to be detected; in this way the robot can be guided to move to any designated position.
As shown in fig. 3, a schematic structural diagram of a robot motion guidance terminal 30 based on foot motion in the embodiment of the present invention is shown.
The robot motion guiding terminal 30 based on the foot motion includes:
the memory 31 is used for storing computer programs; the processor 32 runs a computer program to implement the robot motion guidance method based on foot motion as described in fig. 1.
Optionally, the number of the memories 31 may be one or more, the number of the processors 32 may be one or more, and one is taken as an example in fig. 3.
Optionally, an external device may be connected as an external terminal, for example a mobile terminal or a control terminal of the robot, which is not limited by the present invention.
Optionally, following the steps described in fig. 1, the processor 32 in the robot motion guidance terminal 30 based on foot motion loads one or more instructions corresponding to the process of the application program into the memory 31, and then runs the application program stored in the memory 31, thereby implementing the functions of the robot motion guidance method based on foot motion described in fig. 1.
Optionally, the memory 31 may include, but is not limited to, high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
Optionally, the processor 32 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The present invention also provides a computer-readable storage medium storing a computer program that, when executed, implements the robot motion guiding method based on foot motion shown in fig. 1. The computer-readable storage medium may include, but is not limited to, floppy disks, optical disks, CD-ROMs (compact disc read-only memories), magneto-optical disks, ROMs (read-only memories), RAMs (random access memories), EPROMs (erasable programmable read-only memories), EEPROMs (electrically erasable programmable read-only memories), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions. The computer-readable storage medium may be a product not yet installed in a computer device, or a component used by a computer device into which it has been installed.
In summary, the robot motion guiding method, system, and terminal based on foot motion solve the problems in the prior art that a user must control the robot through a remote control device to reach a designated position, wasting a great deal of time and energy, and that the remote control device requires maintenance and is prone to failure, so that the guidance cannot be performed and the user experience is poor. The invention therefore effectively overcomes various defects in the prior art and has high industrial utilization value.
The foregoing embodiments merely illustrate the principles and utilities of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall still be covered by the claims of the present invention.

Claims (10)

1. A robot motion guiding method based on foot motion, the method comprising:
acquiring and recording an image of a foot to be detected;
obtaining foot position information for positioning the foot position to be detected in the image based on a target detection algorithm;
inputting the foot position information into a camera model to output the position information of the foot to be detected in a world coordinate system;
obtaining a detection frame for determining the same foot to be detected by taking the position information under the world coordinate system as a constraint condition;
when the motion of the detection frame is detected to exceed the threshold range, continuously recording the position information of a plurality of detection frames;
obtaining the motion trail of the foot to be detected according to the continuously recorded position information of the plurality of detection frames;
inputting the motion trail into a classifier to obtain a motion instruction of the guiding robot;
and feeding back the guidance motion instruction to the robot so as to guide the robot to move to the position of the foot to be detected.
2. The robot motion guiding method based on foot motion according to claim 1, wherein the foot position information includes: the position information of the target detection frame corresponding to the foot to be detected.
3. The method for guiding robot motion based on foot motion according to claim 2, wherein the foot position information is input into a camera model to output the position information of the foot to be detected in a world coordinate system:
and inputting the position information of the lower frame of the target detection frame corresponding to the foot to be detected into a camera model so as to output the position information of the foot to be detected in a world coordinate system.
4. The method for guiding the motion of the robot based on the motion of the feet according to claim 1, wherein the detection frames for determining the same foot to be detected are obtained by using the position information under the world coordinate system as a constraint condition:
and under the condition of keeping the position information of the feet under the world coordinate system unchanged, obtaining the position information of the detection frame for determining the same foot to be detected.
5. The method according to claim 1, wherein when the detection frame motion is detected to exceed a threshold range, the method starts recording position information of a plurality of detection frames in succession:
when detecting that the movement displacement value of the central position of the detection frame exceeds a preset threshold value, continuously recording the central position information of a plurality of detection frames from the current frame; wherein, each frame corresponds to the central position information of one detection frame.
6. The method for guiding the motion of the robot based on the motion of the foot according to claim 1, wherein the motion trajectory of the foot to be detected is obtained according to the continuously recorded position information of the plurality of detection frames:
and calculating and obtaining the motion trail of the foot to be detected in the time range of the multiple frames according to the continuously recorded central position information of the multiple detection frames corresponding to the multiple frames.
7. The robot motion guiding method based on foot motion according to claim 1, wherein the guiding robot motion instruction comprises: an instruction that the robot needs to be guided or an instruction that the robot does not need to be guided.
8. The robot motion guiding method based on foot motion according to claim 5, wherein the number of recorded detection frame center positions is related to the frame rate used by the target detection algorithm.
9. A robot motion guidance system based on foot motion, comprising:
the acquisition module is used for acquiring and recording the image of the foot to be detected;
the target detection module is connected with the acquisition module and used for acquiring foot position information used for positioning the foot position to be detected in the image based on a target detection algorithm;
the world coordinate position module is connected with the target detection module and used for inputting the foot position information into a camera model so as to output the position information of the foot to be detected in a world coordinate system;
the constraint module is connected with the world coordinate position module and is used for obtaining a detection frame for determining the same foot to be detected by taking the position information under the world coordinate system as a constraint condition;
the detection frame position recording module is connected with the constraint module and used for starting to continuously record the position information of a plurality of detection frames when the detection frame movement is detected to exceed the threshold range;
the motion track module is connected with the detection frame position recording module and used for obtaining the motion track of the foot to be detected according to the continuously recorded position information of the plurality of detection frames;
the motion instruction generation module is connected with the motion track module and used for inputting the motion track into the classifier to obtain a motion instruction of the guiding robot;
and the motion guiding module is connected with the motion instruction generation module and used for feeding back the guidance motion instruction to the robot so as to guide the robot to move to the position of the foot to be detected.
10. A robot motion guide terminal based on foot motion, comprising:
a memory for storing a computer program;
a processor for running the computer program to perform the method of robot motion guidance based on foot motion according to any one of claims 1 to 8.
CN202010600315.5A 2020-06-28 2020-06-28 Robot motion guiding method, system and terminal based on foot motion Active CN111736607B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010600315.5A CN111736607B (en) 2020-06-28 2020-06-28 Robot motion guiding method, system and terminal based on foot motion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010600315.5A CN111736607B (en) 2020-06-28 2020-06-28 Robot motion guiding method, system and terminal based on foot motion

Publications (2)

Publication Number Publication Date
CN111736607A true CN111736607A (en) 2020-10-02
CN111736607B CN111736607B (en) 2023-08-11

Family

ID: 72651531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010600315.5A Active CN111736607B (en) 2020-06-28 2020-06-28 Robot motion guiding method, system and terminal based on foot motion

Country Status (1)

Country Link
CN (1) CN111736607B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112232279A (en) * 2020-11-04 2021-01-15 杭州海康威视数字技术股份有限公司 Personnel distance detection method and device
CN112379781A (en) * 2020-12-10 2021-02-19 深圳华芯信息技术股份有限公司 Man-machine interaction method, system and terminal based on foot information identification
CN115847413A (en) * 2022-12-08 2023-03-28 杭州华橙软件技术有限公司 Control instruction generation method and device, storage medium and electronic device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105224912A (en) * 2015-08-31 2016-01-06 电子科技大学 Based on the video pedestrian detection and tracking method of movable information and Track association
CN105223957A (en) * 2015-09-24 2016-01-06 北京零零无限科技有限公司 A kind of method and apparatus of gesture manipulation unmanned plane
CN108229318A (en) * 2017-11-28 2018-06-29 北京市商汤科技开发有限公司 The training method and device of gesture identification and gesture identification network, equipment, medium
CN109343701A (en) * 2018-09-03 2019-02-15 电子科技大学 An intelligent human-computer interaction method based on dynamic gesture recognition
CN109732593A (en) * 2018-12-28 2019-05-10 深圳市越疆科技有限公司 A remote control method, device and terminal device for a robot
US20190143517A1 (en) * 2017-11-14 2019-05-16 Arizona Board Of Regents On Behalf Of Arizona State University Systems and methods for collision-free trajectory planning in human-robot interaction through hand movement prediction from vision
CN111241940A (en) * 2019-12-31 2020-06-05 浙江大学 A remote control method for a robot, a method and system for determining a human body bounding box

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105224912A (en) * 2015-08-31 2016-01-06 电子科技大学 Based on the video pedestrian detection and tracking method of movable information and Track association
CN105223957A (en) * 2015-09-24 2016-01-06 北京零零无限科技有限公司 A kind of method and apparatus of gesture manipulation unmanned plane
US20190143517A1 (en) * 2017-11-14 2019-05-16 Arizona Board Of Regents On Behalf Of Arizona State University Systems and methods for collision-free trajectory planning in human-robot interaction through hand movement prediction from vision
CN108229318A (en) * 2017-11-28 2018-06-29 北京市商汤科技开发有限公司 The training method and device of gesture identification and gesture identification network, equipment, medium
CN109343701A (en) * 2018-09-03 2019-02-15 电子科技大学 An intelligent human-computer interaction method based on dynamic gesture recognition
CN109732593A (en) * 2018-12-28 2019-05-10 深圳市越疆科技有限公司 A remote control method, device and terminal device for a robot
CN111241940A (en) * 2019-12-31 2020-06-05 浙江大学 A remote control method for a robot, a method and system for determining a human body bounding box

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
蒋涵妮 (Jiang Hanni): "Research on the Application of Kinect-Based Gesture Recognition in Human-Computer Interaction" (基于Kinect的手势识别在人机交互中的应用研究), China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112232279A (en) * 2020-11-04 2021-01-15 杭州海康威视数字技术股份有限公司 Personnel distance detection method and device
CN112232279B (en) * 2020-11-04 2023-09-05 杭州海康威视数字技术股份有限公司 A method and device for detecting distance between people
CN112379781A (en) * 2020-12-10 2021-02-19 深圳华芯信息技术股份有限公司 Man-machine interaction method, system and terminal based on foot information identification
CN112379781B (en) * 2020-12-10 2023-02-28 深圳华芯信息技术股份有限公司 Man-machine interaction method, system and terminal based on foot information identification
CN115847413A (en) * 2022-12-08 2023-03-28 杭州华橙软件技术有限公司 Control instruction generation method and device, storage medium and electronic device

Also Published As

Publication number Publication date
CN111736607B (en) 2023-08-11

Similar Documents

Publication Publication Date Title
US20220366576A1 (en) Method for target tracking, electronic device, and storage medium
US11789545B2 (en) Information processing device and method, program and recording medium for identifying a gesture of a person from captured image data
CN111736607A (en) Robot motion guiding method and system based on foot motion and terminal
Konstantindis et al. Vision-based product tracking method for cyber-physical production systems in industry 4.0
Shkurti et al. Underwater multi-robot convoying using visual tracking by detection
US11227434B2 (en) Map constructing apparatus and map constructing method
TWI684136B (en) Robot, control system and method for operating the robot
CN109074083B (en) Movement control method, mobile robot, and computer storage medium
US9418280B2 (en) Image segmentation method and image segmentation device
Pal et al. A novel end-to-end vision-based architecture for agricultural human–robot collaboration in fruit picking operations
JP2015520470A (en) Face recognition self-learning using depth-based tracking for database creation and update
US9299161B2 (en) Method and device for head tracking and computer-readable recording medium
US9582711B2 (en) Robot cleaner, apparatus and method for recognizing gesture
CN119311006A (en) Robot posture control method and system based on intelligent recognition
CN112379781B (en) Man-machine interaction method, system and terminal based on foot information identification
CN113524172B (en) Robot, article grabbing method thereof and computer-readable storage medium
Stumpf et al. Salt: A semi-automatic labeling tool for rgb-d video sequences
CN104931091B (en) A kind of bionic machine fish measuring table and using method thereof
JP2022104495A (en) Method, system and non-transitory computer-readable recording medium for creating map for robot
JP2003271933A (en) Face detection device, face detection method, and robot device
US20230169760A1 (en) Computer-readable recording medium storing label change program, label change method, and information processing apparatus
CN115106644A (en) Self-adaptive judgment method for welding seam starting point, welding method, welding equipment and medium
CN109782688B (en) Automatic fabric cutting and imaging method and device
Simul et al. A support vector machine approach for real time vision based human robot interaction
CN107749951A (en) A visual perception method and system for unmanned photography

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant