
CN112306242A - Interaction method and system based on book-space gestures - Google Patents


Info

Publication number
CN112306242A
CN112306242A (application CN202011239984.0A)
Authority
CN
China
Prior art keywords
gesture
book
stroke
operation information
track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011239984.0A
Other languages
Chinese (zh)
Inventor
黄昌正
陈曦
周言明
张东耿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huanjing Technology Co ltd
Nanjing Harley Intelligent Technology Co ltd
Mirage Virtual Reality Guangzhou Intelligent Technology Research Institute Co ltd
Original Assignee
Guangzhou Huanjing Technology Co ltd
Nanjing Harley Intelligent Technology Co ltd
Mirage Virtual Reality Guangzhou Intelligent Technology Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huanjing Technology Co ltd, Nanjing Harley Intelligent Technology Co ltd, Mirage Virtual Reality Guangzhou Intelligent Technology Research Institute Co ltd filed Critical Guangzhou Huanjing Technology Co ltd
Priority to CN202011239984.0A priority Critical patent/CN112306242A/en
Publication of CN112306242A publication Critical patent/CN112306242A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention provides an interaction method and system based on book-space gestures. A book-space gesture track is acquired, the operation information corresponding to the track is determined from a preset gesture library, and the operation information is output, so that in a virtual reality application scene a user can draw a gesture track in the air to control an intelligent terminal, input text, and perform similar functions.

Description

Interaction method and system based on book-space gestures
Technical Field
The invention relates to the technical field of interaction, and in particular to an interaction method based on book-space gestures and an interaction system based on book-space gestures.
Background
With the development of science and technology, intelligent terminal devices have advanced, and human-computer interaction has gradually evolved from physical keys on the device itself, to physical keys on a remote control, to remote interaction through a mobile-phone application, and on to voice recognition and gesture recognition. Existing voice-recognition and gesture-recognition interaction devices, however, still have many problems in terms of convenience of use, usage mode, and virtual reality application scenarios.
For example, most existing voice-recognition devices are physical smart speakers that must be placed in a specific usage scene and execute preset intelligent-terminal functions by recognizing specific voice instructions from the user. The voice instructions support few languages, so the user must speak a supported language; issuing voice commands is embarrassing or inconvenient on many occasions; and ambient sound interferes with the accuracy of voice recognition.
Most existing gesture-recognition control devices use optical sensing or video recognition, and most are external sensing units that must be placed at a specific position. The user must operate within the sensing range defined by the sensing device, and is therefore constrained by the device's placement position, angle, and sensing range; occlusion by ambient light and physical objects further reduces the accuracy of gesture recognition.
Disclosure of Invention
In view of the above problems, embodiments of the present invention are proposed to provide an interaction method based on book-space gestures, and a corresponding interaction system based on book-space gestures, that overcome or at least partially solve the above problems.
In order to solve the above problems, an embodiment of the present invention discloses an interaction method based on book-space gestures, the method including:
acquiring a book-space gesture track;
determining operation information corresponding to the book-space gesture track from a preset gesture library;
and outputting the operation information.
Optionally, the acquiring the book-space gesture track includes:
acquiring a first book-space gesture stroke;
judging whether a second book-space gesture stroke is acquired within a preset time;
if not, determining the first book-space gesture stroke as the book-space gesture track;
if so, re-executing the step of judging whether a second book-space gesture stroke is acquired within the preset time, until no second book-space gesture stroke is acquired within the preset time, so as to acquire one or more second book-space gesture strokes;
and combining the first book-space gesture stroke and the one or more second book-space gesture strokes to generate the book-space gesture track.
Optionally, the acquiring the first book-space gesture stroke includes:
obtaining a starting time value;
continuously detecting and recording vector acceleration data;
obtaining an ending time value;
and calculating the first book-space gesture stroke using the starting time value, the vector acceleration data, and the ending time value.
Optionally, the preset gesture library stores correspondences between a plurality of groups of gesture feature information and operation information, and the step of determining the operation information corresponding to the book-space gesture track from the preset gesture library includes:
matching the book-space gesture track against the plurality of groups of gesture feature information one by one to obtain matching similarities;
determining the gesture feature information whose matching similarity is higher than a preset threshold as first gesture feature information;
and determining the operation information corresponding to the first gesture feature information as the operation information corresponding to the book-space gesture track.
Optionally, the operation information includes control information having a corresponding control function, and after the step of outputting the operation information, the method further includes:
executing, using the control information, the control function corresponding to the control information.
The embodiment of the invention also discloses an interaction system based on book-space gestures, the system including:
a book-space gesture track obtaining module, configured to obtain a book-space gesture track;
an operation information determining module, configured to determine operation information corresponding to the book-space gesture track from a preset gesture library;
and an operation information output module, configured to output the operation information.
Optionally, the book-space gesture track obtaining module includes:
a first book-space gesture stroke obtaining submodule, configured to obtain a first book-space gesture stroke;
a judging submodule, configured to judge whether a second book-space gesture stroke is obtained within a preset time;
a first book-space gesture track determining submodule, configured to determine, if not, the first book-space gesture stroke as the book-space gesture track;
a second book-space gesture stroke obtaining submodule, configured to, if so, re-execute the step of judging whether a second book-space gesture stroke is obtained within the preset time, until no second book-space gesture stroke is obtained within the preset time, so as to obtain one or more second book-space gesture strokes;
and a second book-space gesture track determining submodule, configured to combine the first book-space gesture stroke and the one or more second book-space gesture strokes to generate the book-space gesture track.
Optionally, the first book-space gesture stroke obtaining submodule includes:
a starting time value obtaining unit, configured to obtain a starting time value;
a vector acceleration data detecting and recording unit, configured to continuously detect and record vector acceleration data;
an ending time value obtaining unit, configured to obtain an ending time value;
and a first book-space gesture stroke calculating unit, configured to calculate the first book-space gesture stroke using the starting time value, the vector acceleration data, and the ending time value.
Optionally, the operation information determining module includes:
a matching similarity obtaining submodule, configured to match the book-space gesture track against the plurality of groups of gesture feature information one by one to obtain matching similarities;
a first gesture feature information determining submodule, configured to determine the gesture feature information whose matching similarity is higher than a preset threshold as first gesture feature information;
and an operation information determining submodule, configured to determine the operation information corresponding to the first gesture feature information as the operation information corresponding to the book-space gesture track.
Optionally, the operation information includes control information having a corresponding control function, and the system further includes:
an execution module, configured to execute, using the control information, the control function corresponding to the control information.
The embodiment of the invention has the following advantages: a book-space gesture track is acquired, the operation information corresponding to the track is determined from a preset gesture library, and the operation information is output, so that in a virtual reality application scene a user can draw a gesture track in the air to control an intelligent terminal, input text, and perform similar functions.
Drawings
Fig. 1 is a flowchart of the steps of a first embodiment of an interaction method based on book-space gestures according to the present invention.
Fig. 2 is a schematic diagram illustrating the definition of book-space gesture tracks according to the present invention.
Fig. 3 is a schematic view of a wearable device applying the present technology.
Fig. 4 is a structural block diagram of a first embodiment of an interaction system based on book-space gestures according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Referring to Fig. 1, a flowchart of the steps of a first embodiment of an interaction method based on book-space gestures according to the present invention is shown, which specifically includes the following steps:
step 101, acquiring a book-space gesture track;
With the progress of electronic technology and the evolution of intelligent devices, human-computer interaction modes have multiplied: the mouse and physical keyboard suit the computer, while the on-screen cursor and soft keyboard suit the touch-screen phone. Virtual reality technology is now emerging in the market and beginning to see wide application, but traditional human-computer interaction through a physical keyboard and the like is clearly unsuitable for it; the present application therefore provides a human-computer interaction mode suited to virtual reality technology.
"Book-space" means virtually tracing characters in the air with a finger. In the embodiment of the present invention, the book-space gesture track is the trajectory formed by a gesture the user virtually draws in the air.
When a user inputs control information to a device, the control information may consist of a single simple stroke, e.g. "丨"; when the user inputs text, the text may consist of one or more strokes, e.g. the Chinese character "一" consists of one stroke, while the character "三" consists of three "一" strokes. Thus, when a user virtually draws a book-space gesture track in the air, the track may be composed of one or more strokes.
In the embodiment of the invention, the first stroke of the book-space gesture track, i.e. the first book-space gesture stroke, is obtained first. If the track is judged to consist of only one stroke, the first book-space gesture stroke is taken as the book-space gesture track; if it consists of more than one stroke, one or more second book-space gesture strokes are obtained after the first, and the first book-space gesture stroke and the one or more second book-space gesture strokes form the book-space gesture track in order.
Specifically, acquiring the book-space gesture track may include the following steps:
Substep 1011, acquiring a first book-space gesture stroke.
The step of acquiring the first book-space gesture stroke may include:
obtaining a starting time value; the starting time value is the real-time value recorded at the starting point of the first book-space gesture stroke when the user draws it in the air;
continuously detecting and recording vector acceleration data; in the embodiment of the invention, the vector acceleration data is vector acceleration data in three-dimensional space, covering the x, y, and z axes, and may be detected by an acceleration sensor, specifically a nine-axis inertial sensor;
obtaining an ending time value; the ending time value is the real-time value recorded at the end point of the first book-space gesture stroke when the user draws it in the air;
and calculating the first book-space gesture stroke using the starting time value, the vector acceleration data, and the ending time value.
When the user draws a first book-space gesture stroke in the air, a real-time starting time value is obtained at the stroke's starting point and detection and recording of the vector acceleration data begins; at the stroke's end point an ending time value is obtained and the detection and recording stop. The stroke drawn by the user's gesture in three-dimensional space, i.e. the first book-space gesture stroke, is then calculated from the starting time value, the vector acceleration data, and the ending time value by integration and the quaternion parameter method.
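As an illustrative sketch (not the patent's actual implementation), reconstructing a stroke from the recorded vector acceleration samples by plain double integration can look as follows; the quaternion-based orientation correction and gravity compensation are omitted, and the sampling interval `dt` is a hypothetical parameter:

```python
def integrate_stroke(samples, dt):
    """Reconstruct a stroke trajectory from vector acceleration samples.

    samples: list of (ax, ay, az) accelerations recorded between the
    starting and ending time values; dt: sampling interval in seconds.
    Returns the list of (x, y, z) positions, starting at the origin.
    """
    vx = vy = vz = 0.0
    x = y = z = 0.0
    points = [(0.0, 0.0, 0.0)]
    for ax, ay, az in samples:
        # First integration: acceleration -> velocity
        vx += ax * dt
        vy += ay * dt
        vz += az * dt
        # Second integration: velocity -> position
        x += vx * dt
        y += vy * dt
        z += vz * dt
        points.append((x, y, z))
    return points
```

In practice the raw samples would first be rotated into a world frame using the nine-axis sensor's orientation quaternion and have gravity subtracted; this sketch only shows the integration step itself.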
Substep 1012, judging whether a second book-space gesture stroke is acquired within a preset time.
Substep 1013, if not, determining the first book-space gesture stroke as the book-space gesture track.
The book-space gesture track may be composed of one or more strokes, so after a user draws a first stroke in the air, a second stroke may follow. Since the interval between consecutive strokes should not be too long, a preset time is set: if no second stroke is drawn within the preset time, the book-space gesture track is considered to consist of only one stroke.
For example, if the user draws a first book-space gesture stroke "丨" in the air and does not draw a second book-space gesture stroke within the preset time, "丨" is determined as the book-space gesture track.
Substep 1014, if so, re-executing the step of judging whether a second book-space gesture stroke is acquired within the preset time, until no second book-space gesture stroke is acquired within the preset time, so as to acquire one or more second book-space gesture strokes.
If a second stroke is drawn within the preset time, the judgment of whether a further stroke is drawn within the preset time is repeated until no second book-space gesture stroke is acquired within the preset time; the user is then considered to have drawn all strokes, yielding one or more second book-space gesture strokes. It should be noted that the second book-space gesture strokes are acquired in the same way as the first book-space gesture stroke, which is not described in further detail here.
Substep 1015, combining the first book-space gesture stroke and the one or more second book-space gesture strokes to generate the book-space gesture track.
After all strokes are obtained, the first book-space gesture stroke and the one or more second book-space gesture strokes are combined by spatial position in input order to generate the book-space gesture track. For example, if the first book-space gesture stroke and the subsequent second book-space gesture strokes are, in order, the strokes of the character "主", their combination generates the book-space gesture track "主".
Step 102, determining operation information corresponding to the book-space gesture track from a preset gesture library.
The preset gesture library stores correspondences between a plurality of groups of gesture feature information and operation information; the operation information may be control information or text information, and the correspondences may be set by technicians according to actual needs.
The step of determining the operation information corresponding to the book-space gesture track from the preset gesture library includes:
matching the book-space gesture track against the plurality of groups of gesture feature information one by one to obtain matching similarities;
determining the gesture feature information whose matching similarity is higher than a preset threshold as first gesture feature information;
and determining the operation information corresponding to the first gesture feature information as the operation information corresponding to the book-space gesture track.
The generated book-space gesture track is compared with the gesture feature information in the preset gesture library to obtain a plurality of matching similarities. If a matching similarity higher than the preset threshold exists, the corresponding gesture feature information is taken as the first gesture feature information; if none exists, the system cannot recognize the book-space gesture track drawn by the user, and the user is asked to draw it again.
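The matching step can be sketched as below. The patent does not specify the similarity measure or the threshold value, so both are assumptions here: tracks and templates are taken as equal-length point sequences, and the mean point-to-point distance is mapped into a (0, 1] score:

```python
import math

MATCH_THRESHOLD = 0.8  # the "preset threshold", an illustrative value

def similarity(track, template):
    """Score two equal-length point sequences; 1.0 means identical."""
    mean_dist = sum(math.dist(p, q) for p, q in zip(track, template)) / len(track)
    return 1.0 / (1.0 + mean_dist)

def lookup_operation(track, gesture_library):
    """gesture_library: list of (gesture_feature_points, operation_info).

    Returns the operation information of the best-matching template whose
    similarity exceeds the preset threshold, or None when the track is
    unrecognized and should be redrawn.
    """
    best_op, best_score = None, MATCH_THRESHOLD
    for template, operation in gesture_library:
        score = similarity(track, template)
        if score > best_score:
            best_op, best_score = operation, score
    return best_op
```

A production recognizer would normalize and resample the track first and might use a measure such as dynamic time warping; the threshold-and-best-match structure stays the same.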
And step 103, outputting the operation information.
In an embodiment of the present invention, the operation information may include text information. In a virtual reality scene, when a user needs to input text, the book-space gesture track of the characters can be drawn in the air, and the system identifies the corresponding text information from it, enabling text chat and similar functions in the virtual reality scene.
In another embodiment of the present invention, the operation information may include control information having a corresponding control function, and after the step of outputting the operation information, the method further includes:
executing, using the control information, the control function corresponding to the control information.
In a virtual reality scene, when a user wants to exert some control over the system, for example increasing the volume, the corresponding book-space gesture track can be drawn in the air; the system recognizes the corresponding control information from the track, thereby realizing the volume-up control function.
Referring to Fig. 2, a schematic diagram of the definition of book-space gesture tracks of the present invention is shown. For example, "201 book-space gesture track 1" is defined as follows: with the right index finger straightened, the fingertip pointing horizontally forward, and the palm facing left as the starting gesture position, the key is held down while a "丨" pattern is drawn from bottom to top in the virtual book-space plane projected by a cone within a suitable angle range (set to include, but not be limited to, ±30°) around the horizontal forward axis of the index finger at the starting gesture position; releasing the key completes the book-space gesture track.
For another example, "202 book-space gesture track 2" is defined in the same way, except that the "丨" pattern is drawn from top to bottom before the key is released to complete the book-space gesture track.
Referring to Fig. 3, a schematic diagram of a wearable device applying the technology of the present invention is shown. The device is the size of a ring, with a key switch 2 arranged on its outer side. It may be worn on the second knuckle of the user's index finger, with the key switch on the outer surface of the index finger, i.e. the side adjacent to the thumb, so that the thumb can conveniently press it. The device contains a nine-axis inertial sensor 1, which acquires vector acceleration data of the device's motion in three-dimensional space. It also contains a micro control unit, which calculates book-space gesture strokes from the vector acceleration data and the starting and ending times, combines the strokes into a book-space gesture track, and compares the track with the gesture feature information one by one to determine the corresponding operation information. In use, the user begins drawing the first stroke in the air at the moment the thumb presses the key switch, holds the key down throughout the stroke, and releases it when the stroke is finished; any further strokes are drawn in turn in the same way, keeping the interval between strokes as short as possible, and once all strokes are drawn the corresponding operation information is output. The device further includes a Bluetooth module for communicating with an intelligent terminal and transmitting the operation information to it, so as to control application functions on the intelligent terminal device.
In addition, the device is provided with a charging interface 3: a Type-C interface on the outer side of the device connects an external power supply to the circuit board through the charging circuit. A power supply module 4 powers the circuit board and all components on it, connects to the charging interface through the charging circuit, and additionally handles charge and discharge management. The device housing 5 is made of a light, high-hardness material, protecting the internal circuitry from interference and the internal components from damage.
In the embodiment of the invention, the book-space gesture track is acquired, the operation information corresponding to it is determined from the preset gesture library, and the operation information is output, so that in a virtual reality application scene a user can draw a gesture track in the air to control an intelligent terminal, input text, and perform similar functions.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the embodiments of the invention.
Referring to Fig. 4, a structural block diagram of a first embodiment of an interaction system based on book-space gestures according to the present invention is shown, which may specifically include the following modules:
a book space gesture track obtaining module 301, configured to obtain a book space gesture track;
an operation information determining module 302, configured to determine, from a preset gesture library, operation information corresponding to the book empty gesture trajectory;
an operation information output module 303, configured to output the operation information.
In an embodiment of the present invention, the book-space gesture track obtaining module includes:
a first book-space gesture stroke obtaining submodule, configured to obtain a first book-space gesture stroke;
a judging submodule, configured to judge whether a second book-space gesture stroke is obtained within a preset time;
a first book-space gesture track determining submodule, configured to determine, if not, the first book-space gesture stroke as the book-space gesture track;
a second book-space gesture stroke obtaining submodule, configured to, if so, re-execute the step of judging whether a second book-space gesture stroke is obtained within the preset time, until no second book-space gesture stroke is obtained within the preset time, so as to obtain one or more second book-space gesture strokes;
and a second book-space gesture track determining submodule, configured to combine the first book-space gesture stroke and the one or more second book-space gesture strokes to generate the book-space gesture track.
In an embodiment of the present invention, the first book-space gesture stroke obtaining submodule includes:
a starting time value obtaining unit, configured to obtain a starting time value;
a vector acceleration data detecting and recording unit, configured to continuously detect and record vector acceleration data;
an ending time value obtaining unit, configured to obtain an ending time value;
and a first book-space gesture stroke calculating unit, configured to calculate the first book-space gesture stroke using the starting time value, the vector acceleration data, and the ending time value.
In an embodiment of the present invention, the operation information determining module includes:
a matching similarity obtaining submodule, configured to match the book-space gesture track against the plurality of groups of gesture feature information one by one to obtain matching similarities;
a first gesture feature information determining submodule, configured to determine the gesture feature information whose matching similarity is higher than a preset threshold as first gesture feature information;
and an operation information determining submodule, configured to determine the operation information corresponding to the first gesture feature information as the operation information corresponding to the book-space gesture track.
In an embodiment of the present invention, the operation information includes control information having a corresponding control function, and the system further includes:
an execution module, configured to execute, using the control information, the control function corresponding to the control information.
For the system embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and for the relevant points, refer to the partial description of the method embodiment.
An embodiment of the present invention further provides an apparatus, including:
a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements each process of the above embodiment of the interaction method based on book-space gestures and can achieve the same technical effects; to avoid repetition, details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements each process of the above embodiment of the interaction method based on book-space gestures and can achieve the same technical effects; to avoid repetition, details are not repeated here.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the true scope of the embodiments of the invention.
Finally, it should also be noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or terminal device that comprises that element.
The interaction method and system based on book-space gestures provided by the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementation of the invention, and the description of the above embodiments is intended only to help in understanding the method and its core idea. Meanwhile, those skilled in the art may, according to the idea of the present invention, make changes to the specific implementation and application scope. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (10)

1. An interaction method based on a book-space gesture, the method comprising:
acquiring a book-space gesture track;
determining, from a preset gesture library, operation information corresponding to the book-space gesture track;
and outputting the operation information.
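As an illustrative sketch (not part of the disclosure) of the three-step flow in claim 1 — acquire a track, look up its operation information, output it — the function and library names below are assumptions:

```python
# Hypothetical gesture library: gesture feature -> operation information.
GESTURE_LIBRARY = {
    "circle": "open_menu",
    "v_shape": "confirm",
}

def interact(capture_track, match, emit):
    """Run one acquire -> match -> output cycle of the claimed method."""
    track = capture_track()                    # acquire a book-space gesture track
    operation = match(track, GESTURE_LIBRARY)  # determine operation information
    emit(operation)                            # output the operation information
    return operation
```

Here `capture_track`, `match`, and `emit` stand in for the sensing, matching, and output stages that claims 2 to 5 detail.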
2. The method of claim 1, wherein the acquiring a book-space gesture track comprises:
acquiring a first book-space gesture stroke;
judging whether a second book-space gesture stroke is acquired within a preset time;
if not, determining the first book-space gesture stroke as the book-space gesture track;
if so, re-executing the step of judging whether a second book-space gesture stroke is acquired within the preset time, until no second book-space gesture stroke is acquired within the preset time, thereby obtaining one or more second book-space gesture strokes;
and combining the first book-space gesture stroke and the one or more second book-space gesture strokes to generate the book-space gesture track.
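The stroke-collection loop of claim 2 — keep accepting strokes until the preset time elapses with no new stroke — can be sketched as follows; the `next_stroke` helper and the timeout value are assumptions, not part of the disclosure:

```python
STROKE_TIMEOUT = 1.5  # the "preset time" of claim 2; the value is assumed

def acquire_track(next_stroke):
    """Combine strokes into one track until the inter-stroke timeout expires.

    `next_stroke(timeout)` is an assumed helper returning the next detected
    book-space gesture stroke, or None if `timeout` seconds pass first.
    """
    strokes = [next_stroke(timeout=None)]          # first stroke: wait indefinitely
    while True:
        stroke = next_stroke(timeout=STROKE_TIMEOUT)
        if stroke is None:                         # preset time elapsed: track done
            break
        strokes.append(stroke)                     # a further stroke of the gesture
    return strokes
```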
3. The method of claim 2, wherein the acquiring a first book-space gesture stroke comprises:
acquiring a start time value;
continuously detecting and recording vector acceleration data;
acquiring an end time value;
and calculating the first book-space gesture stroke using the start time value, the vector acceleration data, and the end time value.
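Claim 3 reconstructs a stroke from vector acceleration data recorded between the start and end time values. One plausible reading is dead-reckoning by double integration; the simple Euler scheme below is an assumption, since the claim does not fix a numerical method:

```python
def stroke_from_acceleration(samples, t_start, t_end, dt):
    """Integrate (ax, ay, az) samples recorded at interval dt into a 3D path."""
    vx = vy = vz = 0.0
    x = y = z = 0.0
    path = [(x, y, z)]
    n = int(round((t_end - t_start) / dt))  # samples inside [t_start, t_end]
    for ax, ay, az in samples[:n]:
        vx += ax * dt; vy += ay * dt; vz += az * dt  # acceleration -> velocity
        x += vx * dt;  y += vy * dt;  z += vz * dt   # velocity -> position
        path.append((x, y, z))
    return path
```

In practice, inertial dead-reckoning drifts quickly, so a real implementation would also filter gravity and sensor bias out of the recorded samples.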
4. The method of claim 1, wherein the preset gesture library stores a plurality of groups of correspondences between gesture feature information and operation information, and the determining, from a preset gesture library, operation information corresponding to the book-space gesture track comprises:
matching the book-space gesture track against the plurality of groups of gesture feature information one by one to obtain matching similarities;
determining gesture feature information whose matching similarity is higher than a preset threshold as first gesture feature information;
and determining the operation information corresponding to the first gesture feature information as the operation information corresponding to the book-space gesture track.
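The one-by-one matching of claim 4 can be sketched as a linear scan that keeps the best match above the preset threshold; the similarity measure and the threshold value here are assumptions:

```python
def determine_operation(track, gesture_library, similarity, threshold=0.8):
    """Return the operation information whose gesture template best matches.

    `gesture_library` maps gesture feature information to operation
    information; `similarity(track, template)` scores a match in [0, 1].
    """
    best_operation, best_score = None, threshold
    for template, operation in gesture_library.items():
        score = similarity(track, template)       # match one group at a time
        if score > best_score:                    # higher than preset threshold
            best_operation, best_score = operation, score
    return best_operation                         # None if nothing passes
```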
5. The method of claim 1, wherein the operation information includes control information having a corresponding control function, and after the step of outputting the operation information, the method further comprises:
executing the control function corresponding to the control information by using the control information.
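The final step of claim 5 — using the control information to invoke its control function — amounts to a dispatch table; the table contents below are illustrative, not part of the disclosure:

```python
# Hypothetical mapping from control information to control functions.
CONTROL_FUNCTIONS = {
    "open_menu": lambda: "menu opened",
    "confirm":   lambda: "confirmed",
}

def execute_control(control_info):
    """Execute the control function corresponding to the control information."""
    func = CONTROL_FUNCTIONS.get(control_info)
    return func() if func else None  # ignore unrecognized control information
```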
6. An interaction system based on book-space gestures, the system comprising:
a book-space gesture track acquisition module, configured to acquire a book-space gesture track;
an operation information determining module, configured to determine, from a preset gesture library, operation information corresponding to the book-space gesture track;
and an operation information output module, configured to output the operation information.
7. The system of claim 6, wherein the book-space gesture track acquisition module comprises:
a first book-space gesture stroke acquisition submodule, configured to acquire a first book-space gesture stroke;
a judging submodule, configured to judge whether a second book-space gesture stroke is acquired within a preset time;
a first book-space gesture track determining submodule, configured to, if not, determine the first book-space gesture stroke as the book-space gesture track;
a second book-space gesture stroke acquisition submodule, configured to, if so, re-execute the step of judging whether a second book-space gesture stroke is acquired within the preset time, until no second book-space gesture stroke is acquired within the preset time, thereby obtaining one or more second book-space gesture strokes;
and a second book-space gesture track determining submodule, configured to combine the first book-space gesture stroke and the one or more second book-space gesture strokes to generate the book-space gesture track.
8. The system of claim 7, wherein the first book-space gesture stroke acquisition submodule comprises:
a start time value acquisition unit, configured to acquire a start time value;
a vector acceleration data detection and recording unit, configured to continuously detect and record vector acceleration data;
an end time value acquisition unit, configured to acquire an end time value;
and a first book-space gesture stroke calculation unit, configured to calculate the first book-space gesture stroke using the start time value, the vector acceleration data, and the end time value.
9. The system of claim 6, wherein the operation information determining module comprises:
a matching similarity obtaining submodule, configured to match the book-space gesture track against the plurality of groups of gesture feature information one by one to obtain matching similarities;
a first gesture feature information determining submodule, configured to determine gesture feature information whose matching similarity is higher than a preset threshold as first gesture feature information;
and an operation information determining submodule, configured to determine the operation information corresponding to the first gesture feature information as the operation information corresponding to the book-space gesture track.
10. The system of claim 6, wherein the operation information includes control information having a corresponding control function, and the system further comprises:
an execution module, configured to execute the control function corresponding to the control information by using the control information.
CN202011239984.0A 2020-11-09 2020-11-09 Interaction method and system based on book-space gestures Withdrawn CN112306242A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011239984.0A CN112306242A (en) 2020-11-09 2020-11-09 Interaction method and system based on book-space gestures

Publications (1)

Publication Number Publication Date
CN112306242A true CN112306242A (en) 2021-02-02

Family

ID=74325055

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011239984.0A Withdrawn CN112306242A (en) 2020-11-09 2020-11-09 Interaction method and system based on book-space gestures

Country Status (1)

Country Link
CN (1) CN112306242A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113093913A (en) * 2021-04-20 2021-07-09 北京乐学帮网络技术有限公司 Test question processing method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050057524A1 (en) * 2003-09-16 2005-03-17 Hill Douglas B. Gesture recognition method and touch system incorporating the same
CN103218160A (en) * 2013-03-18 2013-07-24 广东国笔科技股份有限公司 Man-machine interaction method and terminal
CN103268198A (en) * 2013-05-24 2013-08-28 广东国笔科技股份有限公司 Gesture input method and device
CN103713730A (en) * 2012-09-29 2014-04-09 炬才微电子(深圳)有限公司 Mid-air gesture recognition method and device applied to intelligent terminal
CN110850968A (en) * 2019-10-23 2020-02-28 引力(深圳)智能机器人有限公司 Man-machine interaction system based on gesture control

Similar Documents

Publication Publication Date Title
US11048333B2 (en) System and method for close-range movement tracking
CN110598576B (en) Sign language interaction method, device and computer medium
US8958631B2 (en) System and method for automatically defining and identifying a gesture
US9910498B2 (en) System and method for close-range movement tracking
CN106845335B (en) Gesture recognition method and device for virtual reality equipment and virtual reality equipment
CN114365075B (en) Method and corresponding device for selecting a graphic object
US12340083B2 (en) Key function execution method and apparatus, device, and storage medium
CN103092343B (en) A kind of control method based on photographic head and mobile terminal
JP2013037675A5 (en)
JP2020067999A (en) Method of virtual user interface interaction based on gesture recognition and related device
CN103106388B (en) Method and system of image recognition
Saoji et al. Air canvas application using Opencv and numpy in python
WO2021006982A1 (en) Virtual keyboard engagement
CN103455262A (en) Pen-based interaction method and system based on mobile computing platform
CN107450717B (en) Information processing method and wearable device
EP2618237B1 (en) Gesture-based human-computer interaction method and system, and computer storage media
Costagliola et al. Gesture‐Based Computing
CN112306242A (en) Interaction method and system based on book-space gestures
KR101348763B1 (en) Apparatus and method for controlling interface using hand gesture and computer-readable recording medium with program therefor
CN104951211A (en) Information processing method and electronic equipment
CN113791548A (en) Device control method, device, electronic device and storage medium
Chen Universal Motion-based control and motion recognition
Habibi Detecting surface interactions via a wearable microphone to improve augmented reality text entry
Darbar et al. Magitext: Around device magnetic interaction for 3d space text entry in smartphone
KR20250094219A (en) Apparatus and method for recognizing sketch

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Unit 02, 3rd floor, 721 Tianhe North Road, Tianhe District, Guangzhou, Guangdong 510630

Applicant after: Mirage virtual reality technology (Guangzhou) Co.,Ltd.

Applicant after: Nanjing Harley Intelligent Technology Co.,Ltd.

Applicant after: GUANGZHOU HUANTEK Co.,Ltd.

Address before: Unit 02, 3rd floor, 721 Tianhe North Road, Tianhe District, Guangzhou, Guangdong 510630

Applicant before: Mirage virtual reality (Guangzhou) Intelligent Technology Research Institute Co.,Ltd.

Applicant before: Nanjing Harley Intelligent Technology Co.,Ltd.

Applicant before: GUANGZHOU HUANTEK Co.,Ltd.

WW01 Invention patent application withdrawn after publication

Application publication date: 20210202
