Disclosure of Invention
The embodiments of the application provide a vehicle intelligent interaction control method, system, and storage medium, which are used to solve the problems of complex operation and inconvenient use in the related art, where the controlled component at the shortest distance is confirmed as the target controlled component based on the position of a key and its distance to each controlled component.
In a first aspect, a vehicle intelligent interaction control method is provided, which includes:
establishing a coordinate system by taking a certain point of the vehicle as a coordinate origin, and acquiring fixed position information of each controlled component in the coordinate system;
setting a binding relation between preset gestures and controlled components, where one preset gesture is bound to a plurality of controlled components;
acquiring electromyographic signals of the user's hand movements, together with initial position information and end position information of the user's hand as it moves in the coordinate system;
obtaining the gesture type of the user's hand according to the electromyographic signals; then deriving a target controlled component based on the gesture type, the binding relation, the fixed position information, the initial position information, and the end position information;
and acquiring the execution information and controlling the target controlled component according to the execution information.
In some embodiments, the execution information includes voice information, electromyographic signals, angle information, or distance information, the distance information being the distance between the user's hand and the origin of coordinates;
before the execution information is acquired, the method further includes: setting specific voice information, a specific electromyographic signal, or specific distance information for each target action of the target controlled component.
In some embodiments, the target controlled component is derived based on the gesture type, the binding relationship, the fixed position information, the initial position information and the end position information, and specifically includes the following steps:
finding out a preset gesture with the same gesture type from the preset gestures, and then obtaining a plurality of controlled components to be determined based on the preset gesture and the binding relation;
according to the initial position information and the end position information, a rotation angle of a hand of a user, an indication direction line of a preset gesture and an initial line of the preset gesture are obtained;
based on initial position information, rotation angle, indication direction line of a preset gesture and initial line of the preset gesture of the user hand, a judgment area is obtained; then screening out controlled components in the judging area among the plurality of controlled components to be determined;
and finding out a target controlled component in the judging area according to the indication direction line.
In some embodiments, the method for finding the target controlled component in the judging area according to the indication direction line includes the following steps:
acquiring fixed position information of a controlled component in a judging area;
taking the fixed position information of the controlled component in the judging area as a fixed point;
judging whether a fixed point exists on the indication direction line or not;
if yes, the controlled component corresponding to the fixed point is a target controlled component; otherwise, there is no target controlled component.
In some embodiments, the method for finding the target controlled component in the judging area according to the indication direction line includes the following steps:
acquiring fixed position information of a controlled component in a judging area;
taking the fixed position information of the controlled component in the judging area as a fixed point;
judging whether a fixed point exists in the error range of the indication direction line or not;
if yes, the controlled component corresponding to the fixed point is a target controlled component; otherwise, there is no target controlled component.
In some embodiments, according to the initial position information and the end position information, a rotation angle of a hand of a user, an indication direction line of a preset gesture and an initial line of the preset gesture are obtained, including the following steps:
according to the initial position information and the end position information, an initial point and an end point are obtained;
connecting the initial point and the end point with the origin of coordinates respectively to form an initial line and an indication direction line;
and acquiring an included angle between the initial line and the indication direction line, and taking the included angle as a rotation angle.
In some embodiments, the determining area is obtained based on initial position information of the hand of the user, a rotation angle, an indication direction line of a preset gesture and an initial line of the preset gesture, and the method includes the following steps:
acquiring the actual distance between the initial position information of the hand of the user and the origin of coordinates;
comparing the actual distance with a set distance;
if the actual distance is greater than the set distance, the judgment area is located outside the vehicle and is a sector area; the two sides of the sector area are the initial line and the indication direction line, respectively, and the angle between them is the rotation angle;
if the actual distance is smaller than the set distance, the judgment area is located inside the vehicle and is a sector area; the two sides of the sector area are the initial line and the indication direction line, respectively, and the angle between them is the rotation angle.
In a second aspect, a vehicle intelligent interactive control system is provided, comprising:
a position acquisition module for establishing a coordinate system with a certain point of the vehicle as a coordinate origin, and acquiring fixed position information of each controlled component in the coordinate system;
the myoelectric wristband is used for acquiring electromyographic signals of the user's hand movements, together with initial position information and end position information of the user's hand as it moves in the coordinate system; the myoelectric wristband is also used for acquiring execution information;
the central controller is connected with the scene engine and the voice processing module; the scene engine is used for setting the binding relation between preset gestures and controlled components, where one preset gesture is bound to a plurality of controlled components; the voice processing module is used for acquiring execution information; the central controller is used for obtaining the gesture type of the user's hand according to the electromyographic signals, and then deriving the target controlled component based on the gesture type, the binding relation, the fixed position information, the initial position information, and the end position information;
the domain controller is used for acquiring execution information by utilizing the voice processing module and the myoelectric wristband; the domain controller is further configured to control the target controlled component according to the execution information.
In some embodiments, the myoelectric wristband includes a myoelectricity identification module, a Bluetooth antenna, a UWB antenna, and a position location module;
the myoelectricity identification module is used for acquiring a myoelectricity signal of the hand action of a user;
the Bluetooth antenna and the UWB antenna are used for transmitting information of the myoelectricity identification module and the position location module to the central controller and the domain controller.
In a third aspect, a computer readable storage medium is provided, having a computer program stored thereon which, when executed by a processor, implements the vehicle intelligent interaction control method.
The technical scheme provided by the application has the beneficial effects that:
The embodiments of the application provide a vehicle intelligent interaction control method, system, and storage medium. By establishing a coordinate system with a point in the vehicle as the origin, the specific positions of the myoelectric wristband and the controlled components can be known. Controlled components are then bound to each preset gesture. The myoelectric wristband can identify the gesture type made by the user, so that all controlled components can be screened to find the several corresponding controlled components. The wristband then acquires the initial position information and end position information of the user's hand as it points, in that gesture type, at the controlled component the user wants to control; combined with the specific positions of the screened controlled components, the target controlled component, i.e., the one the user wants to control, can be obtained, execution information is acquired, and the target controlled component is controlled accordingly. The target controlled component is not determined by the shortest distance in this process: the gesture type identified by the myoelectric wristband gives a preliminary judgment, and the position change of the hand during pointing then locks the final target, so obtaining the target controlled component is both accurate and simple.
In addition, this mode matches the user's operating habits: when a controlled component needs to be controlled, the user makes the corresponding gesture type, points at the component to lock it, and then issues subsequent instructions, achieving touch-free operation and adding a sense of technology.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The embodiments of the application provide a vehicle intelligent interaction control method, system, and storage medium, which are used to solve the problems of complex operation and inconvenient use in the related art, where the controlled component at the shortest distance is confirmed as the target controlled component based on the position of a key and its distance to each controlled component.
Referring to fig. 1-3, a vehicle intelligent interaction control method includes:
step S01, a coordinate system is established by taking a certain point of the vehicle as a coordinate origin, and fixed position information of each controlled component in the coordinate system is acquired; the origin of coordinates is as shown in fig. 1;
step S02, setting a binding relation between preset gestures and controlled components, where one preset gesture is bound to a plurality of controlled components;
step S03, acquiring electromyographic signals of the user's hand movements, together with initial position information and end position information of the user's hand as it moves in the coordinate system;
step S04, obtaining the gesture type of the user's hand according to the electromyographic signals; then deriving the target controlled component based on the gesture type, the binding relation, the fixed position information, the initial position information, and the end position information;
step S05, acquiring execution information and controlling the target controlled component according to the execution information.
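The overall flow of steps S01–S05 can be sketched as follows. The bindings, fixed positions, and the simple perpendicular-distance check are illustrative assumptions, not the claimed implementation: a real system would classify the electromyographic signal with a trained model and apply the sector screening of steps S041–S044.

```python
import math

# Illustrative bindings (step S02) and fixed positions (step S01); all values assumed.
BINDINGS = {"point": ["reading_lamp", "door", "leg_rest"]}
POSITIONS = {"reading_lamp": (1.0, 1.0), "door": (2.0, 0.0), "leg_rest": (0.0, 2.0)}

def classify_gesture(emg_signal):
    """Stand-in for EMG classification (step S04); a real system would run
    a trained model on the wristband signal."""
    return "point" if emg_signal else None

def select_target(emg_signal, end_point, tolerance=0.15):
    """Pick the bound component nearest the pointing line from the origin
    through the hand's end point (a much simplified step S044)."""
    gesture = classify_gesture(emg_signal)
    if gesture not in BINDINGS:
        return None
    ex, ey = end_point
    norm = math.hypot(ex, ey)
    for name in BINDINGS[gesture]:
        px, py = POSITIONS[name]
        # Perpendicular distance of the fixed point from the pointing line.
        if abs(ex * py - ey * px) / norm <= tolerance:
            return name
    return None
```

Under these assumptions, pointing toward (2, 2) with any non-empty signal selects the reading lamp, while an empty signal yields no target.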
The specific positions of the myoelectric bracelet and the controlled component can be known by taking one point in the vehicle as an origin to establish a coordinate system; then binding the controlled component to each preset gesture; the myoelectric wristband can be used for identifying the gesture type made by a user, and then all controlled components can be screened to find out a plurality of corresponding controlled components; then, the myoelectric wristband is utilized to acquire initial position information and end position information of the hand of the user in the process of pointing to the controlled component which the user wants to control in a gesture type, and then the target controlled component, namely the controlled component which the user wants to control, can be acquired by combining the specific positions of the screened controlled components, execution information is acquired, and the target controlled component is controlled according to the execution information; the target controlled component is not determined by the shortest distance in the process, the gesture type determined by the myoelectric bracelet is firstly used for preliminary judgment, and then the final locking is carried out by combining the position change of the hand type in the pointing process, so that the target controlled component is accurate in the process of obtaining the target controlled component, and the steps are simple.
In addition, the mode accords with the operation habit of a user, namely when which controlled component needs to be controlled, the corresponding gesture type is made, then the controlled component needing to be controlled is pointed, the controlled component needing to be controlled is locked, then the subsequent instruction is made, the effect of the idle operation is achieved, and the sense of science and technology is increased.
In some preferred embodiments, the execution information in step S05 includes voice information, electromyographic signals, angle information, or distance information, the distance information being the distance between the user's hand and the origin of coordinates; before the execution information is acquired, the method further includes: setting specific voice information, a specific electromyographic signal, or specific distance information for each target action of the target controlled component.
After the target controlled component is locked, the control modes are enriched: control by voice, by gesture, or by the distance between the hand and the origin of coordinates when a specific action is performed. This also makes personalized settings convenient for the user and improves engagement, thereby achieving multi-modal control.
In some preferred embodiments, step S04 derives the target controlled component based on the gesture type, the binding relationship, the fixed position information, the initial position information, and the end position information, specifically including the steps of:
step S041, finding out preset gestures with the same gesture type in the preset gestures, and then obtaining a plurality of controlled components to be determined based on the preset gestures and binding relations;
step S042, according to the initial position information and the end position information, the rotation angle of the hand of the user, the indication direction line of the preset gesture and the initial line of the preset gesture are obtained; the method specifically comprises the following steps: according to the initial position information and the end position information, an initial point and an end point are obtained; connecting the initial point and the end point with the origin of coordinates respectively to form an initial line and an indication direction line; and acquiring an included angle between the initial line and the indication direction line, and taking the included angle as a rotation angle.
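Step S042 can be sketched as follows, assuming 2-D points and the coordinate origin as described; the function name is illustrative.

```python
import math

def rotation_angle(origin, initial_point, end_point):
    """Return the angle (radians) between the initial line and the
    indication direction line, both drawn from the coordinate origin
    (a sketch of step S042)."""
    ax, ay = initial_point[0] - origin[0], initial_point[1] - origin[1]
    bx, by = end_point[0] - origin[0], end_point[1] - origin[1]
    # Included angle between the two lines via the normalized dot product.
    dot = ax * bx + ay * by
    cos_t = dot / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.acos(max(-1.0, min(1.0, cos_t)))  # clamp guards rounding error
```

For example, a hand moving from (1, 0) to (0, 1) around the origin yields a rotation angle of 90 degrees.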
Step S043, deriving a judgment area based on the initial position information of the user's hand, the rotation angle, the indication direction line of the preset gesture, and the initial line of the preset gesture, as in the areas (A, B, C) and (A°, B°, C°) shown in fig. 1; then screening out, among the plurality of controlled components to be determined, those located in the judgment area. This specifically includes: acquiring the actual distance between the initial position of the user's hand and the origin of coordinates, and comparing it with a set distance. If the actual distance is greater than the set distance, the judgment area is located outside the vehicle and is a sector area; if the actual distance is smaller than the set distance, the judgment area is located inside the vehicle and is a sector area. In both cases the two sides of the sector are the initial line and the indication direction line, respectively, and the angle between them is the rotation angle. Thus, whether the person is outside or inside the vehicle can be judged from the actual position of the hand, so that different components can be controlled accordingly.
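The screening in step S043 can be sketched as below, under stated assumptions: 2-D coordinates, a single set distance separating cabin from exterior, and a sector that does not cross the ±π discontinuity of `atan2`. All names and the threshold value are illustrative.

```python
import math

SET_DISTANCE = 1.5  # metres; assumed boundary between cabin and exterior

def angle_of(origin, p):
    return math.atan2(p[1] - origin[1], p[0] - origin[0])

def screen_candidates(origin, initial_point, end_point, candidates):
    """Keep the candidates inside the sector swept from the initial line
    to the indication direction line, on the same side (inside/outside
    the vehicle) as the user's hand. `candidates` maps name -> (x, y)."""
    hand_dist = math.hypot(initial_point[0] - origin[0],
                           initial_point[1] - origin[1])
    outside = hand_dist > SET_DISTANCE

    a0 = angle_of(origin, initial_point)   # initial line
    a1 = angle_of(origin, end_point)       # indication direction line
    lo, hi = min(a0, a1), max(a0, a1)

    kept = {}
    for name, pos in candidates.items():
        comp_dist = math.hypot(pos[0] - origin[0], pos[1] - origin[1])
        if (comp_dist > SET_DISTANCE) != outside:
            continue  # component is on the wrong side of the vehicle boundary
        if lo <= angle_of(origin, pos) <= hi:
            kept[name] = pos
    return kept
```

For a hand inside the vehicle sweeping from the +x to the +y direction, only in-cabin components within that quadrant survive the screen.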
And step S044, finding out a target controlled component in the judging area according to the indication direction line.
Step S044 specifically includes two forms:
Firstly: acquiring fixed position information of the controlled components in the judgment area; taking the fixed position information of each controlled component in the judgment area as a fixed point; judging whether a fixed point lies on the indication direction line; if yes, the controlled component corresponding to that fixed point is the target controlled component; otherwise, there is no target controlled component.
Secondly, acquiring fixed position information of the controlled component in the judging area; taking the fixed position information of the controlled component in the judging area as a fixed point; judging whether a fixed point exists in the error range of the indication direction line or not; if yes, the controlled component corresponding to the fixed point is a target controlled component; otherwise, there is no target controlled component.
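The second form — a fixed point within an error range of the indication direction line — amounts to a perpendicular-distance test. A minimal sketch follows; the tolerance value and names are assumptions.

```python
import math

def distance_to_line(origin, end_point, point):
    """Perpendicular distance from `point` to the indication direction
    line through the origin and the hand's end point."""
    dx, dy = end_point[0] - origin[0], end_point[1] - origin[1]
    px, py = point[0] - origin[0], point[1] - origin[1]
    # Magnitude of the 2-D cross product gives the perpendicular offset.
    return abs(dx * py - dy * px) / math.hypot(dx, dy)

def find_target(origin, end_point, fixed_points, tolerance=0.1):
    """Return the component whose fixed point falls within `tolerance`
    of the indication direction line, or None if there is no target."""
    for name, pos in fixed_points.items():
        if distance_to_line(origin, end_point, pos) <= tolerance:
            return name
    return None
```

Setting `tolerance=0` recovers the first form, where the fixed point must lie exactly on the line.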
The application also provides a vehicle intelligent interaction control system, comprising: a scene engine, a position acquisition module, a voice processing module, a myoelectric wristband, a central controller, and a domain controller.
A position acquisition module for establishing a coordinate system with a certain point of the vehicle as a coordinate origin, and acquiring fixed position information of each controlled component in the coordinate system;
the myoelectric wristband is used for acquiring electromyographic signals of the user's hand movements, together with initial position information and end position information of the user's hand as it moves in the coordinate system; the myoelectric wristband is also used for acquiring execution information. For the principle and method by which the wristband obtains electromyographic signals, reference may be made to the related description of CN 105522986 A.
The central controller is connected with the scene engine and the voice processing module; the scene engine is used for setting the binding relation between preset gestures and controlled components, where one preset gesture is bound to a plurality of controlled components; the voice processing module is used for acquiring execution information; the central controller is used for obtaining the gesture type of the user's hand according to the electromyographic signals, and then deriving the target controlled component based on the gesture type, the binding relation, the fixed position information, the initial position information, and the end position information;
a domain controller for acquiring execution information using the voice processing module and the myoelectric wristband; the domain controller is further used for controlling the target controlled component according to the execution information; wherein the execution information is information for controlling the operation of the target controlled component.
The myoelectric wristband comprises a myoelectricity identification module, a Bluetooth antenna, a UWB antenna, and a position location module; the myoelectricity identification module is used for acquiring electromyographic signals of the user's hand movements;
the Bluetooth antenna and the UWB antenna are used for transmitting information from the myoelectricity identification module and the position location module to the central controller and the domain controller; the position location module has an accuracy within 1 cm and also communicates with the vehicle in real time to determine the initial position information and end position information.
The operation principle of the vehicle intelligent interaction control system can refer to fig. 2, where the plurality of controlled components correspond to actuator 1, actuator 2, …, actuator n; the central controller is connected to the cloud through a T-BOX to transmit data; the cloud includes voice, TSP, and a developer platform, and sends received voice information to the voice processing module for processing.
A computer readable storage medium is provided, having a computer program stored thereon which, when executed by a processor, implements the vehicle intelligent interaction control method.
Several examples of applications using the above-described manner are given below.
Example 1
A binding relation between preset gestures and the controllers of controlled components is established in the scene engine; for example, a pointing gesture with one extended finger is bound to the switch controllers of the lamps, leg rests, doors, and the like.
Pre-training is performed with the myoelectric wristband to identify the electromyographic signal band corresponding to the user's one-finger pointing gesture. A position in the vehicle body is defined as the coordinate origin; when the user makes the pointing gesture, the initial position information and end position information during the pointing process are used to obtain the rotation angle of the user's hand and the indication direction line of the preset gesture. Combined with the correspondence between preset gestures and controllers defined in the scene engine, the actuator the gesture intends to control is locked. Then, combined with voice instructions such as "open that" or "close that", the specific operation on the controller is judged. In this way, a user sitting in the front row can, for example, turn on the reading lamp on the right side of the second row.
The scheme is not limited to the above example; for instance, when the opening and closing of a door is to be controlled from a distance, it must also be determined whether other conditions are met, for example that the vehicle speed is 0 and the gear is in P.
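A minimal sketch of such a condition gate follows, assuming the speed and gear are read from the vehicle bus; the function and parameter names are hypothetical.

```python
def can_operate_door(vehicle_speed, gear):
    """Gate a remote door command on the safety conditions mentioned
    above: the vehicle must be stationary and in the P (park) gear."""
    return vehicle_speed == 0 and gear == "P"
```

A door command would only be forwarded to the actuator when this check passes.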
Example 2
This example describes how an actuator action is implemented using the exact position.
A pressing gesture is bound to all the seats in the vehicle in the scene engine to achieve gesture control of the seats. For example, to control the front passenger seat, it is first determined whether the position where the pressing gesture is made is the driver's seat, the front passenger seat, or the rear row; then, combining the initial position information and end position information of the user's hand as it moves in the coordinate system, the corresponding seat controller is locked. Which adjustment is intended is judged from the angle coordinates of the wristband (for example, movement parallel to the Y-axis adjusts the seat forwards and backwards, while tilt relative to the Y-axis adjusts its angle), and forwards, backwards, upwards, downwards, and so on are judged from the direction of force applied to the wristband.
The scheme can also be used for welcome unlocking outside the vehicle, executing different instructions according to the distance between the user and the vehicle (for example, power on at 10 m, turn on the lights at 5 m, turn on the screen at 1 m, and automatically unlock the door at 0.1 m).
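The welcome sequence in this example can be sketched as a threshold table; the action names are assumptions.

```python
# Welcome-mode thresholds from the example above (metres); action names assumed.
WELCOME_STEPS = [
    (10.0, "power_on"),
    (5.0, "lights_on"),
    (1.0, "screen_on"),
    (0.1, "unlock_doors"),
]

def welcome_actions(distance_to_vehicle):
    """Return every welcome action whose trigger distance has been
    reached as the user approaches the vehicle."""
    return [action for threshold, action in WELCOME_STEPS
            if distance_to_vehicle <= threshold]
```

At 4 m, for example, the vehicle has powered on and turned on the lights but not yet the screen.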
It can also be used for light-language changes, with different light languages defined by different gestures in the scene engine;
the method can also be used for screen control, with the scene engine binding gestures to shortcut instructions in each application. For the application on the top layer of the screen, taking music as an example, a two-finger yaw gesture controls track switching and up/down movement controls the volume; yaw and up/down are judged from position coordinates, and the adjustment amplitude can correspond to the movement amplitude of the gesture.
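The two-finger music control just described can be sketched as follows; the command names and the amplitude mapping are assumptions.

```python
def music_command(dx, dy):
    """Map a two-finger gesture displacement (dx, dy) to a media command:
    horizontal (yaw) motion switches tracks, vertical motion changes the
    volume, and the step count follows the movement amplitude."""
    if abs(dx) >= abs(dy):
        return ("next_track" if dx > 0 else "prev_track", round(abs(dx)))
    return ("volume_up" if dy > 0 else "volume_down", round(abs(dy)))
```

A dominant rightward sweep thus advances the track, while a mostly downward motion lowers the volume by an amount proportional to the sweep.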
The same principle can be used to adjust the air conditioner, with zoned opening, closing, and temperature adjustment; it can also be used for the charging port cover, which opens automatically when the user is detected arriving at it and making an opening action.
Through the above embodiments, the user's intention is accurately executed by combining voice, gestures, and position during interaction, thereby completing multi-modal control.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM) and/or nonvolatile memory, etc., such as Read Only Memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer readable media, including permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data.
Examples of storage media for a computer include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves. It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the description of the present application, it should be noted that the orientation or positional relationships indicated by terms such as "upper" and "lower" are based on the orientation or positional relationships shown in the drawings, are merely for convenience of describing the present application and simplifying the description, and do not indicate or imply that the apparatus or element in question must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the present application. Unless specifically stated or limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly: for example, a connection may be fixed, detachable, or integral; mechanical or electrical; direct or indirect through an intermediate medium; or a communication between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
It should be noted that in the present application, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between such entities or actions.
The foregoing is only a specific embodiment of the application to enable those skilled in the art to understand or practice the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.