CN114255746A - Control method, device and system of sweeping robot and readable storage medium - Google Patents
- Publication number
- CN114255746A (application CN202011001294.1A)
- Authority
- CN
- China
- Prior art keywords
- voice
- sweeping robot
- feature vector
- instruction
- mobile terminal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G10L15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
- A47L11/24 — Floor-sweeping machines, motor-driven
- A47L11/40 — Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
- A47L11/4002 — Installations of electric equipment
- A47L11/4011 — Regulation of the cleaning machine by electric means; control systems and remote control systems therefor
- G10L15/08 — Speech classification or search
- A47L2201/00 — Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
- G10L2015/223 — Execution procedure of a spoken command
Abstract
The invention relates to the technical field of sweeping robots, and discloses a control method, device and system of a sweeping robot and a readable storage medium. The control method of the sweeping robot is applied to a mobile terminal and comprises the following steps: collecting voice signals around the mobile terminal in a voice input mode; recognizing the voice signal and judging whether a corresponding voice instruction for the voice signal can be queried in a preset voice instruction set; and if the voice instruction is queried, sending the voice instruction to the sweeping robot so that the sweeping robot executes the voice instruction. The invention can improve the convenience of using the sweeping robot and the user experience.
Description
Technical Field
The invention relates to the technical field of sweeping robots, in particular to a control method, a control device and a control system of a sweeping robot and a readable storage medium.
Background
With the rapid development of science and technology and the improvement of people's living standards, the sweeping robot plays an increasingly important role in daily life. The sweeping robot can automatically complete floor cleaning work and thus saves people's cleaning time. However, an existing sweeping robot needs to be controlled with a dedicated control device, so the user has to spend time looking for the control device before controlling the sweeping robot, which makes the sweeping robot very inconvenient to use.
At present, a control APP (Application) of the sweeping robot can be downloaded to a user's mobile terminal, such as a mobile phone or a tablet computer, but the user can only control the sweeping robot through tap operations; because the APP has many functions and the operation steps are complex, the sweeping robot cannot be controlled quickly, which reduces the user experience.
Disclosure of Invention
The invention mainly aims to provide a control method, a control device, a control system and a readable storage medium of a sweeping robot, and aims to improve the use convenience of the sweeping robot and the use experience of a user.
In order to achieve the above object, the present invention provides a control method of a sweeping robot, where the control method of the sweeping robot is applied to a mobile terminal, and the control method of the sweeping robot includes the following steps:
acquiring voice signals around the mobile terminal in a voice input mode;
recognizing the voice signal and judging whether a corresponding voice instruction for the voice signal can be queried in a preset voice instruction set;
if the voice instruction is queried, sending the voice instruction to the sweeping robot so that the sweeping robot executes the voice instruction.
Optionally, the step of recognizing the speech signal includes:
analyzing the voice signal to obtain a feature vector of the voice signal;
judging whether a preset feature vector comprises the feature vector;
if the preset feature vector comprises the feature vector, generating an acoustic model according to the feature vector;
and recognizing the voice signal through the acoustic model.
Optionally, after the step of determining whether the preset feature vector includes the feature vector, the method further includes:
if the preset feature vector does not comprise the feature vector, analyzing the feature vector;
judging whether the feature vector meets the voice feature of human according to the analysis result;
and if the feature vector meets the voice feature, outputting a first prompt message to prompt a user whether to add the feature vector into the preset feature vector.
Optionally, after the step of outputting a first prompt message to prompt a user whether to add the feature vector to the preset feature vector if the feature vector satisfies the voice feature, the method further includes:
if the feature vector meets the voice feature, recognizing the voice signal into a text;
and displaying the text on a screen of the mobile terminal.
Optionally, after the step of outputting a first prompt message to prompt a user whether to add the feature vector to the preset feature vector if the feature vector satisfies the voice feature, the method further includes:
and when an adding instruction is received, adding the characteristic vector into the preset characteristic vector according to the adding instruction.
Optionally, when an add instruction is received, adding the feature vector to the preset feature vector according to the add instruction includes:
judging whether the voice signal is an adding instruction or not;
and if the voice signal is the adding instruction, adding the feature vector into the preset feature vector according to the adding instruction.
Optionally, the step of generating an acoustic model according to the feature vector includes:
updating training data of the acoustic model according to the feature vector;
and retraining the acoustic model through the updated training data to obtain an updated acoustic model.
Optionally, after the step of recognizing the voice signal and determining whether the voice signal queries the corresponding voice instruction in a preset voice instruction set, the method further includes:
and if the voice instruction cannot be inquired, outputting a second prompt message to prompt the user that the voice instruction is wrong.
Optionally, before the step of acquiring the voice signal around the mobile terminal in the voice input mode, the method further includes:
when the mobile terminal is detected to start a sweeping robot application program, detecting whether a voice input icon is clicked or not;
and if the voice input icon is clicked, entering a voice input mode.
Optionally, before the step of acquiring the voice signal around the mobile terminal in the voice input mode, the method further includes:
when detecting that the mobile terminal starts a sweeping robot application program, detecting whether the mobile terminal collects starting voice;
and if the starting voice is detected, entering a voice input mode.
In order to achieve the above object, the present invention further provides a control method of a sweeping robot, where the control method is applied to the mobile terminal and further includes the following steps:
receiving feedback information sent by the sweeping robot, and determining the content of the feedback information;
and if the content of the feedback information is that the voice instruction cannot be executed, outputting a third prompt message to prompt the user that the floor sweeping robot cannot execute the voice instruction.
And if the content of the feedback information is that the voice instruction execution is finished, outputting a fourth prompt message to prompt the user that the voice instruction execution is finished.
Optionally, the control method of the sweeping robot further includes:
and if the voice signal is detected to be voice exiting, exiting the voice input mode.
In addition, in order to achieve the above object, the present invention further provides a control device of a sweeping robot, including:
the voice acquisition module is used for acquiring voice signals around the mobile terminal in a voice input mode;
the instruction judging module is used for identifying the voice signal and judging whether the voice signal queries a corresponding voice instruction in a preset voice instruction set;
and the instruction sending module is used for sending the voice instruction to the sweeping robot if the voice instruction is inquired, so that the sweeping robot executes the voice instruction.
Optionally, the instruction determining module includes:
the vector acquisition unit is used for analyzing the voice signal to obtain a feature vector of the voice signal;
the vector judging unit is used for judging whether a preset feature vector comprises the feature vector;
the model generation unit is used for generating an acoustic model according to the feature vector if the preset feature vector comprises the feature vector;
and the voice recognition unit is used for recognizing the voice signal through the acoustic model.
Optionally, the instruction determining module further includes:
the vector analysis unit is used for analyzing the feature vector if the preset feature vector does not comprise the feature vector;
the characteristic judging unit is used for judging whether the characteristic vector meets the voice characteristic of human according to the analysis result;
and the message prompting unit is used for outputting a first prompting message to prompt a user whether to add the feature vector into the preset feature vector if the feature vector meets the voice feature.
Optionally, the instruction determining module further includes:
a text recognition unit, configured to recognize the speech signal as a text if the feature vector satisfies the speech feature;
and the text display unit is used for displaying the text on the screen of the mobile terminal.
In addition, in order to achieve the above object, the present invention further provides a control system of a sweeping robot, including a memory, a processor, and a control program of the sweeping robot stored in the memory; when the control program of the sweeping robot is executed by the processor, the steps of the control method of the sweeping robot applied to the mobile terminal or of the control method of the sweeping robot applied to the sweeping robot are implemented.
In addition, in order to achieve the above object, the present invention further provides a computer readable storage medium, where a control program of the sweeping robot is stored, and when the control program of the sweeping robot is executed by a processor, the steps of the control method of the sweeping robot are implemented.
The invention provides a control method, device and system of a sweeping robot and a readable storage medium. Voice signals around the mobile terminal are collected in a voice input mode; the voice signal is recognized, and whether a corresponding voice instruction for the voice signal can be queried in a preset voice instruction set is judged; and if the voice instruction is queried, the voice instruction is sent to the sweeping robot so that the sweeping robot executes the voice instruction. In this way, the collected voice signal of the user is recognized, the voice instruction corresponding to the voice signal is found in the preset voice instruction set, and the voice instruction is then sent to the sweeping robot so that the sweeping robot executes it.
Drawings
Fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a first embodiment of a control method of a sweeping robot applied to a mobile terminal according to the present invention;
fig. 3 is a schematic flow chart of a first embodiment of a control method of a sweeping robot applied to the sweeping robot according to the present invention;
fig. 4 is a schematic functional block diagram of a control device of a sweeping robot applied to a mobile terminal according to a first embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention.
The terminal in the embodiment of the invention is a mobile terminal or a sweeping robot, and the mobile terminal can be a mobile phone, a tablet computer, a Personal Computer (PC), a microcomputer, a notebook computer and other terminal equipment.
As shown in fig. 1, the terminal may include: a processor 1001, such as a CPU (Central Processing Unit), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to realize connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, the memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a control program of the sweeping robot.
In the terminal shown in fig. 1, the processor 1001 may be configured to call the control program of the sweeping robot stored in the memory 1005 and applied to the mobile terminal, and perform the following operations:
acquiring voice signals around the mobile terminal in a voice input mode;
recognizing the voice signal and judging whether a corresponding voice instruction for the voice signal can be queried in a preset voice instruction set;
if the voice instruction is queried, sending the voice instruction to the sweeping robot so that the sweeping robot executes the voice instruction.
Further, the processor 1001 may be configured to call the control program of the sweeping robot stored in the memory 1005, and further perform the following operations:
analyzing the voice signal to obtain a feature vector of the voice signal;
judging whether a preset feature vector comprises the feature vector;
if the preset feature vector comprises the feature vector, generating an acoustic model according to the feature vector;
and recognizing the voice signal through the acoustic model.
Further, the processor 1001 may be configured to call the control program of the sweeping robot stored in the memory 1005, and further perform the following operations:
if the preset feature vector does not comprise the feature vector, analyzing the feature vector;
judging whether the feature vector meets the voice feature of human according to the analysis result;
and if the feature vector meets the voice feature, outputting a first prompt message to prompt a user whether to add the feature vector into the preset feature vector.
Further, the processor 1001 may be configured to call the control program of the sweeping robot stored in the memory 1005, and further perform the following operations:
if the feature vector meets the voice feature, recognizing the voice signal into a text;
and displaying the text on a screen of the mobile terminal.
Further, the processor 1001 may be configured to call the control program of the sweeping robot stored in the memory 1005, and further perform the following operations:
and when an adding instruction is received, adding the characteristic vector into the preset characteristic vector according to the adding instruction.
Further, the processor 1001 may be configured to call the control program of the sweeping robot stored in the memory 1005, and further perform the following operations:
judging whether the voice signal is an adding instruction or not;
and if the voice signal is the adding instruction, adding the feature vector into the preset feature vector according to the adding instruction.
Further, the processor 1001 may be configured to call the control program of the sweeping robot stored in the memory 1005, and further perform the following operations:
updating training data of the acoustic model according to the feature vector;
and retraining the acoustic model through the updated training data to obtain an updated acoustic model.
Further, the processor 1001 may be configured to call the control program of the sweeping robot stored in the memory 1005, and further perform the following operations:
and if the voice instruction cannot be inquired, outputting a second prompt message to prompt the user that the voice instruction is wrong.
Further, the processor 1001 may be configured to call the control program of the sweeping robot stored in the memory 1005, and further perform the following operations:
when the mobile terminal is detected to start a sweeping robot application program, detecting whether a voice input icon is clicked or not;
and if the voice input icon is clicked, entering a voice input mode.
Further, the processor 1001 may be configured to call the control program of the sweeping robot stored in the memory 1005, and further perform the following operations:
when detecting that the mobile terminal starts a sweeping robot application program, detecting whether the mobile terminal collects starting voice;
and if the starting voice is detected, entering a voice input mode.
Further, the processor 1001 may be configured to call the control program of the sweeping robot stored in the memory 1005, and further perform the following operations:
receiving feedback information sent by the sweeping robot, and determining the content of the feedback information;
and if the content of the feedback information is that the voice instruction cannot be executed, outputting a third prompt message to prompt the user that the floor sweeping robot cannot execute the voice instruction.
And if the content of the feedback information is that the voice instruction execution is finished, outputting a fourth prompt message to prompt the user that the voice instruction execution is finished.
Further, the processor 1001 may be configured to call the control program of the sweeping robot stored in the memory 1005, and further perform the following operations:
and if the voice signal is detected to be voice exiting, exiting the voice input mode.
In the terminal shown in fig. 1, the processor 1001 may be further configured to call the control program of the sweeping robot stored in the memory 1005 and applied to the sweeping robot, and perform the following operations:
receiving a voice instruction sent by a mobile terminal, and executing the voice instruction;
detecting whether the sweeping robot normally executes the voice command;
and sending feedback information of the execution process to the mobile terminal according to the detection result.
Further, the processor 1001 may be configured to call the control program of the sweeping robot stored in the memory 1005, and further perform the following operations:
if the floor sweeping robot is detected to be incapable of executing the voice command normally, sending feedback information that the voice command cannot be executed to the mobile terminal;
and if the floor sweeping robot is detected to finish the voice command, sending feedback information of finishing the voice command execution to the mobile terminal.
Based on the hardware structure, various embodiments of the control method of the sweeping robot are provided.
The invention provides a control method of a sweeping robot applied to a mobile terminal.
Referring to fig. 2, fig. 2 is a flowchart illustrating a first embodiment of a control method of a sweeping robot applied to a mobile terminal according to the present invention.
In this embodiment, the control method of the sweeping robot includes:
step S10, collecting voice signals around the mobile terminal in a voice input mode;
in this embodiment, the control method of the sweeping robot is implemented by a control device of the sweeping robot, and the control device of the sweeping robot may be a terminal device such as a mobile phone, a tablet computer, a PC, a microcomputer, and a notebook computer. The control device of the sweeping robot is described by taking a mobile phone as an example.
In the present embodiment, voice signals around the mobile terminal are collected in the voice input mode. The voice input mode is a mode in which the sweeping robot is controlled by voice, so the user can control the sweeping robot with voice alone; for example, the user says "start sweeping" to the mobile terminal, and the sweeping robot starts sweeping.
It should be noted that the mobile terminal is provided with an audio acquisition device, which can collect voice signals within a certain distance around the mobile terminal. The voice signal is subjected to noise reduction processing; specifically, it can be processed by noise reduction methods such as adaptive filtering, basic spectral subtraction, or Wiener filtering, that is, a Fourier transform is performed on the voice signal to obtain its spectrum, the spectrum is then analyzed, and the noise-reduced voice signal is obtained after the irrelevant spectral components are removed.
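As a concrete illustration of the spectral-subtraction style of noise reduction described above, the following Python sketch estimates the noise spectrum from the first few frames of a recording (assumed to contain only background noise) and subtracts it frame by frame. The frame length, the number of noise frames and the function names are assumptions made for this example, not details taken from the patent.

```python
import numpy as np

def spectral_subtraction(signal, frame_len=512, noise_frames=8):
    """Very small spectral-subtraction sketch over non-overlapping frames."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    # Average magnitude spectrum of the leading frames, assumed to be noise only.
    noise_mag = np.mean([np.abs(np.fft.rfft(f)) for f in frames[:noise_frames]], axis=0)

    cleaned = []
    for frame in frames:
        spectrum = np.fft.rfft(frame)
        # Subtract the noise estimate from the magnitude, floor it at zero,
        # then rebuild the frame with the original phase.
        magnitude = np.maximum(np.abs(spectrum) - noise_mag, 0.0)
        cleaned.append(np.fft.irfft(magnitude * np.exp(1j * np.angle(spectrum)), n=frame_len))
    return np.concatenate(cleaned)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 16000, endpoint=False)
    noisy = np.concatenate([0.3 * rng.standard_normal(4096),  # noise-only lead-in
                            np.sin(2 * np.pi * 440 * t) + 0.3 * rng.standard_normal(16000)])
    print(spectral_subtraction(noisy).shape)
```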
In addition, it should be noted that the voice signal is generally input through a microphone. The microphone converts the sound waves into a voltage signal, which is then sampled by an analog-to-digital conversion device (e.g., a sound card) to convert the continuous voltage signal into a digital signal that can be processed by the processor. The sound card is an important part of the audio acquisition device for processing the voice signal, and provides functions such as filtering, amplification, digital-to-analog conversion and analog-to-digital conversion of the voice signal.
Step S20, recognizing the voice signal and judging whether a corresponding voice instruction for the voice signal can be queried in a preset voice instruction set;
after the voice signals are acquired, the voice signals are identified, and whether the voice signals inquire the corresponding voice commands in a preset voice command set or not is judged. The preset voice instruction set is an instruction set for controlling the sweeping robot to execute various functions, and is set according to actual functions of the sweeping robot, for example, an operation instruction set for controlling the movement of the sweeping robot, selecting a sweeping mode, performing charging and discharging, and the like.
It should be noted that the voice instructions in the preset voice instruction set can control the sweeping robot to implement various functions; for example, if the voice instruction is "forward", the sweeping robot is controlled to move forward, and if the voice instruction is "clean the room", the sweeping robot is controlled to start cleaning the room. The voice signals corresponding to the preset voice instruction set can be set according to the user's habits; for example, the voice instruction for cleaning a room may correspond to voice signals such as "start to clean", "start sweeping", "sweep the floor" and the like. In addition, the preset voice instruction set can be expanded as the functions of the sweeping robot increase; specifically, the added voice instructions can be added to the preset voice instruction set through the sweeping robot application program, and the voice signals corresponding to the added voice instructions are learned, so that the added voice instructions can be triggered by voice signals.
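Purely as an illustration of how such a preset voice instruction set might be organized, the sketch below maps several spoken phrases onto one robot command and allows new commands to be registered as the robot gains functions. The phrases, command names and function names are hypothetical, not the patent's actual data structures.

```python
from typing import Optional

# Several spoken phrases may correspond to the same voice instruction.
PRESET_INSTRUCTION_SET = {
    "clean the room": "START_CLEANING",
    "start to clean": "START_CLEANING",
    "sweep the floor": "START_CLEANING",
    "forward": "MOVE_FORWARD",
    "go and charge": "RETURN_TO_DOCK",
}

def query_instruction(recognized_text: str) -> Optional[str]:
    """Return the matching voice instruction, or None if nothing is found."""
    return PRESET_INSTRUCTION_SET.get(recognized_text.strip().lower())

def register_instruction(phrase: str, command: str) -> None:
    """Add a new phrase-to-instruction mapping when the robot gains a function."""
    PRESET_INSTRUCTION_SET[phrase.strip().lower()] = command

print(query_instruction("Clean the room"))   # -> START_CLEANING
print(query_instruction("do a backflip"))    # -> None
```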
In addition, it should be noted that speech recognition requires processing the collected speech signal: first the feature parameters of the speech signal are extracted, then the feature parameters are vectorized, and finally a corresponding acoustic model is generated for recognition. The speech signal can be recognized by algorithms such as hidden Markov models, deep neural networks, convolutional neural networks, or recurrent neural networks; in addition, speech recognition may be based on continuous speech, keywords, isolated words and so on, which will not be described in detail here. Specifically, the speech recognition process involves sampling a continuous sound wave and quantizing each sample to obtain a compressed digital representation of the sound wave. The samples are placed in overlapping frames, and for each frame a feature vector describing the spectral content is extracted. The word represented by the speech is then recognized according to the characteristics of the speech signal. The speech recognition process mainly comprises five steps: speech signal collection, speech signal preprocessing, feature parameter extraction, feature parameter vectorization, and speech recognition.
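The paragraph above mentions that samples are placed in overlapping frames and that a feature vector describing the spectral content is extracted for each frame. The sketch below shows one plausible way to do that with short-time FFT magnitudes collapsed into a handful of band energies; the frame length, hop size and number of bands are assumptions for illustration only.

```python
import numpy as np

def frame_features(signal, frame_len=400, hop=160, n_bands=16):
    """Return one small spectral feature vector per overlapping frame."""
    window = np.hamming(frame_len)
    vectors = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        spectrum = np.abs(np.fft.rfft(frame))
        # Collapse the magnitude spectrum into a few band energies (log scaled).
        bands = np.array_split(spectrum, n_bands)
        vectors.append(np.log1p([band.sum() for band in bands]))
    return np.array(vectors)

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 16000, endpoint=False)
    feats = frame_features(np.sin(2 * np.pi * 300 * t))
    print(feats.shape)   # (number_of_frames, n_bands)
```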
Step S30, if the voice command is inquired, the voice command is sent to the sweeping robot, so that the sweeping robot executes the voice command.
And finally, if the voice instruction is inquired, sending the voice instruction to the sweeping robot so that the sweeping robot executes the voice instruction. The voice instruction can be directly sent to the sweeping robot or can be forwarded through the server.
It should be noted that the sending mode can be set according to the connection relationship between the mobile terminal and the sweeping robot. For example, if the mobile terminal and the sweeping robot are connected to the same router or network, the mobile terminal only needs to send the voice instruction through the local area network, and the sweeping robot can respond to the voice instruction quickly. For another example, when the mobile terminal is connected to a mobile network such as 3G, 4G or 5G, the mobile terminal may connect to the local area network where the sweeping robot is located through the Internet and then send the voice instruction through that local area network. For another example, if both the mobile terminal and the sweeping robot are connected to a mobile network such as 3G, 4G or 5G, the mobile terminal sends the voice instruction through the Internet.
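As a hedged sketch of forwarding the queried voice instruction to the sweeping robot over a local area network, the snippet below sends a small JSON message over TCP. The robot's IP address, port and message format are invented for the example; a real product would use whatever protocol (or cloud relay) the vendor defines.

```python
import json
import socket

ROBOT_ADDRESS = ("192.168.1.50", 8888)   # assumed LAN address of the robot

def send_voice_instruction(command: str, timeout: float = 2.0) -> bool:
    """Send one instruction; True means the robot returned some reply."""
    message = json.dumps({"type": "voice_instruction", "command": command}).encode("utf-8")
    try:
        with socket.create_connection(ROBOT_ADDRESS, timeout=timeout) as conn:
            conn.sendall(message)
            return bool(conn.recv(1024))
    except OSError:
        # Robot unreachable: the caller can fall back to an error prompt.
        return False

if __name__ == "__main__":
    if not send_voice_instruction("START_CLEANING"):
        print("Instruction could not be delivered")
```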
The embodiment of the invention provides a control method of a sweeping robot, which includes: collecting voice signals around the mobile terminal in a voice input mode; recognizing the voice signal and judging whether a corresponding voice instruction for the voice signal can be queried in a preset voice instruction set; and if the voice instruction is queried, sending the voice instruction to the sweeping robot so that the sweeping robot executes the voice instruction. In this way, the collected voice signal of the user is recognized, the voice instruction corresponding to the voice signal is found in the preset voice instruction set, and the voice instruction is then sent to the sweeping robot so that the sweeping robot executes it.
Further, based on the first embodiment, a second embodiment of the control method of the sweeping robot of the present invention is provided.
In this embodiment, the step S20 may include:
a21, analyzing the voice signal to obtain a feature vector of the voice signal;
step a22, judging whether the preset feature vector comprises the feature vector;
a23, if the preset feature vector comprises the feature vector, generating an acoustic model according to the feature vector;
step a24, recognizing the speech signal through the acoustic model.
In this embodiment, first, the voice signal is analyzed to obtain feature parameters of the voice signal, the feature parameters are vectorized to obtain feature vectors of the voice signal, then, whether the preset feature vectors include the feature vectors is judged, if the preset feature vectors include the feature vectors, learning training is performed according to the feature vectors to obtain an acoustic model, and the voice signal is identified through the acoustic model.
It should be noted that the preset feature vectors are the feature vectors of voice signals of the users to whom the sweeping robot belongs. They are obtained by having the user record voice signals in advance, and can also be learned continuously during daily use, in which case whether a feature vector is added to the preset feature vectors is decided according to the user's choice.
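To make the "does the preset feature vector include this feature vector" check concrete, the sketch below compares an utterance-level feature vector against stored user vectors with cosine similarity and a threshold. The threshold value, and the idea of condensing an utterance into a single vector, are assumptions made for illustration, not the patent's prescribed method.

```python
import numpy as np

def is_known_user(feature_vector, preset_vectors, threshold=0.90):
    """Return True if the vector is close enough to any stored user vector."""
    v = feature_vector / (np.linalg.norm(feature_vector) + 1e-12)
    for stored in preset_vectors:
        s = stored / (np.linalg.norm(stored) + 1e-12)
        if float(np.dot(v, s)) >= threshold:      # cosine similarity check
            return True
    return False

preset = [np.array([0.2, 0.8, 0.1, 0.4]), np.array([0.7, 0.1, 0.6, 0.2])]
print(is_known_user(np.array([0.21, 0.79, 0.12, 0.41]), preset))   # True
print(is_known_user(np.array([0.9, 0.05, 0.02, 0.01]), preset))    # False
```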
It should also be noted that the speech analysis process includes sampling a continuous sound wave, and quantizing each sampled value to obtain a compressed digital representation of the sound wave. The samples are located in overlapping frames and for each frame a feature vector is extracted describing the spectral content. The voice analysis process is mainly divided into four steps: voice signal collection, voice signal preprocessing, characteristic parameter extraction of voice signals and vector quantization of characteristic parameters.
The speech signal preprocessing includes filtering, analog-to-digital conversion, pre-emphasis, endpoint detection and other preprocessing before recognition can be carried out. Filtering has two purposes: first, to prevent aliasing interference; second, to suppress the 50 Hz power-frequency interference of the power supply. The filter used for this filtering should be a band-pass filter. Analog-to-digital conversion converts the analog speech signal into a digital signal. Pre-emphasis boosts the high-frequency part so that the spectrum of the signal becomes flatter; the same signal-to-noise ratio can then be obtained over the whole band from low to high frequency, which facilitates spectral analysis. Endpoint detection determines the beginning and end of the speech from a segment of signal containing speech. Effective endpoint detection not only reduces processing time but also eliminates noise interference from the silent segments. There are currently two main types of methods: time-domain characterization methods and frequency-domain characterization methods. The time-domain method performs endpoint detection using the speech volume and zero-crossing rate, with a small amount of calculation; the frequency-domain method performs speech detection by examining the variation and entropy of the sound spectrum.
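A minimal sketch of two of the preprocessing steps named above: a first-order pre-emphasis filter and a simple energy-based endpoint detector. The 0.97 coefficient, frame length and energy threshold are common textbook defaults assumed here rather than values taken from the patent.

```python
import numpy as np

def pre_emphasis(signal, alpha=0.97):
    """First-order pre-emphasis filter: y[n] = x[n] - alpha * x[n-1]."""
    return np.append(signal[0], signal[1:] - alpha * signal[:-1])

def detect_endpoints(signal, frame_len=160, energy_ratio=0.1):
    """Return (start_frame, end_frame) of the detected speech, or None."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    energies = np.array([float(np.sum(f ** 2)) for f in frames])
    if energies.max() == 0.0:
        return None
    # Keep frames whose short-time energy stands out from the loudest frame.
    # A fuller time-domain detector would also use the zero-crossing rate.
    voiced = np.flatnonzero(energies > energy_ratio * energies.max())
    return (int(voiced[0]), int(voiced[-1])) if voiced.size else None

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    silence = 0.01 * rng.standard_normal(1600)
    speech = np.sin(2 * np.pi * 200 * np.linspace(0, 0.3, 4800))
    clip = pre_emphasis(np.concatenate([silence, speech, silence]))
    print(detect_endpoints(clip))   # roughly the frames covering the sine burst
```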
In this embodiment, it is determined whether the feature vector of the voice signal is in the preset feature vector in the voice recognition process, and if the feature vector exists in the preset feature vector, it indicates that the sender of the voice signal is the user of the floor sweeping robot, and after determining that the sender is the user of the floor sweeping robot, the voice signal is further recognized, so that the situation of false triggering can be reduced.
Further, based on the second embodiment, a third embodiment of the control method of the sweeping robot of the present invention is provided.
In this embodiment, after the step a22, the method further includes:
step A, if the preset feature vector does not comprise the feature vector, analyzing the feature vector;
b, judging whether the feature vector meets the voice feature of the human according to the analysis result;
and step C, if the feature vector meets the voice feature, outputting a first prompt message to prompt a user whether to add the feature vector into the preset feature vector.
In this embodiment, if the preset feature vector does not include the feature vector, the feature vector is analyzed, and then, according to an analysis result of the feature vector, whether the feature vector satisfies a human voice feature is determined, and if the feature vector satisfies the human voice feature, a first prompt message is output, so that a user can select whether to add the feature vector to the preset feature vector.
It should be noted that the frequency range of the human voice is usually 50 Hz to 800 Hz, and with resonance it can reach 4 kHz, 8 kHz or higher. According to the frequency characteristics of human speech, the human voice can be distinguished from sounds produced by other objects. Of course, human speech can also be distinguished from other sounds by other voiceprint features.
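A hedged sketch of the "does this sound like a human voice" screening described above: it checks whether most of the spectral energy of the signal falls inside an assumed 50 Hz to 800 Hz band. A real product would rely on richer voiceprint features; the band limits here simply mirror the figures quoted in the text, and the energy ratio is an assumption.

```python
import numpy as np

def looks_like_human_voice(signal, sample_rate=16000,
                           low_hz=50.0, high_hz=800.0, min_ratio=0.5):
    """True if at least min_ratio of the spectral energy lies in [low_hz, high_hz]."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    total = spectrum.sum()
    return bool(total > 0 and spectrum[band].sum() / total >= min_ratio)

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 16000, endpoint=False)
    voice_like = np.sin(2 * np.pi * 220 * t)   # energy near 220 Hz -> True
    whistle = np.sin(2 * np.pi * 3000 * t)     # energy near 3 kHz  -> False
    print(looks_like_human_voice(voice_like), looks_like_human_voice(whistle))
```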
In addition, it should be further noted that the first prompt message may be presented visually, audibly, tactilely, and so on. The visual prompting manner displays the message on the screen of the mobile terminal, for example, an "add" button is displayed on the screen; the auditory prompting manner plays audio for reminding, for example, audio asking whether to add the voice feature; the tactile prompting manner uses vibration, either combined with the display manner or with different vibration frequencies used to distinguish different prompt messages.
Further, after the step C, the method further comprises:
and D, when an adding instruction is received, adding the characteristic vector into the preset characteristic vector according to the adding instruction.
In this embodiment, when an add instruction is received, the feature vector is added to the preset feature vector according to the add instruction. The adding instruction can be triggered by an adding button on the mobile terminal and can also be triggered by a voice signal.
In an embodiment, the step D includes:
step D1, judging whether the voice signal is an adding instruction;
and D2, if the voice signal is the adding instruction, adding the feature vector into the preset feature vector according to the adding instruction.
In this embodiment, whether the voice signal is an adding instruction is determined, and if the voice signal is the adding instruction, the feature vector is added to the preset feature vector according to the adding instruction. Specifically, after the voice signal is identified and determined as the adding instruction, the feature vector of the voice signal is added to the preset feature vector.
In this embodiment, voice signals are distinguished by their feature vectors, so that feature vectors conforming to human voice characteristics can be added to the preset feature vectors; that is, users of the sweeping robot can be added continuously during use without a separate enrollment procedure, which improves the intelligence of the sweeping robot.
Further, based on the third embodiment, a fourth embodiment of the control method of the sweeping robot of the present invention is provided.
In this embodiment, after the step C, the method further includes:
step E, if the feature vector meets the voice feature, recognizing the voice signal into a text;
and F, displaying the text on a screen of the mobile terminal.
In this embodiment, if the feature vector of the voice signal satisfies the voice characteristics of a human being, the voice signal is recognized as a text, and the text is then displayed on the screen of the mobile terminal. The text is a character representation of the voice content spoken by the user, and the voice signal is converted into the text by a voice recognition method.
In this embodiment, the voice signal is converted into a text for display, so that a user can view the specific content of the voice signal more intuitively.
Further, based on the second embodiment, a fifth embodiment of the control method of the sweeping robot of the present invention is provided.
In this embodiment, the step a23 includes:
step a231, updating training data of the acoustic model according to the feature vector;
step a232, retraining the acoustic model through the updated training data to obtain an updated acoustic model.
In this embodiment, the training data of the acoustic model is updated according to the feature vector, and then the acoustic model is retrained through the updated training data to obtain the updated acoustic model.
It should be noted that after the feature vectors are extracted, the words represented by the features can be recognized. The feature vectors extracted from the speech signal, such as predictive coding parameters, correspond to letter sequences. Speech recognition generally adopts one of three methods: template matching, statistical model methods, and probabilistic grammar analysis. All three methods are based on the Bayes decision rule with the maximum likelihood criterion.
In the template matching method, in the training stage, a user speaks each word in a vocabulary list in sequence, and the feature vector of each word is stored in a template library as a template. In the recognition stage, the feature vector sequence of the input voice is compared with each template in the template library in sequence in similarity, and the highest similarity is output as a recognition result.
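The template-matching method described above can be illustrated with the short sketch below: during training, the averaged feature vector of each vocabulary word is stored as a template, and during recognition the input is compared with every template and the most similar one is returned. Using a per-word average rather than full dynamic time warping over frame sequences is a simplification made for this example.

```python
import numpy as np

class TemplateMatcher:
    """Tiny template-matching recognizer over fixed-length feature vectors."""

    def __init__(self):
        self.templates = {}          # word -> template feature vector

    def train(self, word, feature_vectors):
        # Store the mean of the training vectors as this word's template.
        self.templates[word] = np.mean(np.asarray(feature_vectors, dtype=float), axis=0)

    def recognize(self, feature_vector):
        # Return the word whose template is most similar (cosine similarity).
        v = feature_vector / (np.linalg.norm(feature_vector) + 1e-12)
        best_word, best_score = None, -1.0
        for word, template in self.templates.items():
            t = template / (np.linalg.norm(template) + 1e-12)
            score = float(np.dot(v, t))
            if score > best_score:
                best_word, best_score = word, score
        return best_word, best_score

matcher = TemplateMatcher()
matcher.train("forward", [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]])
matcher.train("clean",   [[0.1, 0.9, 0.3], [0.2, 0.8, 0.4]])
print(matcher.recognize(np.array([0.85, 0.15, 0.05])))   # ('forward', ...)
```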
In the embodiment, in the process of generating the acoustic model, new training data is continuously generated, and the acoustic model is retrained, so that the acoustic model is continuously updated, and the accuracy of voice recognition is improved.
Further, based on the first embodiment, a sixth embodiment of the control method of the sweeping robot of the present invention is provided.
In this embodiment, after the step S20, the method further includes:
and G, if the voice instruction cannot be inquired, outputting a second prompt message to prompt the user that the voice instruction is wrong.
In this embodiment, if the voice instruction is not queried, a second prompt message is output to prompt the user that the voice instruction is wrong. The second prompt message may be presented visually, audibly, tactilely, and so on. The visual prompting manner displays the message on the screen of the mobile terminal, for example, "voice instruction error" is displayed on the screen; the auditory prompting manner plays audio for reminding, for example, audio indicating that the voice instruction is wrong; the tactile prompting manner uses vibration, either combined with the display manner or with different vibration frequencies used to distinguish different prompt messages.
In this embodiment, after the voice instruction is not queried, the user is prompted in time to enable the user to input the voice signal again, so that the intelligence of the floor sweeping robot is further improved.
Further, based on the first embodiment, a seventh embodiment of the control method of the sweeping robot of the present invention is provided.
In this embodiment, before the step S10, the method further includes:
step H, when detecting that the mobile terminal starts the sweeping robot application program, detecting whether a voice input icon is clicked or not;
and step I, if the voice input icon is clicked, entering a voice input mode.
In this embodiment, when it is detected that the mobile terminal has started the sweeping robot application, whether the voice input icon is clicked is detected; if the voice input icon is clicked, the mobile terminal enters the voice input mode. The sweeping robot application is the application used to operate the sweeping robot, and the voice input icon in the application is used to trigger entering the voice input mode. It should be noted that the user only needs to open the sweeping robot application and click the voice input icon once; no further tap operations are required afterwards, which is convenient for the user.
In this embodiment, through the voice input icon, the user can independently select whether to enter the voice input mode, and the intelligence of the sweeping robot can be further improved.
Further, based on the first embodiment, an eighth embodiment of the control method of the sweeping robot of the present invention is provided.
In this embodiment, before the step S10, the method further includes:
step J, when detecting that the mobile terminal starts the sweeping robot application program, detecting whether the mobile terminal collects starting voice;
and step K, if the opening voice is detected, entering a voice input mode.
In this embodiment, when detecting that the mobile terminal starts the sweeping robot application, it is detected whether the mobile terminal collects a start voice, and if the start voice is detected, the mobile terminal enters a voice input mode. Specifically, the collected voice signal is identified, and whether the identified voice signal is an opening voice is judged.
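As a small illustration of entering (and, symmetrically, exiting) the voice input mode by a spoken phrase, the sketch below compares the recognized text with assumed wake and exit phrases and flips a mode flag. The actual phrases and the state handling of the real application are unknown and merely assumed here.

```python
START_PHRASES = {"start voice control", "voice mode on"}    # assumed wake phrases
EXIT_PHRASES = {"exit voice control", "voice mode off"}     # assumed exit phrases

class VoiceModeController:
    def __init__(self):
        self.voice_input_mode = False

    def handle_recognized_text(self, text: str) -> None:
        phrase = text.strip().lower()
        if not self.voice_input_mode and phrase in START_PHRASES:
            self.voice_input_mode = True        # start collecting voice commands
        elif self.voice_input_mode and phrase in EXIT_PHRASES:
            self.voice_input_mode = False       # stop collecting voice commands

controller = VoiceModeController()
controller.handle_recognized_text("Start voice control")
print(controller.voice_input_mode)   # True
controller.handle_recognized_text("Exit voice control")
print(controller.voice_input_mode)   # False
```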
It should be noted that the speech recognition process includes sampling a continuous sound wave, and quantizing each sampled value to obtain a compressed digital representation of the sound wave. The samples are located in overlapping frames and for each frame a feature vector is extracted describing the spectral content. Then, recognizing the word represented by the voice according to the characteristics of the voice signal, wherein the voice recognition process mainly comprises five steps: voice signal collection, voice signal preprocessing, voice signal characteristic parameter extraction, characteristic parameter vectorization and voice recognition.
The speech signal preprocessing includes filtering, analog-to-digital conversion, pre-emphasis, endpoint detection and other preprocessing before recognition can be carried out. Filtering has two purposes: first, to prevent aliasing interference; second, to suppress the 50 Hz power-frequency interference of the power supply. The filter used for this filtering should be a band-pass filter. Analog-to-digital conversion converts the analog speech signal into a digital signal. Pre-emphasis boosts the high-frequency part so that the spectrum of the signal becomes flatter; the same signal-to-noise ratio can then be obtained over the whole band from low to high frequency, which facilitates spectral analysis. Endpoint detection determines the beginning and end of the speech from a segment of signal containing speech. Effective endpoint detection not only reduces processing time but also eliminates noise interference from the silent segments. There are currently two main types of methods: time-domain characterization methods and frequency-domain characterization methods. The time-domain method performs endpoint detection using the speech volume and zero-crossing rate, with a small amount of calculation; the frequency-domain method performs speech detection by examining the variation and entropy of the sound spectrum.
In this embodiment, by collecting voice, the user can independently choose whether to enter the voice input mode, where the voice input mode is the mode of controlling the sweeping robot by voice; this can further improve the intelligence of the sweeping robot. Moreover, opening the voice input mode by voice can further improve the convenience of using the sweeping robot.
Further, based on the first embodiment, a ninth embodiment of the control method of the sweeping robot of the present invention is provided.
In this embodiment, after the step S30, the method further includes:
step L, receiving feedback information sent by the sweeping robot, and determining the content of the feedback information;
and step M, if the content of the feedback information is that the voice instruction can not be executed, outputting a third prompt message to prompt the user that the floor sweeping robot can not execute the voice instruction.
And N, if the content of the feedback information is that the voice instruction execution is finished, outputting a fourth prompt message to prompt the user that the voice instruction execution is finished.
In this embodiment, feedback information sent by the sweeping robot is received, the content of the feedback information is determined, and if the content of the feedback information is that the voice instruction cannot be executed, a third prompt message is output to prompt a user that the sweeping robot cannot execute the voice instruction; and if the content of the feedback information is that the voice instruction execution is finished, outputting a fourth prompt message to prompt the user that the voice instruction execution is finished.
The feedback information may be set according to the actual situation, for example, the sweeping robot encounters an obstacle, the battery power of the sweeping robot is low, the voice instruction cannot be executed, the execution of the voice instruction is completed, and the like, which is not specifically limited here.
It should be noted that the third prompt message and the fourth prompt message may be a visual prompt mode, an auditory prompt mode, a tactile prompt mode, and the like, where the visual prompt mode may be displayed on a screen of the mobile terminal; the auditory reminding mode is that audio is played for reminding; the touch reminding mode is a vibration mode and is combined with a display mode, or reminding messages are distinguished by different vibration frequencies.
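A hedged sketch of the mobile-terminal side of this embodiment: the content of the feedback message decides whether the "cannot execute" prompt or the "execution finished" prompt is shown. The feedback field names and prompt texts are assumptions made for illustration.

```python
def handle_feedback(feedback: dict) -> str:
    """Map the robot's feedback content to the prompt shown to the user."""
    content = feedback.get("content")
    if content == "cannot_execute":
        return "Third prompt: the sweeping robot cannot execute the voice instruction."
    if content == "execution_finished":
        return "Fourth prompt: the voice instruction has been executed."
    return f"Status update from the robot: {content}"

print(handle_feedback({"content": "cannot_execute", "reason": "obstacle"}))
print(handle_feedback({"content": "execution_finished"}))
```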
In this embodiment, the working condition of the sweeping robot can be known in time by receiving the feedback information of the sweeping robot, so that the user can perform corresponding processing in time, and the intelligence of the sweeping robot is further improved.
Further, based on the first embodiment, a tenth embodiment of the control method of the sweeping robot of the present invention is provided.
In this embodiment, after the step S30, the method further includes:
and step O, if the voice signal is detected to be voice exiting, exiting the voice input mode.
In this embodiment, if the voice signal is detected as the exit voice, the voice input mode is exited. Specifically, the collected voice signal is identified, and whether the identified voice signal is an exit voice is judged.
It should be noted that the speech recognition process includes sampling a continuous sound wave, and quantizing each sampled value to obtain a compressed digital representation of the sound wave. The samples are located in overlapping frames and for each frame a feature vector is extracted describing the spectral content. Then, recognizing the word represented by the voice according to the characteristics of the voice signal, wherein the voice recognition process mainly comprises five steps: voice signal collection, voice signal preprocessing, voice signal characteristic parameter extraction, characteristic parameter vectorization and voice recognition.
The voice signal preprocessing comprises preprocessing such as filtering, analog-to-digital conversion, pre-emphasis, endpoint detection and the like, and then recognition can be carried out. The purpose of filtering is two, namely, preventing aliasing interference; and secondly, the power frequency interference of a 50Hz power supply is inhibited. The filter used for this filtering should be a band pass filter. Analog-to-digital conversion is the conversion of a speech analog signal to a digital signal. The pre-emphasis processing aims to promote the high-frequency part, so that the frequency spectrum of the signal becomes flat, the signal is kept in the whole frequency band from low frequency to high frequency, the frequency spectrum can be obtained by using the same signal-to-noise ratio, and the frequency spectrum analysis is convenient. Endpoint detection is the determination of the beginning and end of speech from a segment of a signal containing speech. Effective endpoint detection not only reduces processing time, but also eliminates noise interference in the silence. There are two main types of methods at present: a time domain characterization method and a frequency domain characterization method. The time domain characteristic method is to utilize the voice volume and zero crossing rate to carry out endpoint detection, and the calculated amount is small; the frequency domain feature method is used for voice detection by detecting the frequency spectrum variation and entropy of sound.
In the embodiment, the voice input mode is exited in a voice recognition mode, and a user does not need to click a voice input button on the mobile terminal, so that the convenience of the user in using the sweeping robot can be further improved.
The invention also provides a control method of the sweeping robot applied to the sweeping robot.
Referring to fig. 3, fig. 3 is a schematic flow chart of a first embodiment of a control method of a sweeping robot applied to the sweeping robot according to the present invention.
In this embodiment, the control method of the sweeping robot includes:
step S40, receiving a voice instruction sent by the mobile terminal and executing the voice instruction;
in this example, a voice command sent by the mobile terminal is received and executed. The voice instruction can control the sweeping robot to realize various functions, for example, if the voice instruction is forward, the sweeping robot is controlled to move forward, and if the voice instruction is room cleaning, the sweeping robot is controlled to start room cleaning.
It should be noted that the receiving mode can be set according to the connection relationship between the mobile terminal and the sweeping robot. For example, if the mobile terminal and the sweeping robot are connected to the same router or network, the mobile terminal only needs to send the voice instruction through the local area network, and the sweeping robot can receive the voice instruction quickly. For another example, when the mobile terminal is connected to a mobile network such as 3G, 4G or 5G, the mobile terminal may connect to the local area network where the sweeping robot is located through the Internet, and the sweeping robot then receives the voice instruction through that local area network. For another example, if both the mobile terminal and the sweeping robot are connected to a mobile network such as 3G, 4G or 5G, the sweeping robot receives the voice instruction through the Internet.
Step S50, detecting whether the sweeping robot normally executes the voice command;
In this embodiment, whether the sweeping robot normally executes the voice command is detected. It should be noted that the sweeping robot usually encounters various unpredictable conditions during operation, such as obstacles, low battery power, or being unable to execute the command, so it is necessary to detect whether the voice command is executed normally.
And step S60, sending feedback information of the execution process to the mobile terminal according to the detection result.
In this embodiment, the feedback information of the execution process is sent to the mobile terminal according to the detection result. The feedback information may be set according to the actual situation, for example, that the sweeping robot has encountered an obstacle, that the battery power of the sweeping robot is low, that the voice instruction cannot be executed, or that execution of the voice instruction has been completed, which is not specifically limited herein.
It should be noted that the sending mode may be set according to the connection relationship between the mobile terminal and the sweeping robot. For example, if the mobile terminal and the sweeping robot are connected to the same router or network, the sweeping robot only needs to send the feedback information through the local area network, and the mobile terminal can receive it quickly. For another example, when the mobile terminal is connected to a mobile network such as 3G, 4G or 5G, it may connect, through the Internet, to the local area network where the sweeping robot is located, and the sweeping robot then sends the feedback information through the local area network. For another example, if both the mobile terminal and the sweeping robot are connected to a mobile network such as 3G, 4G or 5G, the sweeping robot sends the feedback information through the Internet.
The invention provides a control method of a sweeping robot, which comprises: receiving a voice instruction sent by a mobile terminal and executing the voice instruction; detecting whether the sweeping robot normally executes the voice instruction; and sending feedback information of the execution process to the mobile terminal according to the detection result. In this way, the sweeping robot in the embodiment of the invention executes the received voice instruction and feeds the execution information back to the mobile terminal in time.
Further, based on the first embodiment, a second embodiment of the control method of the sweeping robot of the present invention is provided.
In this embodiment, the step S60 includes:
step a61, if it is detected that the sweeping robot cannot normally execute the voice command, sending feedback information that the voice command cannot be executed to the mobile terminal;
step a62, if it is detected that the sweeping robot completes the voice command, sending feedback information of completion of voice command execution to the mobile terminal.
In this embodiment, if it is detected that the sweeping robot cannot normally execute the voice command, feedback information indicating that the voice command cannot be executed is sent to the mobile terminal; and if it is detected that the sweeping robot has finished executing the voice command, feedback information indicating completion of the voice command execution is sent to the mobile terminal.
In this embodiment, the feedback information of the sweeping robot is sent, so that the user can timely know the working condition of the sweeping robot, and the user can timely perform corresponding processing, thereby further improving the intelligence of the sweeping robot.
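A minimal Python sketch of the detection and feedback branching of steps S50 and S60 is given below; the `execute` and `send_feedback` callables and the status strings are hypothetical names introduced for the example only.

```python
def run_voice_instruction(instruction, execute, send_feedback):
    """Execute a voice instruction and report the outcome to the mobile terminal.

    `execute` is a hypothetical callable that returns True when the robot finishes
    the instruction normally and False (or raises) when it cannot, for example
    because of an obstacle or low battery.
    """
    try:
        completed = execute(instruction)
    except Exception as exc:  # unpredictable conditions during execution
        send_feedback({"instruction": instruction, "status": "cannot_execute", "reason": str(exc)})
        return
    if completed:
        send_feedback({"instruction": instruction, "status": "completed"})
    else:
        send_feedback({"instruction": instruction, "status": "cannot_execute"})
```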
The invention further provides a control device of the sweeping robot applied to the mobile terminal.
Referring to fig. 4, fig. 4 is a functional module schematic diagram of a control device of a sweeping robot applied to a mobile terminal according to a first embodiment of the present invention.
As shown in fig. 4, the control device of the sweeping robot includes:
a voice acquisition module 10, configured to acquire voice signals around the mobile terminal in a voice input mode;
the instruction judging module 20 is configured to identify the voice signal and judge whether the voice signal is queried in a preset voice instruction set to obtain a corresponding voice instruction;
the instruction sending module 30 is configured to send the voice instruction to the sweeping robot if the voice instruction is queried, so that the sweeping robot executes the voice instruction.
Further, the instruction determination module 20 includes:
the vector acquisition unit is used for analyzing the voice signal to obtain a feature vector of the voice signal;
the vector judging unit is used for judging whether a preset feature vector comprises the feature vector;
the model generation unit is used for generating an acoustic model according to the feature vector if the preset feature vector comprises the feature vector;
and the voice recognition unit is used for recognizing the voice signal through the acoustic model.
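To make the cooperation of these units concrete, the following rough Python sketch mirrors the vector acquisition, preset-vector judging and recognition steps. For brevity it matches the feature vector against the preset vectors by cosine similarity instead of generating a full acoustic model, and all class, method and threshold choices are assumptions for the example rather than the invention's specified algorithm.

```python
import numpy as np

class InstructionRecognizer:
    """Illustrative pipeline: feature vector -> preset-vector check -> recognition."""

    def __init__(self, preset_vectors, instruction_set, similarity_thresh=0.9):
        self.preset_vectors = preset_vectors      # {label: preset feature vector}
        self.instruction_set = instruction_set    # {label: voice instruction text}
        self.similarity_thresh = similarity_thresh  # assumed matching threshold

    def extract_vector(self, signal):
        # Stand-in for real feature analysis: log magnitude spectrum of the signal.
        return np.log(np.abs(np.fft.rfft(signal)) + 1e-10)

    def matches_preset(self, vector):
        # Preset vectors are assumed to have the same length as the extracted vector.
        for label, preset in self.preset_vectors.items():
            cos = np.dot(vector, preset) / (np.linalg.norm(vector) * np.linalg.norm(preset) + 1e-10)
            if cos >= self.similarity_thresh:
                return label
        return None

    def recognize(self, signal):
        vector = self.extract_vector(signal)
        label = self.matches_preset(vector)
        if label is None:
            return None  # caller may then check human voice features and prompt the user
        return self.instruction_set.get(label)
```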
Further, the instruction determining module 20 further includes:
the vector analysis unit is used for analyzing the feature vector if the preset feature vector does not comprise the feature vector;
the characteristic judging unit is used for judging whether the characteristic vector meets the voice characteristic of human according to the analysis result;
and the message prompting unit is used for outputting a first prompting message to prompt a user whether to add the feature vector into the preset feature vector if the feature vector meets the voice feature.
Further, the instruction determining module 20 further includes:
a text recognition unit, configured to recognize the speech signal as a text if the feature vector satisfies the speech feature;
and the text display unit is used for displaying the text on the screen of the mobile terminal.
Further, the instruction determining module 20 further includes:
and the vector adding unit is used for adding the characteristic vector into the preset characteristic vector according to the adding instruction when the adding instruction is received.
Further, the vector adding unit further includes:
the voice judging subunit is used for judging whether the voice signal is an adding instruction or not;
and the vector adding subunit is configured to add the feature vector to the preset feature vector according to the adding instruction if the voice signal is the adding instruction.
Further, the instruction determining module 20 further includes:
the data updating unit is used for updating the training data of the acoustic model according to the feature vector;
and the model training unit is used for retraining the acoustic model through the updated training data to obtain the updated acoustic model.
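A hedged sketch of the data updating and model training units is shown below; the centroid-based stand-in acoustic model and the function names are assumptions made only for this illustration.

```python
import numpy as np

class SimpleAcousticModel:
    """Stand-in acoustic model: one mean feature vector (centroid) per instruction label."""

    def __init__(self):
        self.centroids = {}

    def train(self, training_data):
        # training_data: {label: list of feature vectors of equal length}
        self.centroids = {label: np.mean(np.stack(vectors), axis=0)
                          for label, vectors in training_data.items()}

def update_and_retrain(model, training_data, label, new_vector):
    """Add the newly accepted feature vector to the training data and retrain the model."""
    training_data.setdefault(label, []).append(np.asarray(new_vector, dtype=float))
    model.train(training_data)
    return model
```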
Further, the control device of the sweeping robot further comprises:
and the message prompting module is used for outputting a second prompting message to prompt the user that the voice instruction is wrong if the voice instruction cannot be queried.
Further, the control device of the sweeping robot further comprises:
the icon detection module is used for detecting whether the voice input icon is clicked or not when the mobile terminal is detected to start the sweeping robot application program;
and the mode entering module is used for entering a voice input mode if the voice input icon is clicked.
Further, the control device of the sweeping robot further comprises:
the voice detection module is used for detecting whether the mobile terminal acquires a starting voice or not when the mobile terminal is detected to start the sweeping robot application program;
and the mode entering module is also used for entering a voice input mode if the starting voice is detected.
Further, the control device of the sweeping robot further comprises:
the feedback acquisition module is used for receiving feedback information sent by the sweeping robot and determining the content of the feedback information;
and the message prompt module is further used for outputting a third prompt message to prompt the user that the sweeping robot cannot execute the voice command if the content of the feedback information is that the voice command cannot be executed.
And the message prompt module is further used for outputting a fourth prompt message to prompt the user that the execution of the voice instruction is finished if the content of the feedback information is that the execution of the voice instruction is finished.
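A minimal sketch of how the feedback acquisition and message prompt modules might cooperate on the mobile terminal is given below; the status strings and the `show_prompt` callback are hypothetical and simply match the assumed feedback format used in the earlier sketches.

```python
def handle_feedback(feedback, show_prompt):
    """Map the content of the robot's feedback to a user-facing prompt message."""
    status = feedback.get("status")
    if status == "cannot_execute":
        show_prompt("The sweeping robot cannot execute the voice instruction.")
    elif status == "completed":
        show_prompt("The voice instruction has been executed.")
    else:
        show_prompt("Unknown feedback from the sweeping robot: %r" % (feedback,))
```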
The function implementation of each module in the control device of the sweeping robot corresponds to each step in the control method embodiment of the sweeping robot applied to the mobile terminal, and the functions and implementation processes are not described in detail herein.
The invention also provides a control device of the sweeping robot, which is applied to the sweeping robot.
The control device of the sweeping robot comprises:
the instruction receiving module is used for receiving a voice instruction sent by the mobile terminal and executing the voice instruction;
the sweeper detection module is used for detecting whether the sweeping robot normally executes the voice command;
and the feedback sending module is used for sending feedback information of the execution process to the mobile terminal according to the detection result.
Further, the feedback sending module includes:
the feedback sending unit is used for sending feedback information that the voice instruction cannot be executed to the mobile terminal if the fact that the sweeping robot cannot normally execute the voice instruction is detected;
and the feedback sending unit is further used for sending feedback information of the completion of the execution of the voice instruction to the mobile terminal if the fact that the execution of the voice instruction by the sweeping robot is completed is detected.
The invention also provides a control system of the sweeping robot, which comprises a memory, a processor, and a control program of the sweeping robot that is stored in the memory and can be run on the processor; when the control program of the sweeping robot is executed by the processor, the steps of the control method of the sweeping robot according to any one of the above first or second embodiments are realized.
The specific embodiment of the control system of the sweeping robot of the present invention is basically the same as the embodiments of the control method of the sweeping robot, and is not described herein again.
The invention further provides a computer-readable storage medium, on which a control program of a sweeping robot is stored; when executed by a processor, the control program of the sweeping robot implements the steps of the control method of the sweeping robot according to any one of the above first or second embodiments.
The specific embodiment of the computer-readable storage medium of the present invention is basically the same as the embodiments of the control method of the sweeping robot, and is not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011001294.1A CN114255746A (en) | 2020-09-25 | 2020-09-25 | Control method, device and system of sweeping robot and readable storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011001294.1A CN114255746A (en) | 2020-09-25 | 2020-09-25 | Control method, device and system of sweeping robot and readable storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN114255746A true CN114255746A (en) | 2022-03-29 |
Family
ID=80789541
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202011001294.1A Pending CN114255746A (en) | 2020-09-25 | 2020-09-25 | Control method, device and system of sweeping robot and readable storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114255746A (en) |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20140046662A1 (en) * | 2012-08-07 | 2014-02-13 | Interactive Intelligence, Inc. | Method and system for acoustic data selection for training the parameters of an acoustic model |
| CN103631800A (en) * | 2012-08-23 | 2014-03-12 | 腾讯科技(深圳)有限公司 | Information processing method and device |
| US20150350396A1 (en) * | 2012-12-07 | 2015-12-03 | Baidu Online Network Technology (Beijing) Co., Ltd | Method for controlling background through voice and mobile terminal |
| CN106782504A (en) * | 2016-12-29 | 2017-05-31 | 百度在线网络技术(北京)有限公司 | Audio recognition method and device |
| CN107450390A (en) * | 2017-07-31 | 2017-12-08 | 绵阳美菱软件技术有限公司 | A kind of intelligent electrical appliance control device, control method and control system |
| CN108376543A (en) * | 2018-02-11 | 2018-08-07 | 深圳创维-Rgb电子有限公司 | A kind of control method of electrical equipment, device, equipment and storage medium |
| CN111477220A (en) * | 2020-04-15 | 2020-07-31 | 南京邮电大学 | A neural network speech recognition method and system for home spoken environment |
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20140046662A1 (en) * | 2012-08-07 | 2014-02-13 | Interactive Intelligence, Inc. | Method and system for acoustic data selection for training the parameters of an acoustic model |
| CN103631800A (en) * | 2012-08-23 | 2014-03-12 | 腾讯科技(深圳)有限公司 | Information processing method and device |
| US20150350396A1 (en) * | 2012-12-07 | 2015-12-03 | Baidu Online Network Technology (Beijing) Co., Ltd | Method for controlling background through voice and mobile terminal |
| CN106782504A (en) * | 2016-12-29 | 2017-05-31 | 百度在线网络技术(北京)有限公司 | Audio recognition method and device |
| CN107450390A (en) * | 2017-07-31 | 2017-12-08 | 绵阳美菱软件技术有限公司 | A kind of intelligent electrical appliance control device, control method and control system |
| CN108376543A (en) * | 2018-02-11 | 2018-08-07 | 深圳创维-Rgb电子有限公司 | A kind of control method of electrical equipment, device, equipment and storage medium |
| CN111477220A (en) * | 2020-04-15 | 2020-07-31 | 南京邮电大学 | A neural network speech recognition method and system for home spoken environment |
Non-Patent Citations (2)
| Title |
|---|
| 周雷; 龙艳花; 魏浩然: "Research on a novel text-dependent speaker recognition method", Journal of Shanghai Normal University (Natural Sciences), no. 02, 15 April 2017 (2017-04-15), pages 224-230 * |
| 罗庆; 包亚萍; 俞强: "Speech endpoint detection based on improved speech features and extreme learning machine", Microelectronics & Computer, no. 03, 5 March 2020 (2020-03-05), pages 41-45 * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN110310623B (en) | Sample generation method, model training method, device, medium, and electronic apparatus | |
| JP6887031B2 (en) | Methods, electronics, home appliances networks and storage media | |
| CN112074900B (en) | Audio analysis for natural language processing | |
| US12154591B2 (en) | Voice interactive wakeup electronic device and method based on microphone signal, and medium | |
| CN101510905B (en) | Method and apparatus for multi-sensory speech enhancement on a mobile device | |
| KR101622111B1 (en) | Dialog system and conversational method thereof | |
| JP4901736B2 (en) | Apparatus, method, and program for performing user-machine interaction | |
| WO2020244402A1 (en) | Speech interaction wakeup electronic device and method based on microphone signal, and medium | |
| CN110097875A (en) | Interactive voice based on microphone signal wakes up electronic equipment, method and medium | |
| US11398219B2 (en) | Speech synthesizer using artificial intelligence and method of operating the same | |
| CN103065629A (en) | Speech recognition system of humanoid robot | |
| US11200888B2 (en) | Artificial intelligence device for providing speech recognition function and method of operating artificial intelligence device | |
| KR20150145024A (en) | Terminal and server of speaker-adaptation speech-recognition system and method for operating the system | |
| KR20210042523A (en) | An electronic apparatus and Method for controlling the electronic apparatus thereof | |
| CN113593565B (en) | Intelligent home device management and control method and system | |
| US11393447B2 (en) | Speech synthesizer using artificial intelligence, method of operating speech synthesizer and computer-readable recording medium | |
| CN114255746A (en) | Control method, device and system of sweeping robot and readable storage medium | |
| Suk et al. | Voice activated appliances for severely disabled persons | |
| CN116504246B (en) | Voice remote control method, device, storage medium and device based on Bluetooth device | |
| CN108682416B (en) | Local adaptive speech training method and system | |
| US11443732B2 (en) | Speech synthesizer using artificial intelligence, method of operating speech synthesizer and computer-readable recording medium | |
| CN111028832A (en) | Method and device for controlling mute mode of microphone, storage medium and electronic equipment | |
| Wahile et al. | Home Automation System Based on Speech Recognition | |
| CN118155609A (en) | Self-learning local voice recognition method | |
| CN117219077A (en) | Smoke exhaust ventilator controller based on voice recognition, method and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination |