
WO2018124355A1 - Audio device and associated control method - Google Patents


Info

Publication number
WO2018124355A1
WO2018124355A1 (application PCT/KR2017/000096, KR2017000096W)
Authority
WO
WIPO (PCT)
Prior art keywords
voice
state
function
recognition
controller
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2017/000096
Other languages
English (en)
Korean (ko)
Inventor
장순필
김정은
심보준
쓔썅란
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LG Electronics Inc
Original Assignee
LG Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc filed Critical LG Electronics Inc
Publication of WO2018124355A1 publication Critical patent/WO2018124355A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/1613 Constructional details or arrangements for portable computers
    • G06F1/1633 Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1637 Details related to the display arrangement, including those related to the mounting of the display in the housing
    • G06F1/1647 Details related to the display arrangement, including those related to the mounting of the display in the housing including at least an additional display
    • G06F1/165 Details related to the display arrangement, including those related to the mounting of the display in the housing including at least an additional display, the additional display being small, e.g. for presenting status information
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/28 Constructional details of speech recognition systems
    • G10L15/30 Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command

Definitions

  • the present invention relates to an audio device capable of speech recognition and a control method thereof.
  • Terminals may be divided into mobile / portable terminals and stationary terminals according to their mobility.
  • the mobile terminal may be further classified into a handheld terminal and a vehicle mounted terminal according to whether a user can directly carry it.
  • The functions of mobile terminals are diversifying; examples include data and voice communication, taking pictures and videos with a camera, recording voice, playing music files through a speaker system, and outputting images or video to a display unit.
  • Some terminals have an electronic game play function or a multimedia player function.
  • recent mobile terminals may receive multicast signals that provide visual content such as broadcasting, video, and television programs.
  • As such functions become diversified, these terminals are implemented as multimedia players having complex functions, such as taking pictures or videos, playing music or video files, playing games, and receiving broadcasts.
  • the audio device is a device having a speaker system, and may be formed to recognize a voice and perform an operation related to the voice.
  • the audio device may control the home appliances by communicating with the connected home appliances.
  • the user can conveniently execute various functions simply by talking to the audio device.
  • Such an audio device may be configured to start speech recognition when receiving a specific keyword from a user.
  • These specific keywords may vary depending on the manufacturer or software developer.
  • However, since the audio device cannot detect that a voice command has ended, it is configured to terminate voice recognition once a specific voice is recognized and a function related to that voice is executed. Thus, when the function related to a specific voice is erroneously executed, the user is inconvenienced by having to speak the specific keyword again in order to execute the voice recognition function once more.
  • An object of the present invention is to improve the accuracy of speech recognition.
  • Another object of the present invention is to provide a method for recognizing an additional voice associated with a plurality of functions when a plurality of functions associated with a specific voice are detected together.
  • Another object of the present invention is to provide an appropriate speech recognition function according to an operating state.
  • An audio device according to the present invention includes an audio input unit configured to receive voice, and a controller which, when a start signal is received through the audio input unit, switches the execution state of the voice recognition function from a standby state, in which the device waits for reception of the start signal, to a recognition state, in which speech can be recognized.
  • When only one function corresponding to the recognized voice is detected in the recognition state, the controller executes that function and switches the execution state from the recognition state back to the standby state.
  • When a plurality of functions corresponding to the recognized voice are detected, the controller executes a specific function among the plurality of functions and switches the execution state of the voice recognition function from the recognition state to a modified recognition state, in which an additional voice can be received.
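The standby / recognition / modified-recognition transitions described above can be sketched as a small state machine. This is a minimal illustration only; the names (`VoiceStateMachine`, `match_functions`) are hypothetical and not part of the disclosure:

```python
from enum import Enum, auto

class State(Enum):
    STANDBY = auto()      # waiting for the start signal (keyword)
    RECOGNITION = auto()  # ready to recognize a voice command
    MODIFIED = auto()     # waiting for an additional voice, no keyword needed

class VoiceStateMachine:
    def __init__(self, match_functions):
        # match_functions: maps a recognized voice to a list of candidate functions
        self.match = match_functions
        self.state = State.STANDBY

    def on_start_signal(self):
        if self.state is State.STANDBY:
            self.state = State.RECOGNITION

    def on_voice(self, voice):
        if self.state is not State.RECOGNITION:
            return None
        candidates = self.match(voice)
        if len(candidates) == 1:
            # one match: execute it and return to standby
            self.state = State.STANDBY
            return candidates[0]
        # several matches: execute a specific one (here simply the first)
        # and wait for an additional voice that disambiguates the choice
        self.state = State.MODIFIED
        return candidates[0]

    def on_additional_voice(self, voice):
        if self.state is not State.MODIFIED:
            return None
        choice = self.match(voice)[0]
        self.state = State.STANDBY
        return choice
```

In the ambiguous case the sketch executes the first candidate and stays listening, mirroring the claim that a specific function runs while the device waits for the additional voice.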
  • When the additional voice is received in the modified recognition state, the controller executes, from among the plurality of functions, the function corresponding to the additional voice based on the voice information of the additional voice.
  • the controller may terminate execution of the specific function.
  • The apparatus may further include a light output unit, and when the execution state of the voice recognition function is changed to the modified recognition state, the controller may control the light output unit to output light notifying the user of the switch to the modified recognition state.
  • the controller may change the execution state of the voice recognition function to the standby state.
  • When the execution state of the voice recognition function is switched to the standby state, the controller continues executing the specific function.
  • The apparatus may further include a sound output unit configured to output audio according to the execution of the specific function, and when the execution state is switched to the modified recognition state, the controller may control the sound output unit so that the volume of the output audio has a first value.
  • When the execution state of the voice recognition function is changed from the modified recognition state to the standby state, the controller may control the sound output unit to have a second volume greater than the first value.
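The volume behavior above can be sketched as follows; the concrete levels are assumptions for illustration, not values from the disclosure. Playback is ducked to a first value while the device listens for the additional voice and restored to a larger second value on returning to standby:

```python
def playback_volume(execution_state, first_value=0.3, second_value=0.8):
    """Return the playback volume for the current execution state.

    In the modified recognition state the volume is lowered to
    `first_value` so the additional voice is easier to capture; in any
    other state (e.g. standby) the larger `second_value` is used.
    The numeric levels here are illustrative assumptions.
    """
    return first_value if execution_state == "modified" else second_value
```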
  • The plurality of functions corresponding to the voice may each reproduce a different sound source, and the controller may control the sound output unit to reproduce at least a portion of each of the different sound sources in the modified recognition state.
  • The apparatus may further include a light output unit, with different colors corresponding to the different sound sources, and in the modified recognition state the controller may determine the areas of the light output unit in which the different colors are output based on a priority among the different sound sources.
  • When a voice indicating a specific one of the different colors is received in the modified recognition state, the controller may control the sound output unit to reproduce the sound source corresponding to that color.
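The color-based selection above could be modeled as in this sketch; the palette and the priority-to-color ordering are assumptions, not specified by the disclosure:

```python
def select_source_by_color(sources, spoken_color):
    """Map candidate sound sources to colors by descending priority and
    return the source named by the spoken color.

    sources: list of (name, priority) pairs; a higher-priority source
    gets an earlier color in the (hypothetical) palette, mirroring the
    idea that light regions are assigned by priority among the sources.
    Returns None if the spoken color matches no source.
    """
    palette = ["red", "green", "blue", "yellow"]  # illustrative palette
    ranked = sorted(sources, key=lambda s: s[1], reverse=True)
    color_to_name = {color: name for (name, _), color in zip(ranked, palette)}
    return color_to_name.get(spoken_color)
```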
  • The controller may switch the execution state of the voice recognition function from the recognition state to the modified recognition state so as to wait for reception of the additional voice.
  • After switching the execution state of the voice recognition function from the modified recognition state to the standby state, the controller may switch it from the standby state back to the modified recognition state so that the additional voice can be received before execution of the specific function ends.
  • When the execution state of the voice recognition function is switched back to the modified recognition state, the controller outputs notification information indicating the modified recognition state.
  • The apparatus may further include a light output unit.
  • When a plurality of voices are received, the controller may execute a function corresponding to any one of the plurality of voices and output light as notification information.
  • The notification information may be output to a region of the light output unit located in the direction from which the corresponding voice was received.
  • A control method according to the present invention includes: switching the execution state of the voice recognition function from the standby state to a recognition state capable of speech recognition; receiving a voice through the audio input unit in the recognition state; detecting a function corresponding to the received voice based on a speech recognition algorithm; determining whether a plurality of functions corresponding to the voice are detected; and performing different control according to the number of functions corresponding to the received voice. When one function corresponding to the received voice is detected, the recognition state is switched to the standby state; when a plurality of functions corresponding to the received voice are detected, the execution state of the voice recognition function is switched from the recognition state to the modified recognition state for receiving an additional voice related to the plurality of functions.
  • In performing the different control, when one function corresponding to the received voice is detected, that function is executed; when a plurality of functions corresponding to the received voice are detected, a specific function among the plurality of functions is executed.
  • a specific function of the plurality of functions is determined by a preset priority.
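The preset-priority selection can be sketched in a single helper; the priority table used here is a hypothetical example:

```python
def pick_specific_function(candidates, priority):
    """Among several functions matching the voice, pick the one with the
    highest preset priority (unlisted functions default to priority 0)."""
    return max(candidates, key=lambda name: priority.get(name, 0))
```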
  • The execution state of the voice recognition function is switched back to the standby state.
  • notification information indicating that the modified recognition state is executed is output.
  • According to the present invention, when a plurality of functions corresponding to the voice received through the audio input unit are detected in the recognition state capable of speech recognition, the device switches to a modified recognition state in which an additional voice related to the plurality of functions can be received, and executes a specific function among the plurality of functions based on the additional voice recognized in that state, thereby improving the accuracy of speech recognition.
  • the present invention can improve the user's convenience by enabling speech recognition without a start signal for starting speech recognition in the modified recognition state.
  • the present invention may induce additional voice input to the user by providing notification information related to the modified recognition state through the light output unit.
  • In addition, by visually indicating the plurality of functions corresponding to the voice through a variety of colors, the present invention allows the user to more easily select a specific function among the plurality of functions.
  • FIG. 1 is a block diagram illustrating a mobile terminal related to the present invention.
  • FIGS. 2A and 2B are views related to an audio device.
  • FIG. 3 is a conceptual diagram illustrating an execution state of a conventional speech recognition function.
  • FIG. 4 is a flowchart illustrating a method of executing the conventional speech recognition function.
  • FIG. 5 is a conceptual diagram illustrating an execution state of a voice recognition function according to the present invention.
  • FIG. 6 is a flowchart illustrating a method of executing a voice recognition function in an audio device according to the present invention.
  • FIG. 7 is a conceptual diagram schematically illustrating the control method of FIG. 6.
  • FIGS. 8A and 8B are diagrams illustrating a method of executing a preview function for a plurality of functions when a plurality of functions corresponding to a voice are detected.
  • FIGS. 9A to 9C are diagrams illustrating a method of executing a voice recognition function when essential information is missing from a voice recognized in the recognition state.
  • FIGS. 10A to 10C are conceptual views illustrating an execution state of a voice recognition function according to an execution state of a specific function.
  • FIG. 11 is a conceptual diagram illustrating a method of providing notification information associated with a plurality of voices when a plurality of voices are simultaneously received.
  • The mobile terminal described herein may include a voice recognition speaker, a mobile phone, a smart phone, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation device, a slate PC, a tablet PC, an ultrabook, a wearable device (e.g., a smartwatch, smart glasses, or a head mounted display (HMD)), and the like.
  • FIG. 1 is a block diagram illustrating a mobile terminal related to the present invention.
  • The mobile terminal 100 may include a wireless communication unit 110, an input unit 120, a sensing unit 140, an output unit 150, an interface unit 160, a memory 170, a controller 180, and a power supply unit 190.
  • the components shown in FIG. 1 are not essential to implementing a mobile terminal, so a mobile terminal described herein may have more or fewer components than those listed above.
  • Among these components, the wireless communication unit 110 may include one or more modules that enable wireless communication between the mobile terminal 100 and a wireless communication system, between the mobile terminal 100 and another mobile terminal 100, or between the mobile terminal 100 and an external server.
  • the wireless communication unit 110 may include one or more modules for connecting the mobile terminal 100 to one or more networks.
  • The wireless communication unit 110 may include at least one of the broadcast receiving module 111, the mobile communication module 112, the wireless internet module 113, the short range communication module 114, and the location information module 115.
  • The input unit 120 may include a camera 121 (or an image input unit) for inputting an image signal, a microphone 122 (or an audio input unit) for inputting an audio signal, and a user input unit 123 (e.g., touch keys, mechanical keys) for receiving information from a user.
  • the voice data or the image data collected by the input unit 120 may be analyzed and processed as a control command of the user.
  • the sensing unit 140 may include one or more sensors for sensing at least one of information in the mobile terminal, surrounding environment information surrounding the mobile terminal, and user information.
  • For example, the sensing unit 140 may include a proximity sensor 141, an illumination sensor 142, a touch sensor, an acceleration sensor, a magnetic sensor, a gravity sensor, an optical sensor (e.g., camera 121), a microphone (see 122), a battery gauge, and environmental sensors, among others.
  • the mobile terminal disclosed herein may use a combination of information sensed by at least two or more of these sensors.
  • the output unit 150 is used to generate an output related to sight, hearing, or tactile sense.
  • The output unit 150 may include a display unit 151, an audio output unit 152, a haptic module 153, a light output unit 154, and an infrared output unit 155.
  • the display unit 151 forms a layer structure with or is integrally formed with the touch sensor, thereby implementing a touch screen.
  • the touch screen may function as a user input unit 123 that provides an input interface between the mobile terminal 100 and the user, and may also provide an output interface between the mobile terminal 100 and the user.
  • the interface unit 160 serves as a path to various types of external devices connected to the mobile terminal 100.
  • For example, the interface unit 160 may include at least one of a wired/wireless headset port, an external charger port, a wired/wireless data port, a memory card port, a port for connecting a device equipped with an identification module, an audio input/output (I/O) port, a video input/output (I/O) port, and an earphone port.
  • the memory 170 stores data supporting various functions of the mobile terminal 100.
  • the memory 170 may store a plurality of application programs or applications driven in the mobile terminal 100, data for operating the mobile terminal 100, and instructions. At least some of these applications may be downloaded from an external server via wireless communication.
  • At least some of these application programs may exist on the mobile terminal 100 from the time of shipment for basic functions of the mobile terminal 100 (for example, call receiving, call sending, and message receiving and sending functions).
  • the application program may be stored in the memory 170 and installed on the mobile terminal 100 to be driven by the controller 180 to perform an operation (or function) of the mobile terminal.
  • In addition to operations related to the application programs, the controller 180 typically controls the overall operation of the mobile terminal 100.
  • the controller 180 may provide or process information or a function appropriate to a user by processing signals, data, information, and the like, which are input or output through the above-described components, or by driving an application program stored in the memory 170.
  • controller 180 may control at least some of the components described with reference to FIG. 1 to drive an application program stored in the memory 170. Furthermore, the controller 180 may operate by combining at least two or more of the components included in the mobile terminal 100 to drive the application program.
  • the power supply unit 190 receives power from an external power source and an internal power source under the control of the controller 180 to supply power to each component included in the mobile terminal 100.
  • the power supply unit 190 includes a battery, which may be a built-in battery or a replaceable battery.
  • At least some of the components may operate in cooperation with each other to implement an operation, control, or control method of the mobile terminal according to various embodiments described below.
  • the operation, control, or control method of the mobile terminal may be implemented on the mobile terminal by driving at least one application program stored in the memory 170.
  • the broadcast receiving module 111 of the wireless communication unit 110 receives a broadcast signal and / or broadcast related information from an external broadcast management server through a broadcast channel.
  • the broadcast channel may include a satellite channel and a terrestrial channel.
  • Two or more broadcast receiving modules may be provided to the mobile terminal 100 for simultaneous broadcast reception or switching of broadcast channels for at least two broadcast channels.
  • The mobile communication module 112 transmits and receives radio signals to and from at least one of a base station, an external terminal, and a server on a mobile communication network established according to technical standards or communication schemes for mobile communication (e.g., Global System for Mobile communication (GSM), Code Division Multi Access (CDMA), CDMA2000, Enhanced Voice-Data Optimized or Enhanced Voice-Data Only (EV-DO), Wideband CDMA (WCDMA), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Long Term Evolution (LTE), and Long Term Evolution-Advanced (LTE-A)).
  • the wireless signal may include various types of data according to transmission and reception of a voice call signal, a video call call signal, or a text / multimedia message.
  • the wireless internet module 113 refers to a module for wireless internet access and may be embedded or external to the mobile terminal 100.
  • the wireless internet module 113 is configured to transmit and receive wireless signals in a communication network according to wireless internet technologies.
  • Wireless internet technologies include, for example, Wireless LAN (WLAN), Wireless-Fidelity (Wi-Fi), Wi-Fi Direct, Digital Living Network Alliance (DLNA), Wireless Broadband (WiBro), and Worldwide Interoperability for Microwave Access (WiMAX).
  • From this point of view, the wireless internet module 113 performing wireless internet access through the mobile communication network may be understood as a kind of mobile communication module 112.
  • The short range communication module 114 is for short range communication and may support it using at least one of Bluetooth™, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee, Near Field Communication (NFC), Wireless-Fidelity (Wi-Fi), Wi-Fi Direct, and Wireless Universal Serial Bus (Wireless USB) technologies.
  • The short range communication module 114 may support, through wireless area networks, wireless communication between the mobile terminal 100 and a wireless communication system, between the mobile terminal 100 and another mobile terminal 100, or between the mobile terminal 100 and a network in which another mobile terminal 100 (or an external server) is located.
  • The short range wireless area networks may be wireless personal area networks.
  • Here, the other mobile terminal 100 may be a wearable device capable of exchanging data with (or interworking with) the mobile terminal 100 according to the present invention, for example, a smartwatch, smart glasses, or a head mounted display (HMD).
  • the short range communication module 114 may sense (or recognize) a wearable device that can communicate with the mobile terminal 100, around the mobile terminal 100.
  • The controller 180 may transmit at least a portion of the data processed by the mobile terminal 100 to the wearable device through the short range communication module 114. Accordingly, the user of the wearable device may use the data processed by the mobile terminal 100 through the wearable device. For example, when a call is received by the mobile terminal 100, the user may answer the call through the wearable device, and when a message is received by the mobile terminal 100, the user may check the received message through the wearable device.
  • the location information module 115 is a module for obtaining a location (or current location) of a mobile terminal, and a representative example thereof is a Global Positioning System (GPS) module or a Wireless Fidelity (WiFi) module.
  • the mobile terminal may acquire the location of the mobile terminal using a signal transmitted from a GPS satellite.
  • As another example, when the Wi-Fi module is utilized, the mobile terminal may acquire its location based on information of the wireless access point (AP) that transmits or receives wireless signals to or from the Wi-Fi module.
  • the location information module 115 may perform any function of other modules of the wireless communication unit 110 to substitute or additionally obtain data regarding the location of the mobile terminal.
  • the location information module 115 is a module used to obtain the location (or current location) of the mobile terminal, and is not limited to a module that directly calculates or obtains the location of the mobile terminal.
  • The input unit 120 is for inputting image information (or signals), audio information (or signals), data, or information received from a user. For the input of image information, the mobile terminal 100 may be provided with one or a plurality of cameras 121.
  • the camera 121 processes image frames such as still images or moving images obtained by the image sensor in the video call mode or the photographing mode.
  • the processed image frame may be displayed on the display unit 151 or stored in the memory 170.
  • The plurality of cameras 121 provided in the mobile terminal 100 may be arranged to form a matrix structure, and through the cameras 121 forming the matrix structure, a plurality of pieces of image information having various angles or focal points may be input to the mobile terminal 100.
  • the plurality of cameras 121 may be arranged in a stereo structure to acquire a left image and a right image for implementing a stereoscopic image.
  • the microphone 122 processes external sound signals into electrical voice data.
  • the processed voice data may be variously used according to a function (or an application program being executed) performed by the mobile terminal 100. Meanwhile, various noise reduction algorithms may be implemented in the microphone 122 to remove noise generated in the process of receiving an external sound signal.
  • The user input unit 123 is for receiving information from a user. When information is input through the user input unit 123, the controller 180 may control an operation of the mobile terminal 100 to correspond to the input information.
  • The user input unit 123 may include a mechanical input means (for example, a button, a dome switch, a jog wheel, or a jog switch located on the front, rear, or side surface of the mobile terminal 100) and a touch input means.
  • As an example, the touch input means may include a virtual key, a soft key, or a visual key displayed on the touch screen through software processing, or a touch key disposed on a portion other than the touch screen.
  • The virtual key or the visual key may be displayed on the touch screen in various forms, for example, graphics, text, icons, video, or a combination thereof.
  • the sensing unit 140 senses at least one of information in the mobile terminal, surrounding environment information surrounding the mobile terminal, and user information, and generates a sensing signal corresponding thereto.
  • the controller 180 may control driving or operation of the mobile terminal 100 or perform data processing, function or operation related to an application program installed in the mobile terminal 100 based on the sensing signal. Representative sensors among various sensors that may be included in the sensing unit 140 will be described in more detail.
  • the proximity sensor 141 refers to a sensor that detects the presence or absence of an object approaching a predetermined detection surface or an object present in the vicinity without using a mechanical contact by using an electromagnetic force or infrared rays.
  • the proximity sensor 141 may be disposed in an inner region of the mobile terminal covered by the touch screen described above or near the touch screen.
  • Examples of the proximity sensor 141 include a transmissive photoelectric sensor, a direct reflective photoelectric sensor, a mirror reflective photoelectric sensor, a high frequency oscillation proximity sensor, a capacitive proximity sensor, a magnetic proximity sensor, and an infrared proximity sensor.
  • the proximity sensor 141 may be configured to detect the proximity of the object by the change of the electric field according to the proximity of the conductive object.
  • the touch screen (or touch sensor) itself may be classified as a proximity sensor.
  • The proximity sensor 141 may detect a proximity touch and a proximity touch pattern (for example, a proximity touch distance, a proximity touch direction, a proximity touch speed, a proximity touch time, a proximity touch position, and a proximity touch movement state).
  • The controller 180 processes data (or information) corresponding to the proximity touch operation and the proximity touch pattern detected through the proximity sensor 141 as described above, and may further output visual information corresponding to the processed data on the touch screen. In addition, the controller 180 may control the mobile terminal 100 to process different operations or data (or information) according to whether a touch on the same point on the touch screen is a proximity touch or a contact touch.
  • The touch sensor detects a touch (or touch input) applied to the touch screen (or the display unit 151) using at least one of various touch methods, such as resistive, capacitive, infrared, ultrasonic, and magnetic field methods.
  • the touch sensor may be configured to convert a change in pressure applied to a specific portion of the touch screen or capacitance generated at the specific portion into an electrical input signal.
  • the touch sensor may be configured to detect the position and area at which a touch object touches the touch screen, as well as the pressure and capacitance at the time of the touch.
  • the touch object is an object applying a touch to the touch sensor and may be, for example, a finger, a touch pen or a stylus pen, a pointer, or the like.
  • the touch controller processes the signal(s) and then transmits the corresponding data to the controller 180.
  • the controller 180 can know which area of the display unit 151 is touched.
  • the touch controller may be a separate component from the controller 180 or may be the controller 180 itself.
  • the controller 180 may perform different control or perform the same control according to the type of touch object that touches the touch screen (or a touch key provided in addition to the touch screen). Whether to perform different control or the same control according to the type of touch object may be determined according to the operation state of the mobile terminal 100 or an application program being executed.
  • the touch sensor and the proximity sensor described above may be used independently or in combination to sense various types of touches on the touch screen, such as a short (or tap) touch, a long touch, a multi touch, a drag touch, a flick touch, a pinch-in touch, a pinch-out touch, a swipe touch, and a hovering touch.
  • the ultrasonic sensor may recognize location information of a sensing object using ultrasonic waves.
  • the controller 180 can calculate the position of the wave generation source through the information detected from the optical sensor and the plurality of ultrasonic sensors.
  • the position of the wave source can be calculated using the fact that light is much faster than ultrasound, that is, the time for light to reach the optical sensor is much shorter than the time for the ultrasound to reach the ultrasonic sensor. More specifically, the position of the wave source may be calculated from the arrival-time difference between the ultrasound and the light, with the light serving as the reference signal.
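As an illustration of the time-difference principle just described, the following sketch (hypothetical function name and values, not taken from the specification) estimates the distance to the wave source from the lag between the light reference signal and the ultrasound arrival:

```python
# Hypothetical sketch: estimating the distance to a wave source from the
# time difference between light (treated as instantaneous) and ultrasound.
# The constant and sensor timings below are illustrative values.

SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at roughly 20 degrees C

def distance_to_source(t_light_s: float, t_ultrasound_s: float) -> float:
    """Light arrival marks the emission instant (reference signal);
    the ultrasound lag then gives the travel distance."""
    time_of_flight = t_ultrasound_s - t_light_s
    if time_of_flight < 0:
        raise ValueError("ultrasound cannot arrive before light")
    return SPEED_OF_SOUND_M_S * time_of_flight

# Example: ultrasound arrives 2 ms after the light reference signal.
d = distance_to_source(t_light_s=0.000, t_ultrasound_s=0.002)
print(round(d, 3))  # 0.686 metres
```

With several ultrasonic sensors, one such distance per sensor would allow the position itself to be found by trilateration.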
  • the camera 121, which has been described as a component of the input unit 120, includes at least one of a camera sensor (for example, CCD or CMOS), a photo sensor (or image sensor), and a laser sensor.
  • the camera 121 and the laser sensor may be combined with each other to detect a touch of a sensing object with respect to a 3D stereoscopic image.
  • the photo sensor may be stacked on the display element and is configured to scan the movement of a sensing object in proximity to the touch screen. More specifically, the photo sensor mounts photo diodes and transistors (TRs) in rows and columns, and scans the content placed on the photo sensor using an electrical signal that varies with the amount of light applied to the photo diodes. That is, the photo sensor calculates the coordinates of the sensing object from the change in the amount of light, and position information of the sensing object can thus be obtained.
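The coordinate calculation described above can be sketched as follows; the grid values and the function name are illustrative assumptions, treating the photo sensor as a small matrix of light readings taken before and after an object approaches:

```python
# Illustrative sketch (not from the specification): locating a sensing
# object on a photo-sensor grid by finding where the amount of light
# changed the most between two scans.

def locate_object(before, after):
    """Return (row, col) of the largest absolute light change."""
    best, best_pos = -1, (0, 0)
    for r, (row_b, row_a) in enumerate(zip(before, after)):
        for c, (b, a) in enumerate(zip(row_b, row_a)):
            change = abs(a - b)
            if change > best:
                best, best_pos = change, (r, c)
    return best_pos

before = [[10, 10, 10],
          [10, 10, 10],
          [10, 10, 10]]
after  = [[10,  9, 10],
          [10,  3, 10],   # shadow of a fingertip over the centre cell
          [10, 10, 10]]
print(locate_object(before, after))  # (1, 1)
```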
  • the display unit 151 displays (outputs) information processed by the mobile terminal 100.
  • the display unit 151 may display execution screen information of an application program driven in the mobile terminal 100, or user interface (UI) and graphical user interface (GUI) information according to the execution screen information.
  • the display unit 151 may be configured as a stereoscopic display unit for displaying a stereoscopic image.
  • a three-dimensional display method such as a stereoscopic method (glasses type), an autostereoscopic method (glasses-free type), or a projection method (holographic type) may be applied to the stereoscopic display unit.
  • the sound output unit 152 may output audio data received from the wireless communication unit 110 or stored in the memory 170 during call signal reception, or in a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, and the like.
  • the sound output unit 152 may also output a sound signal related to a function (for example, a call signal reception sound or a message reception sound) performed in the mobile terminal 100.
  • the sound output unit 152 may include a receiver, a speaker, a buzzer, and the like.
  • the haptic module 153 generates various haptic effects that a user can feel.
  • a representative example of the tactile effect generated by the haptic module 153 may be vibration.
  • the intensity and pattern of vibration generated by the haptic module 153 may be controlled by the user's selection or the setting of the controller. For example, the haptic module 153 may synthesize different vibrations and output or sequentially output them.
  • in addition to vibration, the haptic module 153 may generate various tactile effects, such as a pin arrangement moving vertically against the contacted skin surface, a jetting or suction force of air through a jet or suction port, grazing of the skin surface, contact of an electrode, an electrostatic force, and the reproduction of a sense of cold or warmth using an element capable of absorbing or generating heat.
  • the haptic module 153 may not only deliver a tactile effect through direct contact, but also may allow a user to feel the tactile effect through a muscle sense such as a finger or an arm. Two or more haptic modules 153 may be provided according to a configuration aspect of the mobile terminal 100.
  • the light output unit 154 outputs a signal for notifying occurrence of an event by using light of a light source of the mobile terminal 100.
  • Examples of events occurring in the mobile terminal 100 may be message reception, call signal reception, missed call, alarm, schedule notification, email reception, information reception through an application, and the like.
  • the signal output from the light output unit 154 is implemented as the mobile terminal emitting light of a single color or a plurality of colors toward the front or rear.
  • the signal output may be terminated by the mobile terminal detecting the user's event confirmation.
  • the infrared output unit 155 may output an infrared signal for controlling an external device.
  • the external device is a device having an infrared receiver, and can be controlled, for example turned on or off, according to the infrared signal.
  • the external device may be a TV, a lighting, an air conditioner, a refrigerator, a washing machine, a boiler, a switch, a plug, a gas lock, a home CCTV, or the like.
  • the infrared signal may be implemented by emitting infrared light from the light emitter.
  • the light emitting unit may be implemented as an infrared emitting diode.
  • Such an infrared signal may be output based on a user request.
  • the interface unit 160 serves as a path to all external devices connected to the mobile terminal 100.
  • the interface unit 160 receives data from an external device, receives power, transfers the power to each component inside the mobile terminal 100, or transmits data inside the mobile terminal 100 to an external device.
  • Ports such as an audio input/output (I/O) port, a video input/output (I/O) port, and an earphone port may be included in the interface unit 160.
  • the identification module is a chip storing various kinds of information for authenticating usage rights of the mobile terminal 100, and may include a user identity module (UIM), a subscriber identity module (SIM), and a universal subscriber identity module (USIM).
  • a device equipped with an identification module (hereinafter referred to as an 'identification device') may be manufactured in the form of a smart card. Therefore, the identification device may be connected to the terminal 100 through the interface unit 160.
  • the interface unit 160 may serve as a passage through which power from a cradle is supplied to the mobile terminal 100, or as a passage through which various command signals input from the cradle by the user are transferred to the mobile terminal 100.
  • the various command signals or the power input from the cradle may operate as signals for recognizing that the mobile terminal 100 is correctly mounted on the cradle.
  • the memory 170 may store a program for the operation of the controller 180 and may temporarily store input / output data (for example, a phone book, a message, a still image, a video, etc.).
  • the memory 170 may store data regarding vibration and sound of various patterns output when a touch input on the touch screen is performed.
  • the memory 170 may include at least one type of storage medium among a flash memory type, a hard disk type, a solid state disk (SSD) type, a silicon disk drive (SDD) type, a multimedia card micro type, a card-type memory (e.g., SD or XD memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, and an optical disk.
  • the mobile terminal 100 may be operated in connection with a web storage that performs a storage function of the memory 170 on the Internet.
  • the controller 180 controls the operation related to the application program, and generally the overall operation of the mobile terminal 100. For example, if the state of the mobile terminal satisfies a set condition, the controller 180 may execute or release a lock state that restricts input of a user's control command to applications.
  • the controller 180 may perform control and processing related to a voice call, data communication, a video call, and the like, or may perform pattern recognition processing capable of recognizing handwriting input or drawing input performed on the touch screen as text and images, respectively. Furthermore, the controller 180 may control any one or a combination of the components described above in order to implement the various embodiments described below on the mobile terminal 100 according to the present invention.
  • the power supply unit 190 receives external power and internal power under the control of the controller 180 and supplies the power required for the operation of each component.
  • the power supply unit 190 includes a battery, and the battery may be a built-in battery configured to be rechargeable, and may be detachably coupled to the terminal body for charging.
  • the power supply unit 190 may be provided with a connection port, and the connection port may be configured as an example of the interface 160 to which an external charger supplying power for charging the battery is electrically connected.
  • the power supply unit 190 may be configured to charge the battery in a wireless manner without using the connection port.
  • the power supply unit 190 may receive power from an external wireless power transmitter using one or more of an inductive coupling method based on a magnetic induction phenomenon and a magnetic resonance coupling method based on an electromagnetic resonance phenomenon.
  • various embodiments of the present disclosure may be implemented in a recording medium readable by a computer or a similar device using, for example, software, hardware, or a combination thereof.
  • FIG. 1 is a view related to an audio device.
  • the audio device 100 includes a speaker system for outputting audio data and is a mobile terminal capable of speech recognition.
  • hereinafter, the mobile terminal described with reference to FIG. 1 will be referred to by the term audio device 100.
  • the present invention is not limited to the term audio device 100 and may be applied to the various mobile terminals described with reference to FIG. 1.
  • the audio device 100 may include one or more components described with reference to FIG. 1. The description of the components of the audio device 100 is replaced with the description of FIG. 1.
  • the audio device 100 may include a user input unit 123, a sound output unit 152, and a light output unit 154 on an outer surface of the body unit 200.
  • the user input unit 123 may be configured to receive a control command from a user, and may be provided in plurality.
  • the plurality of user input units will be described as first user input unit 123a, second user input unit 123b, and third user input unit 123c.
  • a plurality of light output units 154 may also be provided, and will be described by referring to the first light output unit 154a and the second light output unit 154b, respectively.
  • hereinafter, when the user input units and the light output units are described without distinction, the reference numerals 123 and 154 will be used.
  • the body portion 200 may be cylindrical, and may itself function as a resonance chamber.
  • the size of the body part 200 may be determined in consideration of a design.
  • the shape of the body portion 200 may be variously changed.
  • the body portion 200 may include a first region 210 forming the side surface of the cylinder, a second region 220 forming one bottom surface of the cylinder, and a third region 230 formed to face the second region 220 and forming the other bottom surface of the cylinder. The second region 220 and the third region 230 may have the same area or different areas.
  • the first region 210 may be referred to as the outer side surface, and the second region 220 and the third region 230 may be referred to as the outer upper surface and the outer lower surface, respectively. For convenience, however, the description below uses the terms first, second, and third regions.
  • the first region 210 may include a third user input unit 123c, a second light output unit 154b, an infrared output unit 155, and a sound output unit 152.
  • the second light output unit 154b and the sound output unit 152 may be formed to be spaced apart from each other.
  • Alternatively, at least a part of the second light output unit 154b may be formed to overlap the sound output unit 152 in a layered structure. This arrangement can be easily changed by the designer.
  • the second light output unit 154b and the sound output unit 152 may be formed to surround the first area 210 of the body part 200. Accordingly, the sound output unit 152 is formed to output sound in all directions around the body unit 200, and the second light output unit 154b may output light in all directions around the body unit 200.
  • the third user input unit 123c may be disposed on an upper end of the first area 210.
  • the third user input unit 123c may be formed to rotate about the center point of the body unit 200. Therefore, the user may rotate the third user input unit 123c to increase or decrease the volume of the audio device 100.
  • the infrared output unit 155 may be disposed at a position where the infrared signal can be output in all directions.
  • an infrared output unit may be disposed on an upper end of the first region 210.
  • the upper region of the first region 210 may be disposed to be rotatable. Accordingly, the infrared output unit 155 may output an infrared signal to reach the external device located at an arbitrary position.
  • the arrangement position of the infrared output unit may be changed to a position where the infrared signal can be output in all directions by the design of those skilled in the art.
  • the display unit 151, the first and second user input units 123a and 123b, the first light output unit 154a, and a temperature / humidity sensor may be disposed in the second region 220.
  • the display unit 151 may be disposed at the center of the second area 220 to secure the user's vision.
  • the first and second user input units 123a and 123b may be disposed in a peripheral area of the display unit 151 to receive a user input.
  • the first and second user input units 123a and 123b may be formed in a button type to operate by a pressing operation, or may be formed in a touch type to operate by a touch operation.
  • the first and second user input units 123a and 123b may be formed to perform different functions.
  • for example, the first user input unit 123a may be a button for inputting a control command to stop voice recognition, and the second user input unit 123b may be a button for inputting a control command to turn the power on or off.
  • the first light output unit 154a may be formed along the periphery of the second region 220. That is, the first light output unit 154a may have a band shape surrounding the outer portion of the second region 220. For example, when the second region 220 is circular, the first light output unit 154a may have a band shape surrounding the circle.
  • the light output unit 154 may be formed to emit light from a light source.
  • as the light source, a light emitting diode (LED) may be used.
  • the light source is positioned on the inner circumferential surface of the light output unit 154, and the light emitted from the light source passes through the light output unit 154 and shines to the outside.
  • the light output unit 154 is made of a transparent or translucent material through which light can pass.
  • the optical output unit 154 may output notification information related to an event occurring in the audio device 100 as light. For example, when voice recognition is being performed in the audio device 100, red light may be output. In addition, when the audio device 100 is waiting for a correction command, yellow light may be output.
  • the temperature / humidity sensor may be disposed in the second area 220 that may directly contact the outside to sense the external temperature and humidity.
  • in the third region 230, a power supply unit 190 for receiving power from the outside, an interface 160 for transmitting and receiving data with an external device, an audio input unit (microphone) for receiving sound, and the like may be further arranged.
  • the audio device 100 may control the external devices through short-range wireless communication with the external device.
  • the audio device 100 may perform near field communication with electronic devices such as a refrigerator, a washing machine, a TV, an air conditioner, a robot cleaner, and the like that exist on the same home network as the audio device 100.
  • the short range wireless communication may include Wi-Fi, Bluetooth, Z-wave, infrared communication, and the like.
  • the audio device 100 may turn on the air conditioner or adjust the temperature of the air conditioner through infrared communication.
  • the audio device 100 may serve as a controller for controlling external devices in an Internet of Things (IOT) environment.
  • the audio device 100 has been described above. Although the above description has shown the arrangement structure of the components of the audio device 100, the present invention is not limited thereto, and arrangement positions of the components may be changed within a range that can be easily changed by those skilled in the art.
  • FIG. 3 is a conceptual diagram illustrating an execution state of a conventional speech recognition function
  • FIG. 4 is a flowchart illustrating a method of executing the conventional speech recognition function.
  • the speech recognition function is a technology for converting a sound signal received through a sound sensor such as a microphone into words or sentences.
  • the speech recognition function is a function of performing a specific operation based on the information converted into the word or sentence. That is, the voice recognition function is a function of determining whether a received voice corresponds to a specific word or a function of detecting a voice command corresponding to the received voice.
  • the audio device 100 may store a voice recognition application related to a voice recognition function in the memory 170.
  • a voice recognition application may perform voice recognition through its own database or through a database provided in a server connected over a communication network.
  • the conventional speech recognition function may have a plurality of execution states.
  • based on the execution state of the voice recognition function, the controller 180 may set the state to either a voice recognition waiting state 310, in which it waits for input of a start signal to start the voice recognition function, or a recognition state 320, in which it analyzes a received voice to perform voice recognition.
  • the voice recognition waiting state 310 may include a listening state 311 for detecting a voice having a volume of a predetermined size or more and a keyword detection state 312 for detecting a specific word.
  • the controller 180 may detect the reception of a voice having a volume of a predetermined size or more from the outside in the listening state 311.
  • in the listening state 311, the controller 180 only detects whether a voice having a volume of a predetermined size or more is received, and does not perform voice recognition itself.
  • the controller 180 may switch the execution state of the speech recognition function from the listening state 311 to the keyword detection state 312 when a voice having a volume of a predetermined size or more is received.
  • the controller 180 may detect whether a specific word is received in the keyword detection state 312.
  • the specific word is a start signal for starting the speech recognition function, and a different signal may be preset for each audio device 100 or for each application providing the speech recognition function.
  • the controller 180 may analyze whether the voice received in the keyword detection state 312 corresponds to a specific word. That is, the controller 180 can start the voice recognition function in the keyword detection state 312.
  • when the received voice corresponds to the specific word, the controller 180 may switch to the recognition state 320. For example, when the predetermined word, “Alice”, is received in the keyword detection state 312, the controller 180 may detect whether the voice “Alice” corresponds to the specific word. When “Alice” corresponds to the specific word, the controller 180 can switch the execution state of the speech recognition function to the recognition state 320, in which speech recognition is possible.
  • the controller 180 may switch the execution state of the voice recognition function back to the listening state 311. For example, when the voice is not received for a predetermined time in the keyword detection state 312, the controller 180 may switch the execution state of the voice recognition function to the listening state 311. Alternatively, when the voice received in the keyword detection state 312 does not correspond to a specific word, the controller 180 may switch the execution state of the voice recognition function back to the listening state 311.
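The waiting-state transitions described above (listening state 311, keyword detection state 312, recognition state 320) might be sketched as a simple state machine; the volume threshold, keyword, and function names below are illustrative assumptions, not values from the specification:

```python
# Hedged sketch of the conventional waiting-state transitions:
# listening 311 <-> keyword detection 312 -> recognition 320.

LISTENING, KEYWORD_DETECTION, RECOGNITION = "311", "312", "320"
VOLUME_THRESHOLD = 40    # arbitrary loudness units (assumption)
START_KEYWORD = "alice"  # hypothetical start keyword

def next_state(state, volume, word):
    if state == LISTENING:
        # Only loudness is checked here; no speech recognition yet.
        return KEYWORD_DETECTION if volume >= VOLUME_THRESHOLD else LISTENING
    if state == KEYWORD_DETECTION:
        # The start keyword switches to recognition; anything else
        # (including silence for too long) falls back to listening.
        return RECOGNITION if word == START_KEYWORD else LISTENING
    return state

state = LISTENING
state = next_state(state, volume=55, word=None)      # loud sound heard
state = next_state(state, volume=55, word="alice")   # start keyword spoken
print(state)  # 320
```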
  • the controller 180 may start a voice recognition function.
  • the controller 180 may receive a voice in the recognition state 320 (S420).
  • the controller 180 may analyze the received voice based on a preset algorithm.
  • the preset algorithm is a conventionally known speech recognition algorithm, and details thereof will be apparent to those skilled in the art, and thus description thereof will be omitted.
  • the controller 180 may detect at least one function corresponding to the voice based on the analysis result. That is, the controller 180 may detect only one function corresponding to the voice, or may detect a plurality of functions.
  • the controller 180 may determine whether a plurality of functions corresponding to the voice are detected (S430). When a plurality of functions corresponding to the voice are detected, the controller 180 can execute a first function among the plurality of functions (S442).
  • the first function may be any one of a function set as a basic function or a function having a high priority.
  • the basic function is a function set as a function to be executed first among a plurality of functions when a plurality of functions correspond to one voice.
  • the priority may be determined by alphabetical order, execution frequency, usage pattern, or sound source chart ranking. For example, when functions for playing different songs are detected, priorities may be determined in descending order of each song's play frequency.
  • for example, as a result of analyzing the voice “Girls' Generation”, the controller 180 may detect both a “play music by the singer Girls' Generation” function and a “play the song titled Girls' Generation” function as functions corresponding to the voice. In this case, the controller 180 may execute the “play music by the singer Girls' Generation” function set as the first function.
  • in the above, when a plurality of functions corresponding to the voice are detected, the first function is executed. However, instead of executing the first function, notification information may be output so that the user inputs the voice again. For example, the controller 180 may output a voice saying, “Multiple sound sources were detected. Please say it again more precisely.”
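The priority-based selection of the first function might be sketched as follows, assuming (hypothetically) that a play count is available for each candidate and that alphabetical order breaks ties:

```python
# Illustrative sketch: when one voice matches several playback functions,
# pick the "first function" by priority -- here, descending play count,
# with alphabetical order as a tie-breaker. All data is hypothetical.

candidates = [
    {"function": "play song titled 'Girls' Generation'", "plays": 12},
    {"function": "play music by singer Girls' Generation", "plays": 31},
]

first_function = sorted(
    candidates,
    key=lambda c: (-c["plays"], c["function"]),  # most-played first
)[0]["function"]

print(first_function)  # play music by singer Girls' Generation
```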
  • the controller 180 can immediately execute the function corresponding to the voice (S441). For example, the controller 180 may analyze a voice of “Play Song No. 1” and play a sound source stored as Song No. 1.
  • the controller 180 may switch the execution state of the voice recognition function back to the standby state 310. In this way, the controller 180 may selectively analyze only the voices required for voice recognition without analyzing all the voices received through the microphone, thereby preventing unnecessary power consumption.
  • according to the conventional method, in order to input a voice command, the user must utter the voice corresponding to the specific word every time before uttering the voice corresponding to the voice command. That is, when the user wants to speak a second voice associated with a first voice after uttering the first voice, the user must utter the specific word again before the second voice. Thus, even when a user inputs related voices several times in succession, it is inconvenient to input the specific word before each voice. Therefore, the following proposes a method of speaking the second voice without the specific word when utterance of a second voice associated with the first voice is required.
  • FIG. 5 is a conceptual diagram illustrating an execution state of a voice recognition function according to the present invention.
  • FIG. 6 is a flowchart illustrating a method of executing a voice recognition function in an audio device according to the present invention
  • FIG. 7 is a conceptual diagram schematically illustrating the control method of FIG. 6.
  • when the voice recognition function is being executed, the audio device 100 according to the present invention may be in one of the execution states shown in FIG. 5.
  • the execution state of the voice recognition function may include a standby state 410, a recognition state 420, and a modified recognition state 430. Since the standby state 410 and the recognition state 420 are the same as those of FIG. 3, the description will be made below with reference to FIG. 3, and the modified recognition state 430 will be described below.
  • the modified recognition state 430 is a state in which the voice associated with the voice recognized in the recognition state 420 can be received.
  • in the modified recognition state 430, a voice can be recognized even if the start signal of the voice recognition function is not received from the user. That is, unlike the related art, when a voice is recognized in the recognition state 420, the voice recognition function may be switched to the modified recognition state 430 instead of being immediately switched to the standby state 410.
  • the modified recognition state 430 may be entered when the voice recognized in the recognition state 420 satisfies a preset condition.
  • the preset condition may be a condition in which an additional voice is required, or a condition in which a plurality of functions corresponding to the voice are detected.
  • the condition in which an additional voice is required may be a situation in which, as a result of analyzing the voice recognized in the recognition state 420, essential information for performing the specific function corresponding to the voice is insufficient. For example, if the voice “Set an alarm for tomorrow” is received, the essential information about what time to set the alarm is missing.
  • the condition in which a plurality of functions corresponding to the voice are detected may be a case where two or more functions corresponding to the voice are detected as a result of analyzing the voice recognized in the recognition state 420. For example, when the voice “Play Girls' Generation” is received, the controller 180 may detect two voice commands corresponding to the voice: “play music by the singer Girls' Generation” and “play the song titled Girls' Generation”.
  • the preset condition may be a condition in which the end of the voice utterance of the user is not detected.
  • for example, the controller 180 may detect the movement of the user's mouth through the camera 121, and if the movement of the mouth is still detected after the voice is recognized, the controller 180 may switch the execution state of the voice recognition function from the recognition state to the modified recognition state 430.
  • the preset condition may be set by a user or by a manufacturer that provides a voice recognition function.
  • the voice recognized in the modified recognition state 430 may be analyzed based on context information of the voice recognized in the recognition state 420. That is, during voice analysis, the voice recognized in the modified recognition state 430 may be treated as one voice synthesized with the voice recognized in the recognition state 420. Therefore, the audio device 100 does not execute separate functions corresponding to the voice recognized in the recognition state 420 and the voice recognized in the modified recognition state 430, but instead executes one function corresponding to the two voices together.
  • for example, suppose the controller 180 receives the voice “Girls' Generation song” in the recognition state 420 and then receives the voice “singer” in the modified recognition state 430. In this case, the voice command “singer” can be analyzed as “by the singer Girls' Generation” based on the context information “Girls' Generation song”. Accordingly, the controller 180 may play a sound source by the singer Girls' Generation.
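The context merge just illustrated might be sketched as follows; the combining rule (simple concatenation of the follow-up utterance with the earlier one) is an illustrative assumption rather than the analysis method of the specification:

```python
# Minimal sketch of the context merge: the voice received in the modified
# recognition state (430) is not analysed alone but combined with the
# voice from the recognition state (420) before a function is chosen.

def merge_with_context(context_voice: str, modified_voice: str) -> str:
    """Treat the follow-up utterance as a refinement of the first one."""
    return f"{modified_voice} {context_voice}".strip()

command = merge_with_context("Girls' Generation song", "singer")
print(command)  # singer Girls' Generation song
```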
  • first, the controller 180 may receive a start signal for voice recognition in the standby state in which it waits for reception of the start signal (S410).
  • the controller 180 may be in a standby state 410 where the execution state of the voice recognition function is waiting to receive a start signal of the voice recognition function.
  • the start signal may be a voice corresponding to a specific word.
  • when the start signal is received, the controller 180 may switch the execution state of the voice recognition function to the recognition state 420.
  • the controller 180 may receive a voice in the recognition state 420 (S420), analyze the received voice, and detect at least one function corresponding to the voice. For example, as shown in FIG. 6A, the controller 180 may receive a voice corresponding to “Alice” in a standby state. The controller 180 may switch the execution state of the voice recognition function from the standby state 410 to the recognition state 420 when “Alice” is received.
  • a voice is analyzed through a conventionally known speech recognition algorithm, and a detailed description of the conventionally known speech recognition algorithm is omitted.
  • the controller 180 may determine whether a plurality of functions corresponding to the voice are detected (S430). The controller 180 may perform different control according to the number of detected functions corresponding to the voice. More specifically, when only one specific function corresponding to the voice is detected, the controller 180 may execute the detected specific function (S441). In this case, when the specific function is executed, the controller 180 may switch the execution state of the voice recognition function from the recognition state 420 back to the standby state 410.
  • the controller 180 may execute a first function among the plurality of functions (S442).
  • the controller 180 may switch the execution state of the voice recognition function from the recognition state 420 to the modified recognition state 430 in order to recognize the additional voice along with the execution of the first function.
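The branching of steps S430, S441 and S442 — a single matching function is executed and the device returns to standby, while several matching functions trigger execution of the first one and a switch to the modified recognition state — can be sketched as follows. The function names are hypothetical and the control flow is a simplified illustration only.

```python
# Sketch of steps S430/S441/S442: behavior depends on how many functions
# were detected for the recognized voice.
STANDBY, MODIFIED_RECOGNITION = 410, 430

def handle_recognized_voice(candidates):
    """candidates: detected functions, ordered by priority.
    Returns (function_to_execute, next_state_of_voice_recognition)."""
    if len(candidates) == 1:
        # S441: a single function -> execute it, return to standby.
        return candidates[0], STANDBY
    # S442: several functions -> execute the first (highest priority) and
    # switch to the modified recognition state to await an additional voice.
    return candidates[0], MODIFIED_RECOGNITION
```

The returned state value determines whether the device keeps listening for an additional voice or waits for a new start signal.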
  • the controller 180 can execute the first function based on priority.
  • the priority may be determined by the execution frequency and the speech recognition accuracy. For example, when the plurality of functions are functions of reproducing different sound sources, the priority may be determined according to the reproduction frequency of the sound sources. As another example, when the plurality of functions reproduce different sound sources, the priority may be determined based on the accuracy of the voice.
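As a rough illustration of the priority rule just described, candidate functions could be ordered by reproduction frequency and then by recognition accuracy. The key names and the exact tie-breaking are assumptions for illustration, not the patent's specified method.

```python
# Sketch: order candidate functions by reproduction frequency, then by
# speech recognition accuracy (both keys are hypothetical).
def rank_by_priority(candidates):
    return sorted(candidates,
                  key=lambda c: (c["play_count"], c["confidence"]),
                  reverse=True)

candidates = [
    {"name": "song titled Girls' Generation", "play_count": 12, "confidence": 0.80},
    {"name": "song by the singer Girls' Generation", "play_count": 3, "confidence": 0.90},
]
first_function = rank_by_priority(candidates)[0]["name"]
```

With these assumed counts, the more frequently played sound source wins even though the other candidate scored slightly higher recognition accuracy.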
  • the controller 180 may receive a voice corresponding to "Play Girls' Generation" in the recognition state.
  • the controller 180 may detect two functions corresponding to the voice: a function of playing the song titled "Girls' Generation" and a function of playing a song by the singer Girls' Generation. Then, the controller 180 may execute the function of playing the song titled "Girls' Generation" based on the priority.
  • alternatively, when a plurality of functions are detected, the controller 180 may wait for an additional voice without executing the first function.
  • whether the first function is executed in advance is a design choice that may be determined by the designer of the voice recognition function.
  • the controller 180 may switch the execution state of the voice recognition function from the recognition state 420 to the modified recognition state 430 together with the execution of the first function.
  • the controller 180 may output notification information indicating that the modification recognition state 430 has been entered.
  • the notification information may be output in at least one of a visual, tactile and auditory manner.
  • the controller 180 may output yellow light indicating the modified recognition state 430 through the light output unit 154.
  • the controller 180 may execute a preview function for each of the plurality of functions to indicate the modified recognition state 430 through the sound output unit 152.
  • the preview function refers to a function of executing only a part of the function, not all of the specific function. For example, in the case of the sound source playback function, only the highlight portion of the sound source can be played back.
  • the controller 180 may output a name representing each of the plurality of functions, visually or audibly, to indicate the modified recognition state 430.
  • the titles of the respective sound sources may be sequentially output.
  • the user may recognize that additional voice is currently recognized.
  • the controller 180 may control the execution state of the first function when the execution state of the voice recognition function is changed from the recognition state 420 to the modified recognition state 430.
  • the controller 180 may control the output volume of the sound source to have a specific value.
  • the specific value is a very low volume; by reproducing the specific sound source at a very low volume, the device may indicate that the sound source is being reproduced in the modified recognition state 430.
  • the controller 180 may play a highlight portion of the specific sound source. The highlight portion may be a preset portion for each sound source.
  • the controller 180 may not control the execution state of the first function.
  • the first function can be executed in a normally executed state, regardless of the execution state of the speech recognition function.
  • the controller 180 may determine whether an additional voice is received within a preset time after entering the modified recognition state 430 (S610). When the modified recognition state 430 starts, the controller 180 can wait to receive the additional voice.
  • the additional voice may be a voice associated with the voice recognized in the recognition state 420. That is, the additional voice may be a voice related to the execution of a specific function among the plurality of functions.
  • For example, as shown in (b) of FIG. 7, when the function of playing the song titled "Girls' Generation" and the function of playing a song by the singer Girls' Generation are detected for the voice "Play Girls' Generation," the additional voice may be a voice for selecting the function of playing a song by the singer Girls' Generation, such as "No, the singer."
  • the controller 180 may switch the execution state of the voice recognition function from the modified recognition state 430 to the standby state 410. In this case, the controller 180 can maintain execution of the first function. In addition, if the execution state of the first function was controlled in the modified recognition state 430, the controller 180 may return the first function to its state before that control. For example, if the controller 180 is playing a specific sound source at a volume of a specific value in the modified recognition state 430, the controller 180 may change the volume so that playback resumes at the default value (the volume set as a default) rather than the specific value.
  • the default value may be a value larger than a specific value.
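The timeout path of step S610 — no additional voice arrives within the preset time, so the device returns to standby while the first function keeps running at its default volume — might look like the following sketch. The concrete volume values are assumptions.

```python
# Sketch of step S610's timeout: the first function was ducked to a very
# low volume in the modified recognition state (430); when no additional
# voice arrives, restore the default volume and return to standby (410).
STANDBY, MODIFIED_RECOGNITION = 410, 430
DEFAULT_VOLUME = 50  # assumed default playback volume
DUCKED_VOLUME = 5    # assumed "very small" volume used in state 430

class Playback:
    def __init__(self):
        self.state = MODIFIED_RECOGNITION
        self.volume = DUCKED_VOLUME  # first function playing quietly

    def on_additional_voice_timeout(self):
        # Keep the first function running, restore its normal volume,
        # and stop waiting for an additional voice.
        self.volume = DEFAULT_VOLUME
        self.state = STANDBY

p = Playback()
p.on_additional_voice_timeout()
```

Note that execution of the first function itself is never interrupted here; only its volume and the recognition state change.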
  • the controller 180 may analyze the additional voice and execute a second function corresponding to the additional voice among the plurality of functions as a result of the analysis (S620).
  • the controller 180 may analyze the additional voice based on the context information corresponding to the voice recognized in the recognition state 420. The description of this is replaced with the description of FIG. 4.
  • the controller 180 may execute a second function corresponding to the additional voice among the plurality of functions. For example, when the second function is a function of reproducing a specific sound source, the controller 180 can reproduce the second sound source at the volume set for the sound output unit.
  • the controller 180 may end the execution of the first function. In addition, when the second function is executed, the controller 180 may switch the execution state of the voice recognition function from the modified recognition state 430 to the standby state 410.
  • the controller 180 may no longer output the notification information indicating the modified recognition state 430. Thus, the user can recognize that the additional voice is no longer received.
  • with the present invention, the user does not need to input a start signal every time a voice is recognized, thereby enabling a more natural voice conversation.
  • the present invention can improve the accuracy of speech recognition by allowing the user to execute a desired function through an additional voice when a plurality of functions correspond to the voice.
  • FIGS. 8A and 8B are diagrams illustrating a method of executing a preview function for a plurality of functions when a plurality of functions corresponding to a voice are detected.
  • when a plurality of functions corresponding to the voice are detected, the controller 180 may enter the modified recognition state 430 and output notification information for notifying each of the plurality of functions. That is, the present invention outputs the notification information for each of the plurality of functions in the modified recognition state 430, thereby informing the user that an additional voice can be received and providing information on each of the plurality of functions.
  • the controller 180 may determine the output color of the notification information indicating each of the plurality of functions according to the priority of each function. For example, as shown in (a) of FIG. 7A, when the plurality of functions are three sound source reproduction functions, the controller 180 may output notification information indicating each of the three sound sources in order of priority in red 710, green 720, and blue 730.
  • the controller 180 may determine an output area of the notification information according to the priority of the plurality of functions. That is, the controller 180 may output notification information indicating a function having a high priority to have a large area, and output notification information indicating a function having a low priority to have a small area.
  • for example, the output areas of the notification information 710 indicating the first function, the notification information 720 indicating the second function, and the notification information 730 indicating the third function may be set so that the output area decreases in order from the highest priority to the lowest.
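The color-and-area scheme above can be reduced to a small layout computation. The red/green/blue order follows the example; the shrinking area weights are an assumption for illustration.

```python
# Sketch: assign each detected function (ordered by priority) an output
# color and a share of the light output area that shrinks with priority.
COLORS = ["red", "green", "blue"]  # order of priority, as in the example

def notification_layout(functions):
    n = len(functions)
    total = n * (n + 1) // 2          # weights n, n-1, ..., 1
    layout = []
    for i, name in enumerate(functions):
        layout.append({
            "function": name,
            "color": COLORS[i % len(COLORS)],
            "area": (n - i) / total,  # higher priority -> larger area
        })
    return layout

layout = notification_layout(["first", "second", "third"])
```

With three functions, the areas come out as 1/2, 1/3 and 1/6 of the light output unit, so the highest-priority function is visually dominant.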
  • the controller 180 may execute a preview function for the plurality of functions together with the output of the notification information for the plurality of functions in the modified recognition state 430.
  • a method of executing the preview function will be described in more detail below with reference to the drawings.
  • in the description of FIG. 8A, descriptions overlapping with the foregoing (FIGS. 6 and 4, etc.) are omitted.
  • in FIG. 8A, it is assumed that the plurality of functions are functions of reproducing different sound sources.
  • the present invention is not limited thereto, and it will be apparent to those skilled in the art that the preview function may be applied in a similar manner to various functions that can execute the preview function.
  • the controller 180 may detect a plurality of sound sources corresponding to the voice in the recognition state 420 (S430). For example, as shown in (a) of FIG. 8B, in response to the voice "Girls' Generation," the controller 180 may detect "a song by the singer Girls' Generation (first sound source)," "a song titled Girls' Generation (second sound source)," and "the song Girls' Generation sung by the singer Girls' Generation (third sound source)."
  • the controller 180 may execute a preview function on the plurality of sound sources (S810).
  • the controller 180 may execute the preview function when a plurality of functions capable of the preview function are detected.
  • the controller 180 may execute a preview function of reproducing a part of the first sound source.
  • the controller 180 may output the red notification information 710, which is an output color corresponding to the first sound source, to the entire area of the light output unit 154b with execution of the preview function of the first sound source.
  • colors corresponding to the remaining sound sources may not be output. Through this, the user may recognize that the color corresponding to the first sound source is red.
  • the preview functions of the second sound source and the third sound source can also be executed in order of priority.
  • similarly, the controller 180 may output the green notification information 720 and the blue notification information 730 corresponding to the second and third sound sources, respectively, to the entire area of the light output unit 154b. Thus, the user can visually check the color corresponding to each of the plurality of functions.
  • the controller 180 may associate the color information of the notification information with an execution command for executing each of the plurality of sound sources. That is, the controller 180 may set a command corresponding to the first color for the command to reproduce the first sound source, a command corresponding to the second color for the command to reproduce the second sound source, and a command corresponding to the third color for the command to reproduce the third sound source.
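Associating each notification color with an execution command, as described above, amounts to a simple lookup table. The command identifiers here are hypothetical.

```python
# Sketch: bind each preview color to the command that plays the matching
# sound source; an additional voice naming a color triggers that command.
color_commands = {
    "red": "play_first_sound_source",    # hypothetical command names
    "green": "play_second_sound_source",
    "blue": "play_third_sound_source",
}

def command_for_additional_voice(word):
    """Return the bound command, or None if no preview color was named."""
    return color_commands.get(word)
```

An additional voice of "red" would thus select the first sound source even while the second one is being previewed.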
  • the controller 180 may switch from the recognition state 420 to the modified recognition state 430 while the pre-listening function is being executed. In addition, the controller 180 may determine whether an additional voice is received during the execution of the pre-listening function (S820).
  • the additional voice may be a voice for selecting a specific sound source from a plurality of sound sources.
  • the additional voice may be a voice associated with a color corresponding to each sound source previously set.
  • for example, the additional voice may be a voice corresponding to "red." Accordingly, the present invention can improve user convenience by allowing the user to select one of the plurality of functions by color, even when the user does not know the exact names associated with the plurality of functions in the speech recognition result.
  • the controller 180 can play a specific sound source corresponding to the additional voice among the plurality of sound sources (S830). For example, if a voice of “red” is received during the pre-listening of the “second sound source”, the controller 180 may play the “first sound source” corresponding to “red”. In this case, the controller 180 may stop the pre-listening function of the “second and third sound sources” and change the execution state of the voice recognition function from the modified recognition state 430 to the standby state 410. That is, the controller 180 may no longer receive the additional voice.
  • the present invention can provide the ability to receive an additional voice while it is needed and, at the same time, block the reception of unnecessary voice.
  • the controller 180 may not receive the additional voice during the execution of the pre-listening function. In this case, the controller 180 may determine whether the additional voice is received after the execution of the pre-listening function (S840). When a preset time has elapsed after the pre-listening function ends, the controller 180 determines that the additional voice is no longer to be received, and may switch the voice recognition function from the modified recognition state 430 to the standby state 410.
  • the controller 180 can play a specific sound source corresponding to the additional voice, as in step S830.
  • the present invention can improve user convenience by providing visual information together with audio information in selecting a function suitable for a user's intention.
  • the present invention can naturally induce the user to input an additional voice, thereby improving the accuracy of the speech recognition function and reducing the power consumed in executing it.
  • FIGS. 9A to 9C are diagrams illustrating a method of executing the voice recognition function when essential information is missing from a voice recognized in the recognition state.
  • the controller 180 may receive a voice in the recognition state (S420) and analyze the voice. As a result of the analysis, the controller 180 may detect a specific function corresponding to the voice and determine whether the information for executing the specific function is sufficient (S910).
  • the controller 180 may determine that essential information is not included in the voice recognized in the recognition state 420.
  • essential information is information indispensable for the execution of a specific function. For example, when the specific function is "set an alarm," time information is essential information; when the specific function is "How is the weather today?", regional information may be essential information.
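The sufficiency check of step S910 can be sketched as a table of required information per function. The function and slot names are assumptions matching the examples above (an alarm needs a time, a weather query needs a region).

```python
# Sketch of step S910: each function lists the information without which
# it cannot be executed; anything missing must come from an additional voice.
REQUIRED_INFO = {
    "set_alarm": {"time"},   # hypothetical slot names
    "weather": {"region"},
}

def missing_essential_info(function, provided):
    """Return the essential slots absent from the recognized voice."""
    return REQUIRED_INFO.get(function, set()) - set(provided)

# "Alice, set an alarm tomorrow" carries a date but no time.
missing = missing_essential_info("set_alarm", {"date"})
```

When the returned set is non-empty, the device enters the modified recognition state to ask for the missing information; when it is empty, the function can execute immediately.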
  • the controller 180 can enter the modified recognition state 430. If there is no essential information, the controller 180 can enter the modified recognition state 430 to receive the essential information.
  • the controller 180 may receive a voice of “Alice, set an alarm tomorrow”. In this case, when it is determined that there is no time information as a result of the voice analysis, the controller 180 may enter the corrected recognition state 430.
  • the controller 180 may output notification information indicating the modified recognition state 430 through the light output unit 154. For example, as illustrated in (a) of FIG. 9B, the controller 180 may output yellow notification information 740 through the light output unit 154. Accordingly, the user may recognize the need for an additional voice by viewing the notification information 740.
  • the controller 180 may determine whether an additional voice is received within a preset time (S920). When the additional voice is received, the controller 180 may analyze it. In addition, the controller 180 may execute the specific function based on the essential information included in the additional voice (S930). For example, as illustrated in (b) of FIG. 9B, the controller 180 may receive a voice of "6 a.m." In this case, the controller 180 may analyze the two voices, "set an alarm tomorrow" and "6 a.m.," together and detect the one function corresponding to them. That is, the controller 180 may analyze the voice "6 a.m." based on the context information of the voice recognized in the recognition state 420.
  • the controller 180 may set an alarm at 6 AM tomorrow.
  • the controller 180 can switch the execution state of the voice recognition function from the modified recognition state 430 to the standby state 410.
  • the notification information 740 indicating the modified recognition state 430 may no longer be output.
  • the controller 180 may determine that no additional voice is received within the preset time. That is, as shown in (b) of FIG. 9C, the controller 180 may not receive the additional voice in the modified recognition state 430.
  • the controller 180 may execute a specific function based on the user information of the talker (S940).
  • the user information of the talker may be information previously stored in the memory 170.
  • the user information of the talker may be detected based on the voice received in the recognition state 420.
  • the controller 180 may detect the talker information based on the attribute information of the voice.
  • the attribute information may include the loudness, pitch, and tone quality (timbre) of the voice.
  • the controller 180 may extract user information from the memory 170 based on the talker information.
  • the user information may include history information on the user's use of a specific function.
  • for example, the user information for a person A may include history information indicating that A sets an alarm at 6 a.m., and the user information for a person B may include history information indicating that B sets an alarm at 7 a.m.
  • the controller 180 may execute a specific function by using history information on the specific function. For example, when detecting that the talker is A, the controller 180 may set an alarm at 6 am. As another example, as illustrated in (a) to (c) of FIG. 9C, when detecting that the talker is B, the controller 180 may set an alarm at 7:00 AM.
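The fallback of step S940 — filling the missing essential information from the identified talker's stored history — could be sketched as a lookup. The talkers A and B and their alarm times follow the example above; the storage layout is an assumption.

```python
# Sketch of step S940: per-talker history stored in memory 170
# (A -> 6 a.m., B -> 7 a.m., as in the example).
USER_HISTORY = {
    "A": {"alarm_time": "06:00"},
    "B": {"alarm_time": "07:00"},
}

def resolve_missing_time(talker):
    """Fall back to the talker's usual alarm time when no additional
    voice supplies the missing essential information."""
    return USER_HISTORY.get(talker, {}).get("alarm_time")
```

If the talker is unknown or has no history, the lookup yields nothing and the function cannot be completed from history alone.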
  • the controller 180 can execute a specific function appropriate to the situation. As described above, when a specific function is executed, the controller 180 can switch the modified recognition state 430 back to the standby state 410.
  • FIGS. 10A to 10C are conceptual views illustrating the execution state of the voice recognition function according to the execution state of a specific function.
  • the controller 180 may control the execution state of the voice recognition function according to the execution state of the specific function.
  • the controller 180 may set the voice recognition function to the standby state 410 while executing a specific function.
  • the controller 180 may switch the execution state of the voice recognition function from the standby state 410 to the modified recognition state 430 to receive an additional voice before the execution of the specific function ends.
  • for example, during reproduction of a first sound source, the controller 180 may switch the execution state of the voice recognition function from the standby state 410 to the modified recognition state 430 ten seconds before the reproduction of the first sound source ends (S1010).
  • the ten seconds is not a fixed value; it may be changed to another value by the designer of the speech recognition function, or at the request of the user.
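Entering the modified recognition state shortly before a sound source ends, as described above, reduces to a time comparison against a configurable lead time (ten seconds in the example). The parameter names are assumptions.

```python
# Sketch of step S1010: switch to the modified recognition state once
# playback is within `lead` seconds of the end of the sound source.
LEAD_TIME = 10  # seconds; adjustable by the designer or at the user's request

def should_enter_modified_recognition(position_s, duration_s, lead=LEAD_TIME):
    return duration_s - position_s <= lead
```

The controller would poll this condition during playback and perform the state switch the first time it becomes true.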
  • the controller 180 may output notification information indicating the modified recognition state 430. For example, as illustrated in (b) of FIG. 10B, the controller 180 may output yellow light indicating the modified recognition state 430 through the light output unit 154. In this way, the present invention may induce the user to naturally input a voice for executing another function related to the specific function before its execution ends. In addition, when the user wants to execute a new function through a voice command right after a specific function, the user may input the voice more conveniently, without inputting a start signal for voice recognition.
  • the controller 180 may receive an additional voice in the modified recognition state 430 (S1020). For example, as illustrated in (b) of FIG. 10B, the controller 180 may receive a voice requesting a second sound source.
  • the controller 180 may add a second sound source corresponding to the additional voice to the play list (S1030).
  • the controller 180 may play the second sound source after the reproduction of the first sound source is finished.
  • the play list is a list of sound sources to be played. Therefore, when the reproduction of the first sound source is finished, the audio device 100 may reproduce the second sound source as the next reproduced song. Through this, the user may sequentially execute new functions without interrupting the currently executing function.
  • the controller 180 may stop executing the currently executing function and immediately execute a new function corresponding to the additional voice. For example, as illustrated in (a) and (b) of FIG. 10C, when a voice of “another song” is received during playback of the first sound source, the controller 180 stops playback of the first sound source, and then You can start playback of two sound sources. The method of executing the new function for the additional voice may be changed by the designer of the audio device 100.
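The two designer-selectable behaviors above — queue the requested sound source after the current one (S1030), or stop the current one and play the new one immediately — can be sketched with a simple flag. The playlist handling is an assumed simplification.

```python
# Sketch: react to an additional voice either by queueing the new sound
# source after the current one (S1030) or by playing it immediately.
def handle_additional_request(playlist, current, new_song, immediate=False):
    playlist = list(playlist)          # copy; do not mutate the caller's list
    if immediate:
        playlist[current] = new_song   # stop current source, play new one
    else:
        playlist.insert(current + 1, new_song)  # play after current ends
    return playlist
```

Which branch is taken corresponds to the design choice left to the designer of the audio device 100.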
  • the controller 180 may switch the execution state of the voice recognition function from the modified recognition state 430 to the standby state 410.
  • when the specific function ends without an additional voice being received in the modified recognition state 430, the controller 180 may switch the execution state of the speech recognition function from the modified recognition state 430 to the standby state 410.
  • FIG. 11 is a conceptual diagram illustrating a method of providing notification information associated with a plurality of voices when a plurality of voices are simultaneously received.
  • the controller 180 may simultaneously receive a plurality of voices in the recognition state 420 in which voice recognition is possible. For example, as shown in FIG. 11A, the controller 180 may simultaneously receive, in the recognition state 420, a first voice of "Alice, how is the weather tomorrow?" and a second voice of "Alice, tell me the weather in Jeju this weekend."
  • the controller 180 may execute a function corresponding to one of the plurality of voices based on a preset condition.
  • the preset condition may be the voice with the loudest volume, the voice with the highest speech recognition accuracy, the voice of a specific user, or the voice of the user having the highest priority among a plurality of users.
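Selecting one of several simultaneously received voices by a preset condition, as above, could be sketched as follows. The attribute names are assumptions; this variant ranks by talker priority first and loudness second.

```python
# Sketch: choose one voice among simultaneously received voices.
def select_voice(voices, talker_priority):
    """voices: dicts with hypothetical keys 'talker' and 'loudness'.
    talker_priority: talker -> rank (lower rank = higher priority).
    Rank by talker priority, break ties by loudness."""
    return min(voices,
               key=lambda v: (talker_priority.get(v["talker"], 99),
                              -v["loudness"]))

voices = [
    {"talker": "second talker", "loudness": 0.9},
    {"talker": "first talker", "loudness": 0.6},
]
chosen = select_voice(voices, {"first talker": 0, "second talker": 1})
```

Here the first talker's voice wins despite being quieter, because talker priority outranks loudness in the assumed condition.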
  • the controller 180 may analyze attribute information of the first voice and the second voice, and distinguish the talker based on the attribute information.
  • the method of detecting the talker is replaced by the description of FIG. 9C above.
  • the controller 180 may execute a function corresponding to the first voice spoken by the first speaker having the highest priority based on the priority of the talkers.
  • the priority among the talkers may be set in advance.
  • the controller 180 may output notification information indicating that one voice is recognized.
  • the notification information may be output in an area located in the direction of the talker who uttered one voice.
  • the direction of the talker may be extracted based on the attribute information of the voice.
  • the audio device may include a plurality of microphones, and may extract the direction of the talker based on attribute information of voices received from the plurality of microphones, respectively. This technique is known in the art, and detailed description thereof will be omitted herein.
  • for example, the controller 180 may output a voice response corresponding to the first voice of the first talker, such as "Tomorrow the weather will be mostly sunny nationwide."
  • the controller 180 may output the notification information 1010 in an area corresponding to the location of the first talker.
  • the controller 180 may switch the execution state of the voice recognition function from the recognition state 420 to the modified recognition state 430 together with the execution of the function corresponding to the one voice.
  • the controller 180 may execute a function corresponding to a new voice different from the one voice of the plurality of voices.
  • in the modified recognition state 430, the controller 180 may extract, from among the first voice and the second voice, the second voice related to the additional voice of "No, the weather on Jeju Island."
  • then, the controller 180 may output a voice response corresponding to the second voice, such as "A typhoon is coming to Jeju this weekend."
  • the controller 180 may output the notification information 1020 in an area corresponding to the direction in which the second talker is located.
  • the notification information may be output in different colors for each talker.
  • the controller 180 may output red notification information when the first voice of the first talker is recognized and blue notification information when the second voice of the second talker is recognized.
  • the output color of the notification information according to the talker may be preset by the user.
  • the present invention can intuitively provide information on the voice currently being recognized.
  • as described above, when a plurality of functions corresponding to the voice received through the audio input unit are detected in the recognition state in which speech recognition is possible, the present invention switches to the modified recognition state in which an additional voice associated with the plurality of functions can be received, and executes a specific function among the plurality of functions based on the additional voice recognized in the modified recognition state, thereby improving the accuracy of speech recognition.
  • the present invention can improve the user's convenience by enabling speech recognition without a start signal for starting speech recognition in the modified recognition state.
  • the present invention may induce additional voice input to the user by providing notification information related to the modified recognition state through the light output unit.
  • the present invention visually indicates the plurality of functions corresponding to the voice through a variety of colors, so that the user can more easily select a specific function among the plurality of functions.
  • the present invention described above can be embodied as computer readable codes on a medium in which a program is recorded.
  • the computer-readable medium includes all kinds of recording devices in which data that can be read by a computer system is stored. Examples of computer-readable media include hard disk drives (HDDs), solid state disks (SSDs), silicon disk drives (SDDs), ROMs, RAMs, CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices, and also include media implemented in the form of carrier waves (e.g., transmission over the Internet).
  • the computer may include the controller 180 of the terminal. Accordingly, the above detailed description should not be construed as limiting in all aspects and should be considered as illustrative. The scope of the invention should be determined by reasonable interpretation of the appended claims, and all changes within the equivalent scope of the invention are included in the scope of the invention.


Abstract

The present invention relates to an audio device capable of recognizing a voice and a control method thereof, the audio device comprising: an audio input unit formed to receive a voice; and a controller for switching an execution state of a voice recognition function from a standby state, in which reception of a start signal is awaited, to a recognition state, in which voice recognition is possible, when the start signal is received through the audio input unit, wherein the controller switches the execution state of the voice recognition function from the recognition state to the standby state together with the execution of the detected function when only one function corresponding to the voice recognized in the recognition state is detected, and switches the execution state of the voice recognition function from the recognition state to a modified recognition state capable of receiving an additional voice together with the execution of a specific function among the plurality of functions when a plurality of functions corresponding to the voice recognized in the recognition state are detected.
PCT/KR2017/000096 2016-12-28 2017-01-04 Dispositif audio et procédé de commande associé Ceased WO2018124355A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020160181415A KR20180076830A (ko) 2016-12-28 2016-12-28 오디오 장치 및 그 제어방법
KR10-2016-0181415 2016-12-28

Publications (1)

Publication Number Publication Date
WO2018124355A1 true WO2018124355A1 (fr) 2018-07-05

Family

ID=62709360

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/000096 Ceased WO2018124355A1 (fr) 2016-12-28 2017-01-04 Dispositif audio et procédé de commande associé

Country Status (2)

Country Link
KR (1) KR20180076830A (fr)
WO (1) WO2018124355A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108962250A (zh) * 2018-09-26 2018-12-07 出门问问信息科技有限公司 Speech recognition method and apparatus, and electronic device
CN110445931A (zh) * 2019-08-01 2019-11-12 花豹科技有限公司 Voice recognition activation method and electronic device
CN111261152A (zh) * 2018-12-03 2020-06-09 西安易朴通讯技术有限公司 Intelligent interaction system

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102697371B1 (ko) * 2018-10-31 2024-08-22 삼성전자주식회사 Method for displaying content in response to voice command, and electronic device therefor
WO2021006620A1 (fr) * 2019-07-08 2021-01-14 Samsung Electronics Co., Ltd. Method and system for processing a dialogue between an electronic device and a user

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101056589B1 (ko) * 2004-04-07 2011-08-11 주식회사 케이티 Home network control service method using voice recognition function
US20140249816A1 (en) * 2004-12-01 2014-09-04 Nuance Communications, Inc. Methods, apparatus and computer programs for automatic speech recognition
KR101556173B1 (ko) * 2012-11-28 2015-09-30 엘지전자 주식회사 Apparatus and method for driving an electronic device using voice recognition
KR20160059640A (ko) * 2014-11-19 2016-05-27 에스케이텔레콤 주식회사 Voice recognition method using multiple voice recognition modules, and voice recognition apparatus therefor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"SKT Artificial Intelligence Speaker 'NUGU' - User Review", BLOGGER NAMUGEUNEUL, 16 October 2016 (2016-10-16), Retrieved from the Internet <URL:http://blog.naver.com/pomie_1999/220837762714> *

Also Published As

Publication number Publication date
KR20180076830A (ko) 2018-07-06

Similar Documents

Publication Publication Date Title
WO2017014374A1 (fr) Mobile terminal and control method therefor
WO2019160198A1 (fr) Mobile terminal and control method therefor
WO2018026059A1 (fr) Mobile terminal and control method therefor
WO2016208797A1 (fr) Headset and control method therefor
WO2016010262A1 (fr) Mobile terminal and control method therefor
WO2018093005A1 (fr) Mobile terminal and control method therefor
WO2017094926A1 (fr) Terminal device and control method therefor
WO2015133658A1 (fr) Mobile device and control method therefor
WO2016195147A1 (fr) Head-mounted display
WO2015026101A1 (fr) Application execution method using display device, and display device therefor
WO2016122151A1 (fr) Receiving device and control method therefor
WO2016076474A1 (fr) Mobile terminal and control method therefor
WO2016190466A1 (fr) Portable terminal for displaying screen optimized for various situations
WO2015125993A1 (fr) Mobile terminal and control method therefor
WO2015105257A1 (fr) Mobile terminal and control method therefor
WO2016039496A1 (fr) Mobile terminal and control method therefor
WO2015194723A1 (fr) Mobile terminal and control method therefor
WO2018124355A1 (fr) Audio device and control method therefor
WO2015126012A1 (fr) Mobile terminal and control method therefor
WO2016032039A1 (fr) Apparatus for projecting image, and operating method therefor
WO2016003066A1 (fr) Mobile terminal and control method therefor
WO2017030212A1 (fr) Mobile terminal and control method therefor
WO2016111406A1 (fr) Mobile terminal and control method therefor
WO2016039509A1 (fr) Terminal and method for using same
WO2016129781A1 (fr) Mobile terminal and control method therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17886858

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17886858

Country of ref document: EP

Kind code of ref document: A1