
WO2020050554A1 - Electronic device and control method thereof - Google Patents


Info

Publication number
WO2020050554A1
WO2020050554A1 · PCT/KR2019/011135 · KR2019011135W
Authority
WO
WIPO (PCT)
Prior art keywords
keyword
electronic device
call
user
keywords
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2019/011135
Other languages
English (en)
Korean (ko)
Inventor
변동남
김서희
김현한
정유진
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority to US17/255,605 priority Critical patent/US12243516B2/en
Publication of WO2020050554A1 publication Critical patent/WO2020050554A1/fr
Anticipated expiration legal-status Critical
Priority to US18/321,146 priority patent/US20230290343A1/en
Ceased legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/08Speech classification or search
    • G10L15/18Speech classification or search using natural language modelling
    • G10L15/1815Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/3332Query translation
    • G06F16/3334Selection or weighting of terms from queries, including natural language queries
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/903Querying
    • G06F16/9032Query formulation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/258Heading extraction; Automatic titling; Numbering
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • G06F40/35Discourse or dialogue representation
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/08Speech classification or search
    • G10L15/16Speech classification or search using artificial neural networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72469User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
    • H04M1/72472User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons wherein the items are sorted according to specific criteria, e.g. frequency of use
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/08Speech classification or search
    • G10L2015/088Word spotting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/64Automatic arrangements for answering calls; Automatic arrangements for recording messages for absent subscribers; Arrangements for recording conversations
    • H04M1/65Recording arrangements for recording a message from the calling party
    • H04M1/656Recording arrangements for recording a message from the calling party for recording conversations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72445User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting Internet browser applications
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2201/00Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M2201/34Microprocessors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2201/00Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M2201/36Memories
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2201/00Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M2201/38Displays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/74Details of telephonic subscriber devices with voice recognition means

Definitions

  • the present disclosure relates to an electronic device and a control method thereof, and more particularly, to an electronic device and a control method thereof that can acquire at least one keyword from a call content and utilize it for various functions.
  • An artificial intelligence (AI) system is a system in which a machine learns, makes judgments, and becomes smarter on its own, unlike existing rule-based smart systems. As an AI system is used, its recognition rate improves and it can understand a user's tastes more accurately, so existing rule-based smart systems are gradually being replaced by deep-learning-based AI systems.
  • Linguistic understanding is a technology for recognizing, applying, and processing human language and text, and includes natural language processing, machine translation, dialogue systems, question answering, and speech recognition/synthesis.
  • Visual understanding is a technology for recognizing and processing objects in the way human vision does, and includes object recognition, object tracking, image search, human recognition, scene understanding, spatial understanding, and image enhancement.
  • Inference/prediction is a technology for judging information and logically inferring and predicting from it, and includes knowledge/probability-based reasoning, optimization prediction, preference-based planning, and recommendation.
  • Knowledge expression is a technology that automatically processes human experience information into knowledge data, and includes knowledge building (data generation / classification), knowledge management (data utilization), and so on.
  • Motion control is a technique for controlling autonomous driving of a vehicle and movement of a robot, and includes motion control (navigation, collision, driving), operation control (behavior control), and the like.
  • Recently, artificial intelligence personal assistant platforms have been provided on electronic devices.
  • According to an embodiment, a control method of an electronic device may include acquiring at least one keyword from the content of a call with a user of another electronic device while the call is being performed using the electronic device, displaying the at least one keyword while the call is being performed, and providing a search result for a keyword selected by the user from among the displayed at least one keyword.
  • In the displaying step, at least one circular UI element, each containing one of the at least one keyword, may be displayed.
  • The control method may further include determining the importance of the at least one keyword according to the number of times it is mentioned in the call with the user of the other electronic device, and in the displaying step, the size of each circular UI element may be varied according to the determined importance of its keyword.
  • When there are a plurality of keywords, the arrangement interval of the plurality of keywords may be determined according to the intervals at which they are mentioned in the call with the user of the other electronic device, and the plurality of keywords may be displayed according to the determined arrangement interval.
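As an illustration, the importance-to-size rule described above could be sketched as follows. The function names, pixel constants, and linear scaling are illustrative assumptions; the patent only says that element sizes differ according to mention counts.

```python
from collections import Counter

def keyword_importance(transcript_words, keywords):
    """Importance = number of times each keyword is mentioned in the call."""
    counts = Counter(w.lower() for w in transcript_words)
    return {k: counts[k.lower()] for k in keywords}

def element_sizes(importance, min_px=40, max_px=120):
    """Map mention counts to circular-UI-element diameters.

    The pixel range and linear interpolation are hypothetical choices."""
    lo, hi = min(importance.values()), max(importance.values())
    span = (hi - lo) or 1  # avoid division by zero when all counts are equal
    return {k: round(min_px + (c - lo) / span * (max_px - min_px))
            for k, c in importance.items()}
```

With this sketch, a keyword mentioned three times renders as the largest circle, while keywords mentioned once render at the minimum size.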
  • the control method when there is a plurality of at least one keyword displayed, when a user input for sequentially selecting two or more keywords among the plurality of keywords is received, the selected two or more keywords are set as one set.
  • the method may further include a step of determining, and the step of providing the search result may provide the search result for the determined set.
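The sequential-selection behavior above can be sketched as a small state holder. The class and method names are illustrative assumptions, not taken from the patent:

```python
class KeywordSetSelector:
    """Collects keywords the user taps in sequence into one set, which can
    then feed either the search function or the memo function."""

    def __init__(self):
        self.selected = []

    def tap(self, keyword):
        # Each tapped keyword joins the current set once.
        if keyword not in self.selected:
            self.selected.append(keyword)

    def search_query(self):
        # The whole set is submitted as a single search query.
        return " ".join(self.selected)

    def memo_text(self):
        # Alternatively, the same set can be stored as one memo entry.
        return ", ".join(self.selected)
```

Tapping "jeju" then "hotel" would yield the query "jeju hotel" for the search function, or "jeju, hotel" for the memo function.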
  • The control method may further include displaying a first UI element corresponding to a search function for the determined set and a second UI element corresponding to a memo function for the determined set; in the step of providing the search result, when the first UI element is selected, the search result for the determined set may be provided.
  • The control method may further include storing the determined set in a memo application when the second UI element is selected.
  • In the acquiring step, the at least one keyword may be acquired from the content of the call with the user of the other electronic device using a trained artificial intelligence model.
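The patent does not disclose the model itself. As a rough stand-in, a frequency-based extractor over a speech-to-text transcript might look like the following; the stopword list, minimum word length, and scoring are all illustrative assumptions:

```python
import re
from collections import Counter

# Toy stopword list; a real system would use a proper one per language.
STOPWORDS = {"the", "a", "an", "to", "is", "are", "and", "of", "in", "on", "we", "you"}

def extract_keywords(transcript, top_n=5):
    """Toy substitute for the trained AI model: rank frequent
    non-stopword terms in the call transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(top_n)]
```

A learned model would of course use semantics rather than raw frequency, but the input (transcript) and output (ranked keyword list) would have the same shape.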
  • According to an embodiment, an electronic device includes a communication unit, a display, a memory storing computer-executable instructions, and a processor that executes the computer-executable instructions to: acquire, while a call with a user of another electronic device is being performed using the electronic device, at least one keyword from the content of the call; control the display to display the at least one keyword while the call is being performed; and provide a search result for a keyword selected by the user from among the displayed at least one keyword.
  • the processor may control the display to display the at least one keyword on a call screen with a user of the other electronic device.
  • The processor may determine the importance of the at least one keyword according to the number of times it is mentioned in the call with the user of the other electronic device, and control the display to display UI elements of different sizes according to the determined importance of the at least one keyword.
  • The processor may determine the arrangement interval of the plurality of keywords according to the intervals at which they are mentioned in the call with the user of the other electronic device, and control the display to display the plurality of keywords according to the determined arrangement interval.
  • When a user input sequentially selecting two or more of the displayed keywords is received, the processor may determine the selected two or more keywords as one set and provide a search result for the determined set.
  • The processor may control the display to display a first UI element corresponding to a search function for the determined set and a second UI element corresponding to a memo function for the determined set, and when the first UI element is selected, provide a search result for the determined set.
  • When the second UI element is selected, the processor may store the determined set in a memo application.
  • The processor may acquire the at least one keyword from the content of the call with the user of the other electronic device using a trained artificial intelligence model.
  • The processor may determine a recommended application from among the plurality of applications of the electronic device based on the acquired at least one keyword, and control the display to display a UI asking whether to perform a task involving the determined recommended application and the acquired at least one keyword.
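How the recommended application is predicted is also left unspecified. A minimal keyword-overlap heuristic could look like the following; the mapping table, app names, and scoring are hypothetical, and a real implementation would presumably learn this mapping from user behavior:

```python
# Hypothetical keyword-to-application hints; not specified by the patent.
APP_HINTS = {
    "calendar": {"meeting", "schedule", "appointment", "friday"},
    "memo":     {"address", "phone", "list"},
    "browser":  {"hotel", "flight", "restaurant"},
}

def recommend_app(keywords):
    """Score each candidate app by overlap with the call keywords and
    return the best match, or None if nothing overlaps."""
    scores = {app: len(hints & set(keywords)) for app, hints in APP_HINTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

The returned app name would then drive the "perform this task?" UI described above, with the acquired keywords passed along as the task's input.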
  • FIG. 1 is a view for explaining an electronic device displaying keywords obtained from a call according to an embodiment of the present disclosure
  • FIGS. 4 and 5 are views for explaining an embodiment of the present disclosure in which a keyword is obtained from call content and the obtained keyword is displayed on the call screen;
  • FIG. 6 is a view for explaining an embodiment of the present disclosure for selecting keywords displayed on a call screen
  • FIG. 7 is a view for explaining a UI according to an embodiment of the present disclosure providing a memo function or a search function with selected keywords
  • FIGS. 9 and 10 are views for explaining an embodiment of the present disclosure in which keywords change in real time on the call screen according to the content of the call;
  • FIG. 11 is a view for explaining a UI according to another embodiment of the present disclosure providing a memo function or a search function with selected keywords;
  • FIG. 12 is a view for explaining an embodiment of the present disclosure for registering a memo with keywords obtained in a call;
  • FIG. 13 is a view for explaining a UI according to an embodiment of the present disclosure recommending an action that can be performed with keywords acquired in a call immediately after the call ends;
  • FIG. 14 is a view for explaining an embodiment of the present disclosure for providing personalized information with keywords obtained from a call;
  • FIG. 15 is a view for explaining the overall structure of a service that provides various functions by obtaining keywords from a call according to an embodiment of the present disclosure;
  • FIG. 16 is a block diagram illustrating a processor for learning and using a recognition model, according to an embodiment of the present disclosure;
  • FIGS. 17 to 19 are block diagrams illustrating a learning unit and an analysis unit according to various embodiments of the present disclosure.
  • FIG. 20 is a flowchart of a network system using an artificial intelligence model according to an embodiment of the present disclosure.
  • FIG. 21 is a flowchart illustrating a method of controlling an electronic device according to an embodiment of the present disclosure.
  • Expressions such as “A or B,” “at least one of A and/or B,” or “one or more of A and/or B” may include all possible combinations of the items listed together.
  • Expressions such as “first” and “second” used herein may modify various components regardless of order and/or importance; they are used to distinguish one component from another and do not limit the components.
  • the first user device and the second user device may indicate different user devices regardless of order or importance.
  • the first component may be referred to as a second component without departing from the scope of rights described in this document, and similarly, the second component may also be referred to as a first component.
  • Terms such as “module,” “unit,” and “part” refer to components that perform at least one function or operation, and such components may be implemented in hardware, in software, or as a combination of hardware and software. In addition, a plurality of “modules,” “units,” “parts,” etc. may be integrated into at least one module or chip, except where each needs to be implemented as individual specific hardware.
  • When a component (e.g., a first component) is said to be “connected to” another component (e.g., a second component), it should be understood that the component may be directly connected to the other component or may be connected through yet another component (e.g., a third component).
  • On the other hand, when a component (e.g., a first component) is said to be “directly connected” to another component (e.g., a second component), it may be understood that no other component exists between them.
  • An electronic device according to various embodiments may include, for example, at least one of a smartphone, a tablet personal computer (PC), a mobile phone, a video phone, an e-book reader, a desktop PC, a laptop PC, a netbook computer, a workstation, a portable multimedia player (PMP), an MP3 player, a mobile medical device, a camera, or a wearable device.
  • The wearable device may include at least one of an accessory type (e.g., a watch, ring, bracelet, anklet, necklace, glasses, contact lens, or head-mounted device (HMD)), a fabric- or clothing-integrated type (e.g., electronic clothing), a body-attached type (e.g., a skin pad or tattoo), or a bio-implantable type (e.g., an implantable circuit).
  • the electronic device may be a home appliance.
  • Home appliances may include, for example, at least one of a television, a digital video disk (DVD) player, an audio system, a refrigerator, an air conditioner, a vacuum cleaner, an oven, a microwave oven, a washing machine, an air cleaner, a set-top box, a home automation control panel, a security control panel, a TV box (e.g., Samsung HomeSync™, Apple TV™, or Google TV™), a game console (e.g., Xbox™, PlayStation™), an electronic dictionary, an electronic key, a camcorder, or an electronic picture frame.
  • According to another embodiment, the electronic device may include at least one of various medical devices (e.g., various portable medical measurement devices such as a blood glucose meter, heart rate monitor, blood pressure meter, or thermometer; magnetic resonance angiography (MRA); magnetic resonance imaging (MRI); computed tomography (CT); a camera; or an ultrasound device), a navigation device, a global navigation satellite system (GNSS), an event data recorder (EDR), a flight data recorder (FDR), an automotive infotainment device, marine electronic equipment (e.g., marine navigation devices, gyrocompasses, etc.), avionics, security devices, vehicle head units, industrial or household robots, automatic teller machines (ATMs), point-of-sale (POS) terminals, or Internet of Things devices (e.g., light bulbs, various sensors, electricity or gas meters, sprinkler devices, fire alarms, thermostats, street lights, toasters, exercise equipment, hot water tanks, heaters, boilers, and the like).
  • According to another embodiment, the electronic device may include at least one of furniture or part of a building/structure, an electronic board, an electronic signature receiving device, a projector, or various measuring devices (e.g., water, electricity, gas, or radio wave measuring devices).
  • the electronic device may be one or a combination of the above-mentioned devices.
  • An electronic device according to an embodiment may be a flexible electronic device.
  • the electronic device according to the embodiment of the present document is not limited to the above-described devices, and may include a new electronic device according to technological development.
  • One object of the present disclosure is to provide an electronic device equipped with an artificial intelligence agent system that obtains keywords from call content, performs related commands, and provides keyword-based personalized recommendation information, and a control method thereof.
  • According to embodiments of the present disclosure, the electronic device may perform, for example, simple search and memo functions during a call based on keywords obtained from the call.
  • an application predicted to be used by the user may be determined based on the keyword obtained from the call and executed in association with the acquired keyword.
  • The keywords obtained in the call may be stored so that personalized information based on them can be provided at any time.
  • FIG. 1 is a view for explaining an electronic device according to an embodiment of the present disclosure that obtains and provides keywords during a call with a counterpart.
  • a user may perform a call with a user of another electronic device using the electronic device 100.
  • the electronic device 100 may display keywords obtained in the call on the call screen with the other party.
  • the displayed keywords are selectable UI elements.
  • various functions may be executed based on the selected keyword. For example, a search function may be performed based on the selected keyword, and a memo function may be performed based on the selected keyword.
  • After the call is over, the electronic device 100 may apply (or input) the acquired keywords to an application that the user is expected to execute based on the acquired keywords (e.g., a memo application, calendar application, schedule application, or search application).
  • The electronic device 100 may also store the keywords obtained in the call and provide personalized information based on them at any time according to a user request.
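Persisting per-call keywords for later personalized lookup, as described above, could be sketched like this. The storage schema (one record per call with contact, timestamp, and keywords) is an illustrative assumption:

```python
import time

class KeywordStore:
    """In-memory sketch; a device implementation would persist to disk."""

    def __init__(self):
        self.calls = []

    def save_call(self, contact, keywords, ts=None):
        # One record per finished call: who it was with, when, and its keywords.
        self.calls.append({"contact": contact,
                           "keywords": list(keywords),
                           "ts": time.time() if ts is None else ts})

    def lookup(self, keyword):
        # Personalized retrieval: every past call that mentioned the keyword.
        return [c for c in self.calls if keyword in c["keywords"]]
```

On a later user request such as "what did we say about Jeju?", the assistant could query this store and surface the matching calls and their keyword sets.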
  • FIG. 2 is a block diagram illustrating a configuration of an electronic device 100 according to an embodiment of the present disclosure.
  • the electronic device 100 includes a communication unit 110, a memory 120, a processor 130, and a display 140. Depending on the embodiment, some of the components may be omitted, and although not illustrated, appropriate hardware / software components of a level apparent to those skilled in the art may be additionally included in the electronic device 100.
  • the communication unit 110 may be connected to a network through, for example, wireless communication or wired communication to communicate with an external device.
  • Wireless communication may use, for example, a cellular communication protocol such as long-term evolution (LTE), LTE Advanced (LTE-A), code division multiple access (CDMA), wideband CDMA (WCDMA), universal mobile telecommunications system (UMTS), Wireless Broadband (WiBro), or Global System for Mobile Communications (GSM).
  • the wireless communication may include short-range communication, for example.
  • the short-range communication may include, for example, at least one of WiFi direct, Bluetooth, near field communication (NFC), and Zigbee.
  • the wired communication may include, for example, at least one of a universal serial bus (USB), a high definition multimedia interface (HDMI), recommended standard 232 (RS-232), or a plain old telephone service (POTS).
  • the network may include at least one of a telecommunications network, for example, a computer network (eg, LAN or WAN), the Internet, or a telephone network.
  • the communication unit 110 may include, for example, a cellular module, a WiFi module, a Bluetooth module, a GNSS module (eg, a GPS module, a Glonass module, a Beidou module, or a Galileo module), an NFC module, and a radio frequency (RF) module.
  • the cellular module may provide, for example, a voice call, a video call, a text service, or an internet service through a communication network.
  • the cellular module may perform identification and authentication of an electronic device in a communication network using a subscriber identification module (eg, a SIM card).
  • the cellular module may perform at least some of the functions that the processor can provide.
  • the cellular module may include a communication processor (CP).
  • Each of the WiFi module, the Bluetooth module, the GNSS module, or the NFC module may include a processor for processing data transmitted and received through a corresponding module.
  • at least some (eg, two or more) of a cellular module, a WiFi module, a Bluetooth module, a GNSS module, or an NFC module may be included in one integrated chip (IC) or IC package.
  • the RF module may transmit and receive communication signals (eg, RF signals).
  • the RF module may include, for example, a transceiver, a power amp module (PAM), a frequency filter, a low noise amplifier (LNA), or an antenna.
  • at least one of a cellular module, a WiFi module, a Bluetooth module, a GNSS module, or an NFC module may transmit and receive an RF signal through a separate RF module.
  • the memory 120 may include, for example, internal memory or external memory.
  • the internal memory may include, for example, at least one of volatile memory (e.g., dynamic RAM (DRAM), static RAM (SRAM), or synchronous dynamic RAM (SDRAM)), non-volatile memory (e.g., one-time programmable ROM (OTPROM), programmable ROM (PROM), erasable and programmable ROM (EPROM), electrically erasable and programmable ROM (EEPROM), mask ROM, flash ROM, or flash memory (e.g., NAND flash or NOR flash)), a hard drive, or a solid state drive (SSD).
  • the external memory may include, for example, a flash drive such as compact flash (CF), secure digital (SD), micro secure digital (Micro-SD), mini secure digital (Mini-SD), extreme digital (xD), a multi-media card (MMC), or a memory stick.
  • the memory 120 is accessed by the processor 130, and reading/writing/modifying/deleting/updating of data may be performed by the processor 130.
  • the term memory may include at least one of a memory provided separately from the processor 130, a ROM (not shown) in the processor 130, and a RAM (not shown) in the processor 130.
  • the memory 120 may store trained artificial intelligence models, learning data, and the like.
  • the memory 120 may store various applications such as a memo application, a schedule application, a calendar application, a web browser application, and a call application.
  • the processor 130 may perform various calculations using the learned artificial intelligence model. For example, the processor 130 may acquire at least one keyword from a call with a counterpart performed using the electronic device 100, using the learned artificial intelligence model. According to an embodiment of the present disclosure, the processor 130 may acquire proper nouns by inputting the content of a call into a named entity recognition (NER) model, and select keywords from among the proper nouns. All of the acquired proper nouns may be selected as keywords, or only those that satisfy a specific criterion among the acquired proper nouns may be selected as keywords.
  • the specific criterion may be, for example, whether the number of times the proper noun is mentioned in a call is greater than or equal to a preset number of times, or whether the interval at which the proper noun is mentioned is less than a preset time.
  • a proper noun belonging to a category corresponding to the keyword may be selected as a keyword.
  • categories such as time, person, and place may be categories corresponding to keywords.
  • a keyword can be obtained by entering a proper noun into an artificial intelligence model trained to select a key keyword.
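The rule-based criteria above (a mention count at or above a preset number, a mention interval below a preset time, or membership in a category corresponding to keywords) could be sketched as follows; all function names, thresholds, and sample data here are illustrative assumptions, not part of the disclosed embodiment:

```python
from collections import defaultdict

# Categories assumed to correspond to keywords (the description names
# time, person, and place); the set itself is an illustrative assumption.
KEYWORD_CATEGORIES = {"time", "person", "place"}

def select_keywords(mentions, min_count=2, max_interval=30.0, categories=None):
    """Select keywords from recognized proper nouns.

    mentions: list of (proper_noun, category, timestamp_seconds) tuples.
    A proper noun is kept if it is mentioned at least min_count times,
    if two of its mentions occur within max_interval seconds,
    or if it belongs to a category corresponding to keywords.
    """
    categories = categories or KEYWORD_CATEGORIES
    by_noun = defaultdict(list)
    cat_of = {}
    for noun, category, ts in mentions:
        by_noun[noun].append(ts)
        cat_of[noun] = category

    selected = []
    for noun, times in by_noun.items():
        times.sort()
        frequent = len(times) >= min_count
        close = any(b - a < max_interval for a, b in zip(times, times[1:]))
        in_category = cat_of[noun] in categories
        if frequent or close or in_category:
            selected.append(noun)
    return selected

# Hypothetical mentions from a call transcript
mentions = [
    ("Gangnam Station", "place", 10.0),
    ("Gangnam Station", "place", 25.0),
    ("tomorrow", "time", 12.0),
    ("Acme Corp", "organization", 40.0),  # one mention, no keyword category
]
print(select_keywords(mentions))
```

In this sketch "Acme Corp" is dropped because it is mentioned only once and its category is not one that corresponds to keywords, while the other proper nouns pass at least one criterion.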
  • the artificial intelligence model is a judgment model learned based on an artificial intelligence algorithm, and may be, for example, a model based on a neural network.
  • the trained artificial intelligence model may be designed to simulate a human brain structure on a computer and may include a plurality of network nodes having weights, simulating the neurons of a human neural network. The plurality of network nodes may each form a connection relationship so as to simulate the synaptic activity of neurons exchanging signals through synapses.
  • the trained artificial intelligence model may include, for example, a neural network model or a deep learning model developed from a neural network model.
  • a plurality of network nodes may be located at different depths (or layers) and exchange data according to a convolution connection relationship.
  • Examples of the learned artificial intelligence model may include, but are not limited to, Deep Neural Network (DNN), Recurrent Neural Network (RNN), and Bidirectional Recurrent Deep Neural Network (BRDNN).
  • the electronic device 100 may use a personal assistant program (eg, Bixby TM), which is a program dedicated to artificial intelligence (or an artificial intelligence agent).
  • the personal assistant program is a dedicated program for providing AI (Artificial Intelligence) based services.
  • the personal assistant program may be executed by an existing general-purpose processor (e.g., a CPU) or a separate single-purpose processor (e.g., a GPU, FPGA, or ASIC).
  • the electronic device 100 may include a plurality of processors, for example, a processor dedicated to artificial intelligence and other processors.
  • when a preset user input (for example, an icon touch corresponding to a personal assistant chatbot, or a user voice including a preset word) is received, or a button provided in the electronic device 100 (for example, a button for executing the AI agent) is pressed, the AI agent may operate (or be executed).
  • the AI agent may be in a standby state before a preset user input is detected or a button provided in the electronic device 100 is selected.
  • the standby state is a state of detecting that a predefined user input (e.g., a user voice including a preset keyword (e.g., "Bixby")) is received, in order to control the start of the AI agent operation.
  • the electronic device 100 may operate the AI agent. Then, the artificial intelligence agent may perform the function of the electronic device 100 based on the voice when the user voice is received, and may output an answer if the voice is an inquiry voice.
  • Operations based on artificial intelligence may be performed within the electronic device 100, or may be performed through an external server.
  • the artificial intelligence model may be stored in a server; the electronic device 100 provides the server with data to be input to the artificial intelligence model, and the server inputs the data to the artificial intelligence model and transmits the resulting data to the electronic device 100.
  • the electronic device 100 may include a communication unit 110, a memory 120, a processor 130, a display 140, an input unit 150, an audio output unit 160, and a microphone 170. Depending on the embodiment, some of the components may be omitted, and although not illustrated, appropriate hardware/software components of a level apparent to those skilled in the art may be additionally included in the electronic device 100. Meanwhile, since the communication unit 110, the memory 120, the processor 130, and the display 140 are described in FIG. 2, redundant description will be omitted.
  • the microphone 170 is configured to receive a user voice or other sound and convert it into a digital signal.
  • the processor 130 may acquire at least one keyword from a call voice input through the microphone 170.
  • the microphone 170 may be provided inside the electronic device 100, but this is only an example, and may be provided outside the electronic device 100 to be electrically connected to the electronic device 100.
  • the input unit 150 may receive a user input and transmit it to the processor 130.
  • the input unit 150 may include, for example, a touch sensor, a (digital) pen sensor, a pressure sensor, a key, or a microphone.
  • a touch sensor for example, at least one of capacitive, pressure-sensitive, infrared, and ultrasonic methods may be used.
  • the (digital) pen sensor may be, for example, a part of the touch panel or may include a separate recognition sheet.
  • the key may include, for example, a physical button, an optical key, or a keypad.
  • the touch sensor of the display 140 and the input unit 150 may be implemented as a touch screen by forming a mutual layer structure.
  • the audio output unit 160 may output an audio signal.
  • the voice output of the other party received through the audio output unit 160 and the communication unit 110 may be output.
  • the audio output unit 160 may output audio data stored in the memory 120.
  • the audio output unit 160 may output various notification sounds and may output the voice of an artificial intelligence assistant.
  • the audio output unit 160 may include a receiver, a speaker, and a buzzer.
  • the memory 120 stores computer executable instructions, and when it is executed by the processor 130, a control method of the electronic device 100 described in the present disclosure may be performed.
  • the processor 130 may acquire at least one keyword in a call with a user of another electronic device while performing a call with a user of another electronic device through the communication unit 110 by executing computer-executable instructions.
  • the display 140 may be controlled to display the acquired keyword while the call is being performed.
  • the processor 130 collects the voice of the user input through the microphone and the voice of the counterpart of the other electronic device received through the communication unit 110, converts the collected voices into text, and may obtain keywords from the converted text.
  • the processor 130 may acquire at least one keyword in a call with a user of another electronic device using the learned artificial intelligence model.
  • the processor 130 may obtain at least one keyword in a call with a user of another electronic device based on a rule. For example, the processor 130 may acquire at least one keyword based on the frequency of words mentioned in a call with a user of another electronic device, the strength of the sound speaking the word, and the like.
  • an embodiment of the present disclosure for obtaining a keyword from a call will be described with reference to FIGS. 4 and 5.
  • FIG. 4 illustrates the content of a call between a user 10 of the electronic device 100 and a user 20 of another electronic device.
  • the processor 130 may acquire at least one keyword (indicated by underlines) from the content of the call. For example, the processor 130 may acquire nouns such as “tomorrow”, “7 o'clock”, “dongnam brother”, “gangnam station”, and “gourmet” as keywords.
  • the processor 130 may control the display 140 to display at least one UI element 51 to 58 each including at least one keyword obtained in a call.
  • the at least one UI element 51 to 58 may be circular, specifically, a bubble shape.
  • the shape of the UI element including the keyword is not particularly limited.
  • a UI element including a keyword may have a quadrangular shape.
  • the processor 130 may display the sizes of the at least one UI element 51 to 58 including the keywords differently according to the importance of the keyword contained therein. That is, a UI element containing a keyword with high importance may be expressed in a larger size than a UI element containing a keyword with low importance. For example, referring to FIG. 5, the UI elements 51, 54, 56, and 58, which include keywords with relatively high importance, are displayed larger than the UI elements 52, 53, 55, and 57, which contain keywords with relatively low importance.
  • the processor 130 may input keywords into the learned artificial intelligence model to obtain information about the importance of the keywords.
  • the processor 130 may determine the importance of a keyword according to a weight set differently for each category. For example, when weights are set such that the time category has weight 5, the person category weight 4, and the place category weight 2, keywords belonging to the time category have higher importance than keywords belonging to the place category.
  • the processor 130 may determine the importance according to the number of times the keyword is mentioned in the call. That is, the greater the number of mentions, the higher the importance.
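The importance calculation just described (a per-category weight, here the example weights of time 5, person 4, place 2, combined with the mention count, and mapped to the size of the bubble-shaped UI element) might look like the following sketch; the combination rule, size mapping, and numbers are illustrative assumptions:

```python
# Example category weights taken from the description; combining them with
# the mention count by multiplication is an illustrative assumption.
CATEGORY_WEIGHTS = {"time": 5, "person": 4, "place": 2}

def keyword_importance(category, mention_count):
    """Importance grows with the category weight and the mention count."""
    return CATEGORY_WEIGHTS.get(category, 1) * mention_count

def bubble_size(importance, base=40, step=8, max_size=120):
    """Map importance to a hypothetical bubble diameter in pixels (capped)."""
    return min(base + step * importance, max_size)

# A time keyword mentioned 3 times outranks a place keyword mentioned 3 times.
print(keyword_importance("time", 3), keyword_importance("place", 3))
print(bubble_size(keyword_importance("time", 3)))
```

A UI layer could then draw each keyword's UI element at the returned diameter, so the most important keywords are visually dominant on the call screen.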
  • the processor 130 may control to display only a certain number of keywords on the call screen 500. In this case, it is possible to control such that only a few keywords having high importance are displayed.
  • keywords displayed on the call screen 500 may disappear from the call screen 500 according to user manipulation.
  • another new keyword may be displayed on the call screen 500.
  • a new keyword having the next importance level may be displayed on the call screen 500.
  • when a user input for removing a UI element is received, the processor 130 removes the corresponding UI element from the call screen 500, and a UI element including the keyword having the next-highest importance may be displayed on the call screen 500.
  • the user input for removing the UI element may be, for example, a user's touch motion that selects the UI element and moves it out of the call screen 500. As another example, it may be a user input that double-touches a UI element. In addition, various user inputs are possible. According to an embodiment, when the UI elements 51 to 58 have a bubble shape, a graphic effect in which the bubble bursts when removed may be provided.
  • the processor 130 may arrange keywords according to the degree of association between the keywords. According to an embodiment of the present disclosure, the processor 130 determines the placement intervals of a plurality of keywords according to the intervals at which they are mentioned in the call with the user 20 of the other electronic device, and may control the display 140 to display the plurality of keywords at the determined placement intervals.
  • keywords mentioned together may be displayed adjacently. For example, if the call includes the phrase "How about Gangnam Station at 7:00 tomorrow?", the UI element 55 including the keyword “tomorrow”, the UI element 51 including the keyword “7 o'clock”, and the UI element 56 including the keyword “Gangnam Station” may be displayed at positions adjacent to each other.
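The placement-by-mention-interval idea could be sketched as a simple one-dimensional layout in which the on-screen gap between adjacent keywords is proportional to the time between their mentions; the linear mapping and clamping values are illustrative assumptions:

```python
def placement_offsets(mention_times, scale=2.0, min_gap=10, max_gap=80):
    """Return x-offsets so the gap between adjacent keywords is
    proportional to the interval between their mentions (clamped so
    elements neither overlap nor drift off screen)."""
    offsets = [0.0]
    for prev, cur in zip(mention_times, mention_times[1:]):
        gap = min(max(scale * (cur - prev), min_gap), max_gap)
        offsets.append(offsets[-1] + gap)
    return offsets

# "tomorrow", "7 o'clock", "Gangnam Station" uttered within one phrase
# land close together; a keyword mentioned much later lands far away.
print(placement_offsets([12.0, 13.0, 14.5]))
print(placement_offsets([0.0, 100.0]))
```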
  • the user 10 can intuitively grasp the topic of the conversation while viewing the keywords displayed on the call screen 500 during the call.
  • keywords obtained from calls can be used for various functions.
  • the processor 130 may apply a keyword selected by a user among at least one keyword displayed on the display 140 to an application of the electronic device 100 to perform a function related to the keyword in the application.
  • the processor 130 may provide a search result for a keyword selected by a user among at least one keyword displayed on the display 140.
  • the processor 130 may register a keyword selected by the user from at least one keyword displayed on the display 140 to the memo application.
  • various functions may be provided.
  • two or more keywords may be selected by various user operations, such as an operation of sequentially touching two or more keywords, an operation of simultaneously touching two or more keywords (multi-touch), a drag operation of two or more keywords, and the like.
  • FIG. 6 illustrates an example of a method of selecting two or more keywords.
  • the user may select the keywords “Gangnam Station” and “gourmet restaurant” by performing a drag operation that starts from the UI element 56 including the keyword “Gangnam Station” and ends at the UI element 58 including the keyword “gourmet restaurant”.
  • the processor 130 may determine the selected two or more keywords as one set. In the case of FIG. 6, the processor 130 may determine the keywords “Gangnam Station” and “Gourmet Restaurant” as one set.
  • the electronic device 100 may search or make a note with the determined set. The description of this embodiment will be described with reference to FIG. 7.
  • FIG. 7 is a view for explaining an embodiment of the present disclosure in which a memo or search is performed with a keyword selected by the user among keywords acquired in a call.
  • the processor 130 may determine two or more keywords selected by the user as one set and, as shown in FIG. 7, display the selected keywords on one area 710 of the call screen 500.
  • the processor 130 may control the display 140 to display the first UI element 720 corresponding to a memo function for the determined set and the second UI element 730 corresponding to a search function for the determined set.
  • the processor 130 may store the keyword set in the memo application.
  • the processor 130 may provide search results for the keyword set. For example, as illustrated in FIG. 8, search results 810 corresponding to “Gangnam Station” and “Gourmet Restaurant” may be provided. The search result 810 may also be provided on the call screen 500.
  • FIG. 10 shows a call screen 500 after performing a call as shown in FIG. 9. It can be seen that the keywords displayed on the call screen 500 of FIG. 10 are different from the keywords displayed on the call screen 500 of FIG. 5.
  • the processor 130 determines the three keywords selected by the drag operation as a set and, similarly, the three keywords selected by the user may be displayed on one area 710 of the call screen 500.
  • keywords in the region 710 may be arranged in a natural form according to the language structure.
  • the processor 130 may arrange keywords according to keyword categories (time, place, activity). For example, the processor 130 may place a keyword belonging to the place category (“tasting room”) after a keyword belonging to the time category (“7 o'clock”), and a keyword belonging to the activity category (“reservation”) after the keyword belonging to the place category.
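Arranging the selected keywords into a natural order by category, as in the "7 o'clock tasting room reservation" example, could be sketched as a sort over an assumed category order; the order and the sample data are illustrative:

```python
# Assumed natural language order: time -> place -> activity; unknown
# categories are pushed to the end.
CATEGORY_ORDER = {"time": 0, "place": 1, "activity": 2}

def arrange_by_category(keywords):
    """keywords: list of (text, category) pairs; returns texts sorted
    into the assumed natural order of categories."""
    return [text for text, cat in
            sorted(keywords, key=lambda kc: CATEGORY_ORDER.get(kc[1], 99))]

selected = [("reservation", "activity"), ("7 o'clock", "time"),
            ("tasting room", "place")]
print(" ".join(arrange_by_category(selected)))
```

Joining the sorted texts yields the phrase-like memo "7 o'clock tasting room reservation" from keywords selected in any order.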
  • the processor 130 may control the display 140 to display a first UI element 720 corresponding to a search function for the set of three keywords and a second UI element 730 corresponding to a memo function for the set of three keywords.
  • the processor 130 may store the keyword set in the memo application. For example, as illustrated in FIG. 12, the processor 130 may input and store the keyword set “7 o'clock tasting room reservation” in the memo application.
  • the processor 130 determines a recommended application among the plurality of applications of the electronic device 100 based on at least one keyword obtained in the call, and may control the display 140 to display a UI inquiring whether to perform, with the determined recommended application, a task related to the obtained at least one keyword.
  • the processor 130 may perform the task.
  • the processor 130 may determine an application corresponding to a keyword (or a category of the obtained keyword) as a recommended application, based on a mapping list between the keyword (or a category of the keyword) and the application.
  • the mapping list may be pre-stored in the memory 120.
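The pre-stored mapping list between keywords (or keyword categories) and applications might be sketched as a simple lookup; the mapping entries and names below are illustrative assumptions, not the disclosed list:

```python
# Hypothetical mapping list between keyword categories and applications,
# standing in for the list pre-stored in the memory 120.
APP_MAPPING = {
    "schedule": "schedule application",
    "time": "calendar application",
    "amount": "banking application",
    "contact": "contact application",
}

def recommend_app(keyword_categories):
    """Return the first application mapped to any obtained keyword
    category, or None when no category is mapped."""
    for category in keyword_categories:
        app = APP_MAPPING.get(category)
        if app is not None:
            return app
    return None

print(recommend_app(["place", "time"]))
print(recommend_app(["greeting"]))
```

The recommended application returned here is what the UI would name when asking the user whether to perform the related task.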
  • the processor 130 may display a UI inquiring whether to perform a task of registering a keyword related to the schedule among the obtained at least one keyword to the schedule application.
  • when a call with the user 20 of another electronic device is completed, the processor 130 may control the display 140 to display, immediately after the call is completed, a UI 1310 inquiring whether to perform a task of registering a keyword obtained in the call to the schedule application.
  • the processor 130 may perform a task of registering a schedule in the schedule application.
  • the processor 130 may control the display 140 to display, based on a keyword related to a remittance amount among the obtained at least one keyword, a UI asking whether to perform a remittance of that amount (e.g., "Are you sure you want to proceed with the transfer of 15,000 won to Kim Young-hee?").
  • the processor 130 may control the display 140 to display, based on a keyword related to contact transfer among the at least one keyword, a UI asking whether to perform a task of transmitting information about a second contact stored in the contact application to a first contact stored in the contact application (e.g., "Shall we send Kim Young-hee's contact information to Kim Cheol-soo?").
  • the processor 130 may control the display 140 to display, based on a keyword related to message transmission among the at least one keyword, a UI inquiring whether to perform a task of transmitting a message composed of the keywords.
  • personalized information may be provided based on at least one keyword acquired in a call.
  • An example of personalization information will be described with reference to FIG. 14.
  • FIG. 14 illustrates a personalization information providing screen according to an embodiment of the present disclosure.
  • the electronic device 100 may provide personalized information based on user profile information.
  • the user profile information may include, for example, a user name, age, gender, job, schedule, a user's movement path (path that the electronic device 100 traveled), and at least one keyword obtained from a call.
  • the electronic device 100 may provide information based on the user profile information.
  • the electronic device 100 may provide information based on keywords acquired in a call. For example, as illustrated in FIG. 14, when the keywords “evening”, “7 o'clock”, and “Gangnam Station” are obtained in a call with a user of another electronic device, information such as “recommended things to do near Gangnam Station at 7 o'clock in the evening” may be provided based on these keywords.
  • information that is more suited to the context of use may be provided based on the content of the call.
  • FIG. 15 is a view for explaining a service structure that provides various functions by obtaining keywords from a call according to an embodiment of the present disclosure.
  • a call voice (the user's voice and the voice of the user of the other electronic device) is input, pre-processing is performed on the input call voice, and feature vectors may be obtained. Then, by inputting the feature vectors into a trained artificial intelligence model, such as a named entity recognition (NER) model, proper nouns may be extracted from the call voice.
  • the electronic device 100 may select keywords from the extracted nouns. At this time, the importance of the keyword may be considered. Also, the electronic device 100 may provide the selected keywords to the user, perform a search with the keyword selected by the user, or perform a memo function. Also, the electronic device 100 may perform deep learning-based action recommendation (eg, schedule registration, remittance, contact transfer, message (SMS) transmission, etc.) using the selected keywords.
  • the functions performed in the above-described embodiments may be performed using an artificial intelligence model.
  • functions such as acquiring keywords, determining importance for keywords, arranging keywords, and determining recommended applications based on keywords may be performed using an artificial intelligence model.
  • FIG. 16 is a block diagram illustrating a processor for learning and using an artificial intelligence model according to an embodiment of the present disclosure.
  • the processor 1600 may include at least one of a learning unit 1610 and an analysis unit 1620.
  • the learning unit 1610 may generate an artificial intelligence model having a judgment criterion using the learning data.
  • the learning unit 1610 may generate and train an artificial intelligence model to acquire keywords from speech data, using speech data as learning data.
  • the analysis unit 1620 may input voice data into the artificial intelligence model to obtain keywords from the voice data.
  • the artificial intelligence model may include a speech to text (STT) module, a named entity recognition (NER) module, and a keyword selection module.
  • the STT (Speech to text) module converts the input voice into text.
  • the named entity recognition (NER) module receives the converted text and can extract proper nouns from the text.
  • the keyword selection module may select keywords by determining the importance of the extracted proper nouns. The importance may be determined according to the weight of the proper noun or of the category to which the proper noun belongs, the frequency of use of the proper noun, the interval between uses of the proper noun, and the volume of the user's voice uttering the proper noun.
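The three modules above (STT, NER, keyword selection) could be chained as in the following sketch, where stub functions stand in for the trained models; all names, weights, and sample data are illustrative assumptions:

```python
def stt(audio):
    """Speech-to-text stub: a real module would run a trained STT model.
    Here the transcript is assumed to be attached to the audio record."""
    return audio["transcript"]

def ner(text):
    """NER stub: a real module would tag proper nouns in the text; this
    one only knows two hypothetical entities."""
    known = {"Gangnam Station": "place", "tomorrow": "time"}
    return [(word, cat) for word, cat in known.items() if word in text]

def select(entities, weights={"time": 5, "person": 4, "place": 2}, top_k=5):
    """Keyword selection stub: rank extracted proper nouns by the weight
    of the category they belong to and keep the top_k."""
    ranked = sorted(entities, key=lambda e: weights.get(e[1], 1), reverse=True)
    return [word for word, _ in ranked[:top_k]]

call_audio = {"transcript": "How about Gangnam Station at 7:00 tomorrow?"}
print(select(ner(stt(call_audio))))
```

The chain mirrors the module order in the text: audio is transcribed, proper nouns are extracted from the transcript, and the highest-importance nouns become the displayed keywords.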
  • At least a portion of the learning unit 1610 and at least a portion of the analysis unit 1620 may be implemented as a software module or manufactured in the form of at least one hardware chip and mounted on an electronic device.
  • at least one of the learning unit 1610 and the analysis unit 1620 may be manufactured in the form of a dedicated hardware chip for artificial intelligence (AI), or may be manufactured as part of an existing general-purpose processor (e.g., a CPU or application processor) or a graphics-only processor (e.g., a GPU), and mounted on the electronic device 100 or on a server that provides an analysis result to the electronic device 100.
  • the dedicated hardware chip for artificial intelligence is a dedicated processor specialized in probability calculation, and has higher parallel processing performance than a conventional general-purpose processor, and thus can rapidly perform computational tasks in the field of artificial intelligence such as machine learning.
  • when the learning unit 1610 and the analysis unit 1620 are implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer-readable recording medium.
  • the software module may be provided by an operating system (OS) or may be provided by a predetermined application. Alternatively, some of the software modules may be provided by an operating system (OS), and the other may be provided by a predetermined application.
  • the learning unit 1610 and the analysis unit 1620 may be mounted on one electronic device, or may be mounted on separate electronic devices, respectively.
  • the processor 1600 of FIG. 16 may be the processor 130 of FIG. 2 or 3.
  • one of the learning unit 1610 and the analysis unit 1620 may be included in the electronic device 100, and the other may be included in an external server.
  • model information constructed by the learning unit 1610 may be provided to the analysis unit 1620 through wired or wireless communication, and data input to the analysis unit 1620 may be provided to the learning unit 1610 as additional learning data.
  • FIG. 17 is a block diagram of a learning unit 1610 according to an embodiment.
  • the learning unit 1610 may include a learning data acquisition unit 1610-1 and a model learning unit 1610-4.
  • the learning unit 1610 may further include at least one of a learning data pre-processing unit 1610-2, a learning data selection unit 1610-3, and a model evaluation unit 1610-5.
  • the learning data acquisition unit 1610-1 may acquire learning data to train a model for keyword acquisition from speech data.
  • the learning data may be data collected or tested by the learning unit 1610 or the manufacturer of the learning unit 1610.
  • learning data may include voice or text.
  • the model learning unit 1610-4 may use the training data to train the model to have a standard on how to understand, recognize, judge, and infer input data.
  • the model learning unit 1610-4 may train the model through supervised learning using at least a part of the training data as a judgment criterion.
  • the model learning unit 1610-4 may, for example, train the model through unsupervised learning, in which the model discovers judgment criteria for situation determination by learning by itself using the learning data without particular guidance.
  • the model learning unit 1610-4 may train the model, for example, through reinforcement learning using feedback on whether a result of situation determination according to learning is correct.
  • the model learning unit 1610-4 may train the model using, for example, a learning algorithm including error back-propagation or gradient descent.
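As an illustration of the learning algorithms mentioned above, the following is a minimal sketch of gradient-descent training on mean squared error. The linear model, learning rate, and epoch count are assumptions for the example, not the patent's configuration.

```python
def train_linear(xs, ys, lr=0.05, epochs=500):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of the mean squared error with respect to w and b.
        dw = sum((w * x + b - y) * x for x, y in zip(xs, ys)) * 2 / n
        db = sum((w * x + b - y) for x, y in zip(xs, ys)) * 2 / n
        # Step against the gradient to reduce the loss.
        w -= lr * dw
        b -= lr * db
    return w, b
```

Trained on points lying on y = 2x, the sketch converges close to w = 2, b = 0.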
  • when there are a plurality of pre-built models, the model learning unit 1610-4 may determine, as the model to train, a model whose basic learning data is highly relevant to the input learning data.
  • the basic learning data may be pre-classified for each type of data, and the model may be pre-built for each type of data.
  • the basic learning data may be pre-classified based on various criteria such as a region where learning data is generated, a time when learning data is generated, a size of learning data, a genre of learning data, and a creator of learning data.
  • the model learning unit 1610-4 may store the trained model.
  • the model learning unit 1610-4 may store the trained model in the memory 120 of the electronic device 100.
  • the model learning unit 1610-4 may store the trained model in a memory of a server connected to the electronic device 100 through a wired or wireless network.
  • the learning unit 1610 may further include a learning data pre-processing unit 1610-2 and a learning data selection unit 1610-3 in order to improve the processing capability of the model or to save the resources or time required to generate the model.
  • the learning data preprocessing unit 1610-2 may preprocess the acquired data so that the acquired data can be used for learning for situation determination.
  • the learning data preprocessing unit 1610-2 may process the acquired data in a preset format so that the model learning unit 1610-4 can use the acquired data for learning for situation determination.
  • the learning data selection unit 1610-3 may select data necessary for learning from data acquired by the learning data acquisition unit 1610-1 or data pre-processed by the learning data pre-processing unit 1610-2.
  • the selected learning data may be provided to the model learning unit 1610-4.
  • the learning data selection unit 1610-3 may select learning data necessary for learning from the acquired or preprocessed data according to a preset selection criterion. Further, the learning data selection unit 1610-3 may select learning data according to a selection criterion preset by the learning of the model learning unit 1610-4.
  • the learning unit 1610 may further include a model evaluation unit 1610-5 to improve the processing power of the model.
  • the model evaluation unit 1610-5 may input evaluation data into the model and, when the result output for the evaluation data does not satisfy a predetermined criterion, cause the model learning unit 1610-4 to train the model again.
  • the evaluation data may be predefined data for evaluating the model.
  • for example, the model evaluation unit 1610-5 may evaluate the trained model as failing to satisfy the predetermined criterion when the number or ratio of evaluation data whose analysis result is inaccurate, among the analysis results of the trained model for the evaluation data, exceeds a preset threshold.
  • the model evaluation unit 1610-5 may evaluate whether each of the trained models satisfies the predetermined criterion and determine a model that satisfies the predetermined criterion as the final model. In this case, when there are a plurality of models satisfying the predetermined criterion, the model evaluation unit 1610-5 may determine, as the final model, any one of them or a preset number of models in order of highest evaluation score.
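The evaluation step described above can be sketched as follows, assuming the criterion is a maximum error ratio on the evaluation data and that final-model selection ranks passing models by accuracy. The threshold value and the representation of a model as a callable are illustrative assumptions.

```python
def evaluate(model, eval_data, max_error_ratio=0.2):
    """Return True if the model's error ratio on the evaluation data
    stays within the predetermined criterion."""
    errors = sum(1 for x, label in eval_data if model(x) != label)
    return errors / len(eval_data) <= max_error_ratio

def select_final(models, eval_data, top_k=1):
    """Among models meeting the criterion, keep the top_k by accuracy."""
    passing = [m for m in models if evaluate(m, eval_data)]
    scored = sorted(passing,
                    key=lambda m: sum(m(x) == y for x, y in eval_data),
                    reverse=True)
    return scored[:top_k]
```

A model failing the criterion would be sent back to the model learning unit for retraining rather than selected.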
  • the analysis unit 1620 may include a data acquisition unit 1620-1 and an analysis result providing unit 1620-4.
  • the analysis unit 1620 may further include at least one of a data preprocessing unit 1620-2, a data selection unit 1620-3, and a model update unit 1620-5.
  • the data acquisition unit 1620-1 may acquire data necessary for analysis.
  • the analysis result providing unit 1620-4 may provide the result of inputting the data acquired by the data acquisition unit 1620-1 to the trained model.
  • the analysis result providing unit 1620-4 may provide analysis results according to the purpose of analyzing the data.
  • the analysis result providing unit 1620-4 may obtain analysis results by applying data selected by the data pre-processing unit 1620-2 or data selection unit 1620-3, which will be described later, to the recognition model as input values.
  • the results of the analysis can be determined by the model.
  • the analysis result providing unit 1620-4 may acquire at least one keyword by applying the voice data acquired by the data acquisition unit 1620-1 to the artificial intelligence model.
  • the analysis unit 1620 may further include a data pre-processing unit 1620-2 and a data selection unit 1620-3 in order to improve the analysis results of the model or to save the resources or time required to provide the analysis results.
  • the data preprocessing unit 1620-2 may preprocess the acquired data so that the acquired data can be used for situation determination.
  • the data pre-processing unit 1620-2 may process the acquired data in a predefined format so that the analysis result providing unit 1620-4 can use the acquired data.
  • the data selection unit 1620-3 may select data required for situation determination from data acquired by the data acquisition unit 1620-1 or data preprocessed by the data pre-processing unit 1620-2. The selected data may be provided to the analysis result providing unit 1620-4. The data selection unit 1620-3 may select some or all of the acquired or preprocessed data according to a preset selection criterion for determining the situation. Further, the data selection unit 1620-3 may select data according to a selection criterion preset by the learning of the model learning unit 1610-4.
  • the model updating unit 1620-5 may control the model to be updated based on the evaluation of the analysis results provided by the analysis result providing unit 1620-4. For example, the model updating unit 1620-5 may provide the analysis result of the analysis result providing unit 1620-4 to the model learning unit 1610-4, thereby requesting the model learning unit 1610-4 to further train or update the model.
  • FIG. 19 is a diagram for explaining an embodiment in which the above-described learning unit 1610 and the analysis unit 1620 are implemented in different devices.
  • the external server 200 may include a learning unit 1610, and the electronic device 100 may include an analysis unit 1620.
  • the electronic device 100 and the server 200 may communicate with each other on the network.
  • the analysis result providing unit 1620-4 of the electronic device 100 may obtain the analysis result by applying the data selected by the data selection unit 1620-3 to the model generated by the server 200.
  • the analysis result providing unit 1620-4 of the electronic device 100 may receive the model generated by the server 200 from the server 200, and may use the received model to obtain at least one keyword from a call between the user of the electronic device 100 and the other party.
  • FIG. 20 is a flowchart of a network system using an artificial intelligence model, according to various embodiments of the present disclosure.
  • a network system using an artificial intelligence model may include a first component 2010 and a second component 2020.
  • the first component 2010 may be the electronic device 100.
  • the second component 2020 may be a server in which the artificial intelligence model is stored.
  • the first component 2010 may be a general-purpose processor, and the second component 2020 may be an artificial intelligence-only processor.
  • the first component 2010 may be at least one application, and the second component 2020 may be an operating system (OS). That is, the second component 2020 may be a component that is more integrated or dedicated than the first component 2010, or that has a smaller delay, a performance advantage, or more resources, and that can therefore process the many operations required when updating or applying a model more quickly and effectively than the first component 2010.
  • An interface for transmitting / receiving data between the first component 2010 and the second component 2020 may be defined.
  • for example, an application program interface (API) having learning data to be applied to the model as an argument value (or an intermediate value or a transfer value) may be defined.
  • the API may be defined as a set of subroutines or functions that can be called from one protocol (e.g., a protocol defined in the electronic device 100) for some processing of another protocol (e.g., a protocol defined in an external server of the electronic device 100). That is, an environment in which an operation of another protocol can be performed in one protocol may be provided through the API.
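A hypothetical sketch of such an API — a callable whose argument value carries the data to be applied to the model — might look like the following. The function name, endpoint string, and acknowledgement format are assumptions for illustration, not defined by the patent.

```python
def submit_to_model(endpoint, samples):
    """Called from one component so that processing belonging to another
    component (here, a model) can be requested across the interface.
    `samples` is the learning or input data passed as the argument value."""
    if not samples:
        raise ValueError("no data supplied as the argument value")
    # A real implementation would serialize and transmit the samples over
    # wired or wireless communication; this sketch only returns an
    # acknowledgement envelope describing what was accepted.
    return {"endpoint": endpoint, "accepted": len(samples)}
```

A caller on the application side would use it as, e.g., `submit_to_model("keyword-model/v1", utterances)` and inspect the acknowledgement.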
  • a user of the first component 2010 can make a call with a user of another device (S2001).
  • the first component 2010 may transmit the content of the call between the user and the other party to the second component 2020 (S2003).
  • for example, the first component 2010 may transmit audio data including the call voice to the second component 2020, or may perform speech-to-text processing and transmit the converted text to the second component 2020.
  • the second component 2020 may acquire at least one keyword from the call using the learned artificial intelligence model (S2005).
  • the keyword may be a proper noun of high importance among the proper nouns included in the call.
  • the importance may be determined based on the frequency of the proper noun and the weight set for the category to which the proper noun belongs.
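One way the importance determination described above could be realized is sketched below, scoring each proper noun by its mention frequency multiplied by a category weight. The category names and weight values are assumed for the example; the patent does not specify them.

```python
from collections import Counter

# Assumed category weights; any category not listed falls back to 1.0.
CATEGORY_WEIGHTS = {"place": 1.5, "person": 1.2}

def rank_keywords(proper_nouns, categories):
    """Score each proper noun by (mention frequency) x (category weight)
    and return the nouns ordered from most to least important."""
    freq = Counter(proper_nouns)
    scores = {
        noun: count * CATEGORY_WEIGHTS.get(categories.get(noun), 1.0)
        for noun, count in freq.items()
    }
    return sorted(scores, key=scores.get, reverse=True)
```

For instance, a noun mentioned three times in a lower-weight category can still outrank one mentioned twice in a higher-weight category.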
  • the second component 2020 may transmit at least one keyword acquired in the call to the first component 2010 (S2007).
  • the first component 2010 may provide at least one keyword received (S2009).
  • the first component 2010 may provide the received keyword on the call screen, and perform various functions using the keyword in response to the user selecting the keyword.
  • a search function may be provided or a memo generation function may be provided.
  • the first component 2010 may determine the recommended application based on the keyword received from the second component 2020, and execute the recommended application in association with the keyword.
  • FIG. 21 is a flowchart illustrating a method of controlling an electronic device according to an embodiment of the present disclosure.
  • the flowchart illustrated in FIG. 21 may be configured with operations processed by the electronic device 100 described herein. Therefore, even if omitted, the description of the electronic device 100 may be applied to the flowchart shown in FIG. 21.
  • At least one keyword is acquired from the content of a call with the user of the other electronic device (S2110).
  • the obtained at least one keyword is displayed while the call is being performed (S2120).
  • keywords obtained may be displayed on the call screen.
  • the keyword may be automatically erased from the call screen according to an algorithm that considers the frequency with which the keyword is mentioned and the time of its last mention.
  • a keyword on the call screen may also be moved and removed by user interaction (e.g., a long click followed by sending it to the trash).
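One possible form of the automatic-erase algorithm described above combines mention frequency with exponential decay since the last mention. The half-life and threshold are assumed values, not specified by the patent.

```python
def retention_score(mention_count, seconds_since_last_mention, half_life=60.0):
    """Score that halves every `half_life` seconds after the last mention,
    scaled by how often the keyword was mentioned."""
    return mention_count * 0.5 ** (seconds_since_last_mention / half_life)

def should_erase(mention_count, seconds_since_last_mention, threshold=0.5):
    """Erase the keyword from the call screen once its score decays
    below the threshold."""
    return retention_score(mention_count, seconds_since_last_mention) < threshold
```

Under these assumed constants, a keyword mentioned once is erased about two minutes after its last mention, while a frequently mentioned keyword persists longer.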
  • a search result for a keyword selected by the user is provided from at least one keyword displayed (S2130).
  • a keyword selected by a user among at least one keyword displayed may be stored in the memo application.
  • the user can designate multiple keywords to search or memo. For example, after touching the first keyword, the user may designate further keywords to search or memo in order; when a long touch is performed on the last keyword, the electronic device 100 determines that keyword selection has ended, and may display a screen asking whether to search or take a note with the set of keywords from the first selected keyword to the last selected keyword.
  • the electronic device may predict a scenario that a user can perform after a call and display a UI inquiring whether the user performs a task using a keyword.
  • when the UI is selected, it is determined that the user agrees to perform the operation, and the operation of inputting the keyword into the application by using the API may be performed automatically.
  • personalized recommendation information can be actively provided from keywords extracted during a call without user intervention, thereby greatly reducing the series of processes performed by the user during and after a call and thus improving efficiency and user satisfaction.
  • embodiments described above may be implemented in software, hardware, or a combination thereof.
  • the embodiments described in the present disclosure may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, and other electrical units for performing functions.
  • in some cases, embodiments such as the procedures and functions described herein may be implemented as separate software modules. Each of the software modules may perform one or more of the functions and operations described herein.
  • Various embodiments of the present disclosure may be implemented with software including instructions that can be stored in a machine (eg, computer) readable storage media.
  • a machine is a device that calls a stored command from a storage medium and is operable according to the called command, and may include the electronic device 100 of the disclosed embodiments.
  • the processor may perform a function corresponding to the instruction directly or using other components under the control of the processor.
  • Instructions may include code generated or executed by a compiler or interpreter.
  • the instructions stored in the storage medium are executed by the processor, so that the above-described control method of the electronic device 100 can be executed.
  • for example, a control method may be executed that comprises obtaining at least one keyword from the content of a call with a user of another electronic device, displaying the at least one keyword while the call is being performed, and providing a search result for a keyword selected by the user among the displayed at least one keyword.
  • the device-readable storage medium may be provided in the form of a non-transitory storage medium.
  • here, 'non-transitory' means that the storage medium is tangible and does not include a signal, and does not distinguish whether data is stored semi-permanently or temporarily on the storage medium.
  • a method according to various embodiments disclosed in this document may be provided as being included in a computer program product.
  • Computer program products are commodities that can be traded between sellers and buyers.
  • a computer program product may be distributed online in the form of a machine-readable storage medium (e.g., a compact disc read only memory (CD-ROM)) or through an application store (e.g., Play Store™, App Store™).
  • in the case of online distribution, at least a part of the computer program product may be stored at least temporarily in, or temporarily generated from, a storage medium such as a memory of a manufacturer's server, an application store's server, or a relay server.
  • each component may be composed of a single entity or a plurality of entities, and some of the aforementioned sub-components may be omitted, or other sub-components may be further included in the various embodiments. Alternatively or additionally, some components (e.g., modules or programs) may be integrated into one entity that performs the same or similar functions performed by each corresponding component before integration. According to various embodiments, operations performed by a module, program, or other component may be executed sequentially, in parallel, repeatedly, or heuristically; at least some operations may be executed in a different order or omitted, or another operation may be added.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

A control method for an electronic device is disclosed. The control method comprises the steps of: during a call with a user of another electronic device by means of an electronic device, acquiring at least one keyword from the content of the call with the user of the other electronic device; displaying the acquired keyword(s) during the call; and providing a search result for a keyword selected by a user from among the displayed keyword(s). In addition, at least a part of the electronic device may use an artificial intelligence model acquired according to at least one of machine learning, a neural network, and a deep learning algorithm.
PCT/KR2019/011135 2018-09-06 2019-08-30 Dispositif électronique et son procédé de commande Ceased WO2020050554A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/255,605 US12243516B2 (en) 2018-09-06 2019-08-30 Electronic device and control method therefor
US18/321,146 US20230290343A1 (en) 2018-09-06 2023-05-22 Electronic device and control method therefor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020180106303A KR102608953B1 (ko) 2018-09-06 2018-09-06 전자 장치 및 그의 제어방법
KR10-2018-0106303 2018-09-06

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US17/255,605 A-371-Of-International US12243516B2 (en) 2018-09-06 2019-08-30 Electronic device and control method therefor
US18/321,146 Continuation US20230290343A1 (en) 2018-09-06 2023-05-22 Electronic device and control method therefor

Publications (1)

Publication Number Publication Date
WO2020050554A1 true WO2020050554A1 (fr) 2020-03-12

Family

ID=69722580

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/011135 Ceased WO2020050554A1 (fr) 2018-09-06 2019-08-30 Dispositif électronique et son procédé de commande

Country Status (3)

Country Link
US (2) US12243516B2 (fr)
KR (3) KR102608953B1 (fr)
WO (1) WO2020050554A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7196122B2 (ja) * 2020-02-18 2022-12-26 株式会社東芝 インタフェース提供装置、インタフェース提供方法およびプログラム
WO2025155016A1 (fr) * 2024-01-18 2025-07-24 삼성전자 주식회사 Procédé permettant de générer une interface utilisateur à l'aide d'une intelligence artificielle générative, et dispositif électronique associé
WO2025230123A1 (fr) * 2024-05-03 2025-11-06 삼성전자주식회사 Dispositif électronique comprenant une application de contact pour fournir des informations d'identification correspondant à un modèle d'intelligence artificielle, et son procédé de commande

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011205238A (ja) * 2010-03-24 2011-10-13 Ntt Docomo Inc 通信端末及び情報検索方法
KR20140109719A (ko) * 2013-03-06 2014-09-16 엘지전자 주식회사 이동 단말기 및 그것의 제어 방법
KR20150134087A (ko) * 2014-05-21 2015-12-01 삼성전자주식회사 전자 장치 및 전자 장치에서 데이터를 추천하는 방법
KR20160043842A (ko) * 2014-10-14 2016-04-22 엘지전자 주식회사 이동 단말기
KR20160114928A (ko) * 2015-03-25 2016-10-06 주식회사 카카오 인터랙션을 통해 키워드를 검색하는 단말, 서버 및 방법

Family Cites Families (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7672845B2 (en) 2004-06-22 2010-03-02 International Business Machines Corporation Method and system for keyword detection using voice-recognition
US7818117B2 (en) * 2007-06-20 2010-10-19 Amadeus S.A.S. System and method for integrating and displaying travel advices gathered from a plurality of reliable sources
KR101466027B1 (ko) 2008-04-30 2014-11-28 엘지전자 주식회사 이동 단말기 및 그 통화내용 관리 방법
US8755511B2 (en) * 2008-09-08 2014-06-17 Invoca, Inc. Methods and systems for processing and managing telephonic communications
US8266148B2 (en) * 2008-10-07 2012-09-11 Aumni Data, Inc. Method and system for business intelligence analytics on unstructured data
KR101528266B1 (ko) 2009-01-05 2015-06-11 삼성전자 주식회사 휴대 단말기 및 그의 응용프로그램 제공 방법
US8768308B2 (en) 2009-09-29 2014-07-01 Deutsche Telekom Ag Apparatus and method for creating and managing personal schedules via context-sensing and actuation
US8797380B2 (en) * 2010-04-30 2014-08-05 Microsoft Corporation Accelerated instant replay for co-present and distributed meetings
KR20120104662A (ko) * 2011-03-14 2012-09-24 주식회사 팬택 이동통신 단말기의 발신자 정보 제공 장치 및 방법
KR101798968B1 (ko) * 2011-03-30 2017-11-17 엘지전자 주식회사 이동 단말기 및 이동 단말기의 제어 방법
US9443518B1 (en) * 2011-08-31 2016-09-13 Google Inc. Text transcript generation from a communication session
US9148741B2 (en) * 2011-12-05 2015-09-29 Microsoft Technology Licensing, Llc Action generation based on voice data
KR101920019B1 (ko) 2012-01-18 2018-11-19 삼성전자 주식회사 휴대단말기의 통화 서비스 장치 및 방법
KR101379405B1 (ko) 2012-05-08 2014-03-28 김경서 키워드 음성 인식을 통해 관련 어플리케이션을 실행시키는 음성 통화 처리 방법 및 이를 실행하는 모바일 단말
US8522130B1 (en) * 2012-07-12 2013-08-27 Chegg, Inc. Creating notes in a multilayered HTML document
US9569083B2 (en) * 2012-12-12 2017-02-14 Adobe Systems Incorporated Predictive directional content queue
US20140164923A1 (en) * 2012-12-12 2014-06-12 Adobe Systems Incorporated Intelligent Adaptive Content Canvas
US8537983B1 (en) * 2013-03-08 2013-09-17 Noble Systems Corporation Multi-component viewing tool for contact center agents
US10235042B2 (en) * 2013-03-15 2019-03-19 Forbes Holten Norris, III Space optimizing micro keyboard method and apparatus
KR101455924B1 (ko) 2013-03-26 2014-11-03 주식회사 엘지유플러스 통화 기반 관심사 제공을 위한 단말, 서버 장치, 방법, 및 기록 매체
US10002345B2 (en) * 2014-09-26 2018-06-19 At&T Intellectual Property I, L.P. Conferencing auto agenda planner
KR101630069B1 (ko) 2014-10-07 2016-06-14 주식회사 엘지유플러스 통화 내용 기반 주요 키워드 정보 및 배경 이미지 제공을 위한 단말, 서버, 방법, 기록 매체, 및 컴퓨터 프로그램
KR20160043588A (ko) * 2014-10-13 2016-04-22 삼성전자주식회사 컨텐츠 서비스 제공 방법 및 장치
US10291766B2 (en) 2014-12-09 2019-05-14 Huawei Technologies Co., Ltd. Information processing method and apparatus
US10547571B2 (en) * 2015-05-06 2020-01-28 Kakao Corp. Message service providing method for message service linked to search service and message server and user terminal to perform the method
US10430070B2 (en) * 2015-07-13 2019-10-01 Sap Se Providing defined icons on a graphical user interface of a navigation system
KR101578346B1 (ko) 2015-11-04 2015-12-17 에스케이텔레콤 주식회사 통화 메모 생성 및 관리 방법 그리고 통화 메모 생성 및 관리 방법을 실행하는 프로그램을 기록한 컴퓨터 판독 가능한 기록매체
JP6805552B2 (ja) * 2016-05-26 2020-12-23 コニカミノルタ株式会社 情報処理装置及びプログラム
WO2018013804A1 (fr) 2016-07-15 2018-01-18 Circle River, Inc. Réponse automatique d'appel basée sur l'intelligence artificielle
US10957321B2 (en) * 2016-07-21 2021-03-23 Samsung Electronics Co., Ltd. Electronic device and control method thereof
US20180061396A1 (en) * 2016-08-24 2018-03-01 Knowles Electronics, Llc Methods and systems for keyword detection using keyword repetitions
WO2018117685A1 (fr) * 2016-12-23 2018-06-28 Samsung Electronics Co., Ltd. Système et procédé de fourniture d'une liste à faire d'un utilisateur
US10347244B2 (en) * 2017-04-21 2019-07-09 Go-Vivace Inc. Dialogue system incorporating unique speech to text conversion method for meaningful dialogue response
US12002010B2 (en) * 2017-06-02 2024-06-04 Apple Inc. Event extraction systems and methods
JP2020531942A (ja) * 2017-08-22 2020-11-05 サブプライ ソリューションズ エルティーディー. 再セグメント化されたオーディオコンテンツを提供するための方法およびシステム
US20190156826A1 (en) * 2017-11-18 2019-05-23 Cogi, Inc. Interactive representation of content for relevance detection and review
CN111813828B (zh) * 2020-06-30 2024-02-27 北京百度网讯科技有限公司 一种实体关系挖掘方法、装置、电子设备及存储介质
US11990125B2 (en) * 2021-06-21 2024-05-21 Kyndryl, Inc. Intent driven voice interface

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2024098997A (ja) * 2020-04-23 2024-07-24 富士フイルムビジネスイノベーション株式会社 情報処理装置及びプログラム
JP7704249B2 (ja) 2020-04-23 2025-07-08 富士フイルムビジネスイノベーション株式会社 情報処理装置及びプログラム
CN113079247A (zh) * 2021-03-24 2021-07-06 广州三星通信技术研究有限公司 关联服务提供方法和关联服务提供装置

Also Published As

Publication number Publication date
US12243516B2 (en) 2025-03-04
US20210264905A1 (en) 2021-08-26
KR20230169016A (ko) 2023-12-15
KR102795400B1 (ko) 2025-04-16
KR102608953B1 (ko) 2023-12-04
KR20250055477A (ko) 2025-04-24
US20230290343A1 (en) 2023-09-14
KR20200028089A (ko) 2020-03-16

Similar Documents

Publication Publication Date Title
WO2020067633A1 (fr) Dispositif électronique et procédé d'obtention d'informations émotionnelles
WO2020050554A1 (fr) Dispositif électronique et son procédé de commande
WO2019164140A1 (fr) Système pour traiter un énoncé d'utilisateur et son procédé de commande
WO2019203488A1 (fr) Dispositif électronique et procédé de commande de dispositif électronique associé
EP3659106A1 (fr) Dispositif électronique et procédé de changement d'agent conversationnel
WO2020040517A1 (fr) Appareil électronique et son procédé de commande
EP3811234A1 (fr) Dispositif électronique et procédé de commande du dispositif électronique
WO2018182351A1 (fr) Procédé destiné à fournir des informations et dispositif électronique prenant en charge ledit procédé
WO2019177344A1 (fr) Appareil électronique et son procédé de commande
WO2019231130A1 (fr) Dispositif électronique et son procédé de commande
WO2019039915A1 (fr) Procede d'activation d'un service de reconnaissance vocale et dispositif électronique le mettant en œuvre
WO2020085796A1 (fr) Dispositif électronique et procédé associé de commande de dispositif électronique
WO2017090954A1 (fr) Dispositif électronique et procédé de fonctionnement associé
WO2019146942A1 (fr) Appareil électronique et son procédé de commande
WO2018093229A1 (fr) Procédé et dispositif appliquant une intelligence artificielle afin d'envoyer de l'argent à l'aide d'une entrée vocale
EP3820369A1 (fr) Dispositif électronique et procédé d'obtention d'informations émotionnelles
WO2019164120A1 (fr) Dispositif électronique et procédé de commande associé
EP3523710A1 (fr) Appareil et procédé servant à fournir une phrase sur la base d'une entrée d'utilisateur
EP3698258A1 (fr) Appareil électronique et son procédé de commande
WO2019132410A1 (fr) Dispositif électronique et son procédé de commande
EP3533015A1 (fr) Procédé et dispositif appliquant une intelligence artificielle afin d'envoyer de l'argent à l'aide d'une entrée vocale
WO2019146970A1 (fr) Procédé de réponse à un énoncé d'utilisateur et dispositif électronique prenant en charge celui-ci
US20180137550A1 (en) Method and apparatus for providing product information
WO2018101671A1 (fr) Appareil et procédé servant à fournir une phrase sur la base d'une entrée d'utilisateur
WO2020096255A1 (fr) Appareil électronique et son procédé de commande

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19857450

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19857450

Country of ref document: EP

Kind code of ref document: A1

WWG Wipo information: grant in national office

Ref document number: 17255605

Country of ref document: US