WO2021006406A1 - Artificial intelligence-based air conditioner - Google Patents
Artificial intelligence-based air conditioner
- Publication number
- WO2021006406A1 (PCT/KR2019/008602)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- air conditioner
- learning model
- reinforcement learning
- processor
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/092—Reinforcement learning
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F11/00—Control or safety arrangements
- F24F11/50—Control or safety arrangements characterised by user interfaces or communication
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F11/00—Control or safety arrangements
- F24F11/70—Control systems characterised by their outputs; Constructional details thereof
- F24F11/80—Control systems characterised by their outputs; Constructional details thereof for controlling the temperature of the supplied air
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Definitions
- the present invention relates to an artificial intelligence-based air conditioner.
- artificial intelligence does not exist by itself, but is directly or indirectly related to other fields of computer science.
- attempts are being made very actively to introduce artificial intelligence elements in various fields of information technology and to use them to solve problems in those fields.
- one household may be composed of a plurality of members, and the optimized cooling method may be different for each member.
- An object of the present invention is to provide an air conditioner capable of providing personalized cooling by recognizing a user in the cooling space and appropriately balancing the individual's cooling preferences with power consumption.
- An air conditioner includes a compressor; a casing including a suction port and a discharge port; a fan motor installed inside the casing to blow air; a discharge vane movably provided at the discharge port; a vane motor that operates the discharge vane; a sensor that acquires data for identifying a member present in the indoor space; a memory that stores a learning result corresponding to the identified member; and a processor that identifies the member using the data for identifying the member, sets a target temperature corresponding to the identified member based on context information of the identified member and the learning result corresponding to the identified member, and updates the learning result using feedback on the target temperature.
- Since the target temperature can be set in consideration of the member's behavior information, there is an advantage in that the optimum target temperature can be set according to the user's current behavior.
- Since the target temperature can be set in consideration of the member's behavior information and environment information, there is an advantage in that the optimum target temperature can be set according to the user's current behavior and current environment.
- the reinforcement learning model may output a value that appropriately balances the first objective of quickly reaching and then maintaining the target temperature with the second objective of reducing power consumption.
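- As a hedged illustration only: such a trade-off is often expressed as a weighted reward, and the Python sketch below combines a temperature-error penalty with a power penalty. The weights w1 and w2 and the function name are assumptions for this example, not values or code from the patent.

```python
def reward(room_temp: float, target_temp: float, power_kw: float,
           w1: float = 1.0, w2: float = 0.5) -> float:
    """Combine two objectives into one reinforcement-learning reward.

    First objective : keep the room close to the target temperature.
    Second objective: keep instantaneous power consumption low.
    w1 and w2 are illustrative tuning knobs, not values from the patent.
    """
    temp_penalty = abs(room_temp - target_temp)   # penalize deviation from target
    power_penalty = power_kw                      # penalize energy use
    return -(w1 * temp_penalty + w2 * power_penalty)

# Example: room at 27 C, target 24 C, compressor drawing 1.2 kW
print(reward(27.0, 24.0, 1.2))  # -> -3.6
```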
- FIG. 1 shows an AI device according to an embodiment of the present invention.
- FIG. 2 shows an AI server according to an embodiment of the present invention.
- FIG. 3 shows an AI system according to an embodiment of the present invention.
- FIG. 4 is a block diagram of an artificial intelligence-based air conditioner according to an embodiment of the present invention.
- FIG. 5 is a flowchart illustrating a control operation of an air conditioner according to an embodiment of the present invention.
- FIG. 6 is a flowchart illustrating an operation of acquiring member information according to an embodiment of the present invention in more detail.
- FIG. 7 shows a state of acquiring member information using an image acquisition device according to an embodiment of the present invention.
- FIG. 8 shows an example of acquiring member data by interlocking the image acquisition device and the air conditioner according to the embodiment of FIG. 7.
- FIG. 9 is a flowchart illustrating an operation condition of a member according to an embodiment of the present invention.
- FIG. 10 shows an example of setting a reward for the operating condition of FIG. 9.
- FIG. 11 is a diagram illustrating an example of updating an operating condition through voice recognition according to a second embodiment of the present invention.
- FIG. 13 is an exploded perspective view of an air conditioner according to an embodiment of the present invention.
- FIG. 14 is a schematic block diagram of components included in an air conditioner according to an embodiment of the present invention.
- FIG. 15 is a diagram illustrating a method of operating an air conditioner according to an exemplary embodiment of the present invention.
- FIG. 16 is a diagram illustrating a reinforcement learning model corresponding to a member and a re-learning method using feedback.
- FIG. 17 is a diagram for explaining a reinforcement learning model used in the present invention.
- FIG. 18 is a diagram illustrating a method of providing feedback to a reinforcement learning model according to an embodiment of the present invention.
- FIG. 19 is a diagram illustrating a method of operating an air conditioner according to another embodiment of the present invention.
- FIG. 20 is a diagram illustrating a reinforcement learning model used in the present invention.
- FIG. 21 is a diagram illustrating a graph of temperature change and a graph of power consumption according to different set values.
- Machine learning refers to the field of research into methodologies that define and solve various problems dealt with in the field of artificial intelligence.
- Machine learning is also defined as an algorithm that improves the performance of a task through continuous experience.
- An artificial neural network is a model used in machine learning, and may refer to an overall model with problem-solving capability, composed of artificial neurons (nodes) that form a network through synaptic connections.
- the artificial neural network may be defined by a connection pattern between neurons of different layers, a learning process for updating model parameters, and an activation function for generating an output value.
- the artificial neural network may include an input layer, an output layer, and optionally one or more hidden layers. Each layer includes one or more neurons, and the artificial neural network may include synapses connecting the neurons. In an artificial neural network, each neuron may output the value of an activation function applied to the input signals, weights, and biases received through synapses.
- Model parameters refer to parameters determined through learning, and include weights of synaptic connections and biases of neurons.
- hyperparameters refer to parameters that must be set before learning in a machine learning algorithm, and include a learning rate, iteration count, mini-batch size, and initialization function.
- the purpose of learning artificial neural networks can be seen as determining model parameters that minimize the loss function.
- the loss function can be used as an index to determine an optimal model parameter in the learning process of the artificial neural network.
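- As a generic, hedged illustration of minimizing a loss function over model parameters (not the patent's specific training procedure), the Python sketch below runs plain gradient descent on a single weight and bias of a linear model; the data, learning rate, and iteration count are invented for the example.

```python
# Minimal gradient-descent sketch: fit y = w*x + b by minimizing mean squared error.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]          # the underlying relation here is y = 2x
w, b = 0.0, 0.0                    # model parameters (weight and bias)
lr = 0.01                          # hyperparameter: learning rate

for _ in range(2000):              # hyperparameter: iteration count
    # gradients of the mean-squared-error loss with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))    # approaches 2.0 and 0.0
```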
- Machine learning can be classified into supervised learning, unsupervised learning, and reinforcement learning according to the learning method.
- Supervised learning refers to a method of training an artificial neural network when a label for the training data is given, where the label means the correct answer (or result value) that the artificial neural network should infer when the training data is input.
- Unsupervised learning may refer to a method of training an artificial neural network in a state where a label for training data is not given.
- Reinforcement learning may mean a learning method in which an agent defined in a certain environment learns to select an action or action sequence that maximizes the cumulative reward in each state.
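- For illustration only, tabular Q-learning is one common way to realize such cumulative-reward maximization; the sketch below uses a hypothetical action set and hyperparameters and is not the specific reinforcement learning model described in this document.

```python
import random
from collections import defaultdict

# Tabular Q-learning sketch: the agent learns action values Q(s, a) that
# estimate the cumulative (discounted) reward obtainable from state s.
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.1            # learning rate, discount, exploration
ACTIONS = ["raise_temp", "lower_temp", "hold"]   # hypothetical action set

def choose_action(state):
    # epsilon-greedy: mostly exploit the best known action, sometimes explore
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # Q-learning update toward the one-step bootstrapped target
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```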
- machine learning implemented as a deep neural network (DNN) including a plurality of hidden layers is sometimes referred to as deep learning, and deep learning is a part of machine learning.
- machine learning is used in the sense including deep learning.
- a robot may refer to a machine that automatically processes or operates a task given by its own capabilities.
- a robot having a function of recognizing the environment and performing an operation by self-determining may be referred to as an intelligent robot.
- Robots can be classified into industrial, medical, household, military, etc. depending on the purpose or field of use.
- the robot may be provided with a driving unit including an actuator or a motor to perform various physical operations such as moving a robot joint.
- the movable robot includes a wheel, a brake, a propeller, etc. in a driving unit, and can travel on the ground or fly in the air through the driving unit.
- Autonomous driving refers to self-driving technology
- autonomous driving vehicle refers to a vehicle that is driven without a user's manipulation or with a user's minimal manipulation.
- Autonomous driving may include a technology that maintains the driving lane, a technology that automatically adjusts speed such as adaptive cruise control, a technology that automatically drives along a specified route, and a technology that automatically sets a route when a destination is set.
- the vehicle includes all of a vehicle having only an internal combustion engine, a hybrid vehicle including an internal combustion engine and an electric motor, and an electric vehicle including only an electric motor, and may include not only automobiles, but also trains and motorcycles.
- the autonomous vehicle can be viewed as a robot having an autonomous driving function.
- the extended reality collectively refers to Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR).
- VR technology provides only CG images of real world objects or backgrounds
- AR technology provides virtually created CG images on top of real object images
- MR technology is a computer graphics technology that mixes and combines virtual objects with the real world.
- MR technology is similar to AR technology in that it shows real and virtual objects together.
- However, in AR technology virtual objects are used in a form that complements real objects, whereas in MR technology virtual objects and real objects are used with equal characteristics.
- XR technology can be applied to HMDs (Head-Mounted Displays), HUDs (Head-Up Displays), mobile phones, tablet PCs, laptops, desktops, TVs, digital signage, etc., and a device to which XR technology is applied may be referred to as an XR device.
- FIG. 1 shows an AI device 100 according to an embodiment of the present invention.
- the AI device 100 may be implemented as a fixed or mobile device such as a TV, a projector, a mobile phone, a smartphone, a desktop computer, a notebook, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation device, a tablet PC, a wearable device, a set-top box (STB), a DMB receiver, a radio, a washing machine, a refrigerator, a digital signage, a robot, a vehicle, and the like.
- the terminal 100 includes a communication unit 110, an input unit 120, a learning processor 130, a sensing unit 140, an output unit 150, a memory 170, and a processor 180.
- the communication unit 110 may transmit and receive data with external devices such as other AI devices 100a to 100e or the AI server 200 using wired/wireless communication technology.
- the communication unit 110 may transmit and receive sensor information, a user input, a learning model, and a control signal with external devices.
- the communication technologies used by the communication unit 110 include Global System for Mobile communication (GSM), Code Division Multiple Access (CDMA), Long Term Evolution (LTE), 5G, Wireless LAN (WLAN), Wireless-Fidelity (Wi-Fi), Bluetooth™, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), ZigBee, and Near Field Communication (NFC).
- the input unit 120 may acquire various types of data.
- the input unit 120 may include a camera for inputting an image signal, a microphone for receiving an audio signal, a user input unit for receiving information from a user, and the like.
- When treating a camera or microphone as a sensor, the signal obtained from the camera or microphone may be referred to as sensing data or sensor information.
- the input unit 120 may acquire training data for model training and input data to be used when acquiring an output by using the training model.
- the input unit 120 may obtain unprocessed input data, and in this case, the processor 180 or the learning processor 130 may extract an input feature as preprocessing of the input data.
- the learning processor 130 may train a model composed of an artificial neural network using the training data.
- the learned artificial neural network may be referred to as a learning model.
- the learning model can be used to infer a result value for new input data other than the training data, and the inferred value can be used as a basis for a decision to perform a certain operation.
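- For illustration only (a toy model and decision threshold, not the device's actual learning model), inference on new input data followed by a decision might look like the following sketch:

```python
# Toy "learning model": weights assumed to have been learned elsewhere; only inference runs here.
WEIGHTS = [0.8, -0.3, 0.5]   # illustrative values, not learned from real data

def infer(features: list[float]) -> float:
    """Inference: weighted sum of the new input features (no training here)."""
    return sum(w * x for w, x in zip(WEIGHTS, features))

def decide(features: list[float]) -> str:
    """Use the inferred value as the basis for a decision to perform an operation."""
    return "start_cooling" if infer(features) > 1.0 else "stand_by"

print(decide([1.5, 0.2, 0.4]))  # inferred value 1.34 -> 'start_cooling'
```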
- the learning processor 130 may perform AI processing together with the learning processor 240 of the AI server 200.
- the learning processor 130 may include a memory integrated or implemented in the AI device 100.
- the learning processor 130 may be implemented using the memory 170, an external memory directly coupled to the AI device 100, or a memory maintained in an external device.
- the sensing unit 140 may acquire at least one of internal information of the AI device 100, information about the surrounding environment of the AI device 100, and user information by using various sensors.
- the sensors included in the sensing unit 140 include a proximity sensor, an illuminance sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, an optical sensor, a microphone, a lidar, a radar, and the like.
- the output unit 150 may generate output related to visual, auditory or tactile sense.
- the output unit 150 may include a display unit that outputs visual information, a speaker that outputs auditory information, and a haptic module that outputs tactile information.
- the memory 170 may store data supporting various functions of the AI device 100.
- the memory 170 may store input data, training data, a learning model, and a learning history acquired from the input unit 120.
- the processor 180 may determine at least one executable operation of the AI device 100 based on information determined or generated using a data analysis algorithm or a machine learning algorithm. Further, the processor 180 may perform the determined operation by controlling the components of the AI device 100.
- the processor 180 may request, search, receive, or utilize data from the learning processor 130 or the memory 170, and may control the components of the AI device 100 to execute a predicted operation or an operation determined to be desirable among the at least one executable operation.
- the processor 180 may generate a control signal for controlling the corresponding external device and transmit the generated control signal to the corresponding external device.
- the processor 180 may obtain intention information for a user input, and determine a user's requirement based on the obtained intention information.
- the processor 180 may obtain intention information corresponding to the user input by using at least one of a Speech To Text (STT) engine for converting a speech input into a character string or a Natural Language Processing (NLP) engine for obtaining intention information of natural language.
- At this time, at least one or more of the STT engine and the NLP engine may be composed of an artificial neural network, at least partially trained according to a machine learning algorithm.
- at least one of the STT engine or the NLP engine may be trained by the learning processor 130, trained by the learning processor 240 of the AI server 200, or trained by distributed processing thereof.
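- As a hedged sketch only, the intention-extraction step could be pictured as below; the transcribe() helper and the keyword rules are hypothetical stand-ins, not an engine described in this document.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    name: str
    slots: dict

def transcribe(audio_bytes: bytes) -> str:
    """Hypothetical STT stage: converts a speech waveform into a character string.
    A real system would call a trained speech-recognition model here."""
    raise NotImplementedError

def parse_intent(text: str) -> Intent:
    """Toy NLP stage: maps the transcript to intention information with keyword rules."""
    text = text.lower()
    if "cold" in text or "chilly" in text:
        return Intent("raise_temperature", {"delta_c": 1})
    if "hot" in text or "warm" in text:
        return Intent("lower_temperature", {"delta_c": 1})
    return Intent("unknown", {})

# Usage (with a real STT engine): intent = parse_intent(transcribe(mic_buffer))
print(parse_intent("It feels a bit cold in here"))  # Intent(name='raise_temperature', ...)
```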
- the processor 180 may collect history information including the operation content or the user's feedback on the operation of the AI device 100, store it in the memory 170 or the learning processor 130, or transmit it to an external device such as the AI server 200.
- the collected history information can be used to update the learning model.
- the processor 180 may control at least some of the components of the AI device 100 to drive an application program stored in the memory 170. Furthermore, the processor 180 may operate by combining two or more of the components included in the AI device 100 to drive the application program.
- FIG. 2 shows an AI server 200 according to an embodiment of the present invention.
- the AI server 200 may refer to a device that trains an artificial neural network using a machine learning algorithm or uses the learned artificial neural network.
- the AI server 200 may be composed of a plurality of servers to perform distributed processing, or may be defined as a 5G network.
- the AI server 200 may be included as a part of the AI device 100 to perform at least part of AI processing together.
- the AI server 200 may include a communication unit 210, a memory 230, a learning processor 240, and a processor 260.
- the communication unit 210 may transmit and receive data with an external device such as the AI device 100.
- the memory 230 may include a model storage unit 231.
- the model storage unit 231 may store a model (or artificial neural network, 231a) being trained or trained through the learning processor 240.
- the learning processor 240 may train the artificial neural network 231a using the training data.
- the learning model of the artificial neural network may be used while mounted on the AI server 200, or may be mounted on and used in an external device such as the AI device 100.
- the learning model can be implemented in hardware, software, or a combination of hardware and software. When part or all of the learning model is implemented in software, one or more instructions constituting the learning model may be stored in the memory 230.
- the processor 260 may infer a result value for new input data using the learning model, and generate a response or a control command based on the inferred result value.
- FIG. 3 shows an AI system 1 according to an embodiment of the present invention.
- In the AI system 1, at least one of an AI server 200, a robot 100a, an autonomous vehicle 100b, an XR device 100c, a smartphone 100d, or a home appliance 100e is connected to the cloud network 10.
- the robot 100a to which the AI technology is applied, the autonomous vehicle 100b, the XR device 100c, the smartphone 100d, or the home appliance 100e may be referred to as the AI devices 100a to 100e.
- the cloud network 10 may constitute a part of the cloud computing infrastructure or may mean a network that exists in the cloud computing infrastructure.
- the cloud network 10 may be configured using a 3G network, a 4G or Long Term Evolution (LTE) network, or a 5G network.
- the devices 100a to 100e and 200 constituting the AI system 1 may be connected to each other through the cloud network 10.
- the devices 100a to 100e and 200 may communicate with each other through a base station, or may communicate with each other directly without going through a base station.
- the AI server 200 may include a server that performs AI processing and a server that performs an operation on big data.
- the AI server 200 is connected through the cloud network 10 to at least one of the robot 100a, the autonomous vehicle 100b, the XR device 100c, the smartphone 100d, or the home appliance 100e, which are the AI devices constituting the AI system 1, and may assist at least part of the AI processing of the connected AI devices 100a to 100e.
- the AI server 200 may train an artificial neural network according to a machine learning algorithm in place of the AI devices 100a to 100e, and may directly store the learning model or transmit it to the AI devices 100a to 100e.
- the AI server 200 may receive input data from the AI devices 100a to 100e, infer a result value for the received input data using a learning model, generate a response or control command based on the inferred result value, and transmit it to the AI devices 100a to 100e.
- the AI devices 100a to 100e may directly infer a result value for input data using a learning model, and generate a response or a control command based on the inferred result value.
- the AI devices 100a to 100e to which the above-described technology is applied will be described.
- the AI devices 100a to 100e illustrated in FIG. 3 may be viewed as a specific example of the AI device 100 illustrated in FIG. 1.
- the robot 100a is applied with AI technology and may be implemented as a guide robot, a transport robot, a cleaning robot, a wearable robot, an entertainment robot, a pet robot, an unmanned flying robot, and the like.
- the robot 100a may include a robot control module for controlling an operation, and the robot control module may refer to a software module or a chip implementing the same as hardware.
- the robot 100a may acquire status information of the robot 100a using sensor information acquired from various types of sensors, detect (recognize) the surrounding environment and objects, generate map data, determine a movement path and a driving plan, determine a response to user interaction, or determine an action.
- the robot 100a may use sensor information obtained from at least one sensor from among a lidar, a radar, and a camera in order to determine a moving route and a driving plan.
- the robot 100a may perform the above operations using a learning model composed of at least one artificial neural network.
- the robot 100a may recognize a surrounding environment and an object using a learning model, and may determine an operation using the recognized surrounding environment information or object information.
- the learning model may be directly learned by the robot 100a or learned by an external device such as the AI server 200.
- the robot 100a may perform an operation by directly generating a result using the learning model, or may perform the operation by transmitting sensor information to an external device such as the AI server 200 and receiving the result generated accordingly.
- the robot 100a determines a movement path and a driving plan using at least one of map data, object information detected from sensor information, or object information acquired from an external device, and controls the driving unit so that the robot 100a travels according to the determined movement path and driving plan.
- the map data may include object identification information on various objects arranged in a space in which the robot 100a moves.
- the map data may include object identification information on fixed objects such as walls and doors and movable objects such as flower pots and desks.
- the object identification information may include a name, type, distance, and location.
- the robot 100a may perform an operation or run by controlling a driving unit based on a user's control/interaction.
- the robot 100a may acquire interaction intention information according to a user's motion or voice speech, and determine a response based on the obtained intention information to perform an operation.
- the autonomous vehicle 100b may be implemented as a mobile robot, vehicle, or unmanned aerial vehicle by applying AI technology.
- the autonomous driving vehicle 100b may include an autonomous driving control module for controlling an autonomous driving function, and the autonomous driving control module may refer to a software module or a chip implementing the same as hardware.
- the autonomous driving control module may be included inside as a configuration of the autonomous driving vehicle 100b, but may be configured as separate hardware and connected to the exterior of the autonomous driving vehicle 100b.
- the autonomous driving vehicle 100b may acquire state information of the autonomous driving vehicle 100b using sensor information obtained from various types of sensors, detect (recognize) the surrounding environment and objects, generate map data, determine a travel route and a driving plan, or determine an action.
- the autonomous vehicle 100b may use sensor information obtained from at least one sensor from among a lidar, a radar, and a camera, similar to the robot 100a, in order to determine a moving route and a driving plan.
- the autonomous vehicle 100b may recognize an environment or object in an area where the view is obscured or an area beyond a certain distance by receiving sensor information from external devices, or may receive information directly recognized by external devices.
- the autonomous vehicle 100b may perform the above operations using a learning model composed of at least one artificial neural network.
- the autonomous vehicle 100b may recognize a surrounding environment and an object using a learning model, and may determine a driving movement using the recognized surrounding environment information or object information.
- the learning model may be directly learned by the autonomous vehicle 100b or learned by an external device such as the AI server 200.
- the autonomous vehicle 100b may perform an operation by directly generating a result using the learning model, or may perform the operation by transmitting sensor information to an external device such as the AI server 200 and receiving the result generated accordingly.
- the autonomous vehicle 100b determines a movement path and a driving plan using at least one of map data, object information detected from sensor information, or object information acquired from an external device, and controls the driving unit so that the autonomous vehicle 100b travels according to the determined movement path and driving plan.
- the map data may include object identification information on various objects arranged in a space (eg, a road) in which the autonomous vehicle 100b travels.
- the map data may include object identification information on fixed objects such as street lights, rocks, and buildings, and movable objects such as vehicles and pedestrians.
- the object identification information may include a name, type, distance, and location.
- the autonomous vehicle 100b may perform an operation or drive by controlling a driving unit based on a user's control/interaction.
- the autonomous vehicle 100b may acquire interaction intention information according to a user's motion or voice speech, and determine a response based on the obtained intention information to perform the operation.
- the XR device 100c, to which AI technology is applied, may be implemented as an HMD (Head-Mounted Display), a HUD (Head-Up Display) provided in a vehicle, a TV, a mobile phone, a smartphone, a computer, a wearable device, a home appliance, a digital signage, a vehicle, a fixed robot, or a mobile robot.
- the XR device 100c may analyze 3D point cloud data or image data acquired through various sensors or from an external device to generate location data and attribute data for 3D points, thereby acquiring information on the surrounding space or real objects, and may render and output the corresponding XR object.
- the XR apparatus 100c may output an XR object including additional information on the recognized object in correspondence with the recognized object.
- the XR apparatus 100c may perform the above operations using a learning model composed of at least one artificial neural network.
- the XR device 100c may recognize a real object from 3D point cloud data or image data using a learning model, and may provide information corresponding to the recognized real object.
- the learning model may be directly learned by the XR device 100c or learned by an external device such as the AI server 200.
- the XR device 100c may perform an operation by directly generating a result using the learning model, or may perform the operation by transmitting sensor information to an external device such as the AI server 200 and receiving the result generated accordingly.
- the robot 100a may be implemented as a guide robot, a transport robot, a cleaning robot, a wearable robot, an entertainment robot, a pet robot, an unmanned flying robot, etc. by applying AI technology and autonomous driving technology.
- the robot 100a to which AI technology and autonomous driving technology are applied may refer to a robot having an autonomous driving function or a robot 100a interacting with the autonomous driving vehicle 100b.
- the robot 100a having an autonomous driving function may collectively refer to devices that move by themselves according to a given movement line without the user's control or by determining the movement line by themselves.
- the robot 100a having an autonomous driving function and the autonomous driving vehicle 100b may use a common sensing method to determine one or more of a moving route or a driving plan.
- the robot 100a having an autonomous driving function and the autonomous driving vehicle 100b may determine one or more of a movement route or a driving plan using information sensed through a lidar, a radar, and a camera.
- the robot 100a interacting with the autonomous driving vehicle 100b exists separately from the autonomous driving vehicle 100b and may be linked to the autonomous driving function inside the autonomous driving vehicle 100b, or may perform an operation associated with the user on board the autonomous driving vehicle 100b.
- the robot 100a interacting with the autonomous driving vehicle 100b may acquire sensor information on behalf of the autonomous driving vehicle 100b and provide it to the autonomous driving vehicle 100b, or may acquire sensor information, generate surrounding environment information or object information, and provide it to the autonomous driving vehicle 100b, thereby controlling or assisting the autonomous driving function of the autonomous driving vehicle 100b.
- the robot 100a interacting with the autonomous vehicle 100b may monitor a user in the autonomous vehicle 100b or control the function of the autonomous vehicle 100b through interaction with the user.
- the robot 100a may activate an autonomous driving function of the autonomous driving vehicle 100b or assist the control of a driving unit of the autonomous driving vehicle 100b.
- the functions of the autonomous vehicle 100b controlled by the robot 100a may include not only an autonomous driving function, but also functions provided by a navigation system or an audio system provided inside the autonomous driving vehicle 100b.
- the robot 100a interacting with the autonomous driving vehicle 100b may provide information or assist a function to the autonomous driving vehicle 100b from outside of the autonomous driving vehicle 100b.
- For example, like a smart traffic light, the robot 100a may provide traffic information including signal information to the autonomous vehicle 100b, or, like an automatic electric charger for an electric vehicle, may interact with the autonomous driving vehicle 100b to automatically connect an electric charger to the charging port.
- the robot 100a may be implemented as a guide robot, a transport robot, a cleaning robot, a wearable robot, an entertainment robot, a pet robot, an unmanned flying robot, a drone, etc., by applying AI technology and XR technology.
- the robot 100a to which the XR technology is applied may refer to a robot that is an object of control/interaction in an XR image.
- the robot 100a is distinguished from the XR device 100c and may be interlocked with each other.
- when the robot 100a, which is the object of control/interaction in the XR image, acquires sensor information from sensors including a camera, the robot 100a or the XR device 100c may generate an XR image based on the sensor information, and the XR device 100c may output the generated XR image.
- the robot 100a may operate based on a control signal input through the XR device 100c or a user's interaction.
- the user can check the XR image corresponding to the viewpoint of the remotely linked robot 100a through an external device such as the XR device 100c, and through the interaction can adjust the autonomous driving path of the robot 100a, control its motion or driving, or check information on surrounding objects.
- the autonomous vehicle 100b may be implemented as a mobile robot, a vehicle, or an unmanned aerial vehicle by applying AI technology and XR technology.
- the autonomous driving vehicle 100b to which the XR technology is applied may refer to an autonomous driving vehicle including a means for providing an XR image, or an autonomous driving vehicle that is an object of control/interaction within the XR image.
- the autonomous driving vehicle 100b, which is an object of control/interaction in the XR image, is distinguished from the XR device 100c and may be interlocked with it.
- the autonomous vehicle 100b provided with a means for providing an XR image may acquire sensor information from sensors including a camera, and may output an XR image generated based on the acquired sensor information.
- the autonomous vehicle 100b may provide an XR object corresponding to a real object or an object in a screen to the occupant by outputting an XR image with a HUD.
- the XR object when the XR object is output to the HUD, at least a part of the XR object may be output to overlap the actual object facing the occupant's gaze.
- the XR object when the XR object is output on a display provided inside the autonomous vehicle 100b, at least a part of the XR object may be output to overlap an object in the screen.
- the autonomous vehicle 100b may output XR objects corresponding to objects such as lanes, other vehicles, traffic lights, traffic signs, motorcycles, pedestrians, and buildings.
- when the autonomous driving vehicle 100b, which is the object of control/interaction in the XR image, acquires sensor information from sensors including a camera, the autonomous driving vehicle 100b or the XR device 100c may generate an XR image based on the sensor information, and the XR device 100c may output the generated XR image.
- the autonomous vehicle 100b may operate based on a control signal input through an external device such as the XR device 100c or a user's interaction.
- FIG. 4 is a block diagram of an artificial intelligence-based air conditioner 300 according to an embodiment of the present invention.
- the air conditioner 300 may include a communication unit 210, a processor 260, and a memory.
- the air conditioner 300 may be interlocked with the image acquisition device 400, the database 500, and the external server 600.
- the communication unit 210 may receive an image including member data capable of distinguishing the member from the image acquisition device 400 corresponding to a group including at least one member.
- the communication unit 210 may transmit and receive data with external devices such as other AI devices 100a to 100e or the AI server 200 using wired/wireless communication technology.
- the communication unit 110 may transmit and receive sensor information, a user input, a learning model, and a control signal with external devices.
- the communication unit 210 may receive an image of each member from the image acquisition device 400 and transmit processed member data and member information to the external server 600 or the database 500.
- the communication unit 210 can transmit and receive data using GSM (Global System for Mobile communication), CDMA (Code Division Multiple Access), LTE (Long Term Evolution), 5G, WLAN (Wireless LAN), Wi-Fi (Wireless-Fidelity), Bluetooth™, RFID (Radio Frequency Identification), IrDA (Infrared Data Association), ZigBee, and NFC (Near Field Communication).
- the processor 260 may recognize the member data from the received image, and obtain operation data including an operation condition of the air conditioner desired by the member based on the recognized member data.
- the processor 260 may store, in the database 500, member information obtained by processing the acquired member data together with the operation data, and may obtain and analyze, for each of at least one member, the operating conditions of the air conditioner desired by that member from the plurality of member information corresponding to the group stored in the database 500.
- the processor 260 may determine an operation and perform the operation based on information determined or generated using a data analysis algorithm or a machine learning algorithm. To this end, the processor 260 may request, search, receive, or utilize data from a learning processor or memory.
- the processor 260 may obtain intention information corresponding to a user input by using at least one of an STT (Speech To Text) engine for converting a speech input into a character string or a natural language processing (NLP) engine for obtaining intention information of natural language.
- the memory may store a member data recognition model, trained using machine learning or deep learning algorithms, for recognizing the member data from the image.
- the processor 260 may collect history information including the operation content or the user's feedback on the operation of the AI device 100, and transfer it to the external server 600 or the database 500 through the communication unit 210.
- the collected history information can be used to update the learning model.
- FIG. 5 is a flowchart illustrating a control operation of the air conditioner 300 according to an embodiment of the present invention.
- the control operation of the air conditioner 300 includes: accumulating and obtaining group member information (S10); obtaining and analyzing operating conditions for each member of the group (S20); and autonomously operating the air conditioner according to the operating conditions under the control of the processor 260 (S30).
- the air conditioner has the advantage of being able to construct data having a normalized distribution from the cumulatively acquired member information, and to subdivide operating conditions for each member.
- FIG. 6 is a flowchart illustrating an operation of acquiring member information according to an embodiment of the present invention in more detail.
- the operation of acquiring member information includes: receiving an image including member data from the image acquisition device 400 (S11); recognizing member data from the received image (S12); obtaining operation data of the member (S13); and storing member information including the recognized member data and the acquired operation data in the database 500 (S14).
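- As an illustrative sketch only (the function names and data fields below are hypothetical, not the patent's), steps S11 to S14 can be pictured as the following pipeline:

```python
def acquire_member_info(image_device, recognizer, database):
    """Hypothetical pipeline for S11-S14: receive an image, recognize the member,
    collect operation data, and store the combined member information."""
    image = image_device.capture()                         # S11: image containing member data
    member_data = recognizer.recognize(image)              # S12: who is this member?
    operation_data = collect_operation_data(member_data)   # S13: how do they use the air conditioner?
    member_info = {"member": member_data, "operation": operation_data}
    database.store(member_info)                            # S14: persist for later analysis
    return member_info

def collect_operation_data(member_data):
    # Placeholder: in practice this would aggregate the member's past remote-control
    # settings (temperature, wind speed, wind direction) from logs.
    return {"temperature": None, "wind_speed": None, "wind_direction": None}
```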
- Step S11 of receiving an image including member data from the image acquisition device 400 and step S12 of recognizing member data from the received image will be described with reference to FIGS. 7 and 8.
- the air conditioner is operated according to operation conditions according to operation data set for each member, thereby increasing the convenience of individual members.
- FIG. 7 shows a state of acquiring member information using the image acquisition device 400 according to an embodiment of the present invention.
- FIG. 8 shows an example of acquiring member data by interlocking the image acquisition device 400 and the air conditioner 300 according to the embodiment of FIG. 7.
- the image acquisition device 400 is involved in the step S11 of receiving an image including member data from the image acquisition device 400.
- the image acquisition device 400 may continuously take pictures along a moving line in which the member moves, and store various angles of the same person.
- the processor 260 may extract the member by analyzing the image of the member obtained from the image acquisition device 400.
- the processor 260 inputs the image received from the image acquisition device 400 as input data of the member data recognition model, obtains the member data recognized from the member data recognition model, and receives the image and the member Based on the data, it is possible to extract who the member is.
- the member data recognition model can recognize a person by detecting a face in a still image or video obtained from the image acquisition device 400 and labeling the detected face with a unique value, and can perform clustering among the recognized persons through a relationship frequency matrix.
- the member data recognition model may estimate a relationship between a plurality of people using the relationship frequency matrix and clustered data, and may repeatedly perform the above-described process to increase the accuracy of the member data.
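- Purely as a hedged illustration of the clustering idea (the co-occurrence counting, the threshold, and the helper names are assumptions, not the patent's algorithm), a relationship frequency matrix can be built from how often labeled faces appear in the same frames and then thresholded into groups:

```python
from collections import defaultdict
from itertools import combinations

def build_relationship_matrix(frames):
    """frames: list of sets of face labels seen together in one frame.
    Returns a co-occurrence (relationship frequency) matrix as a nested dict."""
    matrix = defaultdict(lambda: defaultdict(int))
    for faces in frames:
        for a, b in combinations(sorted(faces), 2):
            matrix[a][b] += 1
            matrix[b][a] += 1
    return matrix

def cluster_members(matrix, min_count=3):
    """Greedy grouping: labels whose co-occurrence reaches min_count join the same cluster."""
    clusters, assigned = [], {}
    for a in matrix:
        for b, count in matrix[a].items():
            if count < min_count:
                continue
            ca, cb = assigned.get(a), assigned.get(b)
            if ca is None and cb is None:
                clusters.append({a, b})
                assigned[a] = assigned[b] = len(clusters) - 1
            elif ca is not None and cb is None:
                clusters[ca].add(b); assigned[b] = ca
            elif cb is not None and ca is None:
                clusters[cb].add(a); assigned[a] = cb
    return clusters

frames = [{"face_1", "face_2"}, {"face_1", "face_2"}, {"face_1", "face_2"}, {"face_3"}]
print(cluster_members(build_relationship_matrix(frames)))  # [{'face_1', 'face_2'}]
```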
- the member data may further include information on at least one of a face shape, a body shape, or a voice that can distinguish each member, and the face shape or body shape may be extracted from still images or video collected at one or more angles according to the movement of the member.
- the member data may further include biometric information and environmental information; the biometric information includes at least one of pulse information, body temperature information, or facial expression information of the member, and the environmental information may include at least one of location information of the member, time information, and room temperature information.
- the member information can be divided into member data and operation data
- the member data is information for identifying who the member is
- the operation data refers to data on how the member normally operates the air conditioner.
- the operation data may include an operation condition, and the operation condition will be described later in FIG. 9.
- the step (S14) of storing member information including the recognized member data and the acquired operation data in the database 500 may be performed through the memory and the communication unit 210.
- Member information, in which the member's operation method is determined by matching member data and operation data, can be stored in or retrieved from the memory, and can be transmitted to and received from the external server 600 or the database 500 through the communication unit 210.
- the step of acquiring and analyzing operation conditions for each member of the group (S20) is a step of acquiring and analyzing an air conditioner setting method included in the operation data.
- the operating condition can be obtained by the processor 260 learning the member's usage pattern; the operation data can be constructed by inferring the items whose settings the user usually changes, and from this the operating condition for each item can be extracted.
- operation data and operation conditions according to member data can be extracted, and each reward value can be inferred.
- the operation data refers to data on categories such as temperature, wind speed, and wind direction that are mainly set by a member
- the operation condition refers to corresponding values
- the reward may be defined as a value corresponding to each category of the operation condition.
- the reward is specified with concrete values such as 20°C or 1 m/s.
- Example operation data and operating conditions for each member:
- Member A — operation data: temperature, wind speed, wind direction; operating condition: 20°C, 1 m/s, 2° rotation
- Member B — operation data: temperature, wind speed; operating condition: 23°C, 2 m/s
- Member C — operation data: wind direction, perceived temperature; operating condition: 15° rotation, 27°C
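- To make this structure concrete, the sketch below shows one hedged way such per-member operation data and operating-condition values could be held in memory; the dictionary layout and field names are assumptions for illustration only.

```python
# Hypothetical per-member store: which categories each member usually adjusts and
# the concrete operating-condition values learned for those categories.
member_operating_conditions = {
    "member_a": {"temperature_c": 20, "wind_speed_ms": 1, "vane_rotation_deg": 2},
    "member_b": {"temperature_c": 23, "wind_speed_ms": 2},
    "member_c": {"vane_rotation_deg": 15, "perceived_temperature_c": 27},
}

def operating_condition_for(member_id: str, category: str, default=None):
    """Look up the learned value of one operating-condition category for one member."""
    return member_operating_conditions.get(member_id, {}).get(category, default)

print(operating_condition_for("member_b", "temperature_c"))  # 23
```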
- FIG. 9 shows a flowchart for extracting a member's operating conditions according to an embodiment of the present invention.
- member data is extracted (S22), and it is determined whether the member is an existing member (S23); for an existing member, the process proceeds to extracting operation data (S26).
- for a new member, a new identification number may be assigned to the member (S24), and a process of accumulating operation data (S25) may additionally be performed before the operation data is extracted.
- FIG. 10 shows an example of setting a reward for the operating condition of FIG. 9.
- the operation data may be obtained by analyzing the operation pattern of a user operating the remote control of the air conditioner, and the operation data may be labeled by giving a reward to each operation value according to the operation type of the air conditioner desired by the user.
- member 1 is the remote control user; the values input through the remote control are applied for member 1, and the remaining members 2 and 3 cannot make inputs through the remote control.
- For example, the father of the house may want a cool and comfortable wind (23 degrees), while the mother may want to maintain a moderate temperature (26 degrees) because she is concerned that a 5-year-old child will catch a cold from the air conditioning.
- In this case, the father's comfort is increased, and for the child and mother the strength of the air conditioner can be adjusted through an indirect wind direction so that the perceived temperature stays at 26 degrees.
- the air conditioner has the convenience of automatically updating the operating conditions of each member without the user's setting operation by updating the reward through an image acquisition device or voice recognition.
- FIG. 11 is a diagram illustrating an example of updating an operating condition through voice recognition according to a second embodiment of the present invention.
- an operation condition may be updated by recognizing a member's voice.
- the process of updating the operating conditions is performed through a voice recognition step (S41), a voice analysis step (S42), and a step of determining whether the voice is related to member data (S43); then a member is extracted (S45) and the temperature is increased (S46), so that the operating conditions can be updated.
- the operation data or operation condition may be updated in real time according to the member's request in any suitable way. For example, if member 2 (the mother) says “they are cold” to the air conditioner, the rewards for members identified as less than 130 cm tall in the clustering result can be updated and reflected in the air conditioner's temperature control.
- an update to the reward can be implemented through additional voice recognition.
- a process of checking whether a specific reward is input may be added, and accordingly, a process of resetting the reward according to each operating condition may be included.
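- As a hedged sketch of this voice-driven reward update (the height rule, the reward field, and the helper name are hypothetical examples, not the patent's exact logic):

```python
def update_rewards_from_utterance(utterance: str, members: dict) -> None:
    """If a guardian reports that the small members are cold, nudge their target
    temperature reward upward. `members` maps member_id -> profile dict with
    'height_cm' and 'reward' (target temperature in degrees Celsius)."""
    if "cold" in utterance.lower():
        for member_id, profile in members.items():
            if profile.get("height_cm", 999) < 130:   # clustering result: small children
                profile["reward"] += 1                # raise the target temperature by 1 degree

members = {
    "member_2": {"height_cm": 160, "reward": 26},
    "member_3": {"height_cm": 110, "reward": 26},
}
update_rewards_from_utterance("they are cold", members)
print(members["member_3"]["reward"])  # 27 (only the child's reward was updated)
```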
- the air conditioner 300 is connected to an external server 600 that stores consumed power, and the external server 600 may calculate, as an electricity charge, the amount of power consumed during a preset period for each member or household stored in the database 500, and transmit it to the processor 260 in real time.
- the power consumption and the electricity bill for the next month may be estimated through the processor 260 based on the power consumption and the electricity bill in January and February.
- the processor 260 may infer prediction information on the amount of electricity to be used in the future based on the electricity rate.
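- As a hedged illustration of such a prediction (simple linear extrapolation over past months; the usage figures and tariff are invented for the example, and the patent does not specify this method):

```python
def predict_next_month_kwh(monthly_kwh: list[float]) -> float:
    """Naive forecast: extend the average month-over-month change in past usage."""
    if len(monthly_kwh) < 2:
        return monthly_kwh[-1]
    deltas = [b - a for a, b in zip(monthly_kwh, monthly_kwh[1:])]
    return monthly_kwh[-1] + sum(deltas) / len(deltas)

RATE_PER_KWH = 0.12  # illustrative tariff, not an actual electricity rate

january, february = 320.0, 340.0            # past power consumption in kWh
march_kwh = predict_next_month_kwh([january, february])
print(march_kwh, march_kwh * RATE_PER_KWH)  # 360.0 kWh, estimated charge 43.2
```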
- the air conditioner has the advantage of saving power by learning about the members on its own and controlling its operation in an optimal state.
- FIG. 13 is an exploded perspective view of an air conditioner according to an embodiment of the present invention.
- FIG. 14 is a schematic block diagram of components included in an air conditioner according to an embodiment of the present invention.
- In FIG. 13, various embodiments of the present invention are described by taking a ceiling-type air conditioner as an example, but the embodiments of the present invention are not limited to a ceiling-type air conditioner and may be applied to various types of air conditioners such as stand-type and wall-mounted types.
- the external shape of the air conditioner shown in FIG. 13 is only an example for convenience of description, and the external shape of the air conditioner according to the exemplary embodiment of the present invention is not limited thereto.
- an air conditioner 700 may include a casing.
- the casing may include a body casing 20 and a front panel 781.
- the casing is fixed to the ceiling or wall, and can suck in outside air and discharge heat exchanged air.
- the main body casing 20 may include a fixing member 201 for fixing the main body casing 20.
- the fixing member 201 may be fixed by a fastening member such as a bolt (not shown), for example.
- a plurality of parts may be disposed in the inner space of the main body casing 20.
- the plurality of components may include a heat exchanger (not shown) for heat exchange of air sucked from the outside, and a blower fan (not shown) for discharging heat exchanged air from the heat exchanger to the outside.
- the air conditioner 700 may further include a front panel 781 that can be coupled to a lower side of the body casing 20.
- a front panel 781 may be positioned at approximately the height of the ceiling and exposed to the outside.
- the air conditioner 700 may have an overall appearance formed by the body casing 20 and the front panel 781.
- the casing may include a suction port for inhaling indoor air and a discharge port for discharging the air heat exchanged from the air conditioner 700 to the room.
- the front panel 781 may further include a suction grill 104 that prevents foreign substances contained in the air sucked through the suction port from entering.
- the suction grill 104 may be detachably coupled from the suction port.
- the suction port may be formed long in a horizontal direction at a front portion of the front panel 781, and the discharge port may be formed long in a horizontal direction at a rear portion of the front panel 781.
- the casing may further include a discharge vane 102 that is movably provided to the discharge port.
- the discharge vane 102 may adjust the amount or direction of air discharged through the discharge port.
- the air conditioner 700 may include a vane motor that operates the discharge vane.
- the discharge vane 102 may be provided to be rotatable in an upward direction and a left-right direction around a hinge axis (not shown) of the discharge vane 102.
- the discharge vane 102 receives a driving force from the vane motor and rotates in the vertical and horizontal directions, thereby adjusting the wind direction.
- air in the indoor space may be sucked into the casing through an inlet. And the air sucked into the casing can be heat exchanged in the heat exchanger. In addition, the air passing through the heat exchanger may be discharged to the indoor space through the discharge port of the casing by the blowing fan.
- the main body casing 20 may include a suction hole 203 formed to communicate with a suction port formed in the front of the front panel 781.
- the air conditioner 700 may include a filter assembly 30 disposed in the suction hole 203.
- the filter assembly 30 may filter foreign substances such as dust contained in the air flowing into the air conditioner 700 to minimize foreign substances contained in the air discharged through the discharge port.
- the air conditioner includes a compressor that receives and compresses a refrigerant, a condenser, an expansion device, and an evaporator, and a refrigeration cycle that performs compression, condensation, expansion, and evaporation of the refrigerant is driven to cool or heat the indoor space.
- FIG. 14 is a schematic block diagram of components included in an air conditioner according to an embodiment of the present invention.
- the air conditioner 700 includes a communication unit 710, an input unit 720, a sensor unit 730, a compressor 740, a fan motor 750, an output unit 760, a memory 770, a processor 780, and a power supply unit 790.
- the components shown in FIG. 14 are not essential to implementing the air conditioner, so the air conditioner described herein may have more or fewer components than those listed above.
- the communication unit 710 may include one or more modules that enable wired or wireless communication between the air conditioner 700 and an external device (e.g., a mobile device such as a moving agent, a smartphone, or a tablet PC, or a fixed device such as a desktop computer), or between the air conditioner 700 and an external server.
- the communication unit 710 may include one or more modules that connect the air conditioner 700 to one or more networks.
- the communication unit 710 may include at least one of a wireless Internet module and a short-range communication module.
- the wireless Internet module refers to a module for wireless Internet access, and may be built-in or external to the air conditioner 700.
- the wireless Internet module is configured to transmit and receive wireless signals in a communication network according to wireless Internet technologies.
- wireless Internet technologies include WLAN (Wireless LAN), Wi-Fi (Wireless Fidelity), Wi-Fi Direct, and DLNA (Digital Living Network Alliance).
- the short-range communication module is for short-range communication and can support communication using at least one of technologies such as Bluetooth™, Infrared Data Association (IrDA), ZigBee, and Near Field Communication (NFC).
- the short-range communication module may support wireless communication between the air conditioner 700 and an external device through wireless area networks.
- the wireless area network may be a short-range wireless personal area network (WPAN).
- the communication unit 710 may communicate with the moving agent through various communication methods described above under the control of the processor 780.
- the input unit 720 may include a touch key, a mechanical key, a dial key, or the like for receiving information or commands from a user. Depending on the embodiment, the input unit 720 may be understood as a concept encompassing an interface unit receiving information or a command from a separate remote control device.
- the input unit 720 is for receiving information from a user, and when information is input through the input unit 720, the processor 780 may control the operation of the air conditioner 700 so as to correspond to the input information.
- the input unit 720 may include a mechanical input means (or a mechanical key, for example, a button located on the front, rear, or side of the air conditioner 700, a dome switch, a jog wheel, a jog switch, etc.) and a touch-type input means.
- as an example, the touch-type input means may include a virtual key, a soft key, or a visual key displayed on a touch screen through software processing, or a touch key disposed on a portion other than the touch screen.
- the virtual key or visual key may be displayed on the touch screen in various forms, for example, a graphic, text, an icon, a video, or a combination thereof.
- the sensor unit 730 may include one or more sensors for sensing at least one of user information and surrounding environment information surrounding the air conditioner 700.
- the sensor unit 730 may include a temperature sensor 732 for sensing the temperature of a space in which the air conditioner 700 is installed, and a humidity sensor 734 for sensing the humidity of the space.
- the output unit 760 is for generating output related to visual and audio, and may include at least one of a display unit 762 and an audio output unit 764 (for example, a speaker).
- the display unit 762 may form a layer structure with the touch sensor or be integrally formed, thereby implementing a touch screen.
- a touch screen may function as an input unit 720 that provides an input interface between the air conditioner 700 and a user, and may provide an output interface between the air conditioner 700 and the user.
- the display unit 762 may display various types of information related to the operation of the air conditioner 700.
- the display unit 762 may display information on the set temperature, air volume, wind direction, current indoor temperature, and humidity of the air conditioner 700, as well as information on operation modes such as a power saving mode, a normal mode, and a sleep mode.
- the sound output unit 764 may output an audio signal to notify the occurrence of an event of the air conditioner 700.
- Examples of events occurring in the air conditioner 700 may include an alarm, power on/off, an error occurrence, and an operation mode change.
- the memory 770 stores data supporting various functions of the air conditioner 700.
- the memory 770 may store various data and commands for operating the air conditioner 700.
- the memory 770 may include at least one type of storage medium among a flash memory type, a hard disk type, a solid state disk (SSD) type, a silicon disk drive (SDD) type, a multimedia card micro type, a card-type memory (e.g., SD or XD memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, and an optical disk.
- the processor 780 typically controls the overall operation of the air conditioner 700.
- the processor 780 may provide or process appropriate information or functions to a user by processing signals, data, information, etc. input or output through the above-described components.
- the processor 780 may adjust at least one of the compressor 740, the fan motor 750, and the vane motor based on an action output from the reinforcement learning model.
- the action output from the reinforcement learning model may include at least one of a set temperature, an air volume, and a wind direction.
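- for illustration only, the mapping from such an action to the three actuators might look like the following minimal Python sketch; the Action dataclass, the stub actuator classes, and their method names are assumptions made for this example and are not part of the present disclosure:

```python
from dataclasses import dataclass
from typing import Optional

class Compressor:
    def set_target(self, temp_c: float) -> None:
        print(f"compressor: regulate toward set temperature {temp_c:.1f} C")

class FanMotor:
    def set_speed(self, level: int) -> None:
        print(f"fan motor: speed level {level}")

class VaneMotor:
    def set_angle(self, degrees: float) -> None:
        print(f"vane motor: rotate vane to {degrees:.0f} deg")

@dataclass
class Action:
    # Any field may be absent; only the fields that are present are applied.
    set_temperature: Optional[float] = None   # degrees Celsius
    air_volume: Optional[int] = None          # fan speed level, e.g. 1..5
    wind_direction: Optional[float] = None    # vane angle in degrees

def apply_action(action: Action, compressor: Compressor,
                 fan: FanMotor, vane: VaneMotor) -> None:
    """Map each field of the RL action onto the corresponding actuator."""
    if action.set_temperature is not None:
        compressor.set_target(action.set_temperature)   # set temperature -> compressor
    if action.air_volume is not None:
        fan.set_speed(action.air_volume)                # air volume -> fan motor
    if action.wind_direction is not None:
        vane.set_angle(action.wind_direction)           # wind direction -> vane motor

# Example: an action carrying only a set temperature and an air volume.
apply_action(Action(set_temperature=24.0, air_volume=3),
             Compressor(), FanMotor(), VaneMotor())
```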
- the fan motor 750 may be installed inside the casing and may blow air by rotating a blowing fan (not shown) provided in the casing.
- the blowing fan rotates, and as the blowing fan rotates, air may be sucked through the inlet and air may be discharged through the discharge port.
- the processor 780 may control the operation of the compressor 740 based on a set temperature among actions output from the reinforcement learning model.
- in addition, as the operation of the compressor 740 is controlled, the set temperature of the air conditioner 700 may be adjusted.
- the processor 780 may control the operation of the fan motor 750 based on the air volume among the actions output from the reinforcement learning model. In addition, as the operation of the fan motor 750 is controlled, the air volume of the air conditioner 700 may be adjusted.
- the processor 780 may control the operation of the vane motor based on the wind direction among the actions output from the reinforcement learning model. In addition, as the operation of the vane motor is controlled, the wind direction of the air conditioner 700 may be adjusted.
- the power supply unit 790 receives external power and internal power under the control of the processor 780 and supplies power to each of the components included in the air conditioner 700.
- At least some of the components may operate in cooperation with each other in order to implement an operation, control, or control method of an air conditioner according to various embodiments described below.
- the operation, control, or control method of the air conditioner may be implemented on the air conditioner by driving at least one application program stored in the memory 770.
- FIG. 15 is a diagram illustrating a method of operating an air conditioner according to an exemplary embodiment of the present invention.
- the method may include identifying a member existing in an indoor space by using data for identifying a member (S1510), setting a target temperature corresponding to the identified member based on context information of the identified member and a learning result corresponding to the identified member (S1530), and updating the learning result by using feedback on the target temperature (S1550).
- the sensing unit 140 of the air conditioner 100 may acquire data for identifying a member.
- the camera may acquire an image of an indoor space.
- the microphone can acquire sound collected in an indoor space.
- the processor 180 of the air conditioner 100 may identify members existing in the indoor space by using data for identifying members.
- the processor 180 may distinguish a plurality of members based on the size, appearance, and behavior pattern of a person in a captured image and the voice in a received sound.
- the processor 180 may identify a member existing in the indoor space among the plurality of members by using data for identifying the member.
- a method of identifying a member is a conventional technique and a detailed description thereof will be omitted.
- the processor 180 of the air conditioner 100 may obtain behavior information of the identified member by using data for identifying the member.
- the behavior information may mean a member's behavior, such as lying down, sitting, standing, lying on the sofa, eating, exercising, reading, taking a nap, and running.
- the processor 180 of the air conditioner 100 may acquire behavior information of the identified member based on the location of the identified person in the captured image, the movement of the person, the direction in which the voice is received, and the volume of the voice.
- the method of determining the member's behavior information is a conventional technique and a detailed description thereof will be omitted.
- the processor 180 of the air conditioner 100 may set a target temperature corresponding to the identified member based on context information of the identified member and a learning result corresponding to the identified member.
- FIG. 16 is a diagram illustrating a reinforcement learning model corresponding to a member and a re-learning method using feedback.
- FIG. 17 is a diagram for explaining a reinforcement learning model used in the present invention.
- a reinforcement learning model may be used to implement the present invention. First, a reinforcement learning model will be described with reference to FIG. 17.
- the processor 180 may provide a state to a reinforcement learning model mounted in an air conditioner.
- reinforcement learning is the theory that, given an environment in which an agent can decide what action to take at every moment, the agent can find the optimal path through experience, even without data.
- Reinforcement Learning can be mainly performed by a Markov Decision Process (MDP).
- the agent may mean an air conditioner, more specifically, a reinforcement learning model.
- an environment in which the information necessary for the agent (the reinforcement learning model) to take its next action is provided, that is, the indoor space in which the air conditioner is arranged, may be given.
- the agent may determine how to act based on a given state (the identified member's situation information), that is, how to set the target temperature of the air conditioner.
- it may be defined that a reward is given when the agent (reinforcement learning model) sets the target temperature of the air conditioner according to the user's taste, and a penalty is given when it fails to set the target temperature according to the user's taste.
- the agent may update the parameters of the neural network based on rewards and penalties.
- the agent may repeat its experience until the future reward reaches its maximum, and thereby derive the optimal policy, that is, the target temperature that the user prefers.
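- as an illustration of the reward-and-penalty loop described above, the following minimal tabular sketch (a contextual-bandit style simplification) shows how repeated experience with +1 rewards and -1 penalties converges to a preferred target temperature per behavior; the discrete states, candidate temperatures, simulated taste table, and learning constants are illustrative assumptions, not the claimed neural-network implementation:

```python
import random
from collections import defaultdict

STATES = ["sitting", "exercising", "sleeping"]          # simplified behaviour states
ACTIONS = [22.0, 24.0, 26.0]                            # candidate target temperatures (C)
PREFERRED = {"sitting": 24.0, "exercising": 22.0, "sleeping": 26.0}  # simulated user taste

q = defaultdict(float)           # value of each (state, target temperature) pair
alpha, epsilon = 0.2, 0.1        # learning rate and exploration rate

def feedback_reward(state: str, target: float) -> float:
    """+1 (reward) when the chosen target matches the simulated taste, -1 (penalty) otherwise."""
    return 1.0 if target == PREFERRED[state] else -1.0

for _ in range(2000):            # repeated experience, as in the MDP description
    state = random.choice(STATES)
    if random.random() < epsilon:                       # occasional exploration
        action = random.choice(ACTIONS)
    else:                                               # otherwise exploit the current policy
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    reward = feedback_reward(state, action)
    q[(state, action)] += alpha * (reward - q[(state, action)])  # contextual-bandit style update

policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)   # converges to the simulated preferred temperature per behaviour
```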
- the reinforcement learning model may be mounted on the air conditioner 100.
- the reinforcement learning model may be implemented in hardware, software, or a combination of hardware and software, and when part or all of the reinforcement learning model is implemented in software, one or more instructions constituting the reinforcement learning model may be stored in a memory.
- the reinforcement learning model may be composed of a neural network, and a parameter of the reinforcement learning model, that is, a parameter of a neural network may be updated using feedback from members.
- a learning result obtained using an action of the reinforcement learning model and the feedback according to that action may be stored in the memory.
- the learning result may refer to a parameter of the reinforcement learning model that is updated using feedback from members.
- the learning result may constitute a part of the neural network constituting the reinforcement learning model.
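- purely as an example of storing such a learning result, the sketch below persists a member's parameters to a file and restores them; the JSON layout, the file name, and the numpy parameter arrays are assumptions for illustration and do not reflect the actual memory format of the air conditioner:

```python
import json
import numpy as np

def save_learning_result(path: str, member_id: str, parameters: dict) -> None:
    """Store a member's learning result (neural-network parameters) as JSON."""
    serialisable = {name: value.tolist() for name, value in parameters.items()}
    with open(path, "w") as f:
        json.dump({"member": member_id, "parameters": serialisable}, f)

def load_learning_result(path: str):
    """Restore a member's learning result from storage."""
    with open(path) as f:
        stored = json.load(f)
    parameters = {name: np.array(value) for name, value in stored["parameters"].items()}
    return stored["member"], parameters

# Example round trip with dummy parameters.
save_learning_result("father.json", "father", {"w1": np.zeros((4, 8)), "b1": np.zeros(8)})
member, params = load_learning_result("father.json")
print(member, params["w1"].shape)
```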
- the state may include the member's context information 1710.
- the member's situation information 1710 may include member's behavior information and environment information.
- the behavior information may mean a member's behavior, such as lying down, sitting, standing, cooking, lying on a sofa, eating, exercising, reading, taking a nap, and running.
- the environmental information may include at least one of outdoor temperature, indoor temperature, outdoor humidity, indoor humidity, weather, date, and time.
- the indoor temperature may refer to the temperature of the indoor space in which the air conditioner is disposed.
- the set temperature may refer to a discharge temperature.
- the target temperature, which will be described later, may mean a target value that the temperature of the indoor space must reach.
- the reinforcement learning model may output an action (a target temperature) as a result value based on a state (situation information, that is, behavior information and environment information).
- for example, the reinforcement learning model may output a target temperature that matches the mother's cooling taste based on the current date and the mother's behavior information.
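- the way a state built from behavior information and environment information could be turned into a target temperature can be sketched as follows; the feature encoding, the two-layer network, its random weights, and the 18-30 °C output range are illustrative assumptions rather than the trained model of the present disclosure:

```python
import numpy as np

BEHAVIOURS = ["lying", "sitting", "standing", "cooking",
              "eating", "exercising", "reading", "napping"]

def encode_state(behaviour: str, outdoor_temp: float, indoor_temp: float,
                 indoor_humidity: float, hour_of_day: int) -> np.ndarray:
    """One-hot behaviour plus normalised environment readings -> state vector."""
    one_hot = np.zeros(len(BEHAVIOURS))
    one_hot[BEHAVIOURS.index(behaviour)] = 1.0
    env = np.array([outdoor_temp / 40.0, indoor_temp / 40.0,
                    indoor_humidity / 100.0, hour_of_day / 24.0])
    return np.concatenate([one_hot, env])

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, len(BEHAVIOURS) + 4)), np.zeros(16)   # hidden layer
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)                      # output layer

def predict_target_temperature(state: np.ndarray) -> float:
    """Forward pass; output squashed into an assumed 18-30 C range."""
    hidden = np.tanh(W1 @ state + b1)
    raw = (W2 @ hidden + b2)[0]
    return 18.0 + 12.0 / (1.0 + np.exp(-raw))

state = encode_state("cooking", outdoor_temp=31.0, indoor_temp=27.5,
                     indoor_humidity=60.0, hour_of_day=18)
print(round(predict_target_temperature(state), 1))
```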
- the processor provides the context information of the identified member to the reinforcement learning model, and when the reinforcement learning model outputs an action (target temperature 1740) based on the context information, the processor may adjust the set values of the air conditioner (at least one of the set temperature, air volume, wind direction, operation start, and operation end) according to the target temperature 1740.
- the processor 180 may adjust a set value (at least one of a set temperature of an air conditioner, an air volume, a wind direction, an operation start and an end of operation) in order to reach the target temperature.
- the processor 180 may control the operation of at least one of a compressor, a fan motor, and a vane motor to adjust a set value (at least one of a set temperature of an air conditioner, an air volume, a wind direction, an operation start and an end of operation).
- the processor 180 may adjust a set temperature to reach a target temperature output from the reinforcement learning model.
- the processor 180 may control the operation of the compressor.
- the processor 180 may adjust the air volume to reach the target temperature output from the reinforcement learning model. In addition, in order to adjust the air volume, the processor 180 may control the operation of the fan motor 750.
- the processor 180 of the air conditioner 100 may obtain the location of the identified member by using data for identifying the member. For example, the processor 180 of the air conditioner 100 may determine where the identified member is located in the indoor space based on the location of the identified person in the captured image, the direction in which the voice is received, and the volume of the voice. A method of determining the location of a member is a conventional technique, and a detailed description thereof will be omitted.
- the processor may control the operation of the vane motor.
- the processor may update the learning result using the feedback on the target temperature. Specifically, the processor may adjust the parameters of the reinforcement learning model by training the reinforcement learning model using the feedback on the target temperature.
- the reinforcement learning model 1610 may include a plurality of reinforcement learning models, for example, a first reinforcement learning model 1611, a second reinforcement learning model 1612, a third reinforcement learning model 1613, and a fourth reinforcement learning model 1614.
- a plurality of reinforcement learning models (1611, 1612, 1613, 1614) may each correspond to a plurality of members.
- the first reinforcement learning model 1611 may correspond to a father
- the second reinforcement learning model 1612 may correspond to a mother
- the third reinforcement learning model 1613 may correspond to an elementary school daughter
- the fourth reinforcement learning model 1614 may correspond to an infant or toddler.
- the processor 180 may provide a state to a reinforcement learning model corresponding to the identified member.
- the processor 180 may input current situation information of the father into the first reinforcement learning model 1611.
- the processor may set a target temperature based on the action (target temperature) output from the first reinforcement learning model and adjust the set values of the air conditioner according to the set target temperature.
- the processor may update the first reinforcement learning model using feedback of the identified member with respect to the set target temperature.
- the processor 180 may receive the father's feedback through the microphone 122.
- the processor 180 may train a reinforcement learning model corresponding to the identified user by using the identified user's feedback.
- the processor, which adjusts the set values of the air conditioner according to the action of the first reinforcement learning model corresponding to the father, may provide a reward or penalty 1720 corresponding to the feedback received from the father to the first reinforcement learning model.
- the first reinforcement learning model 1611 may establish a new policy based on the action output from the first reinforcement learning model 1611 and the reward or penalty, and may update the parameters of the first reinforcement learning model 1611 to correspond to the new policy.
- a plurality of learning results corresponding to each of a plurality of members may be stored in the memory 170.
- the learning result has previously been described as a parameter of the neural network that is updated using the action of the reinforcement learning model and the feedback according to the action.
- accordingly, a learning result corresponding to the father, that is, a parameter of the first reinforcement learning model 1611, may be updated.
- the other reinforcement learning models 1612, 1613, and 1614 may output actions in the same manner and be trained using feedback, and accordingly, learning results corresponding to other members may also be updated.
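- for illustration, routing the identified member's context to that member's own model and updating only that model might look like the sketch below; the PreferenceModel class, the member names, and the update rule are hypothetical stand-ins for the per-member reinforcement learning models described above:

```python
import random
from collections import defaultdict

class PreferenceModel:
    """Tiny stand-in for one member's reinforcement learning model."""
    def __init__(self, candidates=(22.0, 24.0, 26.0)):
        self.candidates = candidates
        self.value = defaultdict(float)          # value of (context, target temperature)

    def select_target(self, context: str) -> float:
        if random.random() < 0.1:                # small amount of exploration
            return random.choice(self.candidates)
        return max(self.candidates, key=lambda t: self.value[(context, t)])

    def update(self, context: str, target: float, reward: float) -> None:
        self.value[(context, target)] += 0.2 * (reward - self.value[(context, target)])

models = {"father": PreferenceModel(), "mother": PreferenceModel(),
          "daughter": PreferenceModel(), "toddler": PreferenceModel()}

member, context = "father", "reading"            # from identification and situation sensing
target = models[member].select_target(context)   # only the father's model is consulted
reward = 1.0                                     # e.g. positive feedback was observed
models[member].update(context, target, reward)   # only the father's learning result changes
```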
- the processor 180 may input the mother's context information into the second reinforcement learning model 1612.
- the processor may set a target temperature of the air conditioner based on the action and adjust a set value according to the target temperature.
- the second reinforcement learning model 1612 may be updated using the identified mother's feedback on the adjusted setting value.
- the second reinforcement learning model 1612 may establish a new policy based on the action output from the second reinforcement learning model 1612 and the reward or penalty, and may update the parameters of the second reinforcement learning model 1612 to correspond to the new policy.
- accordingly, a learning result corresponding to the mother, that is, a parameter of the second reinforcement learning model 1612, may be updated.
- FIG. 18 is a diagram illustrating a method of providing feedback to a reinforcement learning model according to an embodiment of the present invention.
- the feedback may be a response of a member including at least one of readjustment of a target temperature by a member, stopping an operation of the air conditioner by a member, and an action of a member corresponding to the target temperature.
- the member may readjust the target temperature by operating a remote control. That is, readjustment of the target temperature may indicate that the member is not satisfied with the target temperature output from the reinforcement learning model.
- the following describes the interruption of the air conditioner by a member.
- the member can stop the operation of the air conditioner by operating the remote control. That is, interruption of the operation of the air conditioner may indicate that the member is not satisfied with the target temperature output from the reinforcement learning model.
- the action of a member corresponding to the target temperature may include, for example, a sleeping member pulling up a blanket because it is cold, a sleeping member tossing and turning or waking up because it is hot, wrapping the body with the hands because it is cold, a satisfied facial expression, and utterances of expressions such as "Oh, that's nice", "Oh, that's cool", or "It's too cold".
- the processor may give a reward or penalty 1720 to the reinforcement learning model based on the user's response.
- the feedback may include positive feedback indicating a positive reaction to the target temperature recommended by the reinforcement learning model and negative feedback indicating a negative reaction.
- the positive feedback may be a positive expression for the target temperature output from the reinforcement learning model.
- positive feedback may include, for example, utterances such as "Oh, that's nice" and "Oh, that's cool", a satisfied facial expression, and the absence of an input to readjust the target temperature in response to the setting of the target temperature of the air conditioner.
- the negative feedback may be a negative expression for the target temperature output from the reinforcement learning model.
- negative feedback may include, for example, an utterance such as "It's too cold!", the action of wrapping the body with the hands because it is cold, readjustment of the target temperature by the member, stopping of the operation of the air conditioner by the member, and tossing and turning or waking up because it is hot.
- the processor may give a reward to the reinforcement learning model when the user's feedback is positive feedback, and may give a penalty to the reinforcement learning model when the user's feedback is negative feedback.
- the reinforcement learning model may establish a new policy based on rewards or penalties, and update parameters of the reinforcement learning model to respond to the new policy.
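- one simple way to turn such feedback into a reward or penalty is sketched below; the event labels and the +1/-1 values are assumptions chosen for illustration:

```python
POSITIVE_EVENTS = {"satisfied_expression", "positive_utterance", "no_readjustment"}
NEGATIVE_EVENTS = {"target_readjusted", "operation_stopped", "shivering",
                   "tossing_or_waking", "negative_utterance"}

def feedback_to_reward(event: str) -> float:
    """Reward for positive feedback, penalty for negative feedback; 0 for unknown events."""
    if event in POSITIVE_EVENTS:
        return 1.0
    if event in NEGATIVE_EVENTS:
        return -1.0
    return 0.0

print(feedback_to_reward("no_readjustment"))    # 1.0  -> reward
print(feedback_to_reward("target_readjusted"))  # -1.0 -> penalty
```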
- since the target temperature can be set in consideration of the member's behavior information, there is an advantage in that an optimal target temperature can be set according to the user's current behavior.
- furthermore, since the target temperature can be set in consideration of the member's behavior information and environment information, there is an advantage in that an optimal target temperature can be set according to the user's current behavior and current environment.
- with reference to FIGS. 19 to 21, a method in which the reinforcement learning model outputs set values of the air conditioner together with a target temperature, and in which the reinforcement learning model is updated using a member's response and power consumption, will be described.
- the description of FIGS. 13 to 18 may be applied to the following embodiments, which will be described focusing on the points that differ from the embodiments of FIGS. 13 to 18.
- FIG. 19 is a diagram illustrating a method of operating an air conditioner according to another embodiment of the present invention.
- the method may include outputting, by the reinforcement learning model, a target temperature and set values (S1910), and updating the reinforcement learning model by using feedback that includes a member's reaction to the set values and the amount of power consumed according to the set values (S1920).
- FIG. 20 is a diagram illustrating a reinforcement learning model used in the present invention.
- the reinforcement learning model outputs a target temperature.
- the reinforcement learning model may output the set value of the air conditioner together with the target temperature.
- the processor 180 may provide a state (status information) to the reinforcement learning model 2030 mounted in the air conditioner.
- the reinforcement learning model 2030 may determine how to act based on a given state (the identified member's situation information 2010), that is, may determine a target temperature 2040 and set values 2050 of the air conditioner, and may output set values for reaching the target temperature or maintaining the target temperature based on the context information.
- the reinforcement learning model 2030 outputs the target temperature and the set value together, but is not limited thereto, and the reinforcement learning model 2030 may output only the set value excluding the target temperature.
- the processor may control an operation of at least one of a compressor, a fan motor, and a vane motor according to the set value output from the reinforcement learning model.
- the set value may include at least one of a set temperature, an air volume, a wind direction, an operation start and an end of the operation.
- the processor may update the learning result using feedback on the set value. Specifically, the processor may adjust the parameters of the reinforcement learning model by training the reinforcement learning model using feedback on the set value.
- the feedback on the set value may include a member's response to the set value and an amount of power consumption according to the set value.
- the member's response to the set values may include at least one of the member's readjustment of the target temperature (or set values), the member's stopping of the operation of the air conditioner, and the member's action in response to the target temperature (or set values).
- the amount of power consumption according to the set value may include the amount of power consumption of the air conditioner when the operation is performed with the set value output from the reinforcement learning model.
- the air conditioner may include a sensor that acquires data for calculating power consumption.
- FIG. 21 is a diagram illustrating a graph of temperature change and a graph of power consumption according to different set values.
- a first graph 2111 is a graph of the change in room temperature when the reinforcement learning model outputs a first output value (a first target temperature A and a first set value), and a second graph 2121 is a graph of the change in room temperature when the reinforcement learning model outputs a second output value (the first target temperature A and a second set value).
- the target temperature A at the first output value and at the second output value is the same.
- in the case of the first output value, the air conditioner performs rapid cooling to reach the target temperature A, and after reaching the target temperature A, operates so that the gap between the indoor temperature and the target temperature is small.
- in the case of the second output value, the air conditioner performs general cooling to reach the target temperature A, and after reaching the target temperature A, operates so that the gap between the indoor temperature and the target temperature is large.
- accordingly, the amount of power consumed when the air conditioner operates with the first output value (the first set value) is greater than the amount of power consumed when it operates with the second output value (the second set value).
- that is, when operating with the first output value, the air conditioner can satisfy the member's cooling taste, but cannot prevent an increase in power consumption.
- therefore, the reinforcement learning model needs to appropriately balance one goal of increasing the speed of reaching the target temperature and maintaining the target temperature with another goal of reducing power consumption.
- the processor may update the reinforcement learning model using feedback including a member's response to the set value and the amount of power consumption according to the set value.
- for example, when the member's response to the set values is positive feedback, the processor may give a reward to the reinforcement learning model.
- when the member's response to the set values is negative feedback, the processor may give a penalty to the reinforcement learning model.
- when the amount of power consumed according to the set values is large, the processor may give a penalty to the reinforcement learning model.
- when the amount of power consumed according to the set values is small, the processor may give a reward to the reinforcement learning model.
- the reinforcement learning model may output an output value capable of properly matching the first target of increasing the speed of reaching the target temperature and maintaining the target temperature and the second target of reducing power consumption.
- the degree to which each of the first goal and the second goal is reflected can be determined by adjusting the level of the reward or penalty for the member's response and the level of the reward or penalty for the amount of power consumed.
- when the reward or penalty for the member's response is weighted more heavily, the reinforcement learning model may establish a policy in the direction of increasing the weight of the first goal of increasing the speed of reaching the target temperature and maintaining the target temperature.
- when the reward or penalty for the amount of power consumed is weighted more heavily, the reinforcement learning model may establish a policy in the direction of increasing the weight of the second goal of reducing power consumption.
- the weight of the first goal and the weight of the second goal may be set by members.
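- a minimal sketch of trading the two goals off with member-adjustable weights is shown below; the linear combination, the sign conventions, and the normalisation constant are assumptions for illustration, not the claimed reward definition:

```python
def combined_reward(member_response: float, energy_kwh: float,
                    comfort_weight: float = 1.0, energy_weight: float = 1.0,
                    energy_scale: float = 1.0) -> float:
    """member_response: +1 for positive feedback, -1 for negative feedback.
    energy_kwh: electricity consumed while operating with the chosen set values.
    A higher comfort_weight favours the member's cooling taste (first goal);
    a higher energy_weight favours lower power consumption (second goal)."""
    return comfort_weight * member_response - energy_weight * (energy_kwh / energy_scale)

# A member who asked to emphasise cooling taste (first goal weighted up):
print(combined_reward(+1.0, energy_kwh=1.8, comfort_weight=2.0, energy_weight=0.5))
# A member who asked to emphasise saving electricity (second goal weighted up):
print(combined_reward(+1.0, energy_kwh=1.8, comfort_weight=0.5, energy_weight=2.0))
```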
- the processor may receive an input for enhancing reflection of the cooling taste from a member.
- in this case, the processor may increase the reward for the member's response or decrease the reward for the amount of power consumed.
- the processor may provide an increased reward or a reduced reward to the reinforcement learning model.
- the reinforcement learning model may establish a new policy and update a parameter of the reinforcement learning model to respond to the new policy based on an increased reward or a reduced reward.
- the new policy may be a policy that can more suit the user's cooling taste compared to the existing policy.
- the processor may receive an input for reducing power consumption from a member.
- in this case, the processor may decrease the reward for the member's response or increase the reward for the amount of power consumed.
- the processor may provide the reduced or increased reward to the reinforcement learning model.
- the reinforcement learning model may establish a new policy and update a parameter of the reinforcement learning model to respond to the new policy based on an increased reward or a reduced reward.
- the new policy may be a policy that can be more consistent with reducing the amount of power consumption compared to the existing policy.
- in this way, cooling can be performed by establishing a policy that satisfies the user's intention (emphasis on saving electricity or emphasis on cooling preference).
- the weight of the first goal and the weight of the second goal may be set differently for each member.
- for example, the processor provides the context information of the identified first member to the first reinforcement learning model, and when an input for enhancing reflection of the cooling taste is received from the first member, the processor may increase the reward for the first member's response or decrease the reward for the amount of power consumed.
- the processor may train the first reinforcement learning model by using a reward for the first reinforcement learning model.
- likewise, the processor provides the context information of the identified second member to the second reinforcement learning model, and when an input for reducing power consumption is received from the second member, the processor may decrease the reward for the second member's response or increase the reward for the amount of power consumed. Also, the processor may train the second reinforcement learning model by using the reward for the second reinforcement learning model.
- accordingly, the first reinforcement learning model can establish a new policy that better suits the intention of the first member (emphasis on cooling taste), and the air conditioner can provide a cooling service that meets the intention of the first member.
- likewise, the second reinforcement learning model can establish a new policy more in line with the intention of the second member (emphasis on saving electricity), and the air conditioner can provide a cooling service that meets the intention of the second member.
Abstract
The present invention relates to an air conditioner. According to one embodiment, the air conditioner of the present invention comprises: a compressor; a casing including a suction port and a discharge port; a fan motor installed in the casing to blow air; a discharge vane movably provided at the discharge port; a vane motor for operating the discharge vane; a sensor for acquiring data for identifying members; a memory for storing learning results corresponding to identified members; and a processor for identifying a member present in an indoor space by using the data for identifying members, setting a target temperature corresponding to the identified member on the basis of situation information about the identified member and the learning result corresponding to the identified member, and using feedback on the target temperature to update the learning result.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/KR2019/008602 WO2021006406A1 (fr) | 2019-07-11 | 2019-07-11 | Climatiseur fondé sur l'intelligence artificielle |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/KR2019/008602 WO2021006406A1 (fr) | 2019-07-11 | 2019-07-11 | Climatiseur fondé sur l'intelligence artificielle |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2021006406A1 true WO2021006406A1 (fr) | 2021-01-14 |
Family
ID=74114048
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2019/008602 Ceased WO2021006406A1 (fr) | 2019-07-11 | 2019-07-11 | Climatiseur fondé sur l'intelligence artificielle |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2021006406A1 (fr) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114183892A (zh) * | 2021-12-08 | 2022-03-15 | 珠海格力电器股份有限公司 | 室内环境的智能调节方法、装置、系统及存储介质 |
| CN116358114A (zh) * | 2023-05-06 | 2023-06-30 | 国网浙江省电力有限公司综合服务分公司 | 一种基于深度强化学习的空调温度控制方法 |
| CN116587944A (zh) * | 2022-02-03 | 2023-08-15 | 通用汽车环球科技运作有限责任公司 | 车辆中的自适应个人热控制 |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20090115586A (ko) * | 2008-05-02 | 2009-11-05 | 엘지전자 주식회사 | 공기조화기 및 그 동작방법 |
| US20180100662A1 (en) * | 2016-10-11 | 2018-04-12 | Mitsubishi Electric Research Laboratories, Inc. | Method for Data-Driven Learning-based Control of HVAC Systems using High-Dimensional Sensory Observations |
| KR20180074903A (ko) * | 2016-12-23 | 2018-07-04 | 삼성전자주식회사 | 공기 조화기 제어 방법 및 장치 |
| KR20180099645A (ko) * | 2016-01-08 | 2018-09-05 | 소니 주식회사 | 제어 장치, 제어 방법 및 프로그램 |
| KR20180138463A (ko) * | 2017-06-21 | 2018-12-31 | 주식회사 케이티 | Ai 기반으로 대상 건물의 냉방 시스템을 최적 제어하는 클라우드 서버 및 방법, 냉방 시스템 제어 장치 |
- 2019-07-11: WO PCT/KR2019/008602 patent/WO2021006406A1/fr not_active Ceased
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2020204221A1 (fr) | Dispositif de conditionnement d'air | |
| WO2020246844A1 (fr) | Procédé de commande de dispositif, procédé de traitement de conflit, appareil correspondant et dispositif électronique | |
| WO2021010509A1 (fr) | Appareil de cuisson à intelligence artificielle | |
| WO2021006366A1 (fr) | Dispositif d'intelligence artificielle pour ajuster la couleur d'un panneau d'affichage et procédé associé | |
| WO2021006404A1 (fr) | Serveur d'intelligence artificielle | |
| WO2019225961A1 (fr) | Dispositif électronique permettant de générer une réponse à une entrée vocale à l'aide d'une application, et procédé de fonctionnement associé | |
| WO2020166995A1 (fr) | Appareil et procédé de gestion de programme dans un dispositif électronique | |
| WO2020213750A1 (fr) | Dispositif d'intelligence artificielle pour reconnaître un objet, et son procédé | |
| WO2021029457A1 (fr) | Serveur d'intelligence artificielle et procédé permettant de fournir des informations à un utilisateur | |
| WO2019151845A2 (fr) | Climatiseur | |
| WO2016099048A1 (fr) | Appareil électronique et procédé de commande d'action de groupe | |
| WO2019151846A2 (fr) | Purificateur d'air | |
| EP3345379A1 (fr) | Procèdè pour la commande d'un objet par un dispositif èlectronique et dispositif èlectronique | |
| WO2021125453A1 (fr) | Réfrigérateur à intelligence artificielle | |
| EP3676830A1 (fr) | Procédé de fourniture de service sur la base de l'emplacement d'une source sonore et dispositif de reconnaissance vocale associé | |
| WO2020251101A1 (fr) | Dispositif d'intelligence artificielle pour déterminer un trajet de déplacement d'un utilisateur, et procédé associé | |
| WO2020213758A1 (fr) | Dispositif d'intelligence artificielle à interactivité locutoire et procédé associé | |
| WO2019135621A1 (fr) | Dispositif de lecture vidéo et son procédé de commande | |
| WO2020246647A1 (fr) | Dispositif d'intelligence artificielle permettant de gérer le fonctionnement d'un système d'intelligence artificielle, et son procédé | |
| WO2021137345A1 (fr) | Réfrigérateur à intelligence artificielle et son procédé de fonctionnement | |
| WO2020184748A1 (fr) | Dispositif d'intelligence artificielle et procédé de commande d'un système d'arrêt automatique sur la base d'informations de trafic | |
| WO2020184753A1 (fr) | Appareil d'intelligence artificielle pour effectuer une commande vocale à l'aide d'un filtre d'extraction de voix, et procédé associé | |
| WO2021006406A1 (fr) | Climatiseur fondé sur l'intelligence artificielle | |
| WO2021020621A1 (fr) | Agent de déplacement à intelligence artificielle | |
| WO2020246640A1 (fr) | Dispositif d'intelligence artificielle pour déterminer l'emplacement d'un utilisateur et procédé associé |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19937235; Country of ref document: EP; Kind code of ref document: A1 |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 19937235; Country of ref document: EP; Kind code of ref document: A1 |