
WO2022025413A1 - Electronic device and method for generating training data of an artificial intelligence learning model by the electronic device - Google Patents


Info

Publication number
WO2022025413A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
electronic device
light source
virtual light
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2021/007194
Other languages
English (en)
Korean (ko)
Inventor
최재성
도원준
최우존
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Publication of WO2022025413A1 publication Critical patent/WO2022025413A1/fr

Classifications

    • G06T3/4015: Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns (G06T3/40 Scaling of whole images or parts thereof)
    • G06N20/00: Machine learning
    • G06N3/08: Learning methods (G06N3/02 Neural networks)
    • G06T5/60: Image enhancement or restoration using machine learning, e.g. neural networks
    • H04N23/57: Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H04N23/60: Control of cameras or camera modules
    • H04N23/80: Camera processing pipelines; components thereof
    • G06T2207/10016: Video; image sequence
    • G06T2207/20081: Training; learning
    • G06T2207/20084: Artificial neural networks [ANN]

Definitions

  • Embodiments disclosed in this document relate to a technique for training an artificial intelligence learning model.
  • Various electronic devices include a display and a camera.
  • The display may display an image using a plurality of pixels located in the display area.
  • The display may include drivers and signal wires for driving the pixels.
  • When the electronic device captures a bright light source through the camera, pixels around the light source in the image become saturated, and the desired image cannot be captured.
  • The pixels or wires of the display may diffract light entering the camera, thereby degrading image resolution and causing light splitting or light bleeding around the light source.
  • Various embodiments of the present disclosure provide an electronic device and method for generating training data of an artificial intelligence learning model for compensating for an image captured using a camera disposed under a display.
  • Various embodiments of the present disclosure provide an electronic device for generating a training data set of an AI learning model for compensating an image including a light source, and a method for generating training data for an AI learning model of the electronic device.
  • An electronic device may include a communication circuit, a memory, and a processor operatively connected to the memory.
  • The memory may store instructions that, when executed, cause the processor to acquire a first image, generate a virtual light source image, insert the virtual light source image into the first image to generate a first output image, perform a first filter process after inserting the virtual light source image into the first image to generate a first input image, and generate the first input image and the first output image as a first training data set of an artificial intelligence learning model for compensating an image.
  • The method for generating training data for an artificial intelligence learning model of an electronic device may include acquiring a first image, generating a virtual light source image, generating a first output image by inserting the virtual light source image into the first image, generating a first input image by inserting the virtual light source image into the first image and performing a first filter process, and generating the first input image and the first output image as a first training data set of an artificial intelligence learning model for compensating an image captured by a camera.
  • According to various embodiments, by including the virtual light source image when generating the training data set, the artificial intelligence learning model can be trained to resolve the light splitting and light spreading phenomena that occur during image capture and to improve image quality.
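  • As a concrete (and hedged) illustration of the claimed flow, the following Python sketch is not from the patent; helper names such as generate_virtual_light_source and first_filter are hypothetical placeholders for the operations described above:

```python
import numpy as np

def generate_virtual_light_source(shape):
    """Hypothetical placeholder: render a bright disc as a virtual light source."""
    h, w = shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx, r = h // 2, w // 2, min(h, w) // 16
    light = np.zeros((h, w, 3), dtype=np.float32)
    light[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = 4.0  # brightness above the saturation level
    return light

def first_filter(image, kernel_size=9):
    """Hypothetical placeholder for the first filter (a simple diffraction-like blur)."""
    k = np.ones((kernel_size, kernel_size), dtype=np.float32)
    k /= k.sum()
    pad = kernel_size // 2
    out = np.empty_like(image)
    for c in range(image.shape[2]):  # naive 2-D convolution, channel by channel
        padded = np.pad(image[..., c], pad, mode="edge")
        out[..., c] = sum(
            padded[i:i + image.shape[0], j:j + image.shape[1]] * k[i, j]
            for i in range(kernel_size) for j in range(kernel_size)
        )
    return out

# Claimed flow: acquire a first image, generate a virtual light source image,
# insert it to form the first output image (target), then filter the same
# composite to form the first input image (degraded, "as captured").
first_image = np.random.rand(256, 256, 3).astype(np.float32)
light = generate_virtual_light_source(first_image.shape)
first_output = np.clip(first_image + light, 0.0, None)
first_input = first_filter(first_output)
first_training_pair = (first_input, first_output)
```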
  • FIG. 1 illustrates an electronic device in a network environment according to various embodiments of the present disclosure.
  • FIG. 2A is a block diagram of an electronic device according to an exemplary embodiment.
  • FIG. 2B is a block diagram of an electronic device according to an exemplary embodiment.
  • FIGS. 3A and 3B are examples of light source images according to various embodiments.
  • FIG. 4 is a diagram for explaining an image compensation operation of an electronic device according to an exemplary embodiment.
  • FIG. 5 is a diagram for explaining an operation of training an artificial intelligence learning model of an electronic device according to an exemplary embodiment.
  • FIG. 6 is a diagram for explaining an operation of generating a virtual light source image of an electronic device according to an exemplary embodiment.
  • FIG. 7 is a diagram for explaining an operation of generating a training image set of an electronic device according to an exemplary embodiment.
  • FIG. 8 is a flowchart of a method for training an artificial intelligence learning model of an electronic device according to an exemplary embodiment.
  • FIG. 9 is a flowchart of an operation of generating a virtual light source image of an electronic device according to an exemplary embodiment.
  • FIG. 10 is a flowchart of an image compensation operation of an electronic device according to an exemplary embodiment.
  • FIG. 11 is a plan view of a display of an electronic device according to an exemplary embodiment.
  • FIG. 12 is a cross-sectional view of a first display area of an electronic device according to an exemplary embodiment.
  • FIG. 1 is a block diagram of an electronic device 101 in a network environment 100 according to various embodiments.
  • Referring to FIG. 1, the electronic device 101 may communicate with the electronic device 102 through a first network 198 (e.g., a short-range wireless communication network), or may communicate with the electronic device 104 or the server 108 through a second network 199 (e.g., a long-range wireless communication network). According to an embodiment, the electronic device 101 may communicate with the electronic device 104 through the server 108.
  • According to an embodiment, the electronic device 101 may include a processor 120, a memory 130, an input module 150, a sound output module 155, a display module 160, an audio module 170, a sensor module 176, an interface 177, a connecting terminal 178, a haptic module 179, a camera module 180, a power management module 188, a battery 189, a communication module 190, a subscriber identification module 196, or an antenna module 197.
  • In some embodiments, at least one of these components (e.g., the connecting terminal 178) may be omitted, or one or more other components may be added to the electronic device 101.
  • In some embodiments, some of these components may be integrated into a single component (e.g., the display module 160).
  • The processor 120 may, for example, execute software (e.g., a program 140) to control at least one other component (e.g., a hardware or software component) of the electronic device 101 connected to the processor 120, and may perform various data processing or operations. According to an embodiment, as at least part of the data processing or operations, the processor 120 may store commands or data received from another component (e.g., the sensor module 176 or the communication module 190) in the volatile memory 132, process the commands or data stored in the volatile memory 132, and store the resulting data in the non-volatile memory 134.
  • According to an embodiment, the processor 120 may include a main processor 121 (e.g., a central processing unit or an application processor) or an auxiliary processor 123 (e.g., a graphics processing unit, a neural processing unit (NPU), an image signal processor, a sensor hub processor, or a communication processor) that can operate independently of, or together with, the main processor 121.
  • The auxiliary processor 123 may, for example, control at least some of the functions or states related to at least one of the components of the electronic device 101 (e.g., the display module 160, the sensor module 176, or the communication module 190), on behalf of the main processor 121 while the main processor 121 is in an inactive (e.g., sleep) state, or together with the main processor 121 while the main processor 121 is in an active state (e.g., executing an application). According to an embodiment, the auxiliary processor 123 (e.g., an image signal processor or a communication processor) may be implemented as part of another functionally related component (e.g., the camera module 180 or the communication module 190).
  • According to an embodiment, the auxiliary processor 123 may include a hardware structure specialized for processing an artificial intelligence model.
  • An artificial intelligence model may be created through machine learning. Such learning may be performed, for example, in the electronic device 101 itself on which the artificial intelligence is performed, or through a separate server (e.g., the server 108).
  • The learning algorithm may include, for example, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning, but is not limited to these examples.
  • The artificial intelligence model may include a plurality of artificial neural network layers.
  • The artificial neural network may be one of a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), a deep Q-network, or a combination of two or more thereof, but is not limited to these examples.
  • The artificial intelligence model may, additionally or alternatively, include a software structure in addition to the hardware structure.
  • The memory 130 may store various data used by at least one component (e.g., the processor 120 or the sensor module 176) of the electronic device 101.
  • The data may include, for example, input data or output data for software (e.g., the program 140) and instructions related thereto.
  • The memory 130 may include a volatile memory 132 or a non-volatile memory 134.
  • The program 140 may be stored as software in the memory 130, and may include, for example, an operating system 142, middleware 144, or an application 146.
  • The input module 150 may receive a command or data to be used by a component (e.g., the processor 120) of the electronic device 101 from the outside (e.g., a user) of the electronic device 101.
  • The input module 150 may include, for example, a microphone, a mouse, a keyboard, a key (e.g., a button), or a digital pen (e.g., a stylus pen).
  • The sound output module 155 may output a sound signal to the outside of the electronic device 101.
  • The sound output module 155 may include, for example, a speaker or a receiver.
  • The speaker may be used for general purposes such as multimedia playback or recording playback.
  • The receiver may be used to receive an incoming call. According to an embodiment, the receiver may be implemented separately from, or as part of, the speaker.
  • The display module 160 may visually provide information to the outside (e.g., a user) of the electronic device 101.
  • The display module 160 may include, for example, a display, a hologram device, or a projector, and a control circuit for controlling the corresponding device.
  • According to an embodiment, the display module 160 may include a touch sensor configured to sense a touch, or a pressure sensor configured to measure the intensity of a force generated by the touch.
  • The audio module 170 may convert a sound into an electrical signal or, conversely, convert an electrical signal into a sound. According to an embodiment, the audio module 170 may acquire a sound through the input module 150, or may output a sound through the sound output module 155 or through an external electronic device (e.g., the electronic device 102, such as a speaker or headphones) connected directly or wirelessly to the electronic device 101.
  • The sensor module 176 may detect an operating state (e.g., power or temperature) of the electronic device 101 or an external environmental state (e.g., a user state), and may generate an electrical signal or data value corresponding to the detected state.
  • The sensor module 176 may include, for example, a gesture sensor, a gyro sensor, a barometric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor.
  • The interface 177 may support one or more designated protocols that may be used by the electronic device 101 to connect directly or wirelessly with an external electronic device (e.g., the electronic device 102).
  • The interface 177 may include, for example, a high-definition multimedia interface (HDMI), a universal serial bus (USB) interface, a secure digital (SD) card interface, or an audio interface.
  • The connecting terminal 178 may include a connector through which the electronic device 101 can be physically connected to an external electronic device (e.g., the electronic device 102).
  • The connecting terminal 178 may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (e.g., a headphone connector).
  • The haptic module 179 may convert an electrical signal into a mechanical stimulus (e.g., vibration or movement) or an electrical stimulus that the user can perceive through a tactile or kinesthetic sense.
  • The haptic module 179 may include, for example, a motor, a piezoelectric element, or an electrical stimulation device.
  • The camera module 180 may capture still images and moving images. According to an embodiment, the camera module 180 may include one or more lenses, image sensors, image signal processors, or flashes.
  • The power management module 188 may manage power supplied to the electronic device 101.
  • The power management module 188 may be implemented as, for example, at least a part of a power management integrated circuit (PMIC).
  • The battery 189 may supply power to at least one component of the electronic device 101.
  • The battery 189 may include, for example, a non-rechargeable primary cell, a rechargeable secondary cell, or a fuel cell.
  • The communication module 190 may support establishment of a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device 101 and an external electronic device (e.g., the electronic device 102, the electronic device 104, or the server 108), and communication through the established communication channel.
  • The communication module 190 may include one or more communication processors that operate independently of the processor 120 (e.g., an application processor) and support direct (e.g., wired) communication or wireless communication.
  • According to an embodiment, the communication module 190 may include a wireless communication module 192 (e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module 194 (e.g., a local area network (LAN) communication module or a power line communication module).
  • A corresponding one of these communication modules may communicate with the external electronic device 104 through the first network 198 (e.g., a short-range communication network such as Bluetooth, wireless fidelity (WiFi) Direct, or infrared data association (IrDA)) or the second network 199 (e.g., a long-range communication network such as a legacy cellular network, a 5G network, a next-generation communication network, the Internet, or a computer network (e.g., a LAN or a wide area network (WAN))).
  • The wireless communication module 192 may identify or authenticate the electronic device 101 within a communication network, such as the first network 198 or the second network 199, using subscriber information (e.g., an International Mobile Subscriber Identity (IMSI)) stored in the subscriber identification module 196.
  • The wireless communication module 192 may support a 5G network after a 4G network, and a next-generation communication technology, for example, new radio (NR) access technology.
  • The NR access technology may support high-speed transmission of high-capacity data (enhanced mobile broadband (eMBB)), minimization of terminal power and access by multiple terminals (massive machine type communications (mMTC)), or high reliability and low latency (ultra-reliable and low-latency communications (URLLC)).
  • The wireless communication module 192 may support a high-frequency band (e.g., the mmWave band) to achieve, for example, a high data rate.
  • The wireless communication module 192 may support various technologies for securing performance in a high-frequency band, for example, beamforming, massive multiple-input and multiple-output (massive MIMO), full-dimensional MIMO (FD-MIMO), an array antenna, analog beamforming, or a large-scale antenna.
  • The wireless communication module 192 may support various requirements specified for the electronic device 101, an external electronic device (e.g., the electronic device 104), or a network system (e.g., the second network 199).
  • According to an embodiment, the wireless communication module 192 may support a peak data rate (e.g., 20 Gbps or more) for realizing eMBB, a loss coverage (e.g., 164 dB or less) for realizing mMTC, or a U-plane latency (e.g., 0.5 ms or less for each of downlink (DL) and uplink (UL), or 1 ms or less for a round trip) for realizing URLLC.
  • The antenna module 197 may transmit a signal or power to, or receive a signal or power from, the outside (e.g., an external electronic device).
  • According to an embodiment, the antenna module 197 may include an antenna including a conductor formed on a substrate (e.g., a PCB) or a radiator formed of a conductive pattern.
  • According to an embodiment, the antenna module 197 may include a plurality of antennas (e.g., an array antenna). In this case, at least one antenna suitable for the communication method used in the communication network, such as the first network 198 or the second network 199, may be selected from the plurality of antennas by, for example, the communication module 190. A signal or power may be transmitted or received between the communication module 190 and an external electronic device through the selected at least one antenna.
  • According to some embodiments, other components (e.g., a radio frequency integrated circuit (RFIC)) may be additionally formed as part of the antenna module 197.
  • According to various embodiments, the antenna module 197 may form a mmWave antenna module.
  • According to an embodiment, the mmWave antenna module may include a printed circuit board, an RFIC disposed on or adjacent to a first surface (e.g., the bottom surface) of the printed circuit board and capable of supporting a designated high-frequency band (e.g., the mmWave band), and a plurality of antennas (e.g., an array antenna) disposed on or adjacent to a second surface (e.g., the top or side surface) of the printed circuit board and capable of transmitting or receiving signals in the designated high-frequency band.
  • At least some of the above components may be connected to each other through a communication method between peripheral devices (e.g., a bus, general-purpose input and output (GPIO), serial peripheral interface (SPI), or mobile industry processor interface (MIPI)) and may exchange signals (e.g., commands or data) with each other.
  • The command or data may be transmitted or received between the electronic device 101 and the external electronic device 104 through the server 108 connected to the second network 199.
  • Each of the external electronic devices 102 or 104 may be the same as or different from the electronic device 101.
  • According to various embodiments, all or a part of the operations executed in the electronic device 101 may be executed in one or more external electronic devices 102, 104, or 108.
  • For example, when the electronic device 101 needs to perform a function or service, instead of executing the function or service itself, the electronic device 101 may request one or more external electronic devices to perform at least a part of the function or service. The one or more external electronic devices that receive the request may execute at least a part of the requested function or service, or an additional function or service related to the request, and may transmit the result of the execution to the electronic device 101. The electronic device 101 may process the result, as is or with additional processing, and provide it as at least a part of a response to the request.
  • To this end, for example, cloud computing, distributed computing, mobile edge computing (MEC), or client-server computing technology may be used.
  • The electronic device 101 may provide an ultra-low-latency service using, for example, distributed computing or mobile edge computing.
  • In another embodiment, the external electronic device 104 may include an Internet of things (IoT) device.
  • The server 108 may be an intelligent server using machine learning and/or a neural network.
  • According to an embodiment, the external electronic device 104 or the server 108 may be included in the second network 199.
  • The electronic device 101 may be applied to an intelligent service (e.g., a smart home, a smart city, a smart car, or health care) based on 5G communication technology and IoT-related technology.
  • FIG. 2A is a block diagram of an electronic device according to an exemplary embodiment.
  • According to an embodiment, the electronic device 200a (e.g., the electronic device 101 of FIG. 1) (e.g., a PC or a server) may include a communication circuit 210a (e.g., the communication module 190 of FIG. 1), a memory 220a (e.g., the memory 130 of FIG. 1), and a processor 230a (e.g., the processor 120 of FIG. 1).
  • According to various embodiments, the electronic device 200a may train an AI learning model independently, or in combination with an external device (e.g., the electronic devices 102 and 104 of FIG. 1, the server 108, or the electronic device 200b of FIG. 2B). For example, the AI learning system may be configured with the electronic device 200a, an external electronic device, or a combination of the electronic device 200a and an external electronic device.
  • According to an embodiment, the communication circuit 210a may communicate data with an external electronic device (e.g., the electronic devices 102 and 104 of FIG. 1, the server 108, or the electronic device 200b of FIG. 2B).
  • For example, the communication circuit 210a may transmit an image and/or a video to, or receive an image and/or a video from, an external electronic device.
  • According to an embodiment, under the control of the processor 230a, the communication circuit 210a may transmit the generated training data set (e.g., an input image and an output image) of the artificial intelligence learning model to an external electronic device.
  • According to an embodiment, the communication circuit 210a may provide the artificial intelligence learning model (e.g., a learning model trained for image compensation) to an external electronic device.
  • The memory 220a may store instructions executed by the processor 230a. According to an embodiment, the memory 220a may at least temporarily store an image and/or a video received from an external electronic device or a network. According to an embodiment, the memory 220a may store a training data set (e.g., an input image and output image pair) of the AI learning model.
  • The processor 230a may control the overall operation of the electronic device 200a.
  • According to an embodiment, the processor 230a may train an artificial intelligence learning model.
  • According to various embodiments, a system for training the artificial intelligence learning model may be implemented inside the electronic device 200a (e.g., in the processor 230a), outside the electronic device 200a, or as a combination of the electronic device 200a and an external device.
  • For example, the AI learning model may be trained by at least one of the electronic device 200a and an external device.
  • According to an embodiment, the artificial intelligence learning system may be trained using federated learning.
  • According to an embodiment, the AI learning model may include an image compensation model for compensating an image.
  • According to an embodiment, the AI learning model may be trained based on machine learning including at least one of a neural network, a support vector machine (SVM), a hidden Markov model (HMM), and a Bayesian model. According to various embodiments, the artificial intelligence learning model may also be trained based on at least one of various pattern recognition algorithms that can find patterns in learning data and apply those patterns to real data.
  • According to an embodiment, the processor 230a may generate a training data set for training the AI learning model.
  • The training data set may include a pair of input images and output images.
  • According to an embodiment, the processor 230a may acquire a first image for generating the training data set.
  • The first image may include an image previously stored in the electronic device 200a (e.g., in the memory 220a), an image received from an external electronic device, and/or an image acquired through crawling.
  • The first image may include an arbitrary image for generating the training data set (e.g., an input image and an output image).
  • According to an embodiment, the processor 230a may generate a virtual light source image.
  • According to an embodiment, the processor 230a may generate the virtual light source image based on at least one parameter among the size, color, intensity, shape, brightness, position, light spread, and degree of light splitting of the virtual light source.
  • The at least one parameter may be designated according to a user input, or may be designated automatically by the processor 230a.
  • According to an embodiment, the processor 230a may generate the virtual light source image by generating a raw image of the virtual light source, processing the raw image with a low-pass filter at or below a specified band, resizing the low-pass-filtered image, colorizing and Bayerizing the resized image, and demosaicing the colorized and Bayerized image.
  • According to an embodiment, the processor 230a may generate the raw image of the virtual light source in a Bayer domain.
  • According to various embodiments, the processor 230a may generate the raw image of the virtual light source in a domain other than the Bayer domain (e.g., an RGB domain) in a different manner.
  • According to various embodiments, at least some of the operations for generating the virtual light source image may be omitted.
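  • As one hedged sketch of this generation pipeline (not from the patent; the band limit, scale factor, and tint below are arbitrary illustration values, and the Bayerize/demosaic steps are deferred to the FIG. 6 sketch later in this document):

```python
import numpy as np
from scipy import ndimage

def virtual_light_source_image(size=512, radius=40, tint=(1.0, 0.9, 0.8)):
    # 1) Raw single-channel image of the virtual light source (a bright disc).
    yy, xx = np.mgrid[0:size, 0:size] - size // 2
    raw = ((yy ** 2 + xx ** 2) <= radius ** 2).astype(np.float32)

    # 2) Low-pass filter at or below a specified band (Gaussian as a simple LPF).
    low = ndimage.gaussian_filter(raw, sigma=3.0)

    # 3) Resize to the desired apparent size of the light source.
    resized = ndimage.zoom(low, 0.25)

    # 4) Colorize: replicate the single channel to R, G, B with a chosen tint.
    rgb = np.stack([resized * t for t in tint], axis=-1)

    # 5) Bayerize and demosaic would follow here (see the FIG. 6 sketch below).
    return rgb

light = virtual_light_source_image()
print(light.shape)  # (128, 128, 3)
```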
  • According to an embodiment, the processor 230a may generate a first output image by inserting the virtual light source image into the first image.
  • The first output image may be output data for training the artificial intelligence learning model.
  • The first output image may be the target image to be acquired through the trained artificial intelligence learning model when the first input image is input.
  • According to an embodiment, the processor 230a may generate the first output image by performing a second filter process on the virtual light source image and inserting the second-filtered virtual light source image into the first image.
  • The second filter may be a filter for correcting the virtual light source image to resemble a light source photographed with a real camera.
  • According to an embodiment, the processor 230a may insert the virtual light source image into the first image and then perform a first filter process to generate the first input image.
  • The first filter may be a filter for correcting the first image into which the virtual light source image is inserted so that it corresponds to an actually photographed image.
  • For example, the first filter may be a filter for applying an effect (e.g., a diffraction effect) that appears when an image is captured with a real camera to the first image into which the virtual light source image is inserted.
  • The first input image may be input data for training the artificial intelligence learning model.
  • According to an embodiment, the processor 230a may generate the first input image and the first output image as a training data set of the artificial intelligence learning model for compensating an image.
  • The artificial intelligence learning model may be trained to derive, from the first input image, the first output image or a result image similar to the first output image.
  • According to an embodiment, the processor 230a may acquire a second image for generating a training data set.
  • The second image may be the same as or different from the first image.
  • The second image may include an image previously stored in the electronic device 200a (e.g., in the memory 220a), an image received from an external electronic device, and/or an image acquired through crawling.
  • The second image may include an arbitrary image for generating the training data set (e.g., an input image and an output image).
  • According to an embodiment, the processor 230a may generate the second image as a second output image, and may generate a second input image by performing a first filter process on the second image. According to an embodiment, the processor 230a may generate the second input image and the second output image as a second training data set of the AI learning model.
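  • A hedged sketch of how these two kinds of training pairs could be assembled, reusing the hypothetical first_filter and generate_virtual_light_source helpers from the earlier sketch:

```python
import numpy as np

# Assumes generate_virtual_light_source() and first_filter() from the sketch
# after the summary above; both are hypothetical placeholders.

def make_first_pair(image, light):
    """First training pair: virtual light source image inserted."""
    output = np.clip(image + light, 0.0, None)  # target image containing the light source
    return first_filter(output), output

def make_second_pair(image):
    """Second training pair: no virtual light source image."""
    return first_filter(image), image

image = np.random.rand(256, 256, 3).astype(np.float32)
light = generate_virtual_light_source(image.shape)
first_training_set = [make_first_pair(image, light)]
second_training_set = [make_second_pair(image)]
training_data = first_training_set + second_training_set  # list of (input, output) pairs
```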
  • For example, the first training data set may be a training data set including a virtual light source image, and the second training data set may be a training data set not including a virtual light source image.
  • According to various embodiments, the processor 230a may train the artificial intelligence learning model itself using a training data set (e.g., the first training data set and/or the second training data set), or may train the artificial intelligence learning model through an artificial intelligence learning system external to the electronic device 200a.
  • According to an embodiment, the processor 230a may provide the generated training data set to an external artificial intelligence learning system through the communication circuit 210a.
  • According to various embodiments, the electronic device 200a and the external electronic device may train the AI learning model together.
  • According to various embodiments, some components of the electronic device 200a may be omitted, or the electronic device 200a may further include at least some of the components of the electronic device 101 illustrated in FIG. 1 or the electronic device 200b illustrated in FIG. 2B.
  • According to various embodiments, by training an artificial intelligence learning model for image compensation with a training data set (e.g., a first input image and a first output image) that includes a virtual light source, the electronic device can provide an artificial intelligence learning model that compensates for damage or deformation of an image and for light saturation, light blurring, or light splitting within an image.
  • According to various embodiments, by using synthetic data (e.g., a composite image) rather than only actual measurement data (e.g., a photographed image) as the training data set (e.g., a first input image and a first output image), it is possible to minimize the errors and problems that may occur when actually photographed images are used as training data for an artificial intelligence learning model, such as disparity, noise variation depending on the surrounding environment, motion during actual shooting, and/or limitations of the camera image sensor (e.g., the light saturation value).
  • FIG. 2B is a block diagram of an electronic device according to an exemplary embodiment.
  • According to an embodiment, the electronic device 200b (e.g., the electronic device 101 of FIG. 1) (e.g., a mobile device) may include a display 210b (e.g., the display module 160 of FIG. 1), a camera 220b (e.g., the camera module 180 of FIG. 1), a communication circuit 230b (e.g., the communication module 190 of FIG. 1), a memory 240b (e.g., the memory 130 of FIG. 1), and a processor 250b (e.g., the processor 120 of FIG. 1).
  • The display 210b may display an image and/or a video. According to an embodiment, the display 210b may include a plurality of pixels. According to an embodiment, the display 210b may include a plurality of regions having different pixel densities. According to an embodiment, the display 210b may include wires for driving the respective pixels.
  • According to an embodiment, the camera 220b may be disposed below at least a partial area of the display 210b.
  • For example, the camera 220b may include an under-display camera (UDC).
  • According to an embodiment, the camera 220b may generate an image based on light incident through the display 210b.
  • According to an embodiment, the image display area of the display 210b may include at least two sub-areas.
  • For example, the at least two sub-areas may include a first area having a relatively high pixel density and a second area having a relatively low pixel density.
  • According to an embodiment, at least a portion of the camera 220b may be disposed under the second area.
  • The communication circuit 230b may transmit data to, and receive data from, an external electronic device (e.g., the electronic devices 102 and 104 of FIG. 1, the server 108, or the electronic device 200a of FIG. 2A).
  • For example, the communication circuit 230b may transmit an image and/or a video to, or receive an image and/or a video from, an external electronic device.
  • For example, the communication circuit 230b may receive an artificial intelligence learning model from an external electronic device.
  • The memory 240b may store instructions executed by the processor 250b. According to an embodiment, the memory 240b may store an image and/or a video captured by the camera 220b, or an image and/or a video received from an external electronic device.
  • The processor 250b may control the overall operation of the electronic device 200b.
  • According to an embodiment, the processor 250b may capture an image through the camera 220b, and may compensate the captured image based on the AI learning model. For example, the processor 250b may receive an artificial intelligence learning model from an external electronic device (e.g., the electronic device 200a of FIG. 2A) through the communication circuit 230b, and may compensate the captured image based on the received artificial intelligence learning model.
  • According to various embodiments, the processor 250b may compensate an image based on an artificial intelligence learning model trained with a training data set including an input image and an output image generated using a virtual light source image, thereby compensating for light splitting and light blurring within the captured image and improving image quality.
  • According to various embodiments, some components of the electronic device 200b may be omitted, or the electronic device 200b may further include at least some of the components of the electronic device 101 illustrated in FIG. 1.
  • According to various embodiments, the electronic device 200b may include at least a part of the configuration of the electronic device 200a of FIG. 2A, or may perform at least a part of the operations of the electronic device 200a of FIG. 2A.
  • FIGS. 3A and 3B are examples of light source images according to various embodiments.
  • The first image 310 represents an image obtained when an actual light source is photographed using a camera (e.g., the camera 220b of an external electronic device (e.g., the electronic device 200b of FIG. 2B)).
  • The second image 320 represents a first input image including a virtual light source image generated by the electronic device (e.g., the electronic device 200a of FIG. 2A) according to an embodiment, and the third image 330 represents a first output image generated by the electronic device (e.g., the electronic device 200a of FIG. 2A) according to an embodiment.
  • The fourth image 340 represents an image obtained when an actual light source is photographed using a camera (e.g., the camera 220b of an external electronic device (e.g., the electronic device 200b of FIG. 2B)).
  • The fifth image 350 represents a first input image including a virtual light source image generated by the electronic device (e.g., the electronic device 200a of FIG. 2A) according to an embodiment, and the sixth image 360 represents a first output image generated by the electronic device (e.g., the electronic device 200a of FIG. 2A) according to an embodiment.
  • According to various embodiments, the electronic device may generate a training data set for the artificial intelligence learning model that includes the diffraction of light that may occur in a photograph of a real light source, such as the first image 310 and the fourth image 340.
  • For example, the electronic device may generate a virtual light source image and perform a first filter process to apply an effect similar to that of an image obtained by photographing a real light source, thereby generating input images such as the second image 320 and the fifth image 350.
  • The electronic device may also generate, based on the generated virtual light source image, first output images (i.e., target images) such as the third image 330 and the sixth image 360.
  • According to an embodiment, the electronic device may generate and provide a pair of a first input image and a first output image as training data for the AI learning model.
  • According to various embodiments, the external electronic device (e.g., the electronic device 200b of FIG. 2B) may compensate a captured image (e.g., the first image 310 or the fourth image 340) based on the artificial intelligence learning model trained by the electronic device (e.g., the electronic device 200a of FIG. 2A).
  • For example, the external electronic device may compensate the captured image based on the artificial intelligence learning model to compensate for information loss due to light blurring, light splitting, and/or light saturation, and may thereby acquire an image close to the target image (e.g., the third image 330 or the sixth image 360).
  • The actual light source images 310 and 340, the first input images 320 and 350, and the first output images 330 and 360 shown in FIGS. 3A and 3B are examples; the type of light source (e.g., an image including the light source) is not limited thereto, and various types and/or shapes of light sources may be compensated.
  • FIG. 4 is a diagram for explaining an image compensation operation of an electronic device according to an exemplary embodiment.
  • According to an embodiment, the electronic device may acquire an image through a camera (e.g., the camera module 180 of FIG. 1, or the camera 220b of the electronic device 200b of FIG. 2B).
  • According to an embodiment, the camera of the electronic device may be disposed under an area of a display (e.g., the display module 160 of FIG. 1 or the display 210b of FIG. 2B). In this case, light saturation, light bleeding, and/or light splitting may occur in the image due to diffraction of light by the pixels and/or wiring of the display.
  • According to an embodiment, the electronic device may pre-process the acquired image.
  • For example, the electronic device may perform preprocessing (e.g., adjustment of resolution, brightness, or size) on at least a portion of the acquired image, based at least on characteristics of the acquired image or characteristics of the display.
  • According to an embodiment, the electronic device may compensate the image based on the artificial intelligence learning model.
  • The AI learning model may be a learning model for compensating an image acquired through a camera (e.g., an under-display camera (UDC)).
  • The AI learning model may be trained based on a training data set (e.g., an input image and output image pair) generated by an external electronic device (not shown) (e.g., the electronic device 200a of FIG. 2A).
  • The training data set may include an input image and an output image including a virtual light source image generated by the electronic device.
  • According to an embodiment, the electronic device may receive the trained artificial intelligence learning model from the external electronic device and compensate the image based on the received artificial intelligence learning model, thereby generating an image in which loss due to light saturation, light blurring, and/or light splitting is compensated.
  • According to an embodiment, the electronic device may post-process the image.
  • For example, the electronic device may perform post-processing (e.g., adjustment of resolution, brightness, or color) on at least a portion of the compensated image, based at least on characteristics of the compensated image or characteristics of the display.
  • According to an embodiment, the electronic device may acquire the processed image.
  • The finally acquired image may be an image in which light blurring, light splitting, and/or light saturation caused by a light source included in the image is compensated.
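  • A minimal sketch of this capture, compensate, and post-process flow (not from the patent; preprocess, compensate, and postprocess below are hypothetical stand-ins for the stages described above):

```python
import numpy as np

def preprocess(image):
    # e.g., normalize brightness into [0, 1] before compensation.
    return image / max(float(image.max()), 1e-6)

def compensate(image, model):
    # model is the trained AI learning model; here, any callable image -> image.
    return model(image)

def postprocess(image):
    # e.g., clip and convert back to 8-bit for display.
    return (np.clip(image, 0.0, 1.0) * 255).astype(np.uint8)

captured = np.random.rand(256, 256, 3).astype(np.float32)  # stand-in for a UDC capture
identity_model = lambda x: x  # placeholder for the trained compensation model
processed = postprocess(compensate(preprocess(captured), identity_model))
```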
  • FIG. 5 is a diagram for explaining an operation of training an artificial intelligence learning model of an electronic device according to an exemplary embodiment.
  • According to an embodiment, the electronic device (e.g., the electronic device 101 of FIG. 1 or the electronic device 200a of FIG. 2A) may generate a training data set 517 for training an artificial intelligence learning model for image compensation.
  • The training data set 517 may include a pair of an input image and an output image.
  • According to an embodiment, the electronic device may acquire the first image 501.
  • The first image 501 may include an image pre-stored in the electronic device, an image received from an external electronic device, and/or an image acquired through crawling.
  • The first image 501 may include an arbitrary image for generating the training data set 517 (e.g., an input image and an output image).
  • According to an embodiment, the electronic device may generate a virtual light source image.
  • The virtual light source image may include a composite image generated by the electronic device, or an image obtained by photographing actual lighting with the electronic device or an external electronic device.
  • According to an embodiment, the electronic device may generate the virtual light source image based on a virtual light source generation parameter.
  • The virtual light source generation parameter may include at least one of the size, color, intensity, shape, brightness, position, light spread, and degree of light splitting of the virtual light source.
  • The virtual light source generation parameter may be designated according to a user input, or may be designated automatically by the electronic device.
  • According to various embodiments, the electronic device may generate a plurality of virtual light source images for one image (e.g., the first image) by applying a plurality of virtual light source generation parameters.
  • According to various embodiments, at least a part of the operation of generating the virtual light source image in operation 503 may be performed as at least a part of the operation of inserting the virtual light source image in operation 507 or 511.
  • For example, at least a part of the virtual light source image generating operation (e.g., disposing the virtual light source at a specific position in the image according to the position parameter) may be performed during the inserting operation.
  • According to an embodiment, in operation 505, the electronic device may perform a second filter process on the virtual light source image.
  • The second filter may be a filter for correcting the virtual light source image to resemble a light source photographed with a real camera.
  • According to an embodiment, the second filter may be set based on a light source design parameter.
  • The light source design parameter may include various values for adjusting the boundary of the virtual light source included in the virtual light source image, the light source shape (pattern) processing, the light spreading, and the light splitting into a form suitable for training the artificial intelligence learning model.
  • According to various embodiments, operation 505 is optional and may be omitted.
  • According to an embodiment, in operation 507, the electronic device may generate the first output image 509 by inserting the virtual light source image into the first image 501.
  • According to an embodiment, in operation 511, the electronic device may insert the virtual light source image into the first image 501.
  • According to various embodiments, operations 511 and 507 may be the same operation. For example, when operation 505 is omitted, operations 507 and 511 may be performed as a single operation.
  • According to an embodiment, in operation 513, the electronic device may perform a first filter process on the first image 501 into which the virtual light source image is inserted, to generate the first input image 515.
  • The first filter may be a filter for correcting the first image 501 into which the virtual light source image is inserted so that it corresponds to an actually photographed image.
  • For example, the first filter may be a filter for applying an effect (e.g., a diffraction effect) that appears when an image is captured with a real camera to the first image 501 into which the virtual light source image is inserted.
  • According to an embodiment, the electronic device may process the first filter using a point spread function (PSF) (e.g., a convolution operation).
  • According to various embodiments, the electronic device may acquire the PSF for processing the first filter based on measurement data or simulation.
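  • A minimal sketch of such a PSF-based first filter using FFT convolution (not from the patent; the Gaussian PSF below is an arbitrary stand-in for a measured or simulated UDC point spread function):

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=31, sigma=2.0):
    """Stand-in PSF; in practice it would be measured or simulated for the UDC."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()  # normalize so the total energy is preserved

def apply_first_filter(image, psf):
    """Convolve each color channel with the PSF (the 'first filter' operation)."""
    return np.stack(
        [fftconvolve(image[..., c], psf, mode="same") for c in range(image.shape[2])],
        axis=-1,
    )

composite = np.random.rand(256, 256, 3).astype(np.float32)  # first image + virtual light source
first_input = apply_first_filter(composite, gaussian_psf())
```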
  • According to various embodiments, the electronic device may generate various virtual light source images by applying a plurality of virtual light source generation parameters, and may thereby generate a plurality of input images (e.g., first input images) and output images (e.g., first output images) for one image (e.g., the first image).
  • According to an embodiment, the electronic device may generate the first input image 515 and the first output image 509 as the first training data set 517 of the AI learning model.
  • According to an embodiment, the electronic device may acquire the second image 519.
  • The second image 519 may include an image previously stored in the electronic device, an image received from an external electronic device, and/or an image acquired through crawling.
  • The second image 519 may be the same as or different from the first image 501.
  • According to an embodiment, the electronic device may generate the second image 519 as the second output image 521.
  • According to an embodiment, the electronic device may perform a first filter process on the second image 519 to generate the second input image 525.
  • The first filter may be a filter for correcting the second image 519 so that it corresponds to an actually photographed image.
  • For example, the first filter may be a filter for applying an effect (e.g., a diffraction effect) that appears when an image is actually captured by a camera to the second image 519.
  • According to various embodiments, the first filter applied in operation 513 and the first filter applied in operation 523 may be the same filter or different filters.
  • According to an embodiment, the electronic device may generate the second input image 525 and the second output image 521 as the second training data set 517 of the AI learning model.
  • According to an embodiment, the electronic device may provide the generated training data set 517 to the AI learning model.
  • According to various embodiments, the electronic device may train the AI learning model itself using the training data set 517 (e.g., the first training data set (the first input image and the first output image) and/or the second training data set (the second input image and the second output image)), or may train the AI learning model through an AI learning system external to the electronic device.
  • For example, the electronic device may provide the generated training data set 517 to an external artificial intelligence learning system.
  • For example, the electronic device may receive a training data set generated by an external electronic device from the external electronic device, and may train the AI learning model based on the received training data set.
  • According to various embodiments, the electronic device may train the artificial intelligence learning model in conjunction with an external electronic device. For example, when the electronic device forms an artificial intelligence learning system with an external electronic device, the electronic device and the external electronic device may work together to train the artificial intelligence learning model.
  • According to an embodiment, the image compensation model 529 may be generated by training the AI learning model based on the training data set 517 generated by the electronic device.
  • According to an embodiment, the electronic device may compensate an image captured by the camera based on the image compensation model 529.
  • For example, the electronic device may compensate for damage or deformation of the photographed image due to diffraction of light, and may compensate for light saturation, light spreading, or light splitting in the photographed image.
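  • The patent does not specify a network architecture or training framework; as one hedged illustration, a small convolutional network could be fit to (input, output) pairs in PyTorch roughly as follows:

```python
import torch
import torch.nn as nn

# Tiny stand-in compensation network; a practical model would be far larger.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Stand-in batch: degraded input images and clean target images, NCHW in [0, 1].
inputs = torch.rand(4, 3, 64, 64)
targets = torch.rand(4, 3, 64, 64)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)  # predict the compensated image
    loss.backward()
    optimizer.step()
```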
  • FIG. 6 is a diagram for explaining an operation of generating a virtual light source image of an electronic device according to an exemplary embodiment. According to an embodiment, the operations of FIG. 6 may be an example of the virtual light source image generating operation 503 of FIG. 5.
  • According to an embodiment, the electronic device may generate a raw image of the virtual light source.
  • For example, the electronic device may generate the raw image 660 including the virtual light source. Hereinafter, the description refers to one area 670 of the virtual light source in the image.
  • According to an embodiment, the electronic device may generate the raw image of the virtual light source in a Bayer domain.
  • For example, the electronic device may generate an initial image of the virtual light source in the Bayer domain, similar to an image captured using a real camera (e.g., a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) sensor).
  • In this case, the boundary of the virtual light source may be formed in the image as shown at 671.
  • According to an embodiment, the electronic device may apply a low-pass filter (LPF) at or below a specified band to the raw image of the virtual light source.
  • Through the LPF, the electronic device may reflect, in the virtual light source of the raw image, a blurring effect that may occur due to the lens and/or optical structure when photographing with an actual camera.
  • According to an embodiment, the electronic device may resize the low-pass-filtered image.
  • For example, the electronic device may resize the virtual light source to correspond to a desired size of the light source.
  • For example, the electronic device may resize the virtual light source to correspond to a specified distance, that is, to correspond to an actual light source located at a specific distance.
  • According to an embodiment, the electronic device may downsample at least a portion of the image of the virtual light source to process the edge of the virtual light source similarly to an actual light source.
  • For example, the electronic device may deform the edge of the virtual light source, which is formed in a shape close to a straight line, so that it becomes more similar to the shape of the edge of an actually photographed light source.
  • the electronic device may colorize or bayerize the resized image.
  • the electronic device may convert an image having a grayscale into three R, G, and B channels and set a color.
  • the electronic device may convert the resized image into an image 674 having RGB colors.
  • the electronic device may demosaic the colorized and Bayerized image. For example, the electronic device may perform demosaicing to generate a virtual light source image 675 having the three R, G, and B channels.
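  • The sketch below illustrates one way to colorize, Bayerize (an RGGB pattern is assumed), and bilinearly demosaic the resized image; the per-channel gains are illustrative assumptions, and the interpolation kernels are the classic bilinear ones for this mosaic.

```python
import numpy as np
from scipy.ndimage import convolve

def bayerize(gray, gains=(1.0, 0.9, 0.7)):
    """Apply per-channel gains to a grayscale image and sample it on an
    RGGB mosaic, returning a single-plane Bayer image."""
    bayer = np.zeros_like(gray)
    bayer[0::2, 0::2] = gains[0] * gray[0::2, 0::2]  # R sites
    bayer[0::2, 1::2] = gains[1] * gray[0::2, 1::2]  # G sites
    bayer[1::2, 0::2] = gains[1] * gray[1::2, 0::2]  # G sites
    bayer[1::2, 1::2] = gains[2] * gray[1::2, 1::2]  # B sites
    return bayer

def demosaic_bilinear(bayer):
    """Classic bilinear demosaic: interpolate each sparse channel plane."""
    r_mask = np.zeros_like(bayer); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros_like(bayer); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    r = convolve(bayer * r_mask, k_rb)
    g = convolve(bayer * g_mask, k_g)
    b = convolve(bayer * b_mask, k_rb)
    return np.stack([r, g, b], axis=-1)  # H x W x 3 RGB image
```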
  • the brightness of the virtual light source included in the virtual light source image generated by the electronic device may be set to be greater than a specified brightness reference.
  • For example, when an image is actually captured by a camera, pixel brightness values are capped at a saturation value due to performance limitations of the camera.
  • since the electronic device according to an embodiment generates an image including a virtual light source, the light source included in the virtual light source image may have a higher brightness value regardless of camera performance.
  • by using the virtual light source image as training data of the artificial intelligence learning model, it may be possible to generate a more sophisticated learning model than when only real images are used as training data.
  • the electronic device may generate virtual lighting for learning in a different manner in other domains as well as in the Bayer domain.
  • FIG. 7 is a diagram for explaining an operation of generating a training image set of an electronic device according to an exemplary embodiment.
  • the electronic device may generate a training data set of an artificial intelligence learning model (eg, an input image 715 and an output image 735, or an input image 725 and an output image 745) based on a virtual light source generation algorithm.
  • the electronic device may generate a virtual light source (711, 721), insert the virtual light source into an arbitrary image (713, 723), and perform a first filter process on the image into which the virtual light source is inserted to generate an input image (715, 725).
  • the first filter may be a filter for applying a phenomenon due to diffraction of light that may occur when photographing an actual light source to the virtual light source image.
  • the electronic device may generate a virtual light source (731, 741), perform a second filter process on the virtual light source (733, 743), and insert the second-filter-processed virtual light source into an arbitrary image to generate an output image (735, 745).
  • the electronic device may generate a pair of an input image and an output image (eg, the input image 715 and the output image 735, or the input image 725 and the output image 745).
  • a pair of an input image and an output image can be provided as training data of an artificial intelligence learning model for image compensation.
  • the electronic device may generate a first training data set (an input image 715 and an output image 735) for a virtual light source of a first type, or a second training data set (an input image 725 and an output image 745) for a virtual light source of a second type, as in the sketch below.
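  • A minimal sketch of this pairing follows; it is not the patent's exact filter chain. The additive compositing, the placement coordinates, and the choice of a Gaussian second filter are illustrative assumptions, and `first_psf` stands in for whatever diffraction PSF is used as the first filter.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import gaussian_filter

def insert_light_source(image, light, top, left):
    """Additively composite a small H x W x 3 light patch into an image."""
    out = image.copy()
    h, w = light.shape[:2]
    out[top:top + h, left:left + w] += light
    return out

def make_training_pair(image, light, first_psf, top=40, left=60):
    composited = insert_light_source(image, light, top, left)
    # First filter: convolve with a PSF so the input image shows the
    # diffraction artifacts the model must learn to remove.
    input_img = np.stack(
        [fftconvolve(composited[..., c], first_psf, mode="same")
         for c in range(3)], axis=-1)
    # Second filter: a mild blur so the target still looks photographic.
    soft_light = gaussian_filter(light, sigma=(1.0, 1.0, 0))
    output_img = insert_light_source(image, soft_light, top, left)
    return input_img, output_img
```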
  • because the electronic device may generate a training data set (an input image and an output image) including a virtual light source image, it may generate training data sets for various light sources and various parameters. For example, the electronic device may generate a virtually unlimited number of training data sets from a smaller amount of captured data and train the AI learning model on them, maximizing the training effect and producing a more sophisticated and accurate AI learning model.
  • the electronic device may generate a large amount of training data from a small number of source images by generating a plurality of virtual light source images and/or training data sets based on various parameters.
  • the electronic device may generate an input image and an output image set including a virtual light source based on the same source image, thereby minimizing disparity, noise changes according to the surrounding environment, movement during actual shooting, and/or errors and problems caused by limitations of the camera 220 (image sensor) (eg, the light saturation value).
  • An electronic device according to an embodiment may include a communication circuit; a memory; and a processor operatively coupled to the memory.
  • the memory may store instructions that, when executed, cause the processor to: acquire a first image; generate a virtual light source image; insert the virtual light source image into the first image to generate a first output image; perform a first filter process after inserting the virtual light source image into the first image to generate a first input image; and generate the first input image and the first output image as a first training data set of an artificial intelligence learning model for image compensation.
  • the instructions may cause the processor to generate the virtual light source image by generating a raw image of a virtual light source, low-pass filtering the raw image at or below a specified band, resizing the low-pass-filtered image, colorizing and Bayerizing the resized image, and demosaicing the colorized and Bayerized image.
  • the instructions may cause the processor to generate a raw image of the virtual light source in a Bayer domain.
  • the instructions may cause the processor to acquire a second image, generate the second image as a second output image, process the second image with the first filter to generate a second input image, and generate the second input image and the second output image as a second training data set of the artificial intelligence learning model.
  • the instructions may cause the processor to perform a second filter process on the virtual light source image and insert the second-filter-processed virtual light source image into the first image to generate the first output image.
  • the instructions may cause the processor to generate the virtual light source image based on at least one parameter among the size, color, intensity, shape, brightness, location, light spread, and degree of light splitting of the virtual light source.
  • the instructions may cause the processor to process the first filter using a point spread function (PSF) after inserting the virtual light source image into the first image.
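  • One physically motivated way to obtain such a PSF, sketched below under simplifying assumptions: the far-field diffraction pattern of an aperture is proportional to |FFT(aperture)|². The grid of circular openings is an illustrative stand-in for a display pixel layout, not the patent's design.

```python
import numpy as np

def udc_like_psf(size=128, pitch=16, hole_radius=5):
    """PSF of a periodic aperture; the repeated openings produce the
    light-splitting side lobes typical of shooting through a display."""
    y, x = np.mgrid[:size, :size]
    aperture = (np.hypot(x % pitch - pitch / 2,
                         y % pitch - pitch / 2) <= hole_radius)
    field = np.fft.fftshift(np.fft.fft2(aperture.astype(float)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()  # normalize so total energy is preserved
```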
  • the instructions may cause the processor to recognize a pattern from the training data set and train the AI learning model based on at least one of the pattern recognition algorithms that can be applied to actual data.
  • the instructions may cause the processor to train the artificial intelligence learning model based on machine learning including at least one of a neural network, a support vector machine (SVM), a hidden Markov model (HMM), and a Bayesian model.
  • An electronic device according to an embodiment may include a display; a camera disposed under at least a partial area of the display; a communication circuit; a memory; and a processor operatively coupled to the display, the camera, the communication circuit, and the memory.
  • the processor may capture an image using the camera and compensate the captured image based on an artificial intelligence learning model.
  • the AI learning model may be trained based on a training data set including an input image and an output image generated using a virtual light source image.
  • FIG. 8 is a flowchart of a method for training an artificial intelligence learning model of an electronic device according to an exemplary embodiment.
  • the electronic device may acquire a first image.
  • the first image may include an image pre-stored in the electronic device, an image received from an external electronic device, and/or an image acquired through crawling.
  • the first image may include arbitrary images for generating a training data set (eg, an input image and an output image).
  • the electronic device may generate a virtual light source image.
  • the processor may generate the virtual light source image based on at least one parameter among a size, color, intensity, shape, brightness, location, light spread, and degree of light splitting of the virtual light source.
  • the at least one parameter may be designated according to a user input or may be automatically designated by a processor.
  • the electronic device may generate a first output image by inserting the virtual light source image into the first image.
  • the first output image may be output data for training an artificial intelligence learning model.
  • the first output image may be a target image to be acquired through a trained artificial intelligence learning model when the first input image is input.
  • the electronic device may generate a first output image by performing a second filter process on the virtual light source image and inserting the second filtered virtual light source image into the first image.
  • the second filter may be a filter for correcting the virtual light source image similarly to the case of photographing the light source through a real camera.
  • the electronic device may insert the virtual light source image into the first image and perform first filter processing to generate the first input image.
  • the first filter may be a filter for correcting the first image in which the virtual light source image is inserted to correspond to the actual photographed image.
  • the first filter may be a filter for applying a diffraction effect appearing when an image is captured through a real camera to the first image in which the virtual light source image is inserted.
  • the first input image may be input data for training an artificial intelligence learning model.
  • the electronic device may generate and/or provide the first input image and the first output image as a training data set of an artificial intelligence learning model for compensating for an image captured by the camera.
  • the AI learning model may be trained to derive a first output image when a first input image is input.
  • the artificial intelligence learning model may be trained to derive a result image similar to the first output image.
  • the electronic device may train an artificial intelligence learning model by itself using a training data set (eg, the first training data set and/or the second training data set), or may train the artificial intelligence learning model through an artificial intelligence learning system external to the electronic device.
  • the electronic device may provide the generated training data set to an external artificial intelligence learning system through a communication circuit.
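  • A minimal training-loop sketch, assuming PyTorch is available; the tiny convolutional model and the `get_training_pairs()` loader are hypothetical placeholders, not the patent's architecture.

```python
import torch
import torch.nn as nn

# Toy compensation network: 3-channel image in, 3-channel image out.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for input_img, output_img in get_training_pairs():  # hypothetical loader
    # input_img: first-filtered composite; output_img: artifact-free target.
    # Both are assumed to be N x 3 x H x W float tensors.
    pred = model(input_img)
    loss = loss_fn(pred, output_img)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```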
  • the electronic device may generate a training data set that does not include a virtual light source image separately from the above operations. For example, the electronic device may acquire the second image, generate the second image as a second output image, and process the second image with a first filter to generate a second input image. For example, the electronic device may generate the second input image and the second output image as the second training data set of the AI learning model.
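  • A minimal sketch of this second pairing, reusing the illustrative `first_psf` from the earlier sketch: the real image itself serves as the target, and its first-filtered version serves as the input.

```python
import numpy as np
from scipy.signal import fftconvolve

def make_second_pair(real_image, first_psf):
    """Pair a real H x W x 3 image (target) with its first-filtered
    version (input); no virtual light source is inserted."""
    input_img = np.stack(
        [fftconvolve(real_image[..., c], first_psf, mode="same")
         for c in range(3)], axis=-1)
    return input_img, real_image
```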
  • in training an artificial intelligence learning model for image compensation, not only actual measurement data (eg, an image captured by the camera 220) but also synthetic data (eg, a composite image) may be used as a training data set (eg, a first input image and a first output image).
  • this may minimize disparity that may occur when an actually photographed image is used as training data for an artificial intelligence learning model, noise changes according to the surrounding environment, movement during actual shooting, and/or errors and problems due to limitations of the camera 220 (image sensor) (eg, the light saturation value).
  • FIG. 9 is a flowchart of an operation of generating a virtual light source image of an electronic device according to an exemplary embodiment.
  • the electronic device may generate a raw image of the virtual light source.
  • the processor may generate a raw image of the virtual light source in a Bayer domain.
  • the electronic device may low-pass filter the raw image.
  • through the LPF, the electronic device may apply to the virtual light source of the raw image a blurring effect that may occur due to the lens and/or optical structure when an image is captured with an actual camera.
  • the electronic device may resize the low-pass filter-processed image.
  • the electronic device may resize the virtual light source to correspond to an actual light source located at a specific distance.
  • the electronic device may downsample at least a portion of an image of the virtual light source to process an edge of the virtual light source similarly to an actual light source.
  • the electronic device may colorize and bayerize the resized image.
  • the electronic device may convert an image having a grayscale into three R, G, and B channels and set a color.
  • the electronic device may demosaic the colorized and Bayerized image. For example, the electronic device may perform demosaicing to finally generate a virtual light source image having the three R, G, and B channels.
  • At least some of the operations of generating the virtual light source image may be omitted.
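  • Chaining the steps of FIG. 9, a minimal end-to-end sketch reusing the helper functions from the earlier sketches (make_raw_light_source, bayerize, demosaic_bilinear); the sigma and scale values remain illustrative assumptions.

```python
from scipy.ndimage import gaussian_filter, zoom

def generate_virtual_light_source_image(size=256, sigma=2.0, scale=0.25):
    raw = make_raw_light_source(size)    # raw image in the Bayer domain
    lpf = gaussian_filter(raw, sigma)    # low-pass filtering
    resized = zoom(lpf, scale, order=1)  # resizing / edge softening
    bayer = bayerize(resized)            # colorize and Bayerize
    return demosaic_bilinear(bayer)      # demosaic to a 3-channel image
```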
  • FIG. 10 is a flowchart of an image compensation operation of an electronic device according to an exemplary embodiment.
  • the electronic device may capture an image using a camera.
  • the camera of the electronic device may be disposed below at least a partial area of the display of the electronic device.
  • the camera may be an under display camera (UDC).
  • the electronic device may compensate an image based on the artificial intelligence learning model.
  • the AI learning model may be a learning model for compensating an image.
  • the AI learning model may be trained by an external electronic device (eg, the electronic device 200a of FIG. 2A ) in the manner described with reference to FIGS. 8 to 9 .
  • the electronic device may receive an artificial intelligence learning model trained based on a training data set including an input image and an output image generated using a virtual light source image from an external electronic device.
  • the electronic device may compensate an image based on an artificial intelligence learning model received from an external electronic device.
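  • A minimal inference sketch, assuming PyTorch and the trained `model` from the earlier training sketch; `capture_image()` is a hypothetical stand-in for the device's UDC capture path.

```python
import torch

def compensate(model, image_tensor):
    """Run the trained compensation model on a captured 3 x H x W tensor."""
    model.eval()
    with torch.no_grad():
        return model(image_tensor.unsqueeze(0)).squeeze(0)

# compensated = compensate(model, capture_image())  # hypothetical capture
```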
  • a method for generating training data for an artificial intelligence learning model of an electronic device includes: acquiring a first image; generating a virtual light source image; inserting the virtual light source image into the first image to generate a first output image; inserting the virtual light source image into the first image and performing a first filter process to generate a first input image; and generating the first input image and the first output image as a first training data set of an artificial intelligence learning model for compensating for an image captured by the camera.
  • the generating of the virtual light source image may include: generating a raw image of the virtual light source; processing the raw image with a low-pass filter at or below a specified band; resizing the low-pass-filtered image; colorizing and Bayerizing the resized image; and demosaicing the colorized and Bayerized image to generate the virtual light source image.
  • the generating of the raw image of the virtual light source may include generating the raw image of the virtual light source in a Bayer domain.
  • the method may further include: acquiring a second image; generating the second image as a second output image; generating a second input image by processing the second image with a first filter; and generating the second input image and the second output image as a second training data set of the artificial intelligence learning model.
  • the generating of the first output image may include: performing second filter processing on the virtual light source image; and inserting the second-filter-processed virtual light source image into the first image to generate the first output image.
  • the generating of the virtual light source image may include generating the virtual light source image based on at least one parameter among the size, color, intensity, shape, brightness, location, light spread, and degree of light splitting of the virtual light source.
  • the generating of the first input image includes inserting the virtual light source image into the first image and then processing the first filter using a point spread function (PSF). It can include actions.
  • the AI learning model may be trained based on at least one of pattern recognition algorithms that can recognize a pattern from the training data set and apply it to actual data.
  • the AI learning model may be trained based on machine learning including at least one of a neural network, a support vector machine (SVM), a hidden Markov model (HMM), and a Bayesian model.
  • an image compensation method of an electronic device includes: capturing an image using a camera of the electronic device; and compensating for the captured image based on an artificial intelligence learning model.
  • the AI learning model may be trained based on a training data set including an input image and an output image generated using a virtual light source image.
  • FIG. 11 is a plan view of a display 1200 of an electronic device (eg, the electronic device 101 of FIG. 1 or the electronic device 200b of FIG. 2B ) according to an exemplary embodiment.
  • the display 1200 may include a flexible substrate.
  • the electronic device may include a deformable display such as a foldable display, a rollable display, an extendable display, and a flexible display.
  • the display 1200 of the electronic device may include a display area DA in which the plurality of pixels PX are located and a non-display area NA in which an image is not displayed.
  • the non-display area NA may be positioned to surround the display area DA.
  • the non-display area NA may not contact at least a portion of an edge of the display area DA.
  • the display area DA may include a first display area DA1 and a second display area DA2.
  • a camera module may be disposed under the first display area DA1.
  • at least a portion of the first display area DA1 of the display 1200 may overlap the camera module in the first direction Dr1.
  • the first display area DA1 may be determined by an angle of view of a camera module (not shown) and/or a distance between the display 1200 and the camera module.
  • the first display area DA1 may be a partial area of the display area DA through which light incident to the camera module passes.
  • the camera module may be a color camera that acquires a color image.
  • the camera module may include a first image sensor.
  • the first image sensor of the camera module may include a first color filter that transmits only light in a wavelength band representing the first color, a second color filter that transmits only light in a wavelength band representing the second color, and/or a third color filter that transmits only light in a wavelength band representing the third color.
  • the first image sensor of the camera module may extract a chromaticity signal decomposed into three primary colors.
  • the first color may be red, the second color may be green, and the third color may be blue.
  • the electronic device may further include an additional camera. For example, the electronic device according to an embodiment may further include a black-and-white camera and/or a camera including a time of flight (TOF) sensor.
  • the second display area DA2 may be the remaining area of the display area DA except for the first display area DA1.
  • the first display area DA1 may be surrounded by the second display area DA2 .
  • when viewed in the first direction Dr1, at least one pixel PX may be located in the first display area DA1 overlapping the camera module.
  • the pixel PX may be a minimum unit for displaying an image.
  • the pixel PX may display a color image including at least one of a red sub-pixel (not shown), a green sub-pixel (not shown), or a blue sub-pixel (not shown).
  • the plurality of pixels PX may include an RGBG pentile structure in which red subpixels and blue subpixels are alternately positioned in the same column and green subpixels are positioned in adjacent columns.
  • the present invention is not limited thereto, and the plurality of pixels PX may include an RGB stripe structure.
  • the structure of the electronic device is not limited to that illustrated in FIG. 11 , and the structure and arrangement of components of the electronic device may be changed.
  • FIG. 12 is a cross-sectional view of a first display area DA1 of an electronic device (eg, the electronic device 101 of FIG. 1 or the electronic device 200b of FIG. 2B ) according to an exemplary embodiment.
  • the electronic device may include a window 1250 , a display 1200 (eg, a display panel), and/or a camera module 1260 positioned below the display 1200 .
  • the display 1200 may include a substrate 1210 , a first light blocking member 1290 , a pixel circuit layer 1220 , an organic emission layer 1230 , and/or an encapsulation layer 1240 .
  • the substrate 1210 may be positioned on the front surface of the display area DA.
  • the substrate 1210 may be positioned in the first display area DA1 and the second display area DA2 .
  • the substrate 1210 may include a transparent material to allow light to pass therethrough.
  • the substrate 1210 may include polyethylene terephthalate (PET).
  • the substrate 1210 may include polyimide (PI) or glass.
  • a pixel circuit layer 1220 may be positioned on the substrate 1210 .
  • the pixel circuit layer 1220 may include wirings (not shown) that transmit a signal for driving a pixel, a plurality of transistors (not shown), and/or an interlayer insulating layer (not shown).
  • the pixel circuit layer 1220 may not be located in the transmission area TA of the first display area DA1.
  • the pixel circuit layer 1220 may not be disposed at a position that overlaps the transmissive area TA of the first display area DA1 when viewed in the first direction Dr1 .
  • the pixel circuit layer 1220 may be located in the second display area DA2 and may be located in the first display area DA1 excluding the transparent area TA.
  • the pixel circuit layer 1220 may be positioned in the first display area DA1 that does not overlap the transmissive area TA of the first display area DA1 when viewed in the first direction Dr1 . However, in some embodiments, a portion of the pixel circuit layer 1220 may be located in the transmission area TA of the display 1200 .
  • a first light blocking member 1290 positioned between the substrate 1210 and the pixel circuit layer 1220 may be positioned in the first display area DA1 .
  • the first light blocking member 1290 may float and may not be electrically connected to the pixel circuit layer 1220 .
  • the first light blocking member 1290 may include a plurality of first openings OP1 passing through the first light blocking member 1290 in the first direction Dr1 .
  • An organic emission layer 1230 may be positioned on the pixel circuit layer 1220 .
  • the organic emission layer 1230 may include a low molecular weight organic material or a high molecular weight organic material such as poly 3,4-ethylenedioxythiophene (PEDOT).
  • the organic light emitting layer 1230 may be a multilayer further including at least one of a hole injection layer (HIL), a hole transport layer (HTL), an electron transport layer (ETL), or an electron injection layer (EIL).
  • the organic light emitting layer 1230 may include a first organic light emitting layer 1231 emitting a first color, a second organic light emitting layer 1232 emitting a second color, and/or a third organic light emitting layer 1233 emitting a third color. The organic emission layer 1230 may not be located in the transmission area TA of the first display area DA1; that is, it may not be disposed at a position that overlaps the transmission area TA of the first display area DA1 when viewed in the first direction Dr1. The organic emission layer 1230 may be positioned in the portion of the first display area DA1 that does not overlap the transmissive area TA when viewed in the first direction Dr1.
  • the organic emission layer 1230 may be located in the second display area DA2 and may be located in the first display area DA1 except for the transmission area TA.
  • the density of the organic emission layer 1230 in the first display area DA1 (the area occupied by the organic emission layer within a unit area) may be different from the density of the organic emission layer in the second display area DA2.
  • the density of the organic emission layer 1230 in the first display area DA1 may be lower than the density of the organic emission layer in the second display area DA2.
  • An encapsulation layer 1240 may be positioned on the organic emission layer 1230 .
  • the encapsulation layer 1240 may cover and seal the pixel circuit layer 1220 and the organic emission layer 1230 .
  • the encapsulation layer 1240 may seal the pixel circuit layer 1220 and the organic light emitting layer 1230 to block the inflow of external moisture and oxygen.
  • the encapsulation layer 1240 may include a plurality of layers.
  • the encapsulation layer 1240 may include a composite film including both an inorganic film and an organic film.
  • the encapsulation layer 1240 may include a triple layer in which an inorganic layer, an organic layer, and an inorganic layer are sequentially disposed.
  • a window 1250 may be positioned on the encapsulation layer 1240 .
  • the window 1250 may be transparent and may transmit light.
  • the window 1250 may include tempered glass, reinforced plastic, or a flexible polymer material.
  • the window 1250 may be attached to the display 1200 by a transparent adhesive layer (not shown).
  • the window 1250 may include a plurality of layers.
  • the electronic device according to an embodiment may further include a coating layer (or a protective layer) positioned on one surface of the window 1250 .
  • the display 1200 may further include a color filter layer (not shown) positioned on the encapsulation layer 1240 .
  • the camera module 1260 may be located below the display 1200 . When viewed in the first direction Dr1 (or when viewed from above the display 1200 ), the camera module 1260 may overlap the first display area DA1 of the display 1200 . The camera module 1260 may acquire an image by using light passing through the first display area DA1 of the display 1200 . In an embodiment, the camera module 1260 may be a color camera that acquires a color image.
  • the camera module 1260 may include a first image sensor, and the first image sensor may include a color filter.
  • the structure of the electronic device is not limited to that illustrated in FIG. 12 , and the structure and arrangement of components of the electronic device (eg, display) may be changed.
  • the electronic device may be one of various types of devices.
  • the electronic device may include, for example, a portable communication device (eg, a smart phone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance device.
  • terms such as "first" and "second" may be used simply to distinguish an element from other elements in question, and do not limit the elements in other aspects (eg, importance or order). When one (eg, first) component is referred to as "coupled" or "connected" to another (eg, second) component, with or without the terms "functionally" or "communicatively", it means that the one component can be connected to the other component directly (eg, by wire), wirelessly, or through a third component.
  • the term "module" used in various embodiments of this document may include a unit implemented in hardware, software, or firmware, and may be used interchangeably with terms such as logic, logic block, component, or circuit.
  • a module may be an integrally formed part or a minimum unit or a part of the part that performs one or more functions.
  • the module may be implemented in the form of an application-specific integrated circuit (ASIC).
  • various embodiments of this document may be implemented as software (eg, the program 140) including one or more instructions stored in a storage medium readable by a machine (eg, the electronic device 101). For example, a processor (eg, the processor 120) of the device (eg, the electronic device 101) may call at least one of the one or more instructions stored in the storage medium and execute it.
  • the one or more instructions may include code generated by a compiler or code executable by an interpreter.
  • the device-readable storage medium may be provided in the form of a non-transitory storage medium.
  • 'non-transitory' only means that the storage medium is a tangible device and does not include a signal (eg, an electromagnetic wave); this term does not distinguish between a case where data is semi-permanently stored in the storage medium and a case where data is temporarily stored.
  • the method according to various embodiments disclosed in this document may be provided as included in a computer program product.
  • Computer program products may be traded between sellers and buyers as commodities.
  • the computer program product may be distributed in the form of a machine-readable storage medium (eg, compact disc read only memory (CD-ROM)), or may be distributed (eg, downloaded or uploaded) online via an application store (eg, Play Store™) or directly between two user devices (eg, smartphones).
  • a part of the computer program product may be temporarily stored or temporarily generated in a machine-readable storage medium such as a memory of a server of a manufacturer, a server of an application store, or a relay server.
  • each component (eg, module or program) of the above-described components may include a singular or a plurality of entities, and some of the plurality of entities may be separately disposed in other components.
  • one or more components or operations among the above-described corresponding components may be omitted, or one or more other components or operations may be added.
  • according to various embodiments, a plurality of components (eg, modules or programs) may be integrated into a single component.
  • the integrated component may perform one or more functions of each component of the plurality of components identically or similarly to those performed by the corresponding component among the plurality of components prior to the integration.
  • operations performed by a module, program, or other component may be executed sequentially, in parallel, repeatedly, or heuristically; one or more of the operations may be executed in a different order or omitted; or one or more other operations may be added.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

According to an embodiment, the present document discloses an electronic device that may include a communication circuit, a memory, and a processor operatively connected to the memory. According to an embodiment, the memory may store instructions that, when executed, cause the processor to: obtain a first image; generate a virtual light source image; insert the virtual light source image into the first image to generate a first output image; after inserting the virtual light source image into the first image, perform a first filter process to generate a first input image; and generate the first input image and the first output image as a first training data set of an artificial intelligence learning model for image compensation. Various other embodiments identified by the description are also possible.
PCT/KR2021/007194 2020-07-29 2021-06-09 Electronic device and method for generating training data of an artificial intelligence learning model by the electronic device Ceased WO2022025413A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020200094808A KR20220014764A (ko) 2020-07-29 2020-07-29 Electronic device and method for generating training data of an artificial intelligence learning model of the electronic device
KR10-2020-0094808 2020-07-29

Publications (1)

Publication Number Publication Date
WO2022025413A1 true WO2022025413A1 (fr) 2022-02-03

Family

ID=80036495

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/007194 Ceased WO2022025413A1 (fr) 2020-07-29 2021-06-09 Electronic device and method for generating training data of an artificial intelligence learning model by the electronic device

Country Status (2)

Country Link
KR (1) KR20220014764A (fr)
WO (1) WO2022025413A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117082359A (zh) * 2023-10-16 2023-11-17 荣耀终端有限公司 Image processing method and related device
US11954833B2 (en) 2021-04-02 2024-04-09 Samsung Electronics Co., Ltd Electronic device for supporting machine learning-based image processing
US12149836B2 (en) 2021-11-03 2024-11-19 Samsung Electronics Co., Ltd. Electronic device for improving image quality

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11721001B2 (en) 2021-02-16 2023-08-08 Samsung Electronics Co., Ltd. Multiple point spread function based image reconstruction for a camera behind a display
US11722796B2 (en) 2021-02-26 2023-08-08 Samsung Electronics Co., Ltd. Self-regularizing inverse filter for image deblurring
US12393765B2 (en) 2021-08-06 2025-08-19 Samsung Electronics Co., Ltd. Automating search for improved display structure for under-display camera systems
US12216277B2 (en) 2021-10-14 2025-02-04 Samsung Electronics Co., Ltd. Optical element for deconvolution
KR102388581B1 (ko) 2022-02-25 2022-04-21 지성산업 주식회사 NFT-based image processing method applying artificial intelligence deep learning, and apparatus therefor
US12482075B2 (en) 2022-06-08 2025-11-25 Samsung Electronics Co., Ltd. Restoring images using deconvolution
KR102769715B1 (ko) * 2022-07-20 2025-02-19 동국대학교 산학협력단 Apparatus and method for gender recognition robust to light blooming
WO2024085414A1 * 2022-10-17 2024-04-25 삼성전자주식회사 Electronic device and control method therefor
KR102519094B1 * 2022-10-18 2023-04-14 주식회사 페블러스 Electronic device providing a virtual environment for synthetic data generation, operating method of the electronic device, and system including the electronic device
KR20240054111A * 2022-10-18 2024-04-25 삼성전자주식회사 Electronic device, operating method of electronic device, and image editing system
KR20240054154A * 2022-10-18 2024-04-25 주식회사 페블러스 Method for evaluating the quality of synthetic data and calculating its contribution, and electronic device and system performing such a method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018098582A (ja) * 2016-12-09 2018-06-21 キヤノン株式会社 Image processing apparatus and method, and imaging apparatus
US20180253869A1 (en) * 2017-03-02 2018-09-06 Adobe Systems Incorporated Editing digital images utilizing a neural network with an in-network rendering layer
KR20200052460A (ko) * 2018-10-31 2020-05-15 한국전력공사 Image-based information prediction system and method therefor
JP6696095B1 (ja) * 2018-11-07 2020-05-20 SZ DJI Technology Co., Ltd. Image processing device, imaging device, image processing method, and program
US20200160593A1 (en) * 2018-11-16 2020-05-21 Nvidia Corporation Inverse rendering of a scene from a single image

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11954833B2 (en) 2021-04-02 2024-04-09 Samsung Electronics Co., Ltd Electronic device for supporting machine learning-based image processing
US12149836B2 (en) 2021-11-03 2024-11-19 Samsung Electronics Co., Ltd. Electronic device for improving image quality
CN117082359A (zh) * 2023-10-16 2023-11-17 荣耀终端有限公司 Image processing method and related device
CN117082359B (zh) * 2023-10-16 2024-04-19 荣耀终端有限公司 Image processing method and related device

Also Published As

Publication number Publication date
KR20220014764A (ko) 2022-02-07


Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21850940; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 21850940; Country of ref document: EP; Kind code of ref document: A1)