
WO2025095369A1 - Electronic device including artificial intelligence model for detecting free air on basis of abdominal ct image, and training method thereof - Google Patents


Info

Publication number
WO2025095369A1
Authority
WO
WIPO (PCT)
Prior art keywords
artificial intelligence
free air
intelligence model
images
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/KR2024/014996
Other languages
French (fr)
Korean (ko)
Inventor
김동진
김상욱
이중협
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industry Academic Cooperation Foundation of Catholic University of Korea
Original Assignee
Industry Academic Cooperation Foundation of Catholic University of Korea
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industry Academic Cooperation Foundation of Catholic University of Korea
Publication of WO2025095369A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/02 Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B 6/03 Computed tomography [CT]
    • A61B 6/032 Transmission computed tomography [CT]
    • A61B 6/50 Apparatus or devices for radiation diagnosis specially adapted for specific body parts; specially adapted for specific clinical applications
    • A61B 6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B 6/5217 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/40 ICT specially adapted for therapies or health-improving plans relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining for computer-aided diagnosis, e.g. based on medical expert systems
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Definitions

  • the present invention relates to an electronic device including an artificial intelligence model and a learning method thereof, and more particularly, to an electronic device including an artificial intelligence model for free air detection based on abdominal CT images and a learning method thereof.
  • Free air refers to black spots or regions of abnormal gray density, distinct from the hazy gray areas that normally appear on CT scans or X-rays; in particular, it refers to the presence of air, outside the organ regions, in areas where air should not exist, where it appears as black spots.
  • Determining the presence or absence of free air in a CT image can vary with the skill level of the medical staff; in particular, in crowded emergency rooms, or at night when CT image analysis is frequently required, accurate free air detection is practically difficult.
  • the present invention provides an electronic device including an artificial intelligence model for detecting free air based on an abdominal CT image, which detects free air located outside an organ region by annotating an abdominal CT image and using a U-NET-based artificial intelligence model, and a learning method thereof.
  • An electronic device including an artificial intelligence model for detecting free air based on abdominal CT images comprises at least one processor; and a memory storing a computer program executed by the at least one processor, wherein the at least one processor is configured to acquire a CT image of the abdomen of each patient, perform preprocessing to identify a free air region on the acquired CT image, train an artificial intelligence model using the preprocessed CT image, and predict, for an input CT image, the presence or absence of an abdominal perforation and its region using the trained artificial intelligence model.
  • the learning method of an electronic device including an artificial intelligence model for detecting free air based on abdominal CT images may include the steps of: acquiring a CT image for the abdomen of each patient; performing preprocessing to identify a free air region for the acquired CT image; training an artificial intelligence model using the preprocessed CT image; and predicting the presence or absence of an abdominal perforation and the region for an input CT image using the trained artificial intelligence model.
  • a computer program may be further provided that is stored in a medium so that a method for implementing the present invention is performed on a computer.
  • preprocessing is performed to facilitate free air detection in abdominal CT images
  • U-NET-based image segmentation is performed to improve the sensitivity and specificity of free air detection.
  • FIG. 1 is a block diagram schematically illustrating the configuration of an electronic device including an artificial intelligence model for detecting free air based on abdominal CT images according to the present invention.
  • FIG. 2 is a schematic diagram illustrating the configuration of an artificial intelligence model of an electronic device including an artificial intelligence model for detecting free air based on abdominal CT images according to the present invention.
  • FIG. 3 is a schematic diagram illustrating a segmentation process of an electronic device including an artificial intelligence model for detecting free air based on abdominal CT images according to the present invention.
  • FIG. 4 is a schematic diagram illustrating an annotation process of an electronic device including an artificial intelligence model for detecting free air based on abdominal CT images according to the present invention.
  • FIG. 5 illustrates images before and after filtering preprocessing of an electronic device including an artificial intelligence model for detecting free air based on abdominal CT images according to the present invention.
  • FIG. 6 illustrates a process for calculating sensitivity and specificity of an electronic device including an artificial intelligence model for detecting free air based on abdominal CT images according to the present invention.
  • FIG. 7 is a flow chart illustrating a learning method of an electronic device including an artificial intelligence model for detecting free air based on abdominal CT images according to the present invention.
  • FIGS. 8 and 9 illustrate a process of providing a guide for an electronic device according to one embodiment of the present invention.
  • FIG. 10 illustrates a process for determining whether an electronic device is free air according to one embodiment of the present invention.
  • FIGS. 11 to 13 illustrate a process in which an electronic device detects free air by extracting multiple key images from a segmented image according to one embodiment of the present invention.
  • FIGS. 14 to 21 illustrate the results and accuracy of the processes performed by an electronic device according to one embodiment of the present invention in FIGS. 11 to 13.
  • the same reference numerals refer to the same components.
  • the present invention does not describe all elements of the embodiments, and any content that is general in the technical field to which the present invention belongs or that is redundant between the embodiments is omitted.
  • the terms 'part, module, element, block' used in the specification can be implemented by software or hardware, and according to the embodiments, a plurality of 'parts, modules, elements, blocks' can be implemented as a single component, or a single 'part, module, element, block' can include a plurality of components.
  • first, second, etc. are used to distinguish one component from another, and the components are not limited by the aforementioned terms.
  • The identifiers for each step are used for convenience of explanation and do not describe the order of the steps; each step may be performed in a different order than specified unless the context clearly indicates a specific order.
  • The device according to the present invention includes all kinds of devices that can perform computational processing and provide results to a user.
  • the device according to the present invention may include all of a computer, a server device, and a portable terminal, or may be in the form of any one of them.
  • the computer may include, for example, a notebook, desktop, laptop, tablet PC, slate PC, etc. equipped with a web browser.
  • the above server device is a server that processes information by communicating with an external device, and may include an application server, a computing server, a database server, a file server, a game server, a mail server, a proxy server, and a web server.
  • the above portable terminal may include, for example, all kinds of handheld-based wireless communication devices such as a PCS (Personal Communication System), GSM (Global System for Mobile communications), PDC (Personal Digital Cellular), PHS (Personal Handyphone System), PDA (Personal Digital Assistant), IMT (International Mobile Telecommunication)-2000, CDMA (Code Division Multiple Access)-2000, W-CDMA (W-Code Division Multiple Access), WiBro (Wireless Broadband Internet) terminal, a smart phone, and a wearable device such as a watch, a ring, a bracelet, an anklet, a necklace, glasses, contact lenses, or a head-mounted-device (HMD).
  • the function related to artificial intelligence is operated through a processor and a memory.
  • the processor may be composed of one or more processors.
  • one or more processors may be a general-purpose processor such as a CPU, an AP, a DSP (Digital Signal Processor), a graphics-only processor such as a GPU, a VPU (Vision Processing Unit), or an artificial intelligence-only processor such as an NPU.
  • One or more processors control to process input data according to a predefined operation rule or artificial intelligence model stored in the memory.
  • the artificial intelligence-only processor may be designed with a hardware structure specialized for processing a specific artificial intelligence model.
  • The predefined operation rules or artificial intelligence models are characterized by being created through learning.
  • Being created through learning means that a basic artificial intelligence model is trained on a plurality of training data by a learning algorithm, thereby creating predefined operation rules or an artificial intelligence model set to perform a desired characteristic (or purpose).
  • Such learning may be performed in the device itself on which the artificial intelligence according to the present invention is performed, or may be performed through a separate server and/or system.
  • Examples of the learning algorithm include supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning, but are not limited to the examples described above.
  • the artificial intelligence model may be composed of a plurality of neural network layers.
  • Each of the plurality of neural network layers has a plurality of weight values, and performs a neural network operation through an operation between the operation result of the previous layer and the plurality of weights.
  • the plurality of weights of the plurality of neural network layers may be optimized by the learning result of the artificial intelligence model. For example, the plurality of weights may be updated so that a loss value or a cost value obtained from the artificial intelligence model is reduced or minimized during the learning process.
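The weight-update principle described above can be illustrated with a one-parameter sketch. This is a generic gradient-descent example, not code from the patent; the loss function and learning rate are chosen purely for illustration.

```python
# Minimize loss(w) = (w - 3)**2 by repeatedly stepping in the
# direction that reduces the loss value, as described above.
w = 0.0
learning_rate = 0.1
for _ in range(100):
    grad = 2 * (w - 3)          # derivative of the loss with respect to w
    w -= learning_rate * grad   # update so the loss is reduced
print(round(w, 4))  # → 3.0
```

Training a neural network applies the same update rule simultaneously to every weight in every layer, with the gradients obtained by backpropagation.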
  • The artificial neural network may include a deep neural network (DNN); examples thereof include, but are not limited to, a convolutional neural network (CNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), and deep Q-networks.
  • the processor can implement artificial intelligence.
  • Artificial intelligence refers to a machine learning method based on an artificial neural network that imitates human neurons (biological neurons) to enable a machine to learn.
  • According to the learning method, the methodology of artificial intelligence can be divided into supervised learning, in which input data and output data are provided together as training data so that the answer (output data) to the problem (input data) is determined; unsupervised learning, in which only input data is provided without output data so that the answer (output data) to the problem (input data) is not determined; and reinforcement learning, in which a reward is given from the external environment whenever an action is taken in the current state, and learning proceeds in the direction that maximizes this reward.
  • artificial intelligence methodologies can be categorized by architecture, which is the structure of the learning model.
  • architectures of widely used deep learning technologies can be categorized into convolutional neural networks (CNNs), recurrent neural networks (RNNs), transformers, and generative adversarial networks (GANs).
  • the present device and system may include an artificial intelligence model.
  • the artificial intelligence model may be one artificial intelligence model or may be implemented as multiple artificial intelligence models.
  • the artificial intelligence model may be composed of a neural network (or an artificial neural network) and may include a statistical learning algorithm that mimics biological neurons in machine learning and cognitive science.
  • a neural network may refer to a model in which artificial neurons (nodes) that form a network by combining synapses change the strength of the synapses through learning and have problem-solving capabilities.
  • Neurons of a neural network may include a combination of weights or biases.
  • a neural network may include one or more layers composed of one or more neurons or nodes.
  • the device may include an input layer, a hidden layer, and an output layer.
  • a neural network that constitutes the device may infer a desired result (output) from an arbitrary input (input) by changing the weights of neurons through learning.
  • the processor can generate a neural network, train (or learn) a neural network, perform a calculation based on received input data, generate an information signal based on the result of the calculation, or retrain the neural network.
  • The models of the neural network can include various types such as CNN (Convolutional Neural Network) models such as GoogleNet, AlexNet, and VGG Network, R-CNN (Region-based Convolutional Neural Network), RPN (Region Proposal Network), RNN (Recurrent Neural Network), S-DNN (Stacking-based Deep Neural Network), S-SDNN (State-Space Dynamic Neural Network), Deconvolution Network, DBN (Deep Belief Network), RBM (Restricted Boltzmann Machine), Fully Convolutional Network, LSTM (Long Short-Term Memory) Network, and Classification Network, but are not limited thereto.
  • the processor can include one or more processors for performing calculations according to the models of the neural network.
  • The neural network may include a deep neural network.
  • Neural networks include CNN (Convolutional Neural Network), RNN (Recurrent Neural Network), perceptron, multilayer perceptron, FF (Feed Forward), RBF (Radial Basis Function Network), DFF (Deep Feed Forward), LSTM (Long Short-Term Memory), GRU (Gated Recurrent Unit), AE (Auto Encoder), VAE (Variational Auto Encoder), DAE (Denoising Auto Encoder), SAE (Sparse Auto Encoder), MC (Markov Chain), HN (Hopfield Network), BM (Boltzmann Machine), RBM (Restricted Boltzmann Machine), DBN (Deep Belief Network), DCN (Deep Convolutional Network), DN (Deconvolutional Network), DCIGN (Deep Convolutional Inverse Graphics Network), GAN (Generative Adversarial Network), LSM (Liquid State Machine), and ELM (Extreme Learning Machine), and it will be understood by those skilled in the art that any neural network may be used.
  • The processor may be configured to perform CNN (Convolutional Neural Network) models such as GoogleNet, AlexNet, and VGG Network, R-CNN (Region-based Convolutional Neural Network), RPN (Region Proposal Network), RNN (Recurrent Neural Network), S-DNN (Stacking-based Deep Neural Network), S-SDNN (State-Space Dynamic Neural Network), Deconvolution Network, DBN (Deep Belief Network), RBM (Restricted Boltzmann Machine), Fully Convolutional Network, LSTM (Long Short-Term Memory) Network, Classification Network, Generative Modeling, eXplainable AI, Continual AI, Representation Learning, AI for Material Design, BERT, SP-BERT, MRC/QA for natural language processing, Text Analysis, Dialog System, GPT-3, GPT-4, Visual Analytics for vision processing, Visual Understanding, Video Synthesis, ResNet for data intelligence, Anomaly Detection, Prediction, Time-Series Forecasting, and Optimization, but is not limited thereto.
  • FIG. 1 is a block diagram schematically illustrating the configuration of an electronic device including an artificial intelligence model for detecting free air based on abdominal CT images according to the present invention.
  • Hereinafter, an electronic device including an artificial intelligence model for detecting free air based on abdominal CT images according to the present invention, and its learning method, will be described with reference to FIGS. 2 to 21.
  • the electronic device (100) may include a processor (110), a memory (120), a communication unit (130), and an input/output interface (140).
  • the internal components that the electronic device (100) may include are not limited thereto.
  • The electronic device (100) of the present invention may perform the function of the processor (110) through a separate processing server or cloud server instead of the processor (110).
  • the processor (110) may be implemented to perform operations of the electronic device (100) using a memory (120) that stores data on an algorithm for controlling operations of components within the electronic device (100) or a program that reproduces the algorithm, and the data stored in the memory (120).
  • the processor (110) and the memory (120) may be implemented as separate chips.
  • the processor (110) and the memory (120) may be implemented as a single chip.
  • the memory (120) can store data supporting various functions of the electronic device (100) and a program for the operation of the processor (110), can store input/output data (e.g., images, videos, etc.), and can store a plurality of application programs (or applications) run on the electronic device (100), data for the operation of the electronic device (100), and commands. At least some of these application programs can be downloaded from an external server via wireless communication.
  • the memory (120) may include at least one type of storage medium among a flash memory type, a hard disk type, an SSD (Solid State Disk type), an SDD (Silicon Disk Drive type), a multimedia card micro type, a card type memory (for example, an SD or XD memory, etc.), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, and an optical disk.
  • the memory may be a database that is separate from the electronic device (100) but connected by wire or wirelessly.
  • the communication unit (130) may include one or more components that enable communication with an external device, and may include, for example, at least one of a broadcast receiving module, a wired communication module, a wireless communication module, a short-range communication module, and a location information module.
  • the wired communication module may include various wired communication modules such as a Local Area Network (LAN) module, a Wide Area Network (WAN) module, or a Value Added Network (VAN) module, as well as various cable communication modules such as a Universal Serial Bus (USB), a High Definition Multimedia Interface (HDMI), a Digital Visual Interface (DVI), RS-232 (recommended standard232), power line communication, or plain old telephone service (POTS).
  • the wireless communication module may include a wireless communication module that supports various wireless communication methods such as GSM (global System for Mobile Communication), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), UMTS (universal mobile telecommunications system), TDMA (Time Division Multiple Access), LTE (Long Term Evolution), 4G, 5G, and 6G, in addition to a WiFi module and a Wireless broadband module.
  • the short-range communication module is for short-range communication and can support short-range communication using at least one of BluetoothTM, RFID (Radio Frequency Identification), Infrared Data Association (IrDA), UWB (Ultra Wideband), ZigBee, NFC (Near Field Communication), Wi-Fi (Wireless-Fidelity), Wi-Fi Direct, and Wireless USB (Wireless Universal Serial Bus) technologies.
  • the input/output interface (140) serves as a passage for various types of external devices connected to the electronic device (100) of the present invention.
  • the input/output interface (140) may include at least one of a wired/wireless headset port, an external charger port, a wired/wireless data port, a memory card port, a port for connecting a device equipped with an identification module (SIM), an audio I/O (Input/Output) port, a video I/O (Input/Output) port, and an earphone port.
  • the electronic device (100) of the present invention may perform appropriate control related to an external device connected to the input/output interface (140).
  • Each component illustrated in Figure 1 represents software and/or hardware components such as a Field Programmable Gate Array (FPGA) and an Application Specific Integrated Circuit (ASIC).
  • an electronic device (100) including an artificial intelligence model for detecting free air based on abdominal CT images may include at least one processor (110) and a memory (120) storing a computer program executed by the at least one processor (110).
  • the at least one processor (110) may be configured to acquire a CT image of the abdomen for each patient, perform preprocessing to identify a free air region for the acquired CT image, train an artificial intelligence model using the preprocessed CT image, and predict the presence or absence of an abdominal perforation and the region for the input CT image using the trained artificial intelligence model.
  • the artificial intelligence model according to the present invention may include an encoder and a decoder (200).
  • Each layer of the encoder and decoder (200) may be composed of a residual conv block (220).
  • The residual conv block (220) is a block that additionally learns the fine-grained information of each block by adding a skip connection to the convolution block (210), which consists of a 3x3 convolution layer, a batch normalization layer, and a ReLU layer.
  • the residual conv block (220) is a CNN (Convolution Neural Network) block equipped with multiple skip connections.
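As a rough illustration, the conv block (210) with its skip connection can be sketched in plain NumPy. This is a simplified single-channel, single-filter version: the two-convolution layout, filter shapes, and per-image batch normalization are assumptions for illustration, not details from the patent.

```python
import numpy as np

def conv3x3(x, w):
    # "same" 3x3 convolution (cross-correlation) with zero padding
    h, wd = x.shape
    xp = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.sum(xp[i:i + 3, j:j + 3] * w)
    return out

def batch_norm(x, eps=1e-5):
    # normalize to zero mean and unit variance
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def residual_conv_block(x, w1, w2):
    # conv -> batch norm -> ReLU, twice, then add the skip connection
    h = np.maximum(batch_norm(conv3x3(x, w1)), 0.0)
    h = np.maximum(batch_norm(conv3x3(h, w2)), 0.0)
    return h + x  # the skip connection preserves the input's fine detail
```

The final addition is what makes the block "residual": the convolutional path only has to learn a correction on top of the input, which is passed through unchanged.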
  • An abdominal CT image (10) is input to an encoder and decoder (200) composed of a residual conv block (220), and can be input to a U-NET based segmentation model (300) trained to detect free air in the output CT image (11).
  • the U-NET based segmentation model (300) trained to detect free air in the output CT image (11) can be referred to as FA-NET.
  • The U-NET-based segmentation model (300) can compare the free air area detected in the output CT image (11) with the ground truth (12) in which the doctor marked the free air, and calculate the degree of overlap between the two images as a Dice score (Sørensen–Dice coefficient).
  • If the two images match completely, the Dice score can be 1, and if there is no match at all, the Dice score can be 0.
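The overlap measure can be written directly from its definition. This is a generic Sørensen–Dice implementation for binary masks, not code from the patent:

```python
import numpy as np

def dice_score(pred, gt):
    # Dice = 2|A ∩ B| / (|A| + |B|); 1 for a perfect match, 0 for no overlap
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated here as a perfect match
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

For example, a predicted mask covering one of the two ground-truth pixels scores 2·1/(1+2) = 2/3.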
  • The output CT image (11) can be obtained by segmenting it with the 3D slicer (230), as shown in FIG. 3, and the CT image (11) can include at least three key images.
  • FIG. 3 shows the image regions segmented by the 3D slicer (230).
  • Three to five key images can be extracted from the segmented CT images of each patient. If the artificial intelligence model determines that free air is present in at least a preset number (for example, two or three) of the three to five extracted images, it can be determined that free air exists; if free air is detected in fewer images than that number, it can be determined that there is no free air.
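The per-patient decision then reduces to a simple vote over the key-image predictions. The threshold of two detections below is an assumption chosen for illustration; the patent only describes a cutoff within the three to five key images:

```python
def study_has_free_air(key_image_flags, min_detections=2):
    # key_image_flags: one boolean per key image (3-5 per patient),
    # True when the model found free air in that image;
    # min_detections is an assumed per-study vote threshold
    return sum(key_image_flags) >= min_detections
```

Voting over several key images makes the per-patient decision more robust than trusting any single slice.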
  • The processor (110) of the electronic device (100) may annotate, as free air, a region determined based on a preset organ region and the location of a spot.
  • The processor (110) of the electronic device (100) can perform preprocessing by annotating that a spot (32) located within the predetermined organ region in each of the at least three key images is not free air, and that a spot (31) located outside the predetermined organ region is free air.
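This annotation rule amounts to masking the candidate spots with the organ region. A schematic NumPy version follows; the mask names are placeholders, not the patent's data format:

```python
import numpy as np

def label_free_air(spot_mask, organ_mask):
    # a spot inside the organ region is not free air (cf. spot 32);
    # a spot outside the organ region is labeled free air (cf. spot 31)
    return np.logical_and(spot_mask, ~organ_mask)
```

The result keeps only the air-like spots that fall outside the organ region, which is exactly the set the model is trained to segment.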
  • an abdominal CT image (10) can be input to an encoder and decoder (200), and the output CT image (110) can be segmented to extract at least three key images.
  • the U-NET-based segmentation model (300) can detect a free air region in the key image and perform annotation indicating free air in the detected region.
  • At least one processor (110) of the electronic device (100) according to the present invention may perform windowing at a preset threshold value on the acquired CT image. Through windowing, the CT image may be preprocessed more clearly.
  • the preset threshold value may be greater than or equal to 1200. This may further improve the accuracy of the artificial intelligence model.
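As a rough illustration, CT windowing is conventionally implemented by clipping Hounsfield-unit values to a level/width display window. The patent only states a threshold of 1200 or more, so the exact mapping below (window width 1200 around level 0) is an assumption:

```python
def apply_window(hu_values, width=1200, level=0):
    """Clip Hounsfield-unit values to a display window.

    Only the 1200 figure comes from the text above; the level/width
    parameterization is a conventional CT-windowing assumption.
    """
    lo = level - width / 2
    hi = level + width / 2
    return [min(max(v, lo), hi) for v in hu_values]

# Air (-1000 HU) is clipped up to -600 and dense bone (+1500 HU)
# down to +600, compressing the displayed range around soft tissue.
print(apply_window([-1000, -600, 40, 1500]))
```

Clipping extreme values in this way increases contrast in the range where free air borders soft tissue, which is consistent with the stated goal of producing a clearer preprocessed image.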
  • At least one processor (110) of the electronic device (100) can measure the accuracy of the learned artificial intelligence model based on the Dice score.
  • the Dice score is a value obtained by dividing twice the intersection of the ground truth (12) and the predicted mask by the sum of their areas in the output CT image (11) in which the ground truth (12) and the predicted mask are displayed, and the larger the Dice score, the higher the accuracy of the artificial intelligence model can be evaluated to be.
  • abdominal CT images from Test 1 to Test 12 can be acquired for each patient, and at this time, patients can be divided into patients with actual intestinal perforation and patients without actual intestinal perforation.
  • the ratio of the number of images with free air in the output CT images to the number of patients with actual intestinal perforation is called sensitivity
  • the ratio of the number of images without free air in the output CT images to the number of patients without actual intestinal perforation is called specificity.
  • sensitivity and specificity can be distinguished and evaluated.
  • the accuracy in terms of sensitivity and specificity can be evaluated by distinguishing True Positives (free air detected in patients with intestinal perforation) from False Negatives (free air not detected in such patients), and True Negatives (free air not detected in patients without intestinal perforation) from False Positives (free air detected in such patients).
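From these four counts, sensitivity and specificity follow directly. The helper names and the example counts below are illustrative, not taken from the patent:

```python
def sensitivity(true_positives, false_negatives):
    """Fraction of actual perforation patients in whom free air was detected."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives, false_positives):
    """Fraction of patients without perforation in whom no free air was detected."""
    return true_negatives / (true_negatives + false_positives)

# Five perforation patients, all detected -> sensitivity 1.0 (100%).
print(sensitivity(5, 0))   # 1.0
# Three of five non-perforation patients correctly negative -> specificity 0.6.
print(specificity(3, 2))   # 0.6
```

The 5/0 and 3/2 splits mirror the kind of per-patient tallies described for the Fig. 11 experiment, but are example values only.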
  • a learning method of an electronic device (100) including an artificial intelligence model for detecting free air based on an abdominal CT image may include a step of obtaining a CT image for the abdomen of each patient (S710), a step of performing preprocessing to check a free air region for the obtained CT image (S720), a step of training an artificial intelligence model using the preprocessed CT image (S730), and a step of predicting the presence or absence of an abdominal perforation and the region for the input CT image using the trained artificial intelligence model (S740).
  • the step (S740) of predicting the presence or absence and area of intra-abdominal perforation according to the present invention can detect the presence or absence and size of free air in a CT image and output the presence or absence, area, or size of intra-abdominal perforation of a patient.
  • the step (S810) of providing a corresponding treatment or emergency surgery guide to a user interface based on the size of the perforation may be further included.
  • the user interface may be the input/output interface (140) described above.
  • the step (S740) of predicting the presence or absence and area of intra-abdominal perforation may further include, if there is intra-abdominal perforation of the patient, a step (S811) of collecting patient information including a pre-input patient age and the size of the perforation, and a step (S812) of providing a corresponding treatment or emergency surgery guide to a user interface based on the patient information and the size of the perforation.
  • the artificial intelligence model according to the present invention can not only detect free air using the U-NET-based segmentation model (300), but also assist medical staff's treatment by providing a corresponding treatment or emergency surgery guide based on the presence and size of free air.
  • if no intra-abdominal perforation is detected, the medical team may be guided that no further action is necessary.
  • if an intra-abdominal perforation exists and the size of the perforation exceeds a threshold, the device can guide medical staff to perform emergency surgery, and if it is at or below the threshold, it can guide that surgical measures may be necessary.
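The guidance logic described above might be sketched as follows; the threshold value and the message wording are placeholders, not specified by the patent:

```python
def perforation_guide(perforation_size, threshold):
    """Map the model's perforation finding to a guide message.

    `perforation_size` is None when no perforation was detected.
    Both the threshold and the message strings are illustrative.
    """
    if perforation_size is None:
        return "no further action necessary"
    if perforation_size > threshold:
        return "perform emergency surgery"
    return "surgical measures may be necessary"

print(perforation_guide(None, 10))  # no further action necessary
print(perforation_guide(25, 10))    # perform emergency surgery
print(perforation_guide(4, 10))     # surgical measures may be necessary
```

In the described system the returned message would be rendered on the user interface (input/output interface (140)), possibly augmented with pre-input patient information such as age.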
  • the medical staff can be guided to take immediate action based on the presence or absence of perforation in the abdomen. Since there is no small or large intestine in the upper abdomen, the probability of air in the organs is low, so there is little room for mistaking air in the organs as free air, and therefore, if free air is not detected, it can be trusted as is.
  • the electronic device (100) can separately receive, through the user interface, an organ name for which to detect the presence or absence of perforation, and when an organ name belonging to the preset lower-abdomen group is input from the user interface, a guide to proceed with additional examination can be provided on the user interface.
  • the step (S740) of predicting the presence or absence and area of intra-abdominal perforation may include extracting at least three key images from the segmented CT images of each patient, determining that free air exists if the artificial intelligence model determines that free air is present in two or more of the three extracted images, and determining that free air does not exist if the artificial intelligence model determines that free air is present in fewer than two of the three extracted images.
  • three to five key images may be extracted from the segmented CT images, and if free air is determined to be present in a majority of the extracted images (two of three, or three of five), it may be finally determined that free air exists; otherwise, it may be finally determined that no free air exists.
  • referring to Fig. 11, five patients and the abdominal CT images of each patient can be acquired for a group of patients with duodenal perforation (1111) and a group of patients without duodenal perforation (1112).
  • duodenal perforation is only an exemplary embodiment and all other abdominal perforations can be included.
  • each of the duodenal perforation patients (1111) can include key image 1, key image 2, and key image 3
  • each of the non-duodenal perforation patients (1112) can include key image 1, key image 2, and key image 3.
  • the electronic device (100) according to the present invention can perform free air detection for each key image of each patient.
  • the electronic device (100) according to the present invention can perform free air detection for each key image of Pt1, Pt2, Pt3, Pt4, and Pt5, the duodenal perforation patients (1111).
  • when TP (True Positive) is output for all of Pt1 to Pt5, the sensitivity is 100%.
  • the electronic device (100) according to the present invention can perform free air detection for each key image of Pt1, Pt2, Pt3, Pt4, and Pt5, the patients (1112) who do not have duodenal perforation.
  • TN (True Negative) is expected for these patients, but a False Positive is output for Pt1, in which free air is detected in key image 3, and a False Positive is output for Pt2, in which free air is detected in all of key image 1, key image 2, and key image 3.
  • the electronic device (100) according to the present invention can predict that no free air is detected not only when all of the key images are TN, but also when two out of three key images are TN.
  • the electronic device (100) according to the present invention may have a high sensitivity but may have low accuracy in terms of specificity.
  • the final free air detection can be determined based on the results of determining more than half of the key images among a plurality of key images.
  • FIGS. 14 to 21 show actual experimental results for the above-described contents.
  • Figure 14 shows a true positive case: in order, an abdominal CT image, the free air area detected by FA-NET (the U-NET-based segmentation model (300)), and the free air area directly diagnosed by a doctor (ground truth).
  • the FA-NET detected FA area and the ground truth are almost identical, so the Dice score is 0.93, confirming that TP is output and that the free air detection sensitivity is high.
  • Figures 15 and 16 are true positive images with high Dice scores of 0.83 and 0.9, respectively, because the FA-NET detected FA area and the ground truth are almost identical, while Figure 17 is a false negative image with a low Dice score of 0.57 because the FA-NET detected FA area and the ground truth differ.
  • the electronic device (100) according to the present invention has a high accuracy with a sensitivity of about 0.9.
  • Fig. 18 shows true negative results for two abdominal CT images: in order from left to right, an abdominal CT image, the free air area detected by FA-NET (the U-NET-based segmentation model (300)), and the free air area directly diagnosed by a doctor (ground truth). For a patient without free air, the output is a black background without any separate white area.
  • Figs. 19 and 20 are false positive images with a very low Dice score of 0.03 due to differences between the FA-NET Detected FA area and the ground truth.
  • 644 were TN images with high Dice scores, as in Fig. 18, and the rest were FP images with low Dice scores due to errors, as in Figs. 19 and 20.
  • the electronic device (100) according to the present invention has a specificity of about 0.58, which is somewhat lower than the sensitivity.
  • the electronic device (100) can divide an abdominal CT image to extract at least three key images and determine the result of outputting a majority of the three key images as the final free air detection result.
  • very fine air may be air or noise that exists naturally due to organ activity rather than through perforation, so a free air area smaller than a preset area detected within each image can be ignored.
  • the electronic device (100) can remove a free air area detected in an area of 1% or less of the total image area and determine the presence or absence of free air only based on a free air area detected in an area exceeding 1% of the total image area.
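The 1% area cut-off can be sketched as a simple filter over detected region areas (in pixels); the function name and the pixel-area representation are illustrative assumptions:

```python
def significant_free_air_regions(region_areas, image_area, min_fraction=0.01):
    """Keep only detected free-air regions covering more than 1% of the image.

    `region_areas` are per-region pixel counts; regions at or below the
    cut-off are treated as natural organ air or noise and discarded.
    """
    cutoff = image_area * min_fraction
    return [area for area in region_areas if area > cutoff]

# On a 512x512 image (262,144 px) the cutoff is ~2,621 px: a 100 px
# speck is discarded as noise, while a 5,000 px region is kept.
print(significant_free_air_regions([100, 5000], 512 * 512))  # [5000]
```

The final presence/absence decision would then be based only on the regions that survive this filter.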
  • the artificial intelligence model can detect the presence or absence and size of free air in the input CT image and output the presence or absence, area, and size of perforation in the patient's abdomen.
  • the electronic device (100) can, when there is a perforation in the patient's abdomen, provide a corresponding treatment or emergency surgery guide to the user interface based on the size of the perforation, or can collect patient information including a pre-input patient age and the size of the perforation, and provide a corresponding treatment or emergency surgery guide to the user interface based on the patient information and the size of the perforation.
  • the electronic device (100) can extract at least three key images from among the segmented CT images of each patient, and if the artificial intelligence model determines that there is free air in two or more of the three extracted images, it can determine that there is free air, and if the artificial intelligence model determines that there is free air in less than two of the three extracted images, it can determine that there is no free air.
  • the electronic device (100) further includes a user interface for receiving an organ name for which to detect the presence or absence of perforation, and when an organ name belonging to the preset lower-abdomen group is input on the user interface, a guide to proceed with additional examination can be provided on the user interface.
  • the above-described configuration can compensate for shortages of medical staff by accurately triaging and treating critically ill patients with abdominal perforation according to their level of urgency, without requiring medical staff to spend excessive time on such patients, who account for less than 5% of all emergency room visitors.


Abstract

According to the present invention, the electronic device including an artificial intelligence (AI) model for detecting free air on the basis of an abdominal CT image may include: at least one processor; and a memory for storing a computer program executed by the at least one processor, wherein the at least one processor is configured to: acquire a CT image of the abdomen for each patient; perform preprocessing on the acquired CT image to identify a free air area; train an artificial intelligence model by using the preprocessed CT image; and predict the presence/absence and area of perforation in the abdomen on the basis of an input CT image by using the trained artificial intelligence model.

Description

Electronic device including an artificial intelligence model for detecting free air based on abdominal CT images, and training method thereof

The present invention relates to an electronic device including an artificial intelligence model and a training method thereof, and more particularly, to an electronic device including an artificial intelligence model for detecting free air based on abdominal CT images and a training method thereof.

Free air refers to regions of increased gray density or black spots apart from the hazy gray areas that normally appear on CT scans or X-rays; in particular, it refers to air that appears as a black spot in a location outside the organ regions where no air should be present.

Among patients who visit the emergency room, those who have lost consciousness or complain of abdominal pain undergo abdominal CT imaging, and in many cases no free air is detected. However, for patients with peritonitis or intestinal perforation in whom free air is detected in the CT images, which occur in fewer than 5% of cases, urgent emergency surgery is required; therefore, even though free air findings are infrequent, free air detection must be performed on the CT images of every patient.

However, determining the presence or absence of free air in a CT image can vary with the skill of the medical staff; in emergency rooms in particular, where patients arrive in large numbers or CT image analysis is often needed at night, accurate free air detection is physically difficult.

In addition, in abdominal CT images, the presence of the small and large intestines makes it difficult to distinguish air inside the organs from free air outside the organ regions.

Prior patent literature disclosed a learning system for medical image processing using a CNN, but that literature alone could not perform free air detection by distinguishing free air outside the organ regions from other air.

Therefore, there was a need to rapidly detect free air in abdominal CT images and provide an alarm or guide to medical staff.

Embodiments of the present invention are proposed to solve the above-described problems, and can provide an electronic device including an artificial intelligence model for detecting free air based on abdominal CT images, which annotates abdominal CT images and detects free air located outside the organ regions using a U-NET-based artificial intelligence model, and a training method thereof.

The problems to be solved by the present invention are not limited to those mentioned above, and other unmentioned problems will be clearly understood by those skilled in the art from the description below.

To solve the above-described problems, an electronic device including an artificial intelligence model for detecting free air based on abdominal CT images according to the present invention includes at least one processor; and a memory storing a computer program executed by the at least one processor, wherein the at least one processor is configured to acquire a CT image of the abdomen of each patient, perform preprocessing to identify a free air region in the acquired CT image, train an artificial intelligence model using the preprocessed CT image, and predict the presence or absence and region of intra-abdominal perforation for an input CT image using the trained artificial intelligence model.

To solve the above-described problems, a training method of an electronic device including an artificial intelligence model for detecting free air based on abdominal CT images according to the present invention may include the steps of: acquiring a CT image of the abdomen of each patient; performing preprocessing to identify a free air region in the acquired CT image; training an artificial intelligence model using the preprocessed CT image; and predicting the presence or absence and region of intra-abdominal perforation for an input CT image using the trained artificial intelligence model.

In addition, another method for implementing the present invention, another system, and a computer-readable recording medium recording a computer program for executing the method may be further provided.

In addition, a computer program stored in a medium so that a method for implementing the present invention is executed on a computer may be further provided.

According to the above-described problem-solving means of the present invention, preprocessing is performed to facilitate free air detection in abdominal CT images and U-NET-based image segmentation is performed, thereby improving the sensitivity and specificity of free air detection.

In addition, according to the above-described problem-solving means of the present invention, even when the emergency room is crowded or skilled medical staff are scarce, an auxiliary tool can be provided that gives doctors professional and specific alarms or guides so that patients with intestinal perforation are not missed.

The effects of the present invention are not limited to those mentioned above, and other unmentioned effects can be clearly understood by those skilled in the art from the description below.

FIG. 1 is a block diagram schematically illustrating the configuration of an electronic device including an artificial intelligence model for detecting free air based on abdominal CT images according to the present invention.

FIG. 2 schematically illustrates the configuration of the artificial intelligence model of the electronic device including an artificial intelligence model for detecting free air based on abdominal CT images according to the present invention.

FIG. 3 schematically illustrates the segmentation process of the electronic device including an artificial intelligence model for detecting free air based on abdominal CT images according to the present invention.

FIG. 4 schematically illustrates the annotation process of the electronic device including an artificial intelligence model for detecting free air based on abdominal CT images according to the present invention.

FIG. 5 illustrates images before and after filtering preprocessing of the electronic device including an artificial intelligence model for detecting free air based on abdominal CT images according to the present invention.

FIG. 6 illustrates the sensitivity and specificity calculation process of the electronic device including an artificial intelligence model for detecting free air based on abdominal CT images according to the present invention.

FIG. 7 is a flow chart of the training method of the electronic device including an artificial intelligence model for detecting free air based on abdominal CT images according to the present invention.

FIGS. 8 and 9 illustrate a guide-providing process of an electronic device according to one embodiment of the present invention.

FIG. 10 illustrates a process by which an electronic device according to one embodiment of the present invention determines the presence of free air.

FIGS. 11 to 13 illustrate a process in which an electronic device according to one embodiment of the present invention extracts multiple key images from segmented images and detects free air.

FIGS. 14 to 21 illustrate the result values and accuracy of the process performed in FIGS. 11 to 13 by an electronic device according to one embodiment of the present invention.

본 발명 전체에 걸쳐 동일 참조 부호는 동일 구성요소를 지칭한다. 본 발명이 실시예들의 모든 요소들을 설명하는 것은 아니며, 본 발명이 속하는 기술분야에서 일반적인 내용 또는 실시예들 간에 중복되는 내용은 생략한다. 명세서에서 사용되는 '부, 모듈, 부재, 블록'이라는 용어는 소프트웨어 또는 하드웨어로 구현될 수 있으며, 실시예들에 따라 복수의 '부, 모듈, 부재, 블록'이 하나의 구성요소로 구현되거나, 하나의 '부, 모듈, 부재, 블록'이 복수의 구성요소들을 포함하는 것도 가능하다. Throughout the present invention, the same reference numerals refer to the same components. The present invention does not describe all elements of the embodiments, and any content that is general in the technical field to which the present invention belongs or that is redundant between the embodiments is omitted. The terms 'part, module, element, block' used in the specification can be implemented by software or hardware, and according to the embodiments, a plurality of 'parts, modules, elements, blocks' can be implemented as a single component, or a single 'part, module, element, block' can include a plurality of components.

명세서 전체에서, 어떤 부분이 다른 부분과 "연결"되어 있다고 할 때, 이는 직접적으로 연결되어 있는 경우뿐 아니라, 간접적으로 연결되어 있는 경우를 포함하고, 간접적인 연결은 무선 통신망을 통해 연결되는 것을 포함한다.Throughout the specification, when a part is said to be "connected" to another part, this includes not only a direct connection but also an indirect connection, and an indirect connection includes a connection via a wireless communications network.

또한 어떤 부분이 어떤 구성요소를 "포함"한다고 할 때, 이는 특별히 반대되는 기재가 없는 한 다른 구성요소를 제외하는 것이 아니라 다른 구성요소를 더 포함할 수 있는 것을 의미한다.Additionally, when a part is said to "include" a component, this does not mean that it excludes other components, but rather that it may include other components, unless otherwise specifically stated.

명세서 전체에서, 어떤 부재가 다른 부재 "상에" 위치하고 있다고 할 때, 이는 어떤 부재가 다른 부재에 접해 있는 경우뿐 아니라 두 부재 사이에 또 다른 부재가 존재하는 경우도 포함한다.Throughout the specification, when it is said that an element is "on" another element, this includes not only cases where the element is in contact with the other element, but also cases where there is another element between the two elements.

제 1, 제 2 등의 용어는 하나의 구성요소를 다른 구성요소로부터 구별하기 위해 사용되는 것으로, 구성요소가 전술된 용어들에 의해 제한되는 것은 아니다. The terms first, second, etc. are used to distinguish one component from another, and the components are not limited by the aforementioned terms.

단수의 표현은 문맥상 명백하게 예외가 있지 않는 한, 복수의 표현을 포함한다.Singular expressions include plural expressions unless the context clearly indicates otherwise.

각 단계들에 있어 식별부호는 설명의 편의를 위하여 사용되는 것으로 식별부호는 각 단계들의 순서를 설명하는 것이 아니며, 각 단계들은 문맥상 명백하게 특정 순서를 기재하지 않는 이상 명기된 순서와 다르게 실시될 수 있다. The identification codes in each step are used for convenience of explanation and do not describe the order of each step. Each step may be performed in a different order than specified unless the context clearly indicates a specific order.

이하 첨부된 도면들을 참고하여 본 발명의 작용 원리 및 실시예들에 대해 설명한다.The operating principle and embodiments of the present invention will be described with reference to the attached drawings below.

본 명세서에서 '본 발명에 따른 장치'는 연산처리를 수행하여 사용자에게 결과를 제공할 수 있는 다양한 장치들이 모두 포함된다. 예를 들어, 본 발명에 따른 장치는, 컴퓨터, 서버 장치 및 휴대용 단말기를 모두 포함하거나, 또는 어느 하나의 형태가 될 수 있다.In this specification, the 'device according to the present invention' includes all kinds of devices that can perform computational processing and provide results to a user. For example, the device according to the present invention may include all of a computer, a server device, and a portable terminal, or may be in the form of any one of them.

여기에서, 상기 컴퓨터는 예를 들어, 웹 브라우저(WEB Browser)가 탑재된 노트북, 데스크톱(desktop), 랩톱(laptop), 태블릿 PC, 슬레이트 PC 등을 포함할 수 있다.Here, the computer may include, for example, a notebook, desktop, laptop, tablet PC, slate PC, etc. equipped with a web browser.

상기 서버 장치는 외부 장치와 통신을 수행하여 정보를 처리하는 서버로써, 애플리케이션 서버, 컴퓨팅 서버, 데이터베이스 서버, 파일 서버, 게임 서버, 메일 서버, 프록시 서버 및 웹 서버 등을 포함할 수 있다.The above server device is a server that processes information by communicating with an external device, and may include an application server, a computing server, a database server, a file server, a game server, a mail server, a proxy server, and a web server.

상기 휴대용 단말기는 예를 들어, 휴대성과 이동성이 보장되는 무선 통신 장치로서, PCS(Personal Communication System), GSM(Global System for Mobile communications), PDC(Personal Digital Cellular), PHS(Personal Handyphone System), PDA(Personal Digital Assistant), IMT(International Mobile Telecommunication)-2000, CDMA(Code Division Multiple Access)-2000, W-CDMA(W-Code Division Multiple Access), WiBro(Wireless Broadband Internet) 단말, 스마트 폰(Smart Phone) 등과 같은 모든 종류의 핸드헬드(Handheld) 기반의 무선 통신 장치와 시계, 반지, 팔찌, 발찌, 목걸이, 안경, 콘택트 렌즈, 또는 머리 착용형 장치(head-mounted-device(HMD) 등과 같은 웨어러블 장치를 포함할 수 있다.The above portable terminal may include, for example, all kinds of handheld-based wireless communication devices such as a PCS (Personal Communication System), GSM (Global System for Mobile communications), PDC (Personal Digital Cellular), PHS (Personal Handyphone System), PDA (Personal Digital Assistant), IMT (International Mobile Telecommunication)-2000, CDMA (Code Division Multiple Access)-2000, W-CDMA (W-Code Division Multiple Access), WiBro (Wireless Broadband Internet) terminal, a smart phone, and a wearable device such as a watch, a ring, a bracelet, an anklet, a necklace, glasses, contact lenses, or a head-mounted-device (HMD).

본 발명에 따른 인공지능과 관련된 기능은 프로세서와 메모리를 통해 동작된다. 프로세서는 하나 또는 복수의 프로세서로 구성될 수 있다. 이때, 하나 또는 복수의 프로세서는 CPU, AP, DSP(Digital Signal Processor) 등과 같은 범용 프로세서, GPU, VPU(Vision Processing Unit)와 같은 그래픽 전용 프로세서 또는 NPU와 같은 인공지능 전용 프로세서일 수 있다. 하나 또는 복수의 프로세서는, 메모리에 저장된 기 정의된 동작 규칙 또는 인공지능 모델에 따라, 입력 데이터를 처리하도록 제어한다. 또는, 하나 또는 복수의 프로세서가 인공지능 전용 프로세서인 경우, 인공지능 전용 프로세서는, 특정 인공지능 모델의 처리에 특화된 하드웨어 구조로 설계될 수 있다.The function related to artificial intelligence according to the present invention is operated through a processor and a memory. The processor may be composed of one or more processors. At this time, one or more processors may be a general-purpose processor such as a CPU, an AP, a DSP (Digital Signal Processor), a graphics-only processor such as a GPU, a VPU (Vision Processing Unit), or an artificial intelligence-only processor such as an NPU. One or more processors control to process input data according to a predefined operation rule or artificial intelligence model stored in the memory. Alternatively, when one or more processors are artificial intelligence-only processors, the artificial intelligence-only processor may be designed with a hardware structure specialized for processing a specific artificial intelligence model.

기 정의된 동작 규칙 또는 인공지능 모델은 학습을 통해 만들어진 것을 특징으로 한다. 여기서, 학습을 통해 만들어진다는 것은, 기본 인공지능 모델이 학습 알고리즘에 의하여 다수의 학습 데이터들을 이용하여 학습됨으로써, 원하는 특성(또는, 목적)을 수행하도록 설정된 기 정의된 동작 규칙 또는 인공지능 모델이 만들어짐을 의미한다. 이러한 학습은 본 발명에 따른 인공지능이 수행되는 기기 자체에서 이루어질 수도 있고, 별도의 서버 및/또는 시스템을 통해 이루어질 수도 있다. 학습 알고리즘의 예로는, 지도형 학습(supervised learning), 비지도형 학습(unsupervised learning), 준지도형 학습(semi-supervised learning) 또는 강화 학습(reinforcement learning)이 있으나, 전술한 예에 한정되지 않는다.The predefined operation rules or artificial intelligence models are characterized by being created through learning. Here, being created through learning means that the basic artificial intelligence model is learned by using a plurality of learning data by a learning algorithm, thereby creating a predefined operation rules or artificial intelligence model set to perform a desired characteristic (or purpose). Such learning may be performed in the device itself on which the artificial intelligence according to the present invention is performed, or may be performed through a separate server and/or system. Examples of the learning algorithm include supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning, but are not limited to the examples described above.

The artificial intelligence model may be composed of a plurality of neural network layers. Each of the plurality of neural network layers has a plurality of weight values and performs a neural network operation through an operation between the operation result of the previous layer and the plurality of weights. The plurality of weights of the plurality of neural network layers may be optimized by the learning result of the artificial intelligence model. For example, the plurality of weights may be updated so that a loss value or a cost value obtained from the artificial intelligence model during the learning process is reduced or minimized. The artificial neural network may include a deep neural network (DNN); examples include a CNN (Convolutional Neural Network), a DNN (Deep Neural Network), an RNN (Recurrent Neural Network), an RBM (Restricted Boltzmann Machine), a DBN (Deep Belief Network), a BRDNN (Bidirectional Recurrent Deep Neural Network), and Deep Q-Networks, but are not limited to the above examples.
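The weight-update idea described above — repeatedly adjusting weights so that a loss value decreases — can be illustrated with a minimal gradient-descent sketch in Python. This is a hypothetical one-weight model for illustration only, not the network of the present invention:

```python
def train_weight(xs, ys, lr=0.1, epochs=100):
    """Fit y = w * x by gradient descent on the mean squared loss.

    Each epoch moves the weight w in the direction that reduces the
    loss, mirroring how network weights are updated during training.
    """
    w = 0.0
    for _ in range(epochs):
        # Gradient of mean((w*x - y)^2) with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w
```

For data generated by y = 2x, the fitted weight converges to 2, i.e. the loss is driven toward its minimum.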

According to an exemplary embodiment of the present invention, the processor can implement artificial intelligence. Artificial intelligence refers to a machine learning method based on an artificial neural network that imitates biological neurons so that a machine can learn. Depending on the learning method, artificial intelligence methodologies can be divided into supervised learning, in which input data and output data are provided together as training data so that the answer (output data) to the problem (input data) is given; unsupervised learning, in which only input data is provided without output data so that the answer (output data) to the problem (input data) is not given; and reinforcement learning, in which a reward is given by the external environment whenever an action is taken in the current state, and learning proceeds in the direction of maximizing this reward. In addition, artificial intelligence methodologies can be categorized by architecture, that is, the structure of the learning model; the architectures of widely used deep learning technologies include the Convolutional Neural Network (CNN), the Recurrent Neural Network (RNN), the Transformer, and Generative Adversarial Networks (GAN).

The present device and system may include an artificial intelligence model. The artificial intelligence model may be a single artificial intelligence model or may be implemented as a plurality of artificial intelligence models. The artificial intelligence model may be composed of a neural network (or artificial neural network) and may include a statistical learning algorithm that mimics biological neurons, as used in machine learning and cognitive science. A neural network may refer to a model in which artificial neurons (nodes), which form a network through synaptic connections, acquire problem-solving capability by changing the strength of the synaptic connections through learning. A neuron of a neural network may include a combination of weights or biases. A neural network may include one or more layers composed of one or more neurons or nodes. For example, the device may include an input layer, a hidden layer, and an output layer. The neural network constituting the device can infer the desired result (output) from an arbitrary input by changing the weights of the neurons through learning.

The processor may generate a neural network, train (or learn) a neural network, perform an operation based on received input data and generate an information signal based on the result of the operation, or retrain a neural network. The models of the neural network may include, but are not limited to, various types of models such as a CNN (Convolutional Neural Network) such as GoogleNet, AlexNet, or VGG Network, an R-CNN (Region with Convolutional Neural Network), an RPN (Region Proposal Network), an RNN (Recurrent Neural Network), an S-DNN (Stacking-based Deep Neural Network), an S-SDNN (State-Space Dynamic Neural Network), a Deconvolution Network, a DBN (Deep Belief Network), an RBM (Restricted Boltzmann Machine), a Fully Convolutional Network, an LSTM (Long Short-Term Memory) Network, and a Classification Network. The processor may include one or more processors for performing operations according to the models of the neural network. For example, the neural network may include a deep neural network (Deep Neural Network).

It will be understood by those skilled in the art that the neural network may include any neural network, including but not limited to a CNN (Convolutional Neural Network), an RNN (Recurrent Neural Network), a perceptron, a multilayer perceptron, an FF (Feed Forward) network, an RBF (Radial Basis Function) Network, a DFF (Deep Feed Forward) network, an LSTM (Long Short-Term Memory), a GRU (Gated Recurrent Unit), an AE (Auto Encoder), a VAE (Variational Auto Encoder), a DAE (Denoising Auto Encoder), an SAE (Sparse Auto Encoder), an MC (Markov Chain), an HN (Hopfield Network), a BM (Boltzmann Machine), an RBM (Restricted Boltzmann Machine), a DBN (Deep Belief Network), a DCN (Deep Convolutional Network), a DN (Deconvolutional Network), a DCIGN (Deep Convolutional Inverse Graphics Network), a GAN (Generative Adversarial Network), an LSM (Liquid State Machine), an ELM (Extreme Learning Machine), an ESN (Echo State Network), a DRN (Deep Residual Network), a DNC (Differentiable Neural Computer), an NTM (Neural Turing Machine), a CN (Capsule Network), a KN (Kohonen Network), and an AN (Attention Network).

According to an exemplary embodiment of the present invention, the processor may use various artificial intelligence structures and algorithms, including but not limited to: a CNN (Convolutional Neural Network) such as GoogleNet, AlexNet, or VGG Network, an R-CNN (Region with Convolutional Neural Network), an RPN (Region Proposal Network), an RNN (Recurrent Neural Network), an S-DNN (Stacking-based Deep Neural Network), an S-SDNN (State-Space Dynamic Neural Network), a Deconvolution Network, a DBN (Deep Belief Network), an RBM (Restricted Boltzmann Machine), a Fully Convolutional Network, an LSTM (Long Short-Term Memory) Network, a Classification Network, Generative Modeling, eXplainable AI, Continual AI, Representation Learning, and AI for Material Design; BERT, SP-BERT, MRC/QA, Text Analysis, Dialog System, GPT-3, and GPT-4 for natural language processing; Visual Analytics, Visual Understanding, Video Synthesis, and ResNet for vision processing; and Anomaly Detection, Prediction, Time-Series Forecasting, Optimization, Recommendation, and Data Creation for data intelligence.

Hereinafter, embodiments of the present invention will be described in detail with reference to the attached drawings.

FIG. 1 is a block diagram schematically illustrating the configuration of an electronic device including an artificial intelligence model for detecting free air based on abdominal CT images according to the present invention. Hereinafter, the electronic device according to the present invention and its training method will be described with reference to FIGS. 2 to 21.

The electronic device (100) according to the present invention may include a processor (110), a memory (120), a communication unit (130), an input/output interface (140), and the like. The internal components that the electronic device (100) may include are not limited thereto. The electronic device (100) of the present invention may perform the functions of the processor (110) through a separate processing server or cloud server instead of the processor (110).

Referring to FIG. 1, the processor (110) may be implemented so as to perform the operations of the electronic device (100) using a memory (120), which stores data on an algorithm for controlling the operation of the components within the electronic device (100) or on a program that reproduces the algorithm, and the data stored in the memory (120). In this case, the processor (110) and the memory (120) may each be implemented as separate chips. Alternatively, the processor (110) and the memory (120) may be implemented as a single chip.

The memory (120) according to the embodiment may store data supporting various functions of the electronic device (100) and a program for the operation of the processor (110), may store input/output data (for example, images, videos, etc.), and may store a plurality of application programs (or applications) run on the electronic device (100) as well as data and commands for the operation of the electronic device (100). At least some of these application programs may be downloaded from an external server via wireless communication.

The memory (120) may include at least one type of storage medium among a flash memory type, a hard disk type, an SSD (Solid State Disk) type, an SDD (Silicon Disk Drive) type, a multimedia card micro type, a card-type memory (for example, SD or XD memory), RAM (random access memory), SRAM (static random access memory), ROM (read-only memory), EEPROM (electrically erasable programmable read-only memory), PROM (programmable read-only memory), magnetic memory, a magnetic disk, and an optical disk. In addition, the memory may be a database that is separate from the electronic device (100) but connected to it by wire or wirelessly.

The communication unit (130) according to the embodiment may include one or more components that enable communication with an external device, and may include, for example, at least one of a broadcast receiving module, a wired communication module, a wireless communication module, a short-range communication module, and a location information module.

The wired communication module may include various wired communication modules such as a Local Area Network (LAN) module, a Wide Area Network (WAN) module, or a Value Added Network (VAN) module, as well as various cable communication modules such as USB (Universal Serial Bus), HDMI (High Definition Multimedia Interface), DVI (Digital Visual Interface), RS-232 (Recommended Standard 232), power line communication, or POTS (plain old telephone service).

The wireless communication module may include modules that support various wireless communication methods such as GSM (Global System for Mobile communication), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), UMTS (Universal Mobile Telecommunications System), TDMA (Time Division Multiple Access), LTE (Long Term Evolution), 4G, 5G, and 6G, in addition to a Wi-Fi module and a WiBro (Wireless broadband) module.

The short-range communication module is for short-range communication and may support short-range communication using at least one of Bluetooth™, RFID (Radio Frequency Identification), IrDA (Infrared Data Association), UWB (Ultra Wideband), ZigBee, NFC (Near Field Communication), Wi-Fi (Wireless-Fidelity), Wi-Fi Direct, and Wireless USB (Wireless Universal Serial Bus) technologies.

The input/output interface (140) according to the embodiment serves as a passage to various types of external devices connected to the electronic device (100) of the present invention. The input/output interface (140) may include at least one of a wired/wireless headset port, an external charger port, a wired/wireless data port, a memory card port, a port for connecting a device equipped with an identification module (SIM), an audio I/O (Input/Output) port, a video I/O (Input/Output) port, and an earphone port. The electronic device (100) of the present invention may perform appropriate control related to an external device connected to the input/output interface (140).

Each of the components illustrated in FIG. 1 may refer to software and/or a hardware component such as a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC).

Accordingly, the electronic device (100) including an artificial intelligence model for detecting free air based on abdominal CT images according to one embodiment of the present invention may include at least one processor (110) and a memory (120) storing a computer program executed by the at least one processor (110).

The at least one processor (110) may be configured to acquire a CT image of the abdomen for each patient, perform preprocessing to identify a free air region in the acquired CT image, train an artificial intelligence model using the preprocessed CT images, and predict, using the trained artificial intelligence model, the presence or absence and the region of an intra-abdominal perforation for an input CT image.

Referring to FIG. 2, the artificial intelligence model according to the present invention may include an encoder and decoder (200). Each layer of the encoder and decoder (200) may be composed of a residual conv block (220).

The residual conv block (220) is a block in which a skip connection is added to the convolution block (210), which consists of a 3x3 convolution layer, a batch normalization layer, and a ReLU layer, so that the fine-grained information of each block is additionally learned.

In other words, the residual conv block (220) is a CNN (Convolutional Neural Network) block equipped with a plurality of skip connections.
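A skip connection simply adds the block's input back onto the block's transformed output, so the block only has to learn a small residual correction. As a minimal sketch (plain Python lists standing in for feature maps; `transform` is a hypothetical stand-in for the conv/batch-norm/ReLU stack):

```python
def residual_block(x, transform):
    """Return transform(x) + x element-wise: the skip connection lets the
    block learn only the residual (small corrections) on top of its input."""
    fx = transform(x)
    return [f + xi for f, xi in zip(fx, x)]

# e.g. residual_block([1, 2], lambda v: [2 * e for e in v]) -> [3, 6]
```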

The abdominal CT image (10) is input to the encoder and decoder (200) composed of residual conv blocks (220), and the output CT image (11) may be input to a U-NET based segmentation model (300) trained to detect free air within it. The U-NET based segmentation model (300) trained to detect free air in the output CT image (11) may be referred to as FA-NET.

Referring to FIG. 2, the U-NET based segmentation model (300) may compare the free air region detected in the output CT image (11) with the ground truth (12) in which a physician marked the free air, and may calculate the degree of overlap between the two images as a Dice score (Sørensen-Dice coefficient score).

If the two regions match completely, the Dice score is 1; if they do not match at all, the Dice score is 0.
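Computed over binary masks, the Sørensen-Dice coefficient is twice the overlap divided by the total area of the two masks. A minimal sketch, with flat 0/1 lists standing in for the segmentation masks:

```python
def dice_score(pred, truth):
    """Sørensen-Dice coefficient of two binary masks given as flat 0/1 lists."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * intersection / total
```

Identical masks score 1, disjoint masks score 0, and partial overlap falls in between, matching the behavior described above.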

At this time, the output CT image (11) may be obtained by segmentation with a 3D slicer (230), as illustrated in FIG. 3, and the CT image (11) may include at least three key images. The reference numerals in FIG. 3 indicate the image regions divided by the 3D slicer (230).

For example, three to five key images may be extracted from the segmented CT images of each patient. If the artificial intelligence model determines that free air is present in two or more (or three or more) of the three to five extracted images, it determines that free air exists; if the artificial intelligence model determines that free air is present in fewer than two (or fewer than three) of the three to five extracted images, it determines that free air does not exist.

As an embodiment, the processor (110) of the electronic device (100) may annotate as free air a region determined according to a predetermined organ region and the location of a spot.

For example, referring to FIG. 4, the processor (110) of the electronic device (100) may perform preprocessing by annotating a spot (32) located within a region predetermined as an organ in each of the at least three key images as not being free air, and annotating a spot (31) located outside a region predetermined as an organ as being free air.

Accordingly, the abdominal CT image (10) may be input to the encoder and decoder (200), and the output CT image (11) may be segmented to extract at least three key images. The U-NET based segmentation model (300) may detect a free air region in the key images and annotate the detected region as free air.

At this time, as illustrated in FIG. 5, the at least one processor (110) of the electronic device (100) according to the present invention may perform windowing on the acquired CT image at a preset threshold value. Through windowing, the CT image can be preprocessed to be sharper.

As an embodiment, the preset threshold value may be 1200 or higher. This can further improve the accuracy of the artificial intelligence model.
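CT windowing clips Hounsfield-unit values to a chosen intensity range and rescales the result for display. A minimal sketch of the general technique; the window level and width values used in the example are illustrative only and are not the values of the present invention (the text above specifies only that the threshold is 1200 or higher):

```python
def window_ct(hu_values, level, width):
    """Clip HU values to [level - width/2, level + width/2], then rescale to 0-255."""
    lo, hi = level - width / 2, level + width / 2
    windowed = []
    for v in hu_values:
        v = min(max(v, lo), hi)                             # clip to the window
        windowed.append(round((v - lo) / (hi - lo) * 255))  # rescale for display
    return windowed
```

Values at or below the lower bound map to 0 and values at or above the upper bound map to 255, concentrating the display contrast inside the chosen window.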

Meanwhile, the at least one processor (110) of the electronic device (100) according to the present invention may measure the accuracy of the trained artificial intelligence model based on the Dice score. The Dice score is a value obtained by dividing twice the product (overlap) of the ground truth (12) and the predicted mask displayed on the output CT image (11) by the sum of the two regions; the larger the Dice score, the higher the accuracy of the artificial intelligence model.

In addition, referring to FIG. 6, abdominal CT images from Test 1 to Test 12 may be acquired for each patient, and the patients may be divided into patients with an actual intestinal perforation and patients without an actual intestinal perforation.

The ratio of the number of output CT images in which free air was detected (Number of detected images) to the number of images of patients with an actual intestinal perforation (Number of images with free air) is referred to as sensitivity, and the ratio of the number of output CT images in which free air was not detected (Number of images not detected) to the number of images of patients without an actual intestinal perforation (Number of images without free air) is referred to as specificity.

To evaluate the accuracy of the trained artificial intelligence model, sensitivity and specificity can be evaluated separately. By distinguishing True Positives (free air detected in a patient with an intestinal perforation) from False Negatives (free air not detected in such a patient), and True Negatives (free air not detected in a patient without an intestinal perforation) from False Positives (free air detected in such a patient), the accuracy in terms of sensitivity and the accuracy in terms of specificity can be evaluated.
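The four confusion-matrix cases above map directly onto sensitivity and specificity. A minimal sketch, where 1 means perforation present (for labels) or free air detected (for predictions) and 0 means absent:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); Specificity = TN / (TN + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

Evaluating the two ratios separately, as the text describes, exposes both failure modes: a model that misses perforations (low sensitivity) and a model that raises false alarms (low specificity).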

Specifically, as illustrated in FIG. 7, the training method of the electronic device (100) including an artificial intelligence model for detecting free air based on abdominal CT images according to the present invention may include a step of acquiring a CT image of the abdomen for each patient (S710), a step of performing preprocessing to identify a free air region in the acquired CT image (S720), a step of training an artificial intelligence model using the preprocessed CT images (S730), and a step of predicting, using the trained artificial intelligence model, the presence or absence and the region of an intra-abdominal perforation for an input CT image (S740).

The step (S740) of predicting the presence or absence and the region of an intra-abdominal perforation according to the present invention may detect the presence and size of free air in the CT image and output the presence or absence, region, or size of the intra-abdominal perforation of the patient.

As illustrated in FIG. 8, as an embodiment, when the patient has an intra-abdominal perforation in the step (S740) of predicting the presence or absence and the region of an intra-abdominal perforation, the method may further include a step (S810) of providing a corresponding treatment or emergency surgery guide to a user interface based on the size of the perforation.

At this time, the user interface may be the input/output interface (140) described above.

Alternatively, as illustrated in FIG. 9, as another embodiment, the step (S740) of predicting the presence or absence and the region of an intra-abdominal perforation may further include, when the patient has an intra-abdominal perforation, a step (S811) of collecting patient information, including a pre-entered patient age, together with the size of the perforation, and a step (S812) of providing a corresponding treatment or emergency surgery guide to the user interface based on the patient information and the size of the perforation.

Accordingly, the artificial intelligence model according to the present invention not only detects free air using the U-NET based segmentation model (300) but can also assist the treatment activities of medical staff by providing a corresponding treatment or emergency surgery guide based on the presence and size of free air.

For example, if no intra-abdominal perforation exists, the medical staff can be guided that no further action is necessary.

In addition, if an intra-abdominal perforation exists and the size of the perforation exceeds a threshold value, the medical staff can be guided to perform emergency surgery; if it is at or below the threshold value, they can be guided that a surgical measure may be necessary.

At this time, if the key image is of the upper abdominal region, the corresponding measure can be guided to the medical staff immediately according to the presence or absence of an intra-abdominal perforation. Since the upper abdomen contains neither the small intestine nor the large intestine, the probability of air existing inside an organ is low; accordingly, there is little room for mistaking intra-organ air for free air, so a finding of no detected free air can be trusted as is.

반면, 키 이미지가 하복부 영역일 경우, 하복부 영역에 천공이 존재하지 않는다고 검출하더라도 추가적인 영상 촬영을 제안할 수 있다. 하복부 영역은 장기 내 공기와 장기 외 프리 에어 구별이 쉽지 않고 키 이미지가 하복부 영역에 해당할 경우 추가적인 확인을 거치는 것이 안전하기 때문이다.On the other hand, if the key image is of the lower abdomen area, additional imaging may be suggested even if no perforation is detected in the lower abdomen area. This is because it is not easy to distinguish between air within the organ and free air outside the organ in the lower abdomen area, and it is safe to conduct additional confirmation if the key image corresponds to the lower abdomen area.

따라서, 본 발명에 따른 전자 장치(100)는 사용자 인터페이스로부터 천공 유무를 검출할 장기명을 별도로 입력 받고, 사용자 인터페이스로부터 미리 설정된 하복부에 포함된 장기명이 입력될 경우, 추가 검사 진행 가이드를 사용자 인터페이스에 제공할 수 있다. Accordingly, the electronic device (100) according to the present invention can separately receive an organ name for detecting the presence or absence of perforation from the user interface, and when the organ name included in the lower abdomen set in advance is input from the user interface, an additional inspection progress guide can be provided to the user interface.
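A rough sketch of this organ-name check follows, assuming a hypothetical set of lower-abdomen organ names; the actual preset list is not stated in the text, and the function name is an illustrative choice.

```python
# Hypothetical sketch: the lower-abdomen organ set below is an illustrative
# assumption, not a list given in the specification.

LOWER_ABDOMEN_ORGANS = {"small intestine", "large intestine"}

def needs_additional_exam(organ_name: str, free_air_detected: bool) -> bool:
    """Recommend additional imaging when a lower-abdomen organ is entered
    and no free air was detected (detection there is less reliable)."""
    return (not free_air_detected) and organ_name in LOWER_ABDOMEN_ORGANS
```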

한편, 본 발명에 따른 상기 복부 내 천공 유무 및 영역을 예측하는 단계(S740)는, 각 환자의 분할된 CT 이미지 중 적어도 3개의 키 이미지를 추출하는 단계, 상기 인공 지능 모델이 추출된 3개의 이미지 중 2개 이상의 이미지에 프리 에어가 있다고 판단하면 프리 에어가 존재한다고 판단하는 단계 및 상기 인공 지능 모델이 추출된 3개의 이미지 중 2개 미만의 이미지에 프리 에어가 있다고 판단하면 프리 에어가 존재하지 않는다고 판단하는 단계를 포함할 수 있다. Meanwhile, the step (S740) of predicting the presence or absence and area of intra-abdominal perforation according to the present invention may include the steps of extracting at least three key images from the segmented CT images of each patient, the step of determining that free air exists if the artificial intelligence model determines that free air exists in two or more of the three extracted images, and the step of determining that free air does not exist if the artificial intelligence model determines that free air exists in less than two of the three extracted images.

일 실시예로서, 분할된 CT 이미지 중 3개 내지 5개의 키 이미지를 추출하고, 추출된 이미지 중 2개 또는 3개 이상의 이미지에 프리 에어가 있다고 판단하면 프리 에어가 존재한다고 최종 결정하고, 2개 또는 3개 미만의 이미지에 프리 에어가 있다고 판단하면 프리 에어가 존재하지 않는다고 최종 결정할 수 있다. As an example, three to five key images may be extracted from the segmented CT images; if free air is judged to exist in at least two (or at least three) of the extracted images, it may be finally determined that free air exists, and if free air is judged to exist in fewer than two (or fewer than three) of the images, it may be finally determined that no free air exists.
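The majority-vote rule described above can be sketched as follows; the function name and the per-image boolean interface are assumptions for illustration.

```python
# Sketch of the majority vote over key-image results: free air is finally
# reported only when a strict majority of the key images are judged positive,
# e.g. 2 of 3 or 3 of 5.

def majority_free_air(per_image_results: list[bool]) -> bool:
    """True if more than half of the key images contain free air."""
    positives = sum(per_image_results)
    return positives * 2 > len(per_image_results)
```

With three key images this reproduces the 2-of-3 decision described above, and with five key images the 3-of-5 decision.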

구체적으로 도 11을 참조하면, 십이지장 천공 환자(1111) 그룹과 십이지장 천공이 아닌 환자(1112) 그룹에서 5명의 환자 및 각 환자의 복부 CT 영상 이미지를 획득할 수 있다. 이때 십이지장 천공은 예시적인 실시예에 불과하며 기타 복부 천공을 모두 포함할 수 있다. Specifically, referring to Fig. 11, five patients and abdominal CT images of each patient can be acquired from a group of patients with duodenal perforation (1111) and a group of patients without duodenal perforation (1112). In this case, duodenal perforation is merely an exemplary embodiment, and all other abdominal perforations can be included.

각 환자의 복부 CT 영상 이미지에서 프리 에어 영역을 분할하여 대표적인 키이미지를 적어도 3개 추출할 수 있다. 이를 통해 십이지장 천공 환자(1111)들 각각 키 이미지 1, 키 이미지 2, 키 이미지 3를 포함할 수 있으며, 십이지장 천공이 아닌 환자(1112)들 각각 키 이미지 1, 키 이미지 2, 키 이미지 3을 포함할 수 있다.By segmenting the free air region from the abdominal CT image of each patient, at least three representative key images can be extracted. As a result, each of the duodenal perforation patients (1111) can include key image 1, key image 2, and key image 3, and each of the non-duodenal perforation patients (1112) can include key image 1, key image 2, and key image 3.

본 발명에 따른 전자 장치(100)는 각 환자들의 각 키 이미지에 대해 프리 에어 검출을 수행할 수 있다.The electronic device (100) according to the present invention can perform free air detection for each key image of each patient.

도 12를 참조하면, 본 발명에 따른 전자 장치(100)는 십이지장 천공 환자(1111)인 Pt1, Pt2, Pt3, Pt4, Pt5의 각 키 이미지에 대해 프리 에어 검출을 수행할 수 있다. 이 경우, 십이지장 천공을 갖는 환자들 전부가 프리 에어를 가진다는 True Positive(TP)가 나오는 것을 확인할 수 있다. 이 경우 민감도는 100%이다.Referring to FIG. 12, the electronic device (100) according to the present invention can perform free air detection for each key image of Pt1, Pt2, Pt3, Pt4, and Pt5, which are duodenal perforation patients (1111). In this case, it can be confirmed that True Positive (TP) is generated, indicating that all patients with duodenal perforation have free air. In this case, the sensitivity is 100%.

도 13을 참조하면, 본 발명에 따른 전자 장치(100)는 십이지장 천공이 아닌 환자(1112)인 Pt1, Pt2, Pt3, Pt4, Pt5의 각 키 이미지에 대해 프리 에어 검출을 수행할 수 있다. 이 경우, Pt3, Pt4, Pt5는 키 이미지 1, 키 이미지 2, 키 이미지 3 모두에서 프리 에어를 가지지 않는다는 True Negative(TN)이 나옴을 확인할 수 있다. 그러나, Pt1은 키 이미지 3에서 프리 에어가 검출되는 False Positive가 나오고, Pt2는 키 이미지 1, 키 이미지 2, 키 이미지 3 모두에서 프리 에어가 검출되는 False Positive가 나옴을 확인할 수 있다. Referring to FIG. 13, the electronic device (100) according to the present invention can perform free air detection for each key image of Pt1, Pt2, Pt3, Pt4, and Pt5, which are patients (1112) who do not have duodenal perforation. In this case, it can be confirmed that a True Negative (TN) is output for Pt3, Pt4, and Pt5, which have no free air in any of key image 1, key image 2, and key image 3. However, Pt1 yields a False Positive in which free air is detected in key image 3, and Pt2 yields a False Positive in which free air is detected in all of key image 1, key image 2, and key image 3.

일 실시예로서, 본 발명에 따른 전자 장치(100)는 키 이미지 전부가 TN 나오는 경우뿐만 아니라, 키 이미지 3개 중 2개가 TN이 나오는 경우에도 TN으로 판단하여 프리 에어가 검출되지 않은 것으로 예측할 수 있다.As an example, the electronic device (100) according to the present invention can predict that no free air is detected by determining that the key images are TN not only when all of the key images are TN, but also when two out of three key images are TN.

이를 통해 Pt1은 Ground truth와 일치하도록 프리 에어 미검출 결과를 가질 수 있으며, Pt2만 불일치 결과를 획득함으로써 특이도를 80%로 유지할 수 있다.This allows Pt1 to have a free air non-detection result that matches the ground truth, while maintaining the specificity at 80% by only obtaining a mismatch result for Pt2.

따라서, 본 발명에 따른 전자 장치(100)는 민감도는 높게 유지되고 있으나 특이도에서 정확도가 떨어질 수 있다. 이를 보완하기 위해 복수의 키 이미지 중에서 과반수 이상의 키 이미지가 판단하는 결과에 따라 최종 프리 에어 검출 여부를 결정할 수 있다.Accordingly, the electronic device (100) according to the present invention may have a high sensitivity but may have low accuracy in terms of specificity. To compensate for this, the final free air detection can be determined based on the results of determining more than half of the key images among a plurality of key images.

예를 들어, 키 이미지 3개 중 2개가 TP가 나오는 경우, 프리 에어가 검출된 것으로 판단하고, 키 이미지 3개 중 2개가 TN이 나오는 경우, 프리 에어가 검출되지 않은 것으로 출력할 수 있다.For example, if 2 out of 3 key images show TP, it can be determined that free air has been detected, and if 2 out of 3 key images show TN, it can be determined that free air has not been detected.

도 14 내지 도 21은 상술한 내용에 대해 실제 실험 결과를 나타낸 것이다.Figures 14 to 21 show actual experimental results for the above-described contents.

도 14는 True positive 이미지를 나타낸 것으로, 순서대로 복부 CT 영상 이미지, U-NET 기반 분할 모델(300)인 FA-NET이 검출한 프리 에어 영역(FA-NET Detected FA area), 의사가 직접 진단한 프리 에어 영역(Ground truth) 이다.Figure 14 shows a true positive image, which is, in order, an abdominal CT image, a free air area detected by FA-NET, a U-NET-based segmentation model (300), and a free air area directly diagnosed by a doctor (Ground truth).

도 14에 도시된 바와 같이, FA-NET Detected FA area와 Ground truth가 거의 유사하여 다이스 스코어가 0.93인 바, TP가 출력되어 프리 에어 검출 민감도가 높음을 확인할 수 있다.As shown in Fig. 14, the FA-NET Detected FA area and the ground truth are almost similar, so the Dice score is 0.93, and it can be confirmed that TP is output and the free air detection sensitivity is high.
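The Dice score compared here is the standard overlap measure 2|A∩B| / (|A| + |B|) between the predicted and ground-truth masks. A minimal sketch over binary NumPy masks might look like the following; returning 1.0 when both masks are empty matches the true-negative case where both outputs are blank backgrounds.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks of the same shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement (true negative)
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```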

도 15 내지 도 16은 FA-NET Detected FA area와 Ground truth가 거의 유사하여 다이스 스코어가 0.83, 0.9로 높은 True positive인 이미지인 반면, 도 17은 FA-NET Detected FA area와 Ground truth가 상이하여 다이스 스코어가 0.57로 낮은 False negative 이미지이다. Figures 15 and 16 are true positive images with high Dice scores of 0.83 and 0.9, respectively, because the FA-NET Detected FA area and the ground truth are almost identical, whereas Figure 17 is a false negative image with a low Dice score of 0.57 because the FA-NET Detected FA area and the ground truth differ.

본 발명에 따른 전자 장치(100)로 실험한 결과, 십이지장 천공 환자의 복부 CT 영상 488개 중 439개는 도 15 및 도 16과 같이 높은 다이스 스코어로 TP이미지였고, 나머지는 오류가 발생하여 도 17과 같이 낮은 다이스 스코어로 FN 이미지가 출력되었다.As a result of an experiment using an electronic device (100) according to the present invention, 439 out of 488 abdominal CT images of patients with duodenal perforation were TP images with high Dice scores as shown in FIGS. 15 and 16, and the rest were FN images with low Dice scores due to errors as shown in FIG. 17.

따라서, 본 발명에 따른 전자 장치(100)는 민감도 약 0.9의 높은 정확도를 가짐을 알 수 있다.Accordingly, it can be seen that the electronic device (100) according to the present invention has a high accuracy with a sensitivity of about 0.9.

한편, 도 18은 두 개의 복부 CT 영상에 대한 True negative 이미지를 나타낸 것으로, 좌측에서 우측 방향의 순서대로 복부 CT 영상 이미지, U-NET 기반 분할 모델(300)인 FA-NET이 검출한 프리 에어 영역(FA-NET Detected FA area), 의사가 직접 진단한 프리 에어 영역(Ground truth) 이다. 이는 프리 에어가 없는 환자로 별도의 흰 영역 없이 검은 배경으로 출력되는 것이다.Meanwhile, Fig. 18 shows true negative images for two abdominal CT images, in order from left to right: an abdominal CT image, a free air area detected by FA-NET, a U-NET-based segmentation model (300), and a free air area directly diagnosed by a doctor (Ground truth). This is output with a black background without a separate white area as a patient without free air.

도 18에 도시된 바와 같이, 두 개의 복부 CT 영상에 대해 FA-NET Detected FA area와 Ground truth가 동일하여 다이스 스코어가 1.0이고 모두 TN이 출력됨을 확인할 수 있다. As shown in Fig. 18, it can be confirmed that the FA-NET Detected FA area and the ground truth are the same for the two abdominal CT images, so the Dice score is 1.0 and both output TN.

그러나, 도 19 내지 도 20은 FA-NET Detected FA area와 Ground truth가 상이하여 다이스 스코어가 0.03으로 매우 낮은 False positive인 이미지이다. 십이지장 천공이 없는 환자의 복부 CT 영상 1110개 중 644개는 도 18과 같이 높은 다이스 스코어로 TN 이미지였고, 나머지는 오류가 발생하여 도 19 내지 도 20과 같이 낮은 다이스 스코어로 FP 이미지가 출력되었다. However, Figs. 19 and 20 are false positive images with a very low Dice score of 0.03 because the FA-NET Detected FA area and the ground truth differ. Of the 1,110 abdominal CT images of patients without duodenal perforation, 644 were TN images with high Dice scores, as in Fig. 18, and the rest were output as FP images with low Dice scores due to errors, as in Figs. 19 and 20.

따라서, 본 발명에 따른 전자 장치(100)는 특이도가 약 0.58로 민감도에 비해 다소 낮음을 확인할 수 있다.Accordingly, it can be confirmed that the electronic device (100) according to the present invention has a specificity of about 0.58, which is somewhat lower than the sensitivity.
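The reported sensitivity and specificity follow directly from the counts stated above: 439 of the 488 perforation images were true positives, and 644 of the 1,110 non-perforation images were true negatives.

```python
# Reproducing the reported figures from the counts stated in the text.
sensitivity = 439 / 488   # TP / (TP + FN) over perforation scans
specificity = 644 / 1110  # TN / (TN + FP) over non-perforation scans

print(round(sensitivity, 2))  # 0.9
print(round(specificity, 2))  # 0.58
```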

이를 보완하기 위해, 본 발명에 따른 전자 장치(100)는 복부 CT 영상을 분할하여 적어도 3개의 키 이미지를 추출하고 3개의 키 이미지 중에서 과반수의 이미지가 출력한 결과를 최종 프리 에어 검출 결과로 결정할 수 있다.To compensate for this, the electronic device (100) according to the present invention can divide an abdominal CT image to extract at least three key images and determine the result of outputting a majority of the three key images as the final free air detection result.

또한, 도 21에 도시된 바와 같이, 매우 미세한 공기는 천공에 의한 것이 아니라 장기 활동에 의해 자연스럽게 존재하는 공기거나 노이즈일 수 있어 각 이미지의 면적에서 기설정된 면적 이하의 프리 에어 영역이 검출될 경우 이를 무시할 수 있다. In addition, as shown in Fig. 21, very fine air may not result from a perforation but may instead be air naturally present due to organ activity, or noise; therefore, when a free air region smaller than a preset area is detected within the area of each image, it can be ignored.

예를 들어, 본 발명에 따른 전자 장치(100)는 전체 이미지 면적의 1% 이하 면적으로 검출된 프리 에어 영역은 제거하고 전체 이미지 면적의 1% 초과 면적에서 검출된 프리 에어 영역으로만 프리 에어 유무를 판단할 수 있다.For example, the electronic device (100) according to the present invention can remove a free air area detected in an area of 1% or less of the total image area and determine the presence or absence of free air only based on a free air area detected in an area exceeding 1% of the total image area.
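One possible implementation of this small-area filter finds connected free-air regions with a simple 4-connected flood fill and discards any region occupying no more than the preset fraction of the image area. The 4-connectivity choice and the helper name are assumptions; the specification only states the 1% area criterion.

```python
import numpy as np
from collections import deque

def filter_small_free_air(mask: np.ndarray, min_fraction: float = 0.01) -> np.ndarray:
    """Zero out 4-connected free-air regions covering <= min_fraction of the image."""
    mask = mask.astype(bool)
    min_pixels = min_fraction * mask.size
    visited = np.zeros_like(mask)
    keep = np.zeros_like(mask)
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not visited[sy, sx]:
                # flood-fill one connected free-air region
                region = []
                queue = deque([(sy, sx)])
                visited[sy, sx] = True
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                # keep only regions strictly larger than the area threshold
                if len(region) > min_pixels:
                    for y, x in region:
                        keep[y, x] = True
    return keep
```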

이를 통해 민감도에 비해 정확도가 떨어지는 특이도의 정확도를 보완하여 프리 에어 유무에 구애받지 않고 항상 높은 정확도를 유지할 수 있다.This compensates for the accuracy of specificity, which is lower than the accuracy of sensitivity, so that high accuracy can always be maintained regardless of the presence of free air.

따라서, 상기 인공 지능 모델은 입력된 CT 이미지 내에서 프리 에어의 유무 및 크기를 검출하여 환자의 복부 내 천공 유무 또는 영역 및 크기를 출력할 수 있다.Therefore, the artificial intelligence model can detect the presence or absence and size of free air in the input CT image and output the presence or area and size of perforation in the patient's abdomen.

본 발명에 따른 전자 장치(100)는 상기 환자의 복부 내 천공이 있을 경우, 상기 천공의 크기에 기초하여 대응하는 치료 또는 응급 수술 가이드를 사용자 인터페이스에 제공하거나, 또는 미리 입력된 환자 나이를 포함한 환자 정보 및 상기 천공의 크기를 취합하고, 상기 환자 정보 및 상기 천공의 크기에 기초하여 대응하는 치료 또는 응급 수술 가이드를 사용자 인터페이스에 제공할 수 있다.The electronic device (100) according to the present invention can, when there is a perforation in the patient's abdomen, provide a corresponding treatment or emergency surgery guide to the user interface based on the size of the perforation, or can collect patient information including a pre-input patient age and the size of the perforation, and provide a corresponding treatment or emergency surgery guide to the user interface based on the patient information and the size of the perforation.

또한, 본 발명에 따른 전자 장치(100)는, 각 환자의 분할된 CT 이미지 중 적어도 3개의 키 이미지를 추출하고, 상기 인공 지능 모델이 추출된 3개의 이미지 중 2개 이상의 이미지에 프리 에어가 있다고 판단하면 프리 에어가 존재한다고 판단하고, 상기 인공 지능 모델이 추출된 3개의 이미지 중 2개 미만의 이미지에 프리 에어가 있다고 판단하면 프리 에어가 존재하지 않는다고 판단할 수 있다.In addition, the electronic device (100) according to the present invention can extract at least three key images from among the segmented CT images of each patient, and if the artificial intelligence model determines that there is free air in two or more of the three extracted images, it can determine that there is free air, and if the artificial intelligence model determines that there is free air in less than two of the three extracted images, it can determine that there is no free air.

본 발명에 따른 전자 장치(100)는 천공 유무를 검출할 장기명을 입력 받는 사용자 인터페이스를 더 포함하고, 상기 사용자 인터페이스에서 미리 설정된 하복부에 포함된 장기명이 입력될 경우, 추가 검사 진행 가이드를 사용자 인터페이스에 제공할 수 있다.The electronic device (100) according to the present invention further includes a user interface for receiving an organ name to detect the presence or absence of perforation, and when an organ name included in a preset lower abdomen is input in the user interface, an additional inspection progress guide can be provided to the user interface.

상술한 구성을 통해 전체 응급실 방문 환자 중 5% 미만의 복부 천공 환자에게 큰 시간을 할애하지 않으면서도 위급한 상황의 천공 환자를 응급 정도에 따라 정확하게 분류하여 조치할 수 있어 의료진 부족을 커버할 수 있다. Through the configuration described above, perforation patients in critical condition can be accurately triaged and treated according to their level of urgency without devoting excessive time to abdominal perforation patients, who account for less than 5% of all emergency room visits, thereby compensating for the shortage of medical staff.

이상에서와 같이 첨부된 도면을 참조하여 개시된 실시예들을 설명하였다. 본 발명이 속하는 기술분야에서 통상의 지식을 가진 자는 본 발명의 기술적 사상이나 필수적인 특징을 변경하지 않고도, 개시된 실시예들과 다른 형태로 본 개시가 실시될 수 있음을 이해할 것이다. 개시된 실시예들은 예시적인 것이며, 한정적으로 해석되어서는 안 된다.As described above, the disclosed embodiments have been described with reference to the attached drawings. Those skilled in the art to which the present invention pertains will understand that the present invention can be implemented in a different form from the disclosed embodiments without changing the technical idea or essential features of the present invention. The disclosed embodiments are exemplary and should not be construed as limiting.

Claims (15)

적어도 하나의 프로세서; 및 at least one processor; and
상기 적어도 하나의 프로세서에 의해 수행되는 컴퓨터 프로그램이 저장된 메모리;를 포함하며, a memory having a computer program stored therein, which is executed by the at least one processor;
상기 적어도 하나의 프로세서는, the at least one processor being configured to:
각 환자 별 복부에 대한 CT 이미지를 획득하고, obtain CT images of the abdomen for each patient,
획득한 상기 CT 이미지에 대해 프리 에어(free air) 영역을 확인하는 전처리를 수행하고, perform preprocessing to identify a free air region in the acquired CT images,
상기 전처리된 CT 이미지를 이용하여 인공 지능 모델을 학습시키며, train an artificial intelligence model using the preprocessed CT images, and
상기 학습된 인공 지능 모델을 이용하여 입력된 CT 이미지에 대해 복부 내 천공 유무 및 영역을 예측하도록 구성된, predict the presence and area of intra-abdominal perforation for an input CT image using the trained artificial intelligence model:
복부 CT 영상 기반 프리 에어 검출하는 인공 지능 모델을 포함하는 전자 장치. an electronic device comprising an artificial intelligence model for detecting free air based on abdominal CT images.

제1 항에 있어서, According to claim 1,
상기 적어도 하나의 프로세서는, the at least one processor is
획득한 상기 CT 이미지에서 기설정된 장기 영역 및 스팟의 위치에 따라 결정된 영역을 프리 에어라고 어노테이션(annotation)하여 전처리 수행하도록 구성된, configured to perform the preprocessing by annotating, as free air, a region determined according to a preset organ region and spot location in the acquired CT images:
복부 CT 영상 기반 프리 에어 검출하는 인공 지능 모델을 포함하는 전자 장치. the electronic device comprising an artificial intelligence model for detecting free air based on abdominal CT images.

제2 항에 있어서, According to claim 2,
적어도 하나의 프로세서는, the at least one processor is further
획득한 상기 CT 이미지에 대해 기설정된 임계값에서 윈도우잉을 진행하고, configured to perform windowing on the acquired CT images at a preset threshold, and
획득한 상기 CT 이미지를 3D 슬라이서를 이용하여 분할하도록 더 구성된, to segment the acquired CT images using a 3D slicer:
복부 CT 영상 기반 프리 에어 검출하는 인공 지능 모델을 포함하는 전자 장치. the electronic device comprising an artificial intelligence model for detecting free air based on abdominal CT images.
제3 항에 있어서, According to claim 3,
상기 인공 지능 모델은, the artificial intelligence model comprises:
전처리된 상기 CT 이미지 내에서 프리 에어를 검출하도록 학습된 U-NET 기반 분할 모델(U-NET based segmentation model); 및 a U-NET based segmentation model trained to detect free air within the preprocessed CT images; and
복수의 스킵 커넥션(skip connection)을 구비한 CNN(Convolution Neural Network) 블럭으로 구성된 엔코더 및 디코더를 포함하는, an encoder and a decoder composed of CNN (Convolutional Neural Network) blocks with multiple skip connections:
복부 CT 영상 기반 프리 에어 검출하는 인공 지능 모델을 포함하는 전자 장치. the electronic device comprising an artificial intelligence model for detecting free air based on abdominal CT images.

제4 항에 있어서, According to claim 4,
상기 인공 지능 모델은, 입력된 CT 이미지 내에서 프리 에어의 유무 및 크기를 검출하여 환자의 복부 내 천공 유무 및 크기를 출력하는, the artificial intelligence model detects the presence and size of free air in the input CT image and outputs the presence and size of perforation in the patient's abdomen:
복부 CT 영상 기반 프리 에어 검출하는 인공 지능 모델을 포함하는 전자 장치. the electronic device comprising an artificial intelligence model for detecting free air based on abdominal CT images.

제5 항에 있어서, According to claim 5,
상기 적어도 하나의 프로세서는, the at least one processor is
상기 환자의 복부 내 천공이 있을 경우, 상기 천공의 크기에 기초하여 대응하는 치료 또는 응급 수술 가이드를 사용자 인터페이스에 제공하도록 구성된, configured to provide, when the patient has an intra-abdominal perforation, a corresponding treatment or emergency surgery guide to a user interface based on the size of the perforation:
복부 CT 영상 기반 프리 에어 검출하는 인공 지능 모델을 포함하는 전자 장치. the electronic device comprising an artificial intelligence model for detecting free air based on abdominal CT images.

제5 항에 있어서, According to claim 5,
상기 적어도 하나의 프로세서는, the at least one processor is configured to:
상기 환자의 복부 내 천공이 있을 경우, 미리 입력된 환자 나이를 포함한 환자 정보 및 상기 천공의 크기를 취합하고, when the patient has an intra-abdominal perforation, collect patient information including a pre-entered patient age together with the size of the perforation, and
상기 환자 정보 및 상기 천공의 크기에 기초하여 대응하는 치료 또는 응급 수술 가이드를 사용자 인터페이스에 제공하도록 구성된, provide a corresponding treatment or emergency surgery guide to a user interface based on the patient information and the size of the perforation:
복부 CT 영상 기반 프리 에어 검출하는 인공 지능 모델을 포함하는 전자 장치. the electronic device comprising an artificial intelligence model for detecting free air based on abdominal CT images.

제3 항에 있어서, According to claim 3,
상기 적어도 하나의 프로세서는, the at least one processor is configured to:
각 환자의 분할된 CT 이미지 중 적어도 3개의 키 이미지를 추출하고, extract at least three key images from the segmented CT images of each patient,
상기 인공 지능 모델이 추출된 3개의 이미지 중 2개 이상의 이미지에 프리 에어가 있다고 판단하면 프리 에어가 존재한다고 판단하고, determine that free air exists if the artificial intelligence model determines that free air is present in two or more of the three extracted images, and
상기 인공 지능 모델이 추출된 3개의 이미지 중 2개 미만의 이미지에 프리 에어가 있다고 판단하면 프리 에어가 존재하지 않는다고 판단하도록 구성된, determine that free air does not exist if the artificial intelligence model determines that free air is present in fewer than two of the three extracted images:
복부 CT 영상 기반 프리 에어 검출하는 인공 지능 모델을 포함하는 전자 장치. the electronic device comprising an artificial intelligence model for detecting free air based on abdominal CT images.

제1 항에 있어서, According to claim 1,
상기 적어도 하나의 프로세서는, the at least one processor is
다이스 스코어를 기반으로 학습된 인공 지능 모델의 정확도를 측정하도록 구성된, configured to measure the accuracy of the trained artificial intelligence model based on a Dice score:
복부 CT 영상 기반 프리 에어 검출하는 인공 지능 모델을 포함하는 전자 장치. the electronic device comprising an artificial intelligence model for detecting free air based on abdominal CT images.
제1 항에 있어서, According to claim 1,
천공 유무를 검출할 장기명을 입력 받는 사용자 인터페이스를 더 포함하고, further comprising a user interface for receiving an organ name for which the presence or absence of perforation is to be detected,
상기 적어도 하나의 프로세서는, wherein the at least one processor is
상기 사용자 인터페이스에서 미리 설정된 하복부에 포함된 장기명이 입력될 경우, 추가 검사 진행 가이드를 사용자 인터페이스에 제공하도록 구성된, configured to provide an additional examination guide on the user interface when an organ name included in a preset lower abdomen is entered in the user interface:
복부 CT 영상 기반 프리 에어 검출하는 인공 지능 모델을 포함하는 전자 장치. the electronic device comprising an artificial intelligence model for detecting free air based on abdominal CT images.

복부 CT 영상 기반 프리 에어 검출하는 인공 지능 모델을 포함하는 전자 장치의 학습 방법으로서, A learning method of an electronic device including an artificial intelligence model for detecting free air based on abdominal CT images, the method comprising:
각 환자 별 복부에 대한 CT 이미지를 획득하는 단계; a step of acquiring CT images of the abdomen for each patient;
획득한 상기 CT 이미지에 대해 프리 에어(free air) 영역을 확인하는 전처리를 수행하는 단계; a step of performing preprocessing to identify a free air region in the acquired CT images;
전처리된 CT 이미지를 이용하여 인공 지능 모델을 학습시키는 단계; 및 a step of training an artificial intelligence model using the preprocessed CT images; and
상기 학습된 인공 지능 모델을 이용하여 입력된 CT 이미지에 대해 복부 내 천공 유무 및 영역을 예측하는 단계;를 포함하는, a step of predicting the presence and area of intra-abdominal perforation for an input CT image using the trained artificial intelligence model:
인공 지능 모델을 포함하는 전자 장치의 학습 방법. a learning method for an electronic device including an artificial intelligence model.

제11 항에 있어서, According to claim 11,
상기 전처리를 수행하는 단계는, the step of performing the preprocessing
획득한 상기 CT 이미지에서 기설정된 장기 영역 및 스팟의 위치에 따라 결정된 영역을 프리 에어라고 어노테이션(annotation)하는 단계를 포함하는, includes a step of annotating, as free air, a region determined according to a preset organ region and spot location in the acquired CT images:
인공 지능 모델을 포함하는 전자 장치의 학습 방법.
A learning method for an electronic device including an artificial intelligence model.

제12 항에 있어서, According to claim 12,
상기 전처리를 수행하는 단계는, the step of performing the preprocessing further includes:
획득한 상기 CT 이미지에 대해 기설정된 임계값에서 윈도우잉을 진행하는 단계; 및 a step of performing windowing on the acquired CT images at a preset threshold; and
획득한 상기 CT 이미지를 3D 슬라이서를 이용하여 분할하는 단계를 더 포함하는, a step of segmenting the acquired CT images using a 3D slicer:
인공 지능 모델을 포함하는 전자 장치의 학습 방법. a learning method for an electronic device including an artificial intelligence model.

제13 항에 있어서, According to claim 13,
상기 전처리된 CT 이미지를 이용하여 인공 지능 모델을 학습시키는 단계는, the step of training the artificial intelligence model using the preprocessed CT images includes:
상기 인공 지능 모델의 U-NET 기반 분할 모델(U-NET based segmentation model)을 이용하여 전처리된 상기 CT 이미지 내에서 프리 에어를 검출하는 단계; 및 a step of detecting free air in the preprocessed CT images using the U-NET based segmentation model of the artificial intelligence model; and
상기 인공 지능 모델의 복수의 스킵 커넥션(skip connection)을 구비한 CNN(Convolution Neural Network) 블럭을 이용하여 인코딩 및 디코딩을 수행하는 단계를 포함하는, a step of performing encoding and decoding using CNN (Convolutional Neural Network) blocks having multiple skip connections of the artificial intelligence model:
인공 지능 모델을 포함하는 전자 장치의 학습 방법. a learning method for an electronic device including an artificial intelligence model.

제14 항에 있어서, According to claim 14,
상기 복부 내 천공 유무를 예측하는 단계는, the step of predicting the presence or absence of intra-abdominal perforation
CT 이미지 내에서 프리 에어의 유무 및 크기를 검출하여 환자의 복부 내 천공 유무 및 크기를 출력하는, detects the presence and size of free air in a CT image and outputs the presence and size of perforation in the patient's abdomen:
인공 지능 모델을 포함하는 전자 장치의 학습 방법. a learning method for an electronic device including an artificial intelligence model.
PCT/KR2024/014996 2023-11-02 2024-10-02 Electronic device including artificial intelligence model for detecting free air on basis of abdominal ct image, and training method thereof Pending WO2025095369A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2023-0150147 2023-11-02
KR1020230150147A KR20250064415A (en) 2023-11-02 2023-11-02 Electronic device including artificial intelligence model for detecting free air based on abdominal ct images and learning method thereof

Publications (1)

Publication Number Publication Date
WO2025095369A1 true WO2025095369A1 (en) 2025-05-08

Family

ID=95582080

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2024/014996 Pending WO2025095369A1 (en) 2023-11-02 2024-10-02 Electronic device including artificial intelligence model for detecting free air on basis of abdominal ct image, and training method thereof

Country Status (2)

Country Link
KR (1) KR20250064415A (en)
WO (1) WO2025095369A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180240235A1 (en) * 2017-02-23 2018-08-23 Zebra Medical Vision Ltd. Convolutional neural network for segmentation of medical anatomical images
KR20210020618A (en) * 2019-08-16 2021-02-24 서울여자대학교 산학협력단 Abnominal organ automatic segmentation based on deep learning in a medical image
KR20210042432A (en) * 2019-10-08 2021-04-20 사회복지법인 삼성생명공익재단 Automatic multi-organ and tumor contouring system based on artificial intelligence for radiation treatment planning
KR20230092947A (en) * 2020-10-22 2023-06-26 비져블 페이션트 Method and system for segmenting and identifying at least one coronal structure within a medical image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10783634B2 (en) 2017-11-22 2020-09-22 General Electric Company Systems and methods to deliver point of care alerts for radiological findings


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KIM DONG JIN, KIM SANGWOOK, LEE JOONGHYUP, SONG KYO-YOUNG: "Development of Artificial Intelligent Device to Detect Free Air in Abdominal Computed Tomography Image for Supporting Surgeons", ABSTRACTS OF ANNUAL CONGRESS OF KSS / 74TH CONGRESS OF THE KOREAN SURGICAL SOCIETY, 3 November 2022 (2022-11-03) - 5 November 2022 (2022-11-05), pages 512 - 512, XP093312284 *

Also Published As

Publication number Publication date
KR20250064415A (en) 2025-05-09

Similar Documents

Publication Publication Date Title
US20200380339A1 (en) Integrated neural networks for determining protocol configurations
WO2022019402A1 (en) Computer program and method for training artificial neural network model on basis of time series bio-signal
CN117617921B (en) Intelligent blood pressure monitoring system and method based on Internet of things
WO2021177771A1 (en) Method and system for predicting expression of biomarker from medical image
WO2022139170A1 (en) Medical-image-based lesion analysis method
WO2022124588A1 (en) Electronic device and method for predicting preterm birth
KR20240058031A (en) Electronic device and prediction method for performing cancer diagnosis on thyroid ultrasound image based on deep learning
Bhatt et al. An intelligent system for diagnosing thyroid disease in pregnant ladies through artificial neural network
WO2022173232A2 (en) Method and system for predicting risk of occurrence of lesion
Assegie A support vector machine based heart disease prediction
US20230282333A1 (en) Deep learning-assisted approach for accurate histologic grading and early detection of dysplasia
WO2025095369A1 (en) Electronic device including artificial intelligence model for detecting free air on basis of abdominal ct image, and training method thereof
WO2024123021A1 (en) Electronic device for predicting metastasis of early gastric cancer into lymph node on basis of ensemble model, and training method therefor
WO2025058347A1 (en) Method and apparatus for early prediction of septic shock through artificial intelligence-based biometric data analysis, and computer program
CN111657921A (en) Real-time electrocardio abnormality monitoring method and device, computer equipment and storage medium
Mirzapure et al. Deep Learning for Personalized Medicine: A Comprehensive Review
WO2025018493A1 (en) Electronic device to which machine learning ensemble model for malaria diagnosis and detection is applicable, and operation method thereof
WO2025037853A1 (en) Method, device, and system for automatically triaging and controlling response
WO2025192776A1 (en) Diagnosis assistance device and method for predicting risk of lymph node metastasis before resection in early gastric cancer on basis of artificial intelligence model
Praveena et al. Evaluating ILD designs in HRCT images using deep learning
Maragatharajan et al. A Comprehensive Approach to Identify Stroke based on Real-time Medical Data
WO2025188169A1 (en) Apparatus and method for measuring muscle for each part and detecting vertebral segment in medical image
WO2021137395A1 (en) Problematic behavior classification system and method based on deep neural network algorithm
WO2025147156A1 (en) Artificial intelligence-based apparatus and method for examining and diagnosing thyroid frozen section
WO2025095532A1 (en) Artificial intelligence-based method and device for providing medical diagnosis service using correlation between symptoms and diseases

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24886047

Country of ref document: EP

Kind code of ref document: A1