
TW202120906A - Artificial intelligence based cell detection method by using optical kinetics and system thereof - Google Patents

Artificial intelligence based cell detection method by using optical kinetics and system thereof

Info

Publication number
TW202120906A
TW202120906A (application TW109117822A)
Authority
TW
Taiwan
Prior art keywords
cells
time point
cell
images
neural network
Prior art date
Application number
TW109117822A
Other languages
Chinese (zh)
Other versions
TWI754945B (en)
Inventor
王偉中
宋泊錡
唐傳義
Original Assignee
靜宜大學
國立清華大學
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 靜宜大學, 國立清華大學 filed Critical 靜宜大學
Publication of TW202120906A publication Critical patent/TW202120906A/en
Application granted granted Critical
Publication of TWI754945B publication Critical patent/TWI754945B/en

Classifications

    • G06T 7/0012 — Physics; Computing; Image data processing or generation: image analysis; inspection of images, e.g. flaw detection; biomedical image inspection
    • G06F 18/2414 — Physics; Computing; Electric digital data processing: pattern recognition; classification techniques based on distances to training or reference patterns; smoothing the distance, e.g. radial basis function networks [RBFN]
    • G06T 7/11 — Physics; Computing; Image data processing or generation: image analysis; segmentation and edge detection; region-based segmentation
    • G06T 7/246 — Physics; Computing; Image data processing or generation: image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 2207/30024 — Physics; Computing; Indexing scheme for image analysis or image enhancement: biomedical image processing; cell structures in vitro, tissue sections in vitro

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)
  • Image Analysis (AREA)
  • Apparatus Associated With Microorganisms And Enzymes (AREA)
  • Image Processing (AREA)

Abstract

An artificial intelligence based cell detection method using optical kinetics includes: acquiring a plurality of cells; sampling N images of the cells between a first time point and a second time point; determining an analytic region for each cell; performing an image processing procedure on the analytic region of each cell, according to the N images, to generate a plurality of optical kinetics vector parameters; generating deformation vector information of the cells between the first and second time points according to those parameters; obtaining at least one key deformation vector characteristic value of the cells according to the deformation vector information; and inputting the deformation vector information and the at least one key deformation vector characteristic value into a neural network to detect the quality of the cells.

Description

Artificial intelligence based cell detection method using optical kinetics, and system thereof

The present invention describes an artificial intelligence based cell detection method using optical kinetics and a system thereof, and in particular a method and system with artificial intelligence capable of detecting cell quality and identifying cell function.

With rapid technological change, women of childbearing age often suffer infertility caused by work stress, eating habits, lifestyle diseases, abnormal ovulation, hormonal imbalance, or chronic illness. Infertility treatment today is a largely out-of-pocket course of therapy; demand is enormous in Taiwan, China, and international markets, and it grows strongly every year. Many women choose in vitro fertilization (IVF) to treat infertility: eggs and sperm are retrieved, fertilized in vitro under manual operation, cultured into embryos, and the embryos are then implanted back into the mother. However, the success rate of existing infertility treatment is only about 30%. The key to the treatment lies in embryo selection, yet embryos are still chosen mainly by embryologists who subjectively judge their quality from photographs or time-lapse video. Because current practice lacks a systematic, automated way to grade embryos, implantation success under subjective physician selection remains low, and this is the bottleneck of present infertility treatment.

In other words, under current infertility treatment, physicians can only observe embryo division subjectively. For example, a physician grades embryos from good to poor according to the number of cells during development, the uniformity of cell division, and the degree of fragmentation during division: an embryo that divides evenly into an even number of cells is considered better, while an embryo with incomplete odd-numbered divisions and more fragments has poorer growth potential. Since embryo grading thus depends mainly on the physician's experience and subjective judgment, the success rate of current infertility treatment is hard to improve and is easily affected by differing subjective opinions among physicians (for example, misjudgment).

One embodiment of the present invention provides an artificial intelligence based cell detection method using optical kinetics. The method includes: acquiring a plurality of cells; sampling N images of the cells between a first time point and a second time point; determining an analysis region for each cell; performing an image processing procedure on the analysis region of each cell, according to the N images, to generate a plurality of optical kinetics vector parameters; generating deformation vector information of the cells between the first and second time points according to those parameters; obtaining at least one key deformation vector characteristic value of the cells according to the deformation vector information; inputting the deformation vector information and the at least one key deformation vector characteristic value into a neural network to train the neural network; and using the trained neural network to build, through an artificial intelligence procedure, a cell quality detection model that detects cell quality and/or identifies cells. The first time point precedes the second time point, and N is a positive integer greater than 2.

Another embodiment of the present invention provides an artificial intelligence based cell detection system using optical kinetics. The system includes a carrier, a lens module, an image capturing device, a processor, and a memory. The carrier has an accommodating slot for holding a plurality of cells. The lens module faces the carrier and magnifies the details of the cells. The image capturing device faces the lens module and acquires images of the cells through it. The processor is coupled to the lens module and the image capturing device to adjust the magnification of the lens module and process the cell images. After the cells are placed in the accommodating slot, the processor controls the image capturing device to sample N images of the cells through the lens module between a first time point and a second time point. The processor determines an analysis region for each cell, performs an image processing procedure on each region according to the N images to generate a plurality of optical kinetics vector parameters, generates deformation vector information of the cells between the first and second time points from those parameters, and obtains at least one key deformation vector characteristic value from the deformation vector information. The processor contains a neural network, which is trained with the deformation vector information and the at least one key deformation vector characteristic value. Using the trained neural network, the processor builds a cell quality detection model through an artificial intelligence procedure to detect cell quality and/or identify cells. The first time point precedes the second time point, and N is a positive integer greater than 2.

FIG. 1 is a block diagram of an embodiment of the cell detection system 100 using artificial intelligence based on optical kinetics; for brevity it is referred to below simply as the cell detection system 100. The cell detection system 100 includes a carrier 10, a lens module 11, an image capturing device 12, a processor 13, and a memory 14. The carrier 10 has an accommodating slot for holding a plurality of cells; for example, it may be a petri dish whose slot contains culture medium in which the cells develop. The cells may be germ cells, embryos, or any dividing cells to be observed. The lens module 11 faces the carrier 10 and magnifies the details of the cells; it may be any lens module with optical or digital zoom capability, such as a microscope module. The image capturing device 12 faces the lens module 11 and acquires images of the cells through it. In the cell detection system 100, the image capturing device 12 may be a camera with a photosensitive element or a hyperspectral imager, so the cell images acquired through the lens module 11 may be grayscale images, hyperspectral images, or depth-of-field composite images. If the image capturing device 12 is a hyperspectral imager, a cell image may be (A) the cell image corresponding to any single wavelength of the hyperspectral data, or (B) a grayscale cell image synthesized from the cell images at each wavelength. Any reasonable image format falls within the scope of this disclosure. The processor 13 is coupled to the lens module 11 and the image capturing device 12 to adjust the magnification of the lens module 11 and process the cell images; it may be a central processing unit, a microprocessor, or any programmable processing unit. The processor 13 has a neural network, such as a deep neural network (DNN), and can perform machine learning and deep learning; the neural network can therefore be trained and can be regarded as the artificial intelligence processing core. The memory 14 is coupled to the processor 13 and stores the training data and the analysis data produced during image processing.
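The grayscale synthesis in option (B) above can be sketched as a weighted average over the spectral bands of a hyperspectral cube. This is a minimal illustration, not the patent's specified algorithm; the function name and the equal-weight default are assumptions.

```python
import numpy as np

def hyperspectral_to_gray(cube, weights=None):
    """Collapse a hyperspectral cube of shape (H, W, bands) into one
    grayscale image by a weighted average over the spectral axis."""
    cube = np.asarray(cube, dtype=float)
    if weights is None:
        weights = np.ones(cube.shape[-1])   # equal weight per band (assumed)
    weights = np.asarray(weights, dtype=float)
    return cube @ (weights / weights.sum())  # contract the band axis

# A 2x2 image with 3 wavelength bands; equal weights -> per-pixel mean.
cube = [[[0.0, 3.0, 6.0], [1.0, 1.0, 1.0]],
        [[2.0, 4.0, 6.0], [9.0, 0.0, 0.0]]]
gray = hyperspectral_to_gray(cube)  # gray[0, 0] == 3.0
```

Non-uniform weights could model a camera's spectral response, but that calibration is outside what the patent text specifies.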

In the cell detection system 100, after the cells are placed in the accommodating slot of the carrier 10, the processor 13 controls the image capturing device 12 to sample N images of the cells through the lens module 11 between a first time point and a second time point. The processor 13 then determines an analysis region for each cell and performs an image processing procedure on each region, according to the N images, to generate a plurality of optical kinetics vector parameters. From these parameters, the processor 13 generates deformation vector information of the cells between the first and second time points, and from that information it obtains at least one key deformation vector characteristic value. As mentioned above, the processor 13 contains a trainable neural network, so the deformation vector information and the key characteristic value(s) can be used to train it. Once training is complete, the processor 13 can use the neural network to build a cell quality detection model through an artificial intelligence procedure, to detect cell quality and/or identify cells. In the cell detection system 100, the first time point precedes the second time point, and N is a positive integer greater than 2. In other words, the cell detection system 100 trains its neural network with time-series cell image information taken between two different time points; after training, the system provides an artificial intelligence cell detection function that automatically detects cell quality and/or identifies cells. How the system trains the neural network to perform this function is described below.

FIG. 2 is a flowchart of the cell detection method performed by the cell detection system 100 using artificial intelligence based on optical kinetics. The method may include steps S201 to S208; any reasonable variation of the steps or techniques falls within the scope of this disclosure. Steps S201 to S208 are as follows: Step S201: acquire a plurality of cells; Step S202: sample N images of the cells between a first time point and a second time point; Step S203: determine an analysis region for each cell; Step S204: perform an image processing procedure on the analysis region of each cell, according to the N images, to generate a plurality of optical kinetics vector parameters; Step S205: generate deformation vector information of the cells between the first and second time points according to the optical kinetics vector parameters; Step S206: obtain at least one key deformation vector characteristic value of the cells according to the deformation vector information; Step S207: input the deformation vector information and the at least one key deformation vector characteristic value into the neural network to train it; Step S208: use the trained neural network to build a cell quality detection model through an artificial intelligence procedure, to detect cell quality and/or identify cells.

For brevity, "cell" in the following description is exemplified by "embryo"; the invention is not limited to this, and a cell may be a germ cell, nerve cell, tissue cell, animal or plant cell, or any cell to be studied and observed. In step S201, researchers or medical staff first obtain multiple embryos. In step S202, the processor 13 controls the image capturing device 12 to sample N images of the embryos between the first and second time points. The image capturing device 12 may obtain the N images between the two time points in any manner: for example, it may record video over the interval (e.g. at 30 fps or 60 fps) and collect the resulting frames, or it may photograph the embryos periodically over the interval. The first and second time points may be any two time points within the observation period during which the embryos develop and divide in the culture medium. For example, they may be chosen as day 1 and day 5 respectively, to observe the development and division of the embryos; in other words, the image capturing device 12 may photograph the embryos continuously from hour 0 to hour 120 to obtain the N images. In step S203, the processor 13 determines the analysis region of each embryo; medical staff or researchers may also set it manually. The analysis region may be chosen as the region of greatest difference between two embryos, or between good and poor embryos; for example, it may be the blastocyst region of each embryo or a user-defined region. The analysis region is smaller than a single embryo.
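The periodic sampling schedule just described (capture between two time points, e.g. hour 0 to hour 120) can be sketched as follows; the helper name and the 30-minute period are illustrative assumptions, not values fixed by the patent.

```python
def sample_times(t_start_h, t_end_h, period_min):
    """Sampling instants (in hours) for periodic image capture between
    a first and a second time point, both endpoints included."""
    step_h = period_min / 60.0
    times, t = [], float(t_start_h)
    while t <= t_end_h + 1e-9:   # small tolerance for float accumulation
        times.append(t)
        t += step_h
    return times

# Day 1 to day 5 of embryo culture: hour 0 to hour 120, one image
# every 30 minutes (an assumed period).
times = sample_times(0, 120, 30)
N = len(times)  # 241 images; the method requires N > 2
```

A video recording at 30 fps over the same interval would instead give N as fps times the duration in seconds; the same N images feed the later DIC analysis either way.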

Next, in step S204, the processor 13 performs an image processing procedure on the analysis region of each embryo, according to the N images, to generate a plurality of optical kinetics vector parameters. For example, the processor 13 may apply digital image correlation (DIC) analysis to the analysis region of each embryo. DIC is a technique that analyzes changes in an object's features over the interval between consecutive time points to compute the object's deformation. Under DIC, the optical kinetics vector parameters are derived from the deformation change of the embryos between image M-1 (earlier) and image M (later) of the N images, with M ≤ N. In the cell detection system 100, these parameters may include displacement vector information, strain vector information, and deformation speed vector information, as follows. After analyzing the N images with DIC, the processor 13 obtains, for each embryo and each consecutive image pair (image M-1 and image M), vertical displacement information, horizontal displacement information, strain vector distribution information, and deformation speed vector information. Vertical displacement information is the distance and coordinates of the vertical displacement of the embryo's analysis region between the two images; horizontal displacement information is defined analogously for the horizontal displacement. Strain vector distribution information quantifies how much the analysis region expands or contracts between the two images. Deformation speed vector information is derived by dividing the displacement distances and the strain by the time difference: if the time difference between image M-1 and image M is t and the horizontal displacement distance is p, the horizontal component of the deformation speed vector is p/t; if the vertical displacement distance is q, the vertical component is q/t; and if the strain is r, the strain component is r/t. Then, in step S205, the processor 13 generates the deformation vector information of the embryos between the first and second time points according to these optical kinetics vector parameters; the deformation vector information may be produced from them by a linear or nonlinear formula.
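A minimal sketch of the ideas in steps S204 and S205: a brute-force search for the integer displacement of a patch between two consecutive frames (a toy stand-in for full DIC, which uses subpixel correlation over many subsets), followed by the p/t, q/t, r/t deformation speed components defined above. Function names and the search window size are assumptions.

```python
import numpy as np

def integer_shift(ref, cur, window=2):
    """Brute-force patch matching between two frames: find the integer
    (dy, dx) displacement that maximizes correlation with the reference
    patch. A toy stand-in for full (subpixel) DIC."""
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-window, window + 1):
        for dx in range(-window, window + 1):
            shifted = np.roll(np.roll(cur, -dy, axis=0), -dx, axis=1)
            score = float((ref * shifted).sum())
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

def deformation_speed(p, q, r, t):
    """Deformation speed vector components as defined in the text:
    horizontal p/t, vertical q/t, strain r/t."""
    return p / t, q / t, r / t

rng = np.random.default_rng(0)
ref = rng.random((16, 16))                          # patch in image M-1
cur = np.roll(np.roll(ref, 1, axis=0), 2, axis=1)   # image M: down 1, right 2
dy, dx = integer_shift(ref, cur)                    # recovers (1, 2)
vx, vy, vr = deformation_speed(p=dx, q=dy, r=0.04, t=2.0)
```

With a 2.0-unit frame interval the recovered displacement of (down 1, right 2) gives a horizontal speed of 1.0, a vertical speed of 0.5, and (for an assumed strain of 0.04) a strain rate of 0.02.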

Then, in step S206, the processor 13 obtains at least one key deformation vector characteristic value of the embryos according to the deformation vector information. For example, by applying DIC analysis to the analysis region of each embryo over the N images, the processor 13 can extract from the deformation vector information at least one local maximum of the strain; similarly, it can extract at least one local minimum of the strain. The key deformation vector characteristic value(s) may include the local-maximum strain(s) and/or the local-minimum strain(s). For ease of understanding, Table T1 lists the strain at different time points:

Time               0-00      0-01      0-02      0-03
Maximum strain     0.0795    0.227     0.113     0.336
Minimum strain     0.002     -0.013    -0.006    -0.01
Strain note        —         large     —         large
Embryo behavior    no division at any of these time points

Time               0-04      0-05      0-06      0-07
Maximum strain     0.104     0.147     0.061     0.138
Minimum strain     -0.001    -0.016    -0.0065   -0.014
Strain note        —         —         —         —
Embryo behavior    no division at any of these time points

Table T1

According to Table T1, the maximum strain at time point 0-01 is 0.227, which is clearly higher than the 0.0795 at time point 0-00 and the 0.113 at time point 0-02; 0.227 can therefore be defined as a local-maximum strain, and the strain distribution at 0-01 is annotated "large strain". Similarly, the maximum strain at time point 0-03 is 0.336, clearly higher than the 0.113 at 0-02, 0.104 at 0-04, 0.147 at 0-05, 0.061 at 0-06, and 0.138 at 0-07, so 0.336 can likewise be defined as a local-maximum strain and 0-03 is annotated "large strain". Local-minimum strains can be obtained statistically in the same way. The processor 13 stores Table T1 and the local-maximum and local-minimum strain data in the memory 14.
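The "clearly higher than its neighbours" rule used to pick the local-maximum strains from Table T1 can be sketched with a simple peak test. The prominence threshold of 0.1 is an illustrative assumption, chosen here so that only time points 0-01 and 0-03 qualify, matching the table's annotations; a plain neighbour comparison without a threshold would also flag the smaller bump at 0-05.

```python
def local_maxima(series, prominence=0.0):
    """Indices of interior points that exceed both neighbours by more
    than `prominence` - a sketch of the "clearly higher" rule."""
    peaks = []
    for i in range(1, len(series) - 1):
        if (series[i] - series[i - 1] > prominence
                and series[i] - series[i + 1] > prominence):
            peaks.append(i)
    return peaks

# Maximum strain at time points 0-00 .. 0-07, copied from Table T1.
max_strain = [0.0795, 0.227, 0.113, 0.336, 0.104, 0.147, 0.061, 0.138]
peaks = local_maxima(max_strain, prominence=0.1)  # -> [1, 3], i.e. 0-01 and 0-03
```

Local minima of the minimum-strain row can be found the same way by negating the series.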

如前述提及,處理器13具有類神經網路,例如深度神經網路(DNN),可以執行機器學習以及深度學習的功能。因此,處理器13的類神經網路可以被訓練。為了訓練處理器13的類神經網路,於步驟S207中,細胞檢測系統100可將形變向量資訊及至少一個關鍵形變向量特徵數值輸入至處理器13內的類神經網路,以訓練類神經網路。在類神經網路被訓練完成後,依據步驟S208,處理器13可以利用類神經網路,以人工智慧的程序建立細胞品質檢測模型,以檢測胚胎品質及/或辨識胚胎。換句話說,細胞檢測系統100利用人工智慧檢測胚胎品質及/或辨識胚胎可包含兩個階段。第一階段為訓練階段,細胞檢測系統100可將時間序列的整個資料量,或是縮減時間之資料量的特徵數值輸入至類神經網路中,也可進一步將至少一個關鍵形變向量特徵數值輸入至類神經網路中,以建立細胞品質檢測模型。第二階段為人工智慧之偵測階段。在類神經網路訓練完成後,處理器13可以利用已經訓練完成的類神經網路之細胞品質檢測模型,判斷胚胎品質以及辨識胚胎。因此,本發明的細胞檢測系統100可以避免醫療人員或是研究人員以主觀的方式來判斷胚胎的優劣,因此針對不孕症的療程,其受孕成功率能夠大幅度地提升。As mentioned above, the processor 13 has a neural network, such as a deep neural network (DNN), which can perform machine learning and deep learning functions. Therefore, the neural network of the processor 13 can be trained. In order to train the neural network of the processor 13, in step S207, the cell detection system 100 can input the deformation vector information and at least one key deformation vector characteristic value to the neural network in the processor 13 to train the neural network. road. After the quasi-neural network is trained, according to step S208, the processor 13 can use the quasi-neural network to establish a cell quality detection model with artificial intelligence procedures to detect embryo quality and/or identify embryos. In other words, the cell detection system 100 using artificial intelligence to detect embryo quality and/or identify embryos may include two stages. The first stage is the training stage. The cell detection system 100 can input the entire data volume of the time series or the feature value of the reduced time data volume into the neural network, and can further input at least one key deformation vector feature value Into a neural network to build a cell quality detection model. The second stage is the detection stage of artificial intelligence. 
After the neural network is trained, the processor 13 can use the trained neural network's cell quality detection model to judge embryo quality and identify embryos. Therefore, the cell detection system 100 of the present invention prevents medical personnel or researchers from judging embryo quality subjectively, so that for infertility treatments the success rate of conception can be greatly improved.
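The two-stage flow (training, then artificial-intelligence detection) can be illustrated with a toy stand-in model. A 1-nearest-neighbour rule replaces the DNN here purely to keep the sketch self-contained, and the class name, feature values, and labels are all invented for illustration; this is not the patent's actual network.

```python
class CellQualityModel:
    """Toy stand-in for the trained cell quality detection model.

    Stage 1 (fit): store labelled feature vectors.
    Stage 2 (predict): 1-nearest-neighbour vote, a placeholder for the
    trained DNN's inference step.
    """
    def __init__(self):
        self.samples = []

    def fit(self, features, labels):
        self.samples = list(zip(features, labels))

    def predict(self, x):
        def dist(a):
            return sum((ai - xi) ** 2 for ai, xi in zip(a, x))
        return min(self.samples, key=lambda s: dist(s[0]))[1]

# Hypothetical feature vectors: [local-max strain, local-min strain, mean strain]
train_x = [[0.336, 0.061, 0.15], [0.120, 0.050, 0.08]]
train_y = ["good", "poor"]
model = CellQualityModel()
model.fit(train_x, train_y)          # stage 1: training
print(model.predict([0.30, 0.06, 0.14]))  # stage 2: detection -> good
```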

FIG. 3 is a schematic diagram of adding extra steps to the artificial-intelligence cell detection system 100 using photodynamic technology to enhance the accuracy of cell detection. To further improve the accuracy of judging embryo quality and identifying embryos, the cell detection system 100 can also introduce morphological detection techniques to strengthen the neural network's cell detection capability, as follows.
Step S201: obtain a plurality of cells.
Step S301: use a morphological edge detection technique to detect edge feature data of the cells between the first time point and the second time point, and input the edge feature data into the neural network.
Step S302: use a morphological ellipse detection method to detect, according to the edge feature data, the division time point and the division count of each cell between the first time point and the second time point, and input the division time point and the division count of each cell into the neural network to train the neural network.

Similarly, for simplicity of description, the following "cells" are described using "embryos" as the embodiment; however, the present invention is not limited thereto, and the cells can be germ cells, nerve cells, tissue cells, animal or plant cells, or any cells requiring study and observation. To further improve the accuracy of judging embryo quality and identifying embryos, after the cell detection system 100 obtains a plurality of cells (embryos) in the aforementioned step S201, in step S301 the processor 13 can use a morphological edge detection technique to detect the edge feature data of the embryos between the first time point and the second time point, and input the edge feature data into the neural network. The edge feature data can be the contour of the whole embryo or of a specific part (for example, the blastocyst), and its data format can be expressed as a set of coordinates. For example, on a two-dimensional plane, the contour of a single embryo can be a closed line segment expressed as (X1, Y1) to (XL, YL), where L is a positive integer and a larger L gives a higher resolution.
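A minimal sketch of extracting a contour's coordinates from a segmented binary image is shown below. It uses a simple 4-connected neighbour test on an assumed pre-segmented mask and is only a stand-in for the morphological edge detection the description refers to; the patent does not specify the actual operator.

```python
def boundary_points(mask):
    """Return edge pixels of a binary mask: foreground pixels with at
    least one 4-connected background (or out-of-image) neighbour."""
    h, w = len(mask), len(mask[0])
    edges = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if not (0 <= nx < w and 0 <= ny < h) or not mask[ny][nx]:
                    edges.append((x, y))
                    break
    return edges

# A tiny segmented image: 1 = embryo region, 0 = background
mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(boundary_points(mask))  # [(1, 1), (2, 1), (1, 2), (2, 2)]
```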
Then, in step S302, the processor 13 can use a morphological ellipse detection method to detect, according to the edge feature data, the division time point and the division count of each embryo between the first time point and the second time point, and input the division time point and the division count of each embryo into the neural network to train it. Here, the processor 13 can preset at least one elliptical curve-fitting function. After the coordinates (X1, Y1) to (XL, YL) of at least one closed line segment are detected in step S301, the processor 13 can use the at least one elliptical curve-fitting function to test whether the coordinates (X1, Y1) to (XL, YL) conform to an elliptical closed curve. The processor 13 can thereby obtain the number of successfully fitted elliptical closed curves in the image at a certain time point, and can regard this number as the division count of the embryo. Therefore, from the information in the N images at different time points, the processor 13 can detect the division time point and the division count of each embryo. In other words, compared with the cell detection method of steps S201 to S208 described in FIG. 2, the cell detection system 100 can introduce the additional steps S301 to S302 to obtain more information (such as the edge feature data and the division time point and division count of each embryo) for training the neural network. The training of the neural network is thus further optimized, which increases the accuracy of the artificial-intelligence detection of cell quality.
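The fit-ellipses-and-count idea can be sketched as follows. The crude axis-aligned ellipse test below stands in for the preset elliptical curve-fitting functions of the processor 13, which the patent does not specify; a real implementation would use a general conic least-squares fit, and the tolerance and contours here are invented for illustration.

```python
import math

def looks_elliptical(contour, tol=0.05):
    """Crude axis-aligned ellipse test for a closed contour.

    Estimates the centre and semi-axes from the contour's extents and
    accepts the contour if every point lies near the implied ellipse.
    """
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    cx, cy = (max(xs) + min(xs)) / 2, (max(ys) + min(ys)) / 2
    a, b = (max(xs) - min(xs)) / 2, (max(ys) - min(ys)) / 2
    if a == 0 or b == 0:
        return False
    for x, y in contour:
        r = ((x - cx) / a) ** 2 + ((y - cy) / b) ** 2
        if abs(r - 1) > tol:
            return False
    return True

def count_divisions(contours):
    """Number of contours accepted as ellipses ~ the division count."""
    return sum(1 for c in contours if looks_elliptical(c))

# Two synthetic contours: a sampled ellipse and a square outline
ellipse = [(3 * math.cos(t / 50 * 2 * math.pi) + 10,
            2 * math.sin(t / 50 * 2 * math.pi) + 5) for t in range(50)]
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(count_divisions([ellipse, square]))  # 1
```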

FIG. 4 is a schematic diagram of the input data and output data of the processor 13, which contains the neural network, in the artificial-intelligence cell detection system 100 using photodynamic technology. As mentioned above, the image capture device 12 can photograph a plurality of embryos between two different time points to generate N images. The cell detection system 100 can analyze the N images using DIC technology to generate the photodynamic vector parameters, which in turn are used to generate the deformation vector information D1 and the at least one key deformation vector characteristic value D2. The deformation vector information D1 and the at least one key deformation vector characteristic value D2 can be used to train the neural network in the processor 13. After the neural network is trained, the processor 13 can judge embryo quality and/or identify embryos through an artificial intelligence procedure, and can output the cell quality output data D6. It should be understood that each of the N images can be a two-dimensional image or a three-dimensional image.
If each of the N images is a two-dimensional image, the photodynamic vector parameters, the deformation vector information, and the at least one key deformation vector characteristic value are in a K-dimensional data format. For example, at time point T and a specific wavelength λ of the hyperspectral instrument, the optical signal of the pixel S1 at the coordinates (x, y) can be expressed as S1(λ, T, x, y), a four-dimensional signal format. Similarly, if each of the N images is a three-dimensional image, the photodynamic vector parameters, the deformation vector information, and the at least one key deformation vector characteristic value are in a (K+1)-dimensional data format. For example, at time point T and a specific wavelength λ, the optical signal of the pixel S2 at the coordinates (x, y, z) can be expressed as S2(λ, T, x, y, z), a five-dimensional signal format. K is a positive integer greater than 2. As expected, when the data format of the cell detection system 100 has a higher dimension, the computational complexity becomes higher; when it has a lower dimension, the computational complexity becomes lower.
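The effect of data dimensionality on computational cost can be made concrete by counting the samples in a dense K-dimensional signal grid; the grid sizes below are invented for illustration and are not from the patent.

```python
from functools import reduce
from operator import mul

def signal_size(dims):
    """Number of samples in a dense K-dimensional signal grid."""
    return reduce(mul, dims, 1)

dims_2d = (8, 40, 128, 128)        # (λ, T, x, y): a 4-D format like S1
dims_3d = (8, 40, 128, 128, 16)    # (λ, T, x, y, z): one extra dimension, like S2

print(signal_size(dims_2d))  # 5242880
print(signal_size(dims_3d))  # 83886080 -> 16x more samples to process
```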

Moreover, as mentioned above, after the neural network of the processor 13 is trained with the edge feature data D3, the division time point D5, and the division count D4 of each embryo, the neural network can optimize the cell quality detection model through the artificial intelligence procedure. Therefore, as shown in FIG. 4, the neural network in the processor can receive the deformation vector information D1, the at least one key deformation vector characteristic value D2, the edge feature data D3, the division count D4, and the division time point D5. After the neural network in the processor is trained, the processor can use the artificial intelligence procedure to select retrieved eggs/embryos for infertile women, and output the cell quality output data D6. The cell quality output data D6 can be in any data format, such as grading data on embryo quality, a quality-ranking percentage of at least one embryo, or the cell quality of at least one embryo. Cell quality can be determined by the quantitative content of the cell's detailed chemical composition, the quality of its genes, the quality of the cell's developmental state within a specific period, or whether the cell has developed lesions. For germ cells, it can also be determined by pregnancy outcome, newborn health, and sex.
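One possible shape for the cell quality output data D6, a quality-ranking percentage over several embryos, can be sketched as follows; the embryo IDs, scores, and percentile formula are illustrative assumptions, not the patent's specified output format.

```python
def rank_embryos(scores):
    """Order embryo IDs by predicted quality score (descending) and
    attach a simple percentile-style ranking to each."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    n = len(ordered)
    return [(eid, scores[eid], round(100 * (n - i) / n))
            for i, eid in enumerate(ordered)]

scores = {"E1": 0.91, "E2": 0.35, "E3": 0.67}
for eid, score, pct in rank_embryos(scores):
    print(eid, score, f"{pct}%")
# E1 0.91 100%
# E3 0.67 67%
# E2 0.35 33%
```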

In summary, the present invention describes a cell detection method and system using artificial intelligence with photodynamic technology. The user group of the cell detection system can be women with infertility. Medical personnel can first establish an artificial-intelligence cell quality detection model based on a large amount of cell data, and then carry out treatment for women with infertility. Moreover, the artificial-intelligence neural network can receive various photodynamic and morphological parameters, such as the deformation vector information, the at least one key deformation vector characteristic value, the edge feature data, the division count, and the division time point. The use of the neural network therefore prevents medical personnel or researchers from judging embryo quality subjectively. A woman with infertility can first have multiple embryos cultured; after the cell detection system determines the best embryo with the artificial-intelligence cell quality detection model, that embryo is artificially implanted into the mother's uterus to increase the success rate of conception.
The above are only preferred embodiments of the present invention, and all equivalent changes and modifications made within the scope of the patent claims of the present invention shall fall within the scope of the present invention.

100: cell detection system using artificial intelligence of photodynamic technology
10: carrier
11: lens module
12: image capture device
13: processor
14: memory
S201 to S208: steps
S301 to S302: steps
D1: deformation vector information
D2: at least one key deformation vector characteristic value
D3: edge feature data
D4: division count
D5: division time point
D6: cell quality output data

FIG. 1 is a block diagram of an embodiment of the cell detection system using artificial intelligence with photodynamic technology of the present invention.
FIG. 2 is a flowchart of the cell detection method performed by the cell detection system of FIG. 1.
FIG. 3 is a schematic diagram of the cell detection system of FIG. 1 with additional steps added to enhance the accuracy of cell detection.
FIG. 4 is a schematic diagram of the input data and output data of the processor with the neural network in the cell detection system of FIG. 1.

100: cell detection system using artificial intelligence of photodynamic technology

10: carrier

11: lens module

12: image capture device

13: processor

14: memory

Claims (12)

1. A cell detection method using artificial intelligence with photodynamic technology, comprising:
obtaining a plurality of cells;
sampling N images of the cells between a first time point and a second time point;
determining an analysis region of each cell;
performing an image processing procedure on the analysis region of each cell according to the N images of the cells, to generate a plurality of photodynamic vector parameters;
generating deformation vector information of the cells between the first time point and the second time point according to the photodynamic vector parameters;
obtaining at least one key deformation vector characteristic value of the cells according to the deformation vector information;
inputting the deformation vector information and the at least one key deformation vector characteristic value into a neural network to train the neural network; and
using the neural network to establish a cell quality detection model through an artificial intelligence procedure, to detect cell quality and/or identify cells;
wherein the first time point is before the second time point, and N is a positive integer greater than 2.
2. The method of claim 1, wherein the cells are a plurality of germ cells, and the first time point and the second time point are any two time points within an observation period during which the germ cells develop and divide in a culture medium.
3. The method of claim 1, wherein the cells are a plurality of embryos, the analysis region is a blastocyst region or a user-defined region of each of the embryos, and a range of the analysis region is smaller than the size of a single cell.
4. The method of claim 1, wherein performing the image analysis on the analysis region of each cell to generate the photodynamic vector parameters is performing a digital image correlation (DIC) analysis on the analysis region of each cell, and the photodynamic vector parameters are obtained from the changes in deformation of the cells between the (M-1)-th image and the M-th image of the N images, M≤N.
5. The method of claim 4, wherein the photodynamic vector parameters comprise displacement vector information, strain vector information, and deformation velocity vector information.
6. The method of claim 1, further comprising:
performing a digital image correlation (DIC) analysis on the analysis region of each cell according to the N images of the cells, to obtain at least one local maximum (Local Maximum) of strain from the deformation vector information of the cells;
wherein the at least one key deformation vector characteristic value comprises the at least one local-maximum strain.
7. The method of claim 1, further comprising:
performing the digital image correlation analysis on the analysis region of each cell according to the N images of the cells, to obtain at least one local minimum (Local Minimum) of strain from the deformation vector information of the cells;
wherein the at least one key deformation vector characteristic value comprises the at least one local-minimum strain.
8. The method of claim 1, wherein each of the N images is a two-dimensional image or a three-dimensional image; if each of the N images is the two-dimensional image, the photodynamic vector parameters, the deformation vector information, and the at least one key deformation vector characteristic value are in a K-dimensional data format; and if each of the N images is the three-dimensional image, the photodynamic vector parameters, the deformation vector information, and the at least one key deformation vector characteristic value are in a (K+1)-dimensional data format, K being a positive integer greater than 2.
9. The method of claim 1, wherein each of the N images is a grayscale image, a hyperspectral image, or a depth-of-field composite image.
10. The method of claim 1, further comprising:
using a morphological edge detection technique to detect edge feature data of the cells between the first time point and the second time point, and inputting the edge feature data into the neural network to train the neural network; and
using a morphological ellipse detection method to detect, according to the edge feature data, a division time point and a division count of each cell between the first time point and the second time point, and inputting the division time point and the division count of each cell into the neural network to train the neural network.
11. The method of claim 10, wherein after the neural network is trained with the edge feature data and the division time point and division count of each cell, the neural network optimizes the cell quality detection model through the artificial intelligence procedure.
12. A cell detection system using artificial intelligence with photodynamic technology, comprising:
a carrier having an accommodating slot for placing a plurality of cells;
a lens module facing the carrier for magnifying details of the cells;
an image capture device facing the lens module for obtaining images of the cells through the lens module;
a processor coupled to the lens module and the image capture device for adjusting a magnification of the lens module and processing the images of the cells; and
a memory coupled to the processor for storing training data and analysis data of image processing;
wherein after the cells are placed in the accommodating slot of the carrier, the processor controls the image capture device to sample N images of the cells through the lens module between a first time point and a second time point; the processor determines an analysis region of each cell and, according to the N images of the cells, performs an image processing procedure on the analysis region of each cell to generate a plurality of photodynamic vector parameters; the processor generates deformation vector information of the cells between the first time point and the second time point according to the photodynamic vector parameters, and obtains at least one key deformation vector characteristic value of the cells according to the deformation vector information; the processor comprises a neural network, and the deformation vector information and the at least one key deformation vector characteristic value are used to train the neural network; the processor uses the neural network to establish a cell quality detection model through an artificial intelligence procedure to detect cell quality and/or identify cells; the first time point is before the second time point, and N is a positive integer greater than 2.
TW109117822A 2019-11-27 2020-05-28 Artificial intelligence based cell detection method by using optical kinetics and system thereof TWI754945B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962941612P 2019-11-27 2019-11-27
US62/941,612 2019-11-27

Publications (2)

Publication Number Publication Date
TW202120906A true TW202120906A (en) 2021-06-01
TWI754945B TWI754945B (en) 2022-02-11

Family

ID=75996128

Family Applications (2)

Application Number Title Priority Date Filing Date
TW109117840A TWI781408B (en) 2019-11-27 2020-05-28 Artificial intelligence based cell detection method by using hyperspectral data analysis technology
TW109117822A TWI754945B (en) 2019-11-27 2020-05-28 Artificial intelligence based cell detection method by using optical kinetics and system thereof

Family Applications Before (1)

Application Number Title Priority Date Filing Date
TW109117840A TWI781408B (en) 2019-11-27 2020-05-28 Artificial intelligence based cell detection method by using hyperspectral data analysis technology

Country Status (2)

Country Link
CN (2) CN112862742A (en)
TW (2) TWI781408B (en)

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004088574A1 (en) * 2003-04-02 2004-10-14 Amersham Biosciences Uk Limited Method of, and computer software for, classification of cells into subpopulations
CN1563947A (en) * 2004-03-18 2005-01-12 中国科学院上海技术物理研究所 High microspectrum imaging system
US7711174B2 (en) * 2004-05-13 2010-05-04 The Charles Stark Draper Laboratory, Inc. Methods and systems for imaging cells
ES2549205T3 (en) * 2005-10-14 2015-10-26 Unisense Fertilitech A/S Determination of a change in a population of cells
US8744775B2 (en) * 2007-12-28 2014-06-03 Weyerhaeuser Nr Company Methods for classification of somatic embryos comprising hyperspectral line imaging
WO2009119330A1 (en) * 2008-03-24 2009-10-01 株式会社ニコン Method for analyzing image for cell observation, image processing program, and image processing device
JP2009229274A (en) * 2008-03-24 2009-10-08 Nikon Corp Method for analyzing image for cell observation, image processing program and image processor
BRPI1012763B1 (en) * 2009-06-25 2019-04-09 Ramot At Tel Aviv University Ltd. Non-Invasive Method for Screening an Avian Egg, Specimen Holder for Collecting a Specter of an Avian Egg and Apparatus for Non-Invasive Screening of an Avian Egg
WO2013004239A1 (en) * 2011-07-02 2013-01-10 Unisense Fertilitech A/S Adaptive embryo selection criteria optimized through iterative customization and collaboration
EP2757372B1 (en) * 2011-09-16 2023-08-23 AVE Science & Technology Co., Ltd. Device and method for erythrocyte morphology analysis
EP2890781B1 (en) * 2012-08-30 2020-09-23 Unisense Fertilitech A/S Automatic surveillance of in vitro incubating embryos
WO2016061586A1 (en) * 2014-10-17 2016-04-21 Cireca Theranostics, Llc Methods and systems for classifying biological samples, including optimization of analyses and use of correlation
US9971966B2 (en) * 2016-02-26 2018-05-15 Google Llc Processing cell images using neural networks
CN106226247A (en) * 2016-07-15 2016-12-14 暨南大学 A kind of cell detection method based on EO-1 hyperion micro-imaging technique
IL267009B (en) * 2016-12-01 2022-07-01 Berkeley Lights Inc Automatic detection and repositioning of micro-objects in micropoloidal devices
CN106815566B (en) * 2016-12-29 2021-04-16 天津中科智能识别产业技术研究院有限公司 Face retrieval method based on multitask convolutional neural network
CN107064019B (en) * 2017-05-18 2019-11-26 西安交通大学 The device and method for acquiring and dividing for dye-free pathological section high spectrum image
CN110945593A (en) * 2017-06-16 2020-03-31 通用电气健康护理生物科学股份公司 Method for predicting the production of and modeling a process in a bioreactor
CN108550133B (en) * 2018-03-02 2021-05-18 浙江工业大学 Cancer cell detection method based on fast R-CNN
TWI664582B (en) * 2018-11-28 2019-07-01 靜宜大學 Method, apparatus and system for cell detection
CN109883966B (en) * 2019-02-26 2021-09-10 江苏大学 Method for detecting aging degree of eriocheir sinensis based on multispectral image technology
CN110136775A (en) * 2019-05-08 2019-08-16 赵壮志 A kind of cell division and anti-interference detection system and method
CN110390676A (en) * 2019-07-26 2019-10-29 腾讯科技(深圳)有限公司 The cell detection method of medicine dye image, intelligent microscope system under microscope

Also Published As

Publication number Publication date
TWI781408B (en) 2022-10-21
TWI754945B (en) 2022-02-11
TW202121241A (en) 2021-06-01
CN112862742A (en) 2021-05-28
CN112862743A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
JP7072067B2 (en) Systems and methods for estimating embryo viability
US20240249540A1 (en) Image feature detection
JPWO2018101004A1 (en) Cell image evaluation apparatus and cell image evaluation control program
US12067712B2 (en) Complex system for contextual mask generation based on quantitative imaging
EP3812448A1 (en) Information processing device, information processing method, program, and information processing system
CN116051560B (en) Embryo dynamics intelligent prediction system based on embryo multidimensional information fusion
CN114972167B (en) Embryo pregnancy prediction method and system based on spatiotemporal attention and cross-modal fusion
EP3485458A1 (en) Information processing device, information processing method, and information processing system
JP2022547900A (en) Automated evaluation of quality assurance metrics used in assisted reproductive procedures
US10748288B2 (en) Methods and systems for determining quality of an oocyte
TWI782557B (en) Cell counting and culture interpretation method, system and computer readable medium thereof
Kanakasabapathy et al. Deep learning mediated single time-point image-based prediction of embryo developmental outcome at the cleavage stage
Malmsten et al. Automated cell stage predictions in early mouse and human embryos using convolutional neural networks
Tapia et al. Benchmarking YOLO models for intracranial hemorrhage detection using varied CT data sources
TWI754945B (en) Artificial intelligence based cell detection method by using optical kinetics and system thereof
Chen et al. Automating blastocyst formation and quality prediction in time-lapse imaging with adaptive key frame selection
Eswaran et al. Deep learning algorithms for timelapse image sequence-based automated blastocyst quality detection
Betegón-Putze et al. MyROOT: A novel method and software for the semi-automatic measurement of plant root length
Sharma et al. Exploring Embryo Development at the Morula Stage-an AI-based Approach to Determine Whether to Use or Discard an Embryo
Koppelman Utilizing Machine Learning for Grading Bovine Cumulus-Oocyte Complexes
RU2810125C1 (en) Automated assessment of quality assurance indicators for assisted reproduction procedures
AU2019101174A4 (en) Systems and methods for estimating embryo viability
CN119207583A (en) Tissue sample quality evaluation method and device, analysis equipment, storage medium
RU2800079C2 (en) Systems and methods of assessing the viability of embryos
Pastor Escuredo et al. Unraveling the embryonic fate map through the mechanical signature of of cells and their trajectories