
CN118736292A - A method, device and program product for auxiliary diagnosis of ovarian adnexal masses based on ultrasound

Info

Publication number
CN118736292A
Authority
CN
China
Prior art keywords
ovarian
image
mass
classification
ultrasound
Legal status
Pending
Application number
CN202410802050.5A
Other languages
Chinese (zh)
Inventor
孙立涛
吴瑛男
代汶利
孔德兴
Current Assignee
Zhejiang Provincial Peoples Hospital
Original Assignee
Zhejiang Provincial Peoples Hospital
Application filed by Zhejiang Provincial Peoples Hospital
Priority to CN202410802050.5A
Publication of CN118736292A


Classifications

    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06N 3/04: Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06T 7/11: Image analysis; segmentation; region-based segmentation
    • G06V 10/25: Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/774: Processing image or video features in feature spaces; generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06T 2207/10132: Indexing scheme for image analysis or image enhancement; image acquisition modality; ultrasound image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The present application relates to the field of intelligent medicine, and specifically to an ultrasound-based method, device and program product for the auxiliary diagnosis of ovarian adnexal masses. The method includes: S1, acquiring an ultrasound image of the patient's gynecological adnexal region; S2, inputting the ultrasound image into a first neural network for classification to obtain a normal-ovary classification probability and an ovarian-mass classification probability; and S3, when the image is determined to contain an ovarian mass, inputting the ultrasound image into a second neural network for classification to obtain a benign-mass classification probability and a malignant-mass classification probability. By integrating ultrasound image information and clinicopathological information on ovarian masses, the present application helps to use medical data more effectively, save medical resources, assist physicians in making accurate diagnoses and improve diagnostic efficiency, and therefore has considerable clinical value.

Description

A method, device and program product for auxiliary diagnosis of ovarian adnexal masses based on ultrasound

Technical Field

The present application relates to the field of intelligent medicine, and specifically to an ultrasound-based auxiliary diagnosis method, device, program product and computer-readable storage medium for ovarian adnexal masses.

Background Art

Ovarian cancer is one of the three most common gynecological malignancies and has the highest mortality rate among them, making it a major threat to women's lives and health. According to the National Cancer Center's 2024 national cancer report, the incidence and mortality of ovarian cancer in China have risen markedly. Ovarian cancer arises deep in the pelvis, has an insidious onset and lacks specific clinical symptoms, so it is easily misdiagnosed or missed. More than 75% of patients are first diagnosed at an advanced stage, having missed the optimal window for treatment, and the 5-year relative survival rate is only 29%. Accurate identification and diagnosis of ovarian cancer is therefore of great significance for improving survival and prognosis. Ultrasound is convenient, inexpensive, real-time, non-invasive and repeatable, and is the first-choice imaging examination for adnexal masses in Chinese medical institutions of all levels. However, because of the complex anatomical location and biological behavior of adnexal masses, and because ultrasound imaging is time-consuming and highly operator-dependent, the accuracy of ultrasound alone in diagnosing ovarian cancer remains relatively low. To improve diagnostic accuracy, many researchers have therefore built clinical diagnostic models and adnexal-mass classification systems based on patients' general clinical data, ultrasound image features and laboratory tests, such as the Risk of Malignancy Index (RMI), the Gynecologic Imaging Reporting and Data System (GI-RADS), the Simple Rules (SR), the ADNEX model for adnexal lesions and the Ovarian-Adnexal Reporting and Data System (O-RADS). However, owing to the high complexity of ovarian-mass ultrasound images and the deep pelvic anatomy, current ultrasound diagnosis of adnexal masses still relies largely on features extracted manually by experienced radiologists; such subjective assessment demands considerable experience and skill, which makes identifying ovaries and ovarian masses and screening for ovarian cancer even more challenging. Diagnostic accuracy and efficiency remain limited, and existing approaches lack the practicality and generalizability needed for wide adoption in primary-care institutions.

In recent years, with the rapid development of artificial intelligence (AI), "medical AI" has shown great promise. In particular, deep learning can automatically extract large numbers of complex, shallow-to-deep features from imaging data, analyze them quantitatively, avoid operator-dependent feature selection and, through end-to-end learning, achieve intelligent recognition and classification, making it a core technology in current AI research. It can reduce sonographers' workload and improve diagnostic efficiency; reduce the subjective bias of manual interpretation and improve diagnostic accuracy; and provide reference clinical suggestions for disease prediction, risk assessment and treatment selection, thereby raising the level of primary medical services and supporting tiered diagnosis and treatment. Developing a fast and accurate intelligent ultrasound system for automatic detection and diagnosis is therefore very important for screening and diagnosing ovarian cancer. Existing intelligent diagnostic models, however, still require manual cropping of ultrasound images, are inefficient, and lack an integrated ovarian-mass ultrasound system covering detection, segmentation, classification/diagnosis and interpretability analysis, which limits their practical clinical value.

Summary of the Invention

In view of the above problems, the present invention proposes an ultrasound-based method for the auxiliary diagnosis of ovarian adnexal masses, which specifically includes:

S1: acquiring an ultrasound image of the patient's gynecological adnexal region;

S2: inputting the ultrasound image into a first neural network for classification to obtain a normal-ovary classification probability and an ovarian-mass classification probability;

S3: when the image is determined to contain an ovarian mass, inputting the ultrasound image into a second neural network for classification to obtain a benign-mass classification probability and a malignant-mass classification probability.

The classification process first performs target detection to determine the target position, then segments the target region based on that position to obtain a segmented-region image, and finally performs classification on the segmented-region image to obtain the result.

Optionally, the classification process of the first neural network first performs ovary detection on the ultrasound image to obtain the ovary position, then performs image segmentation based on the ovary position to obtain an ovary image, and finally classifies the ovary image as a normal ovary or an ovarian mass.

Optionally, the classification process of the second neural network first performs mass detection on the ovarian mass to obtain the mass position, then performs image segmentation based on the mass position to obtain a mass-region image, and finally classifies the mass-region image as a benign or malignant mass.
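For illustration only, the following Python sketch shows how the two-stage cascade described above could be chained (detect and segment, then classify). The interfaces first_net.predict and second_net.predict and the helper crop_to_mask are hypothetical placeholders, and the default threshold values 0.9 and 0.65 are taken from the specific embodiments described later; this is a sketch, not the claimed implementation.

```python
# Illustrative sketch of the two-stage cascade (hypothetical interfaces).
import numpy as np

def crop_to_mask(image: np.ndarray, mask: np.ndarray, pad: int = 25) -> np.ndarray:
    """Crop `image` to the bounding box of the non-zero `mask`, padded by `pad` pixels."""
    ys, xs = np.nonzero(mask)
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad, image.shape[0])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad, image.shape[1])
    return image[y0:y1, x0:x1]

def classify_adnexal_image(image, first_net, second_net,
                           mass_threshold=0.9, benign_threshold=0.65):
    """Run the S2/S3 cascade on one ultrasound image.

    `first_net` and `second_net` are assumed to expose predict(image) returning
    (segmentation_mask, class_probabilities), following the detect -> segment ->
    classify flow described above.
    """
    # Stage 1: segment the ovary/mass region and classify normal ovary vs. mass.
    ovary_mask, probs = first_net.predict(image)          # probs = [p_normal, p_mass]
    p_mass = float(probs[1])
    if p_mass <= mass_threshold:
        return {"finding": "normal ovary", "p_mass": p_mass, "mask": ovary_mask}

    # Stage 2: crop the detected region and classify benign vs. malignant.
    patch = crop_to_mask(image, ovary_mask)
    mass_mask, probs2 = second_net.predict(patch)          # probs2 = [p_benign, p_malignant]
    p_benign = float(probs2[0])
    finding = "benign mass" if p_benign > benign_threshold else "malignant mass"
    return {"finding": finding, "p_benign": p_benign, "mask": mass_mask}
```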

Optionally, S1 is replaced by S11: acquiring a group of ultrasound images of the patient's gynecological adnexal region, performing steps S2-S3 on each image in the group in turn to obtain each image's classification probability, and computing a weighted average of the per-image classification probabilities to obtain the patient's diagnostic prediction.

S1 may also be replaced by S12: acquiring a real-time ultrasound video of the patient's gynecological adnexal region and converting the video into a set of video-frame images. S2 is then replaced by S21: inputting the video-frame images frame by frame into the first neural network for classification to obtain per-frame classification probabilities, and computing a weighted average of these per-frame probabilities to obtain the probability that the real-time video shows a normal ovary or an ovarian mass.

S3 is replaced by S31: when the real-time video is determined to show an ovarian mass, inputting the video-frame images frame by frame into the second neural network for classification to obtain per-frame classification probabilities, and computing a weighted average of these per-frame results to obtain the malignant- or benign-mass probability of the real-time video; the benign/malignant diagnosis for the patient's real-time video is then obtained from this probability.
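As a non-limiting illustration of the S12/S21/S31 video variant, the sketch below converts a video into frames and combines per-frame class probabilities by a weighted average. The uniform weights are an assumption, since the description states only that a weighted average is used.

```python
# Sketch of the video workflow: split a clip into frames, classify each frame
# elsewhere, then combine frame-level probabilities by a weighted average.
import cv2
import numpy as np

def video_to_frames(video_path: str) -> list:
    """Convert an ultrasound video into a list of frame images (S12)."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames

def aggregate_frame_probs(frame_probs: np.ndarray, weights=None) -> np.ndarray:
    """Weighted average of per-frame class probabilities (S21/S31).

    frame_probs has shape (num_frames, num_classes); returns (num_classes,).
    """
    if weights is None:
        weights = np.ones(len(frame_probs))    # assumption: uniform weights
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return (frame_probs * weights[:, None]).sum(axis=0)
```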

Optionally, S12 is replaced by S13: acquiring a real-time ultrasound video of the patient's gynecological adnexal region, converting the video into a set of video-frame images, grouping the frames consecutively and averaging each group to obtain N averaged images, and inputting the video-frame image set frame by frame into the first neural network to obtain the classification probability results, where N is a natural number greater than or equal to 6.

The training steps of the first neural network are:

acquiring an image set of ovaries and ovarian masses;

performing data annotation on the image set to obtain annotated images, the annotation following a case - mass - single-section-image hierarchy;

inputting the annotated images into a neural network for training to obtain the first neural network.

Optionally, in the case - mass - single-section-image hierarchy, a case may include results for multiple masses, and a mass may include results for multiple section images.

Optionally, the annotation labels include a normal-ovary label and an ovarian-mass label.

Optionally, the training of the first neural network further includes data preprocessing: the ovary and ovarian-mass image set is preprocessed to obtain preprocessed data, and data annotation is then performed on the preprocessed data.

Optionally, the data preprocessing includes one or more of the following: data augmentation, image flipping and image rotation.

The training steps of the second neural network are:

acquiring an image set of ovarian masses;

performing data annotation on the ovarian-mass image set to obtain mass-annotated images;

inputting the mass-annotated images into a neural network for training to obtain the second neural network.

Optionally, the training of the second neural network further includes data processing: the ovarian-mass images are processed to obtain processed images, and the processed images are then input into the neural network for training to obtain the second neural network.

Optionally, the data processing includes one or more of the following: image cropping, image normalization and image boundary extension.

Optionally, the image boundary extension expands the ovary image outward by L pixels in each direction, where L is a natural number greater than or equal to 25.

Optionally, the annotation labels include benign/malignant classification labels.

Optionally, the benign/malignant classification labels are assigned based on the clinical stage and grade of the mass, its pathological subtype, the characteristics of the mass and the completeness of the image.

Optionally, the pathological-subtype label is represented by a three-dimensional one-hot code.
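A minimal illustration of the three-dimensional one-hot encoding, assuming the three classes correspond to healthy ovary, benign and malignant as used for the model output elsewhere in this description; the class order is an assumption.

```python
# Minimal sketch of a three-dimensional one-hot label encoding.
import numpy as np

CLASSES = ["healthy_ovary", "benign", "malignant"]   # assumed class order

def one_hot(label: str) -> np.ndarray:
    """Encode a pathology label as a 3-dimensional one-hot vector."""
    vec = np.zeros(len(CLASSES), dtype=np.float32)
    vec[CLASSES.index(label)] = 1.0
    return vec

# Example: one_hot("malignant") -> array([0., 0., 1.], dtype=float32)
```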

Further, the neural network includes a segmentation module and a classification module.

Optionally, the neural network adopts one or more of the following: MTANet, Mask R-CNN, U-Net with Classifier, Panoptic FPN, DeepLabV3+, SOLO, SOLOv2, HRNet.

Optionally, the neural network includes a detection module, a segmentation module and a classification module.

Optionally, the detection module adopts one or more of the following: YOLO, Faster R-CNN, SSD, DETR, CornerNet, CenterNet.

Optionally, the neural network further includes a visualization module: after the ovary or ovarian-mass image passes through the detection module or segmentation module, a feature map of the target region of interest is obtained, and the visualization module visualizes this target-region feature map to produce a visualization result.

The ultrasound image is classified by the first neural network to obtain the classification probabilities of a normal ovary and an ovarian mass; the ovarian-mass classification probability is compared with a first preset threshold, and if it exceeds the first threshold the image is judged to show an ovarian mass, otherwise a normal ovary.

Optionally, the second neural network performs classification to obtain the classification probabilities of a benign mass and a malignant mass; these probabilities are compared with a second preset threshold, and if the benign-mass classification probability exceeds the second preset threshold the mass is judged benign, otherwise malignant.

An object of the present invention is to provide a computer program product carrying a computer program/instructions which, when executed by a processor, implement the above ultrasound-based auxiliary diagnosis method for ovarian adnexal masses.

An object of the present invention is to provide a computer device comprising a memory, a processor and a computer program/instructions stored in the memory, the computer program/instructions being executed by the processor to implement the above ultrasound-based auxiliary diagnosis method for ovarian adnexal masses.

An object of the present invention is to provide a computer-readable storage medium storing a computer program/instructions which, when executed by a processor, implement the above ultrasound-based auxiliary diagnosis method for ovarian adnexal masses.

Advantages of the present invention:

1. Multiple tasks are integrated for end-to-end automatic diagnosis: real-time localization and detection of masses and normal ovaries, automatic discrimination between normal ovaries and masses, automatic segmentation of normal ovaries and masses, automatic diagnosis of mass nature with probability estimates, and automatic interpretability analysis.

2. By feeding adnexal ultrasound video into the established model, ovarian-mass screening can be performed and applied to clinical ovarian-mass diagnosis; the system can detect the ovary position in ultrasound video in real time and perform segmentation and classification, enabling video-based detection of the ovary position and of the nature of ovarian masses.

3. In view of the physiological position of the ovary and the workflow of clinical examination, the auxiliary diagnosis method proposed by the present invention can carry out detection, segmentation, classification and interpretability analysis of ovaries and ovarian masses in line with the clinical workflow, which facilitates practical clinical application, assists junior and intermediate physicians in their examinations, and improves their efficiency and accuracy.

Brief Description of the Drawings

In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.

FIG. 1 is a schematic flow chart of an ultrasound-based auxiliary diagnosis method for ovarian adnexal masses provided by an embodiment of the present invention;

FIG. 2 is a schematic diagram of an ultrasound-based auxiliary diagnosis system for ovarian adnexal masses provided by an embodiment of the present invention;

FIG. 3 is a schematic diagram of an ultrasound-based auxiliary diagnosis device for ovarian adnexal masses provided by an embodiment of the present invention;

FIG. 4 is a structural diagram of the overall design process of the diagnostic system provided by an embodiment of the present invention;

FIG. 5 shows the diagnostic performance of the auxiliary diagnosis system provided by an embodiment of the present invention in image-based testing;

FIG. 6 shows a comparison of diagnostic performance between the auxiliary diagnosis system provided by an embodiment of the present invention, radiologists and non-radiologist general practitioners;

FIG. 7 shows the detection, segmentation, diagnosis and interpretation of two types of ovarian masses by the auxiliary diagnosis system provided by an embodiment of the present invention.

Detailed Description

To enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings.

Some of the processes described in the specification, the claims and the above drawings contain multiple operations appearing in a specific order, but it should be clearly understood that these operations need not be executed in the order in which they appear herein and may be executed in parallel. Operation numbers such as S101 and S102 are used only to distinguish different operations and do not themselves imply any execution order. In addition, these processes may include more or fewer operations, which may be executed sequentially or in parallel. It should also be noted that terms such as "first" and "second" herein are used to distinguish different messages, devices, modules, etc.; they do not denote an order of precedence and do not require that the "first" and "second" items be of different types.

FIG. 1 is a schematic diagram of an ultrasound-based auxiliary diagnosis method for ovarian adnexal masses provided by an embodiment of the present invention, which specifically includes:

S1: acquiring an ultrasound image of the patient's gynecological adnexal region.

In one embodiment, an ultrasound image reflects differences in the acoustic parameters of the medium and can provide information different from that of optical, X-ray or gamma-ray imaging. Ultrasound resolves human soft tissue well and can yield useful signals with a dynamic range of more than 120 dB, which helps identify small lesions in biological tissue. Ultrasound can image living tissue without any staining.

In one specific embodiment, data collection: ultrasound images of adnexal masses and the corresponding clinical medical-record data are retrieved and collected from multi-center hospitals of different levels and in different regions.

In one specific embodiment, the present invention consecutively enrolled patients with ovarian masses found on ultrasound examination and healthy women with no ovary-related abnormalities. The inclusion criteria for patients with ovarian masses were that the patient underwent surgery with histopathological results obtained, or underwent ultrasound follow-up for at least 6 months, or underwent ultrasound follow-up until the lesion disappeared. The exclusion criteria were tumors confirmed to be non-ovarian by histopathological analysis, absence of histopathology together with loss to ultrasound follow-up, and poor ultrasound image quality. According to the World Health Organization classification of female genital tumors (2020 edition), pathological diagnoses were divided into benign and malignant tumors (borderline tumors being counted among the malignant tumors). The data were collected retrospectively by trained researchers using a unified form and reviewed by two researchers.

S2: inputting the ultrasound image into a first neural network for classification to obtain a normal-ovary classification probability and an ovarian-mass classification probability.

In one embodiment, the classification process first performs target detection to determine the target position, then segments the target region based on that position to obtain a segmented-region image, and finally performs classification on the segmented-region image to obtain the result.

In one embodiment, the classification process of the first neural network first performs ovary detection on the ultrasound image to obtain the ovary position, then performs image segmentation based on the ovary position to obtain an ovary image, and finally classifies the ovary image as a normal ovary or an ovarian mass.

In one embodiment, the training steps of the first neural network are:

acquiring an image set of ovaries and ovarian masses;

performing data annotation on the image set to obtain annotated images, the annotation following a case - mass - single-section-image hierarchy;

inputting the annotated images into a neural network for training to obtain the first neural network.

In one embodiment, in the case - mass - single-section-image hierarchy, a case may include results for multiple masses, and a mass may include results for multiple section images.

Optionally, the annotation labels include a normal-ovary label, an ovarian-mass label and an image-completeness label.

In one embodiment, the training of the first neural network further includes data preprocessing: the ovary and ovarian-mass image set is preprocessed to obtain preprocessed data, and data annotation is then performed on the preprocessed data.

In one embodiment, the data preprocessing includes one or more of the following: data augmentation, image flipping and image rotation.

In one embodiment, the neural network includes a segmentation module and a classification module.

Optionally, the neural network adopts one or more of the following: MTANet, Mask R-CNN, U-Net with Classifier, Panoptic FPN, DeepLabV3+, SOLO, SOLOv2, HRNet.

In one embodiment, the neural network includes a detection module, a segmentation module and a classification module.

Optionally, the detection module adopts one or more of the following: YOLO, Faster R-CNN, SSD, DETR, CornerNet, CenterNet.

In one embodiment, the neural network further includes a visualization module: after the ovary or ovarian-mass image passes through the detection module or segmentation module, a feature map of the target region of interest is obtained, and the visualization module visualizes this target-region feature map to produce a visualization result.
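The following sketch illustrates one way such a last-layer feature-map visualization could be produced (channel-averaged heat map overlaid on the input image); the overlay style and function names are illustrative assumptions, not the claimed visualization module.

```python
# Sketch of a last-layer feature-map visualization for interpretability, assuming
# the network exposes its final feature map as a (C, H, W) numpy array.
import cv2
import numpy as np

def feature_map_overlay(image_bgr: np.ndarray, feature_map: np.ndarray,
                        alpha: float = 0.4) -> np.ndarray:
    """Average a (C, H, W) feature map over channels, normalize it, resize it to the
    image size and blend it over the input BGR image as a heat map."""
    heat = feature_map.mean(axis=0)                                   # (H, W) channel average
    heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)     # normalize to [0, 1]
    heat = cv2.resize(heat, (image_bgr.shape[1], image_bgr.shape[0]))
    heat = cv2.applyColorMap((heat * 255).astype(np.uint8), cv2.COLORMAP_JET)
    return cv2.addWeighted(image_bgr, 1.0 - alpha, heat, alpha, 0)
```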

In one embodiment, the ultrasound image is classified by the first neural network to obtain the classification probabilities of a normal ovary and an ovarian mass; the ovarian-mass classification probability is compared with a first preset threshold, and if it exceeds the first threshold the image is judged to show an ovarian mass, otherwise a normal ovary.

In one embodiment, the first preset threshold is obtained during training of the first neural network: the parameter set when the loss function of the first neural network no longer changes during training is used as the preset threshold.

Optionally, the first preset threshold obtained when the first neural network is trained on the ovary and ovarian-mass dataset is 0.9.

In one specific embodiment, data curation includes data cleaning, renaming and format conversion, and the preprocessed data are admitted into the basic database after review: (1) ultrasound physicians with clinical experience remove unqualified images according to the image-quality requirements and keep a record; (2) images are renamed according to the naming rules; (3) different image formats are converted into a unified format, with proportional spot checks performed and recorded; (4) the curated data form the basic database, and statistics are compiled and recorded based on the collected information.

In one specific embodiment, the OvaMTA system proposed by the present invention consists of two neural networks: an OvaMTA-Seg network for automatic tumor detection and an OvaMTA-Diagnosis network for predicting malignancy. Finally, ROC curves, confusion matrices, accuracy, sensitivity, specificity and the kappa coefficient are used to evaluate model performance, and the model is compared with physicians in a video test, as shown in FIG. 4. The goal of the present invention is to develop and validate a deep-learning ovarian multi-task attention network (OvaMTA) that navigates the diagnostic plane in real time during ultrasound scanning to identify and mark healthy ovaries and ovarian masses, and to further screen for ovarian cancer, with retrospective testing and external validation on images from a heterogeneous multi-center dataset. The scalability of the tool was evaluated prospectively on real-world clinical video datasets. The OvaMTA system performs well in tumor localization and ovary localization in ultrasound scanning videos, and its ability to discriminate benign from malignant ovarian masses is excellent, improving the diagnostic accuracy of physicians, including general practitioners.
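For reference, the evaluation metrics named above (ROC/AUC, confusion matrix, accuracy, sensitivity, specificity and the kappa coefficient) could be computed for a binary benign/malignant test set as in the following scikit-learn sketch; variable names are illustrative and not part of the claimed system.

```python
# Sketch of the evaluation metrics for a binary benign/malignant test set.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix, accuracy_score, cohen_kappa_score

def evaluate_binary(y_true: np.ndarray, y_prob: np.ndarray, threshold: float = 0.5) -> dict:
    """y_true: 0/1 ground truth (1 = malignant); y_prob: predicted malignancy probability."""
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "auc": roc_auc_score(y_true, y_prob),
        "accuracy": accuracy_score(y_true, y_pred),
        "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
        "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        "kappa": cohen_kappa_score(y_true, y_pred),
        "confusion_matrix": np.array([[tn, fp], [fn, tp]]),
    }
```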

In one specific embodiment, the multi-task deep-learning model for detection, segmentation, classification/diagnosis and interpretability analysis of ovarian masses based on ultrasound images consists of two parts. OvaMTA-Seg coarsely segments the regions of interest (ROIs) in the ultrasound image and outputs the probability that an ovarian mass is present. When an ovarian mass is present, the system crops and normalizes all connected components of the ROI and feeds the corresponding image patches into OvaMTA-Diagnosis, which precisely segments the mass region and classifies it as benign or malignant. Finally, for each image, the detection box generated from the segmented ovarian-mass region and the corresponding malignancy probability of the mass are output. This automated intelligent diagnostic pipeline composed of OvaMTA-Seg and OvaMTA-Diagnosis is referred to as OvaMTA.

In one specific embodiment, the two models are trained separately. Considering the function of each model, the training set of OvaMTA-Seg includes ultrasound images of both ovaries and ovarian masses from all training cases, whereas the training set of OvaMTA-Diagnosis includes only ultrasound images of ovarian masses from all training cases. Both models are developed on the basis of MTANet, a single-stage multi-task attention network comprising a pyramid vision transformer (PVT) for classification and a UNet-like multi-attention decoder for segmentation. The present invention uses it as the framework for OvaMTA-Seg and OvaMTA-Diagnosis.
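The sketch below shows only the generic shared-encoder / segmentation-decoder / classification-head structure of a single-stage multi-task network in PyTorch; it is a schematic stand-in and not the MTANet (PVT backbone plus multi-attention decoder) referenced above.

```python
# Schematic skeleton of a multi-task network with a shared backbone, a segmentation
# decoder and a classification head. NOT the MTANet implementation.
import torch
import torch.nn as nn

class MultiTaskSketch(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Shared encoder (stands in for the PVT backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Segmentation decoder (stands in for the UNet-like multi-attention decoder).
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 2, stride=2),
        )
        # Classification head on pooled encoder features.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes),
        )

    def forward(self, x):
        feats = self.encoder(x)
        mask_logits = self.decoder(feats)        # pixel-wise segmentation logits
        class_logits = self.classifier(feats)    # image-level class logits
        return mask_logits, class_logits
```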

In one specific embodiment, all ovarian ultrasound images are converted to PNG format. All ovarian masses are then outlined using a labeling system (software version 4.4.0.1) developed by XXX Medical Technology Co., Ltd. The ROIs were annotated by a radiologist with 8 years of ultrasound experience and then reviewed and revised by a radiologist with more than 30 years of ultrasound experience. After generating JSON files for all datasets, the present invention uses OpenCV-python (version 4.8.1.78) to compute the bounding rectangle of each annotated ROI. For OvaMTA-Seg, in order to locate the ovary or nodule within the whole image, the complete ultrasound image, the binary ROI mask and the pathology label are used directly for supervised learning. For OvaMTA-Diagnosis, in order to focus more precisely on the ROI, the bounding rectangle is expanded by 25 pixels in each direction and the ultrasound image is cropped to obtain the ovarian-mass patch. The pathology labels are processed into three-dimensional one-hot codes, and the masks are processed into binary images scaled proportionally to the images.
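As an illustration of the ROI-patch step described above (bounding rectangle computed with OpenCV, expanded by 25 pixels in each direction, then cropped), a minimal sketch with illustrative variable names:

```python
# Sketch of ROI-patch extraction: bounding rectangle of a binary mask, expanded by
# 25 pixels in every direction, then used to crop the ultrasound image.
import cv2
import numpy as np

def crop_roi_patch(image: np.ndarray, mask: np.ndarray, margin: int = 25) -> np.ndarray:
    """mask: binary ROI mask (non-zero inside the annotated region)."""
    x, y, w, h = cv2.boundingRect(mask.astype(np.uint8))
    x0 = max(x - margin, 0)
    y0 = max(y - margin, 0)
    x1 = min(x + w + margin, image.shape[1])
    y1 = min(y + h + margin, image.shape[0])
    return image[y0:y1, x0:x1]
```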

The images in this study have various sizes; therefore, during training and validation all input images are resized to a uniform 352×352 pixels. To reduce overfitting and increase the size and diversity of the training set, the following transformations are applied to the images and masks as data augmentation: horizontal flipping with probability 0.5, vertical flipping with probability 0.5, and random rotation by 90 degrees.
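One possible implementation of this resizing and augmentation, using the albumentations library (the description does not name an augmentation library, so this choice and the rotation probability are assumptions):

```python
# Resizing and augmentation pipeline as one possible implementation.
import albumentations as A

train_transform = A.Compose([
    A.Resize(352, 352),            # unify all input sizes to 352x352
    A.HorizontalFlip(p=0.5),       # horizontal flip with probability 0.5
    A.VerticalFlip(p=0.5),         # vertical flip with probability 0.5
    A.RandomRotate90(p=0.5),       # random rotation by multiples of 90 degrees
])

# Applied jointly to the image and its mask so the annotation stays aligned:
# augmented = train_transform(image=image, mask=mask)
# image_aug, mask_aug = augmented["image"], augmented["mask"]
```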

In one specific embodiment, based on the output probabilities of the OvaMTA model on the training and validation sets, a two-step decision procedure is adopted and two thresholds are chosen. When the probability that an ovarian mass is present in the image is less than 0.9, the image is taken to contain only a healthy ovary and the segmentation result of the ovary is output; conversely, when the probability is greater than 0.9, the image is predicted to contain an ovarian mass.

The OvaMTA-Diagnosis and OvaMTA-Seg models are implemented in PyTorch on an NVIDIA GeForce RTX 3080 GPU. All models are trained with a batch size of 10 and the AdamW optimizer for 200 epochs, with the learning rate decayed at epoch 30. Ten percent of the images are randomly selected from the training and validation datasets as the validation set, used to optimize the network and set the hyperparameters. Finally, OvaMTA outputs the bounding boxes of the ROIs together with the probabilities of a healthy ovary, a benign tumor and a malignant tumor; these three probabilities sum to 1 and each lies in the range 0 to 1. The median Dice similarity coefficient (mDSC) is used as the metric for segmentation performance. For model interpretability analysis, the last-layer feature-map method is used to visualize the regions the model attends to.
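A sketch of the stated training configuration (AdamW optimizer, batch size 10, 200 epochs, learning-rate decay at epoch 30) together with a Dice coefficient helper; the initial learning rate and decay factor are assumptions, as the description does not specify them.

```python
# Training configuration and Dice coefficient sketch.
import torch

def dice_coefficient(pred_mask: torch.Tensor, true_mask: torch.Tensor, eps: float = 1e-6) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = (pred_mask > 0.5).float()
    true = (true_mask > 0.5).float()
    inter = (pred * true).sum()
    return float((2 * inter + eps) / (pred.sum() + true.sum() + eps))

def make_optimizer_and_scheduler(model: torch.nn.Module):
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)   # lr is an assumption
    # Decay the learning rate once at epoch 30, as stated in the description.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30], gamma=0.1)
    return optimizer, scheduler

BATCH_SIZE = 10
NUM_EPOCHS = 200
```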

S3: when the image is determined to contain an ovarian mass, the ultrasound image is input into the second neural network for classification to obtain a benign-mass classification probability and a malignant-mass classification probability.

In one embodiment, the ultrasound image input to the second neural network is the ultrasound image that has been segmented to obtain the ROI and determined to contain an ovarian mass.

In one embodiment, the classification process of the second neural network first performs mass detection on the ovarian mass to obtain the mass position, then performs image segmentation based on the mass position to obtain a mass-region image, and finally classifies the mass-region image as a benign or malignant mass.

In one embodiment, the training steps of the second neural network are:

acquiring an image set of ovarian masses;

performing data annotation on the ovarian-mass image set to obtain mass-annotated images;

inputting the mass-annotated images into a neural network for training to obtain the second neural network.

In one embodiment, the training of the second neural network further includes data processing: the ovarian-mass images are processed to obtain processed images, and the processed images are then input into the neural network for training to obtain the second neural network.

In one embodiment, the data processing includes one or more of the following: image cropping, image normalization and image boundary extension.

Optionally, the image boundary extension expands the ovary image outward by L pixels in each direction, where L is a natural number greater than or equal to 25.

In one embodiment, the annotation labels include benign/malignant classification labels.

In one embodiment, the benign/malignant classification labels are assigned based on the clinical stage and grade of the mass, its pathological subtype, the characteristics of the mass and the completeness of the image.

In one embodiment, the pathological-subtype label is represented by a three-dimensional one-hot code.

In one embodiment, the second neural network performs classification to obtain the classification probabilities of a benign mass and a malignant mass; these probabilities are compared with a second preset threshold, and if the benign-mass classification probability exceeds the second preset threshold the mass is judged benign, otherwise malignant.

In one embodiment, the second preset threshold is obtained during training of the second neural network: the parameter set when the loss function of the second neural network no longer changes during training is used as the preset threshold.

Optionally, the second preset threshold obtained when the second neural network is trained on the ovarian-mass dataset is 0.65.

In one specific embodiment, data annotation for the neural networks proceeds as follows. (1) Annotation workflow: the annotation objects, division of labor, annotation steps and result review are defined, the annotation rules are unified, the meaning, form and output format of the annotation results are confirmed, and the decision mechanism is described. (2) Annotation objects and tasks: normal ovaries and ovarian masses are delineated in ultrasound images of different modes; ovarian masses are labeled as benign or malignant based on the pathological diagnosis; and the clinical stage, grade, pathological subtype, image completeness and mass characteristics (O-RADS standard terminology) are also annotated. (3) Annotation rules: the rules should be compliant, giving priority to international guidelines; ultrasound features of ovarian masses follow the ACR O-RADS ovarian-adnexal ultrasound reporting lexicon white paper, currently the most widely used internationally; pathological classification follows the World Health Organization histological classification of ovarian tumors (WHO, 2020); and clinical staging follows the FIGO surgical-pathological staging (2014). For rules that depend on subjective judgment and clinical experience, higher-level standards are used to verify validity and consistency is assessed, so that the rules are portable and can be extrapolated to real clinical application scenarios. A "case (multiple-mass results) - mass (multiple-section-image results) - single-section image" annotation hierarchy is adopted. Mass delineation by senior physicians is taken as the gold standard: the ovarian structure and the mass region are delineated separately, and each result must be reviewed and confirmed before proceeding to the next step. After delineation with the annotation software, and taking into account the effect of the export format and form of the annotation results on subsequent import, the ovarian masses are given diagnostic classification labels (benign, borderline, malignant). (4) Annotation quality control: during annotation, the annotators' quality is supervised, considering both consistency and accuracy indicators; if an annotator's performance declines significantly, annotation is suspended and the annotator is retrained and returns to work only after passing reassessment. (5) Annotation quality assessment: accuracy, consistency, comprehensibility, accessibility, data security and traceability are assessed. (6) Data statistics: the annotated data form an annotation database, and statistics are compiled and recorded based on the collected information.

In one specific embodiment, when an image is predicted to contain an ovarian mass, the connected components of the segmented region generated by OvaMTA-Seg are cropped by their bounding rectangles and input into OvaMTA-Diagnosis, which precisely segments the mass and diagnoses it as benign or malignant. When the benign probability output by OvaMTA-Diagnosis is greater than 0.65, the model diagnoses the corresponding patch as benign; otherwise it is diagnosed as malignant.

In one specific embodiment, the model is validated clinically: ultrasound images and clinical case data of patients with adnexal masses examined at centers other than those used for model development are collected to verify the stability and diagnostic performance of the model. Data from clinical-trial patients at different centers over the following six months are collected, and a model diagnostic report is written for each case; the reports of physicians with different levels of experience are compared with the model reports; and the model is revised, the system theory further refined, and the core method for diagnosing adnexal masses checked and corrected.

In one specific embodiment, the clinical diagnostic performance of the present invention is tested on ultrasound images, as shown in FIG. 5. The receiver operating characteristic curves and confusion matrices for the internal test set (a, b, c in FIG. 5) and the external test set (d, e, f in FIG. 5) show the diagnostic performance of the proposed method in detecting ovarian masses and discriminating benign from malignant ovarian tumors on an image basis. For the detection of malignant masses, the model achieves an AUC of 0.941 (95% CI: 0.940, 0.942) on the internal test set and an AUC of 0.941 (95% CI: 0.938, 0.941) on the external test set. On video data, the model achieves an AUC of 0.911 (95% CI: 0.909, 0.913) and an accuracy of 86.2% (137/159; 95% CI: 80.5, 91.2) in the benign/malignant classification task.

In one embodiment, S1 is replaced by S11: a group of ultrasound images of the patient's gynecological adnexal region is acquired, steps S2-S3 are performed on each image in the group in turn to obtain each image's classification probability, and a weighted average of the per-image classification probabilities is computed to obtain the patient's diagnostic prediction.

In one embodiment, S1 is replaced by S12: a real-time ultrasound video of the patient's gynecological adnexal region is acquired and converted into a set of video-frame images; S2 is replaced by S21: the video-frame images are input frame by frame into the first neural network for classification to obtain per-frame classification probabilities, and a weighted average of these per-frame probabilities gives the probability that the real-time video shows a normal ovary or an ovarian mass.

S3 is replaced by S31: when the real-time video is determined to show an ovarian mass, the video-frame images are input frame by frame into the second neural network for classification to obtain per-frame classification probabilities, and a weighted average of these per-frame results gives the malignant- or benign-mass probability of the real-time video; the benign/malignant diagnosis for the patient's real-time video is then obtained from this probability.

In one embodiment, S12 is replaced by S13: a real-time ultrasound video of the patient's gynecological adnexal region is acquired and converted into a set of video-frame images, the frames are grouped consecutively and each group is averaged to obtain N averaged images, and the video-frame image set is input frame by frame into the first neural network to obtain the classification probability results, where N is a natural number greater than or equal to 6.
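As a non-limiting illustration of the S13 variant, the sketch below splits the frame sequence into N consecutive groups (N >= 6) and averages each group into a single image before classification; the grouping logic shown is only an example.

```python
# Sketch of the S13 frame-grouping step: N consecutive groups, averaged per group.
import numpy as np

def average_frame_groups(frames: list, n_groups: int = 6) -> list:
    """Split `frames` (equal-sized arrays) into `n_groups` consecutive groups and
    return the per-group mean images. Assumes len(frames) >= n_groups."""
    if n_groups < 6:
        raise ValueError("N must be a natural number >= 6 per the description")
    stacked = np.stack([f.astype(np.float32) for f in frames])   # (T, H, W[, C])
    groups = np.array_split(stacked, n_groups, axis=0)
    return [g.mean(axis=0) for g in groups]
```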

In one specific embodiment, the present invention acquires an ultrasound image of the patient's gynecological adnexal region and inputs it into the first neural network for classification to obtain a normal-ovary classification probability and an ovarian-mass classification probability; the ovarian-mass classification probability is compared with a first preset threshold, and if it exceeds the first threshold the image is judged to show an ovarian mass, otherwise a normal ovary. When the image is judged to show an ovarian mass, the ultrasound image is input into the second neural network for classification to obtain a benign-mass classification probability and a malignant-mass classification probability; these probabilities are compared with a second preset threshold, and if the benign-mass classification probability exceeds the second preset threshold the mass is judged benign, otherwise malignant.

In one specific embodiment, the present invention acquires a group of ultrasound images of the patient's gynecological adnexal region, inputs the group into the first neural network to obtain the classification probability of each image, and computes a weighted average of the per-image probabilities to obtain the patient-level classification probabilities of a normal ovary and of an ovarian mass. When an ovarian mass is determined, the image group is input into the second neural network for classification to obtain each image's benign-mass and malignant-mass classification probabilities, and a weighted average of the per-image probabilities gives the patient-level classification probabilities of a benign mass and of a malignant mass.

In one specific embodiment, the present invention acquires a group of ultrasound images of the patient's gynecological adnexal region, inputs the group into the first neural network to obtain the classification probability of each image, and computes a weighted average of the per-image probabilities to obtain the patient-level classification probabilities of a normal ovary and of an ovarian mass; the ovarian-mass classification probability is compared with a first preset threshold, and if it exceeds the first threshold an ovarian mass is determined, otherwise a normal ovary. When an ovarian mass is determined, the image group is input into the second neural network for classification to obtain each image's benign-mass and malignant-mass classification probabilities, and a weighted average of the per-image probabilities gives the patient-level classification probabilities of a benign mass and of a malignant mass; these probabilities are compared with a second preset threshold, and if the benign-mass classification probability exceeds the second preset threshold the mass is judged benign, otherwise malignant.

In one specific embodiment, a real-time ultrasound video of the patient's gynecological adnexal region is acquired and converted into a set of video-frame images; the video-frame images are input frame by frame into the first neural network for classification to obtain per-frame classification probabilities, and a weighted average of these probabilities gives the real-time video's classification probabilities of a normal ovary and of an ovarian mass; the ovarian-mass classification probability is compared with a first preset threshold, and if it exceeds the first threshold an ovarian mass is determined, otherwise a normal ovary. When the real-time video is determined to show an ovarian mass, the video-frame images are input frame by frame into the second neural network for classification to obtain per-frame classification probabilities, and a weighted average of these per-frame results gives the real-time video's malignant- or benign-mass probability; this probability is compared with a second preset threshold, and if the benign-mass classification probability exceeds the second preset threshold the mass is judged benign, otherwise malignant.

在一个具体实施例中,获取患者妇科附件区实时超声视频,并将超声视频转换为视频帧图像集,对所述视频帧图像集进行连续分组并进行平均计算得到N个平均图像,将N个平均图像逐帧输入第一神经网络得到各图像的分类概率结果,N为大于等于6的自然数。对各图像分类概率进行加权平均计算得到实时视频的正常卵巢分类概率或卵巢肿块的分类概率;将所述卵巢肿块的分类概率与第一预设阈值进行比较,当卵巢肿块的分类概率大于第一阈值则判定为卵巢肿块,反之,判定为正常卵巢。当实时视频判定为卵巢肿块,将图像逐帧输至第二神经网络中进行分类得到逐帧分类概率结果,对各图像帧的分类概率结果进行加权平均计算得到实时视频的恶性分类概率或良性肿块分类概率;将所述分类概率与第二预设阈值进行比较,当良性肿块的分类概率大于第二预设阈值时判定为良性肿块,反之,判定为恶性肿块。In a specific embodiment, a real-time ultrasound video of the gynecological adnexa of a patient is obtained, and the ultrasound video is converted into a video frame image set, the video frame image set is continuously grouped and averaged to obtain N average images, and the N average images are input into the first neural network frame by frame to obtain the classification probability results of each image, where N is a natural number greater than or equal to 6. The classification probability of each image is weighted averaged to obtain the normal ovarian classification probability or the classification probability of an ovarian mass of the real-time video; the classification probability of the ovarian mass is compared with the first preset threshold, and when the classification probability of the ovarian mass is greater than the first threshold, it is determined to be an ovarian mass, otherwise, it is determined to be a normal ovary. When the real-time video is determined to be an ovarian mass, the image is input into the second neural network frame by frame for classification to obtain the frame-by-frame classification probability result, and the classification probability results of each image frame are weighted averaged to obtain the malignant classification probability or the benign mass classification probability of the real-time video; the classification probability is compared with the second preset threshold, and when the classification probability of a benign mass is greater than the second preset threshold, it is determined to be a benign mass, otherwise, it is determined to be a malignant mass.

在一个实施例中,本发明收集23家医院的超声资料及病理结果,并对卵巢肿块边缘进行标记。组成图像数据集和视频数据集。In one embodiment, the present invention collects ultrasound data and pathological results from 23 hospitals and marks the edges of ovarian masses to form image data sets and video data sets.

在一个具体实施例中,本研究共纳入2375名患者(10577张图像,159个视频)。从21家医院收集的数据分为以下几部分进行模型开发:训练和验证集(6938张图像,1514名患者);内部测试集(1584张图像,363名患者)。从另外两家医院的患者(1896张图像,339名患者)收集的数据被纳入外部测试集A,包括患者的人口统计学和临床特征,所有患者都有年龄信息。在训练和验证数据集中,患者的年龄范围为12至86岁(中位数=41),而内部测试数据集中,患者的年龄范围为12至86岁(中位数=45)。在外部测试图像和视频数据集中,患者的年龄范围分别为15-81岁(中位数=39岁)和17-81岁(中位数=43岁)。In a specific embodiment, a total of 2375 patients (10577 images, 159 videos) were included in this study. The data collected from 21 hospitals were divided into the following parts for model development: training and validation sets (6938 images, 1514 patients); internal test set (1584 images, 363 patients). Data collected from patients at two other hospitals (1896 images, 339 patients) were included in external test set A, together with the patients' demographic and clinical characteristics; all patients had age information. In the training and validation datasets, the age range of the patients was 12 to 86 years (median = 41), while in the internal test dataset, the age range was 12 to 86 years (median = 45). In the external test image and video datasets, the age ranges were 15-81 years (median = 39) and 17-81 years (median = 43), respectively.

在内部和外部测试集中测试了分割模型的Dice系数。OvaMTA-Seg模型在内部测试集中的平均Dice系数为0.887±0.34,在外部测试集中的平均Dice系数为0.819±0.201。内部测试集中良性结节的Dice系数为0.912±0.103,外部测试集中的Dice系数为0.843±0.192。恶性肿瘤在内部测试集中的Dice系数为0.878±0.131,在外部测试集中达到0.851±0.141。The Dice coefficient of the segmentation model was tested in the internal and external test sets. The average Dice coefficient of the OvaMTA-Seg model in the internal test set was 0.887±0.34, and the average Dice coefficient in the external test set was 0.819±0.201. The Dice coefficient of benign nodules in the internal test set was 0.912±0.103, and the Dice coefficient in the external test set was 0.843±0.192. The Dice coefficient of malignant tumors in the internal test set was 0.878±0.131, and reached 0.851±0.141 in the external test set.
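For reference, the Dice coefficient reported above is conventionally computed as twice the overlap divided by the total mask area; the following generic sketch illustrates the calculation and is not the evaluation code used in the study.

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice = 2|A∩B| / (|A|+|B|) for binary segmentation masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Example on toy 4x4 masks
pred = np.array([[1, 1, 0, 0]] * 4)
true = np.array([[1, 0, 0, 0]] * 4)
print(round(dice_coefficient(pred, true), 3))   # 0.667
```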

在一个具体实施例中,OvaMTA遵循两阶段决策过程,因此本发明分别对内部和外部测试集中的肿块和良性/恶性病变的存在进行了诊断测试。在肿块存在的测试中,OvaMTA-Seg模型在内部测试集中的AUC为0.970(95% CI:0.970,0.973),在外部测试集中的AUC为0.877(95% CI:0.877,0.882)。在肿块检测方面,以IoU>0.5作为检测成功的标准,内部测试集的检测率为94.2%,而外部测试集基于图像的检测率为88.1%。在外部测试集B中,该模型在基于视频的测试中实现了100%的检测率,即检测到视频中的所有肿块。In one specific embodiment, OvaMTA follows a two-stage decision process, so the present invention performs diagnostic tests on the presence of masses and on benign/malignant lesions in the internal and external test sets, respectively. In the test for the presence of masses, the OvaMTA-Seg model had an AUC of 0.970 (95% CI: 0.970, 0.973) in the internal test set and an AUC of 0.877 (95% CI: 0.877, 0.882) in the external test set. For mass detection, with IoU > 0.5 taken as the criterion for a successful detection, the detection rate was 94.2% in the internal test set and 88.1% for image-based detection in the external test set. In the external test set B, the model achieved a 100% detection rate in the video-based test, that is, all masses in the videos were detected.
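A detection is counted as successful above when the predicted region overlaps the annotated mass with IoU greater than 0.5. The sketch below shows a generic mask-based IoU and the corresponding detection-rate calculation; the mask-based formulation and the percentage output are illustrative assumptions.

```python
import numpy as np

def iou(pred_mask, true_mask):
    """Intersection over Union for binary masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 0.0
    return np.logical_and(pred, true).sum() / union

def detection_rate(pred_masks, true_masks, threshold=0.5):
    """Percentage of cases whose predicted mask reaches IoU > threshold."""
    hits = [iou(p, t) > threshold for p, t in zip(pred_masks, true_masks)]
    return 100.0 * np.mean(hits)
```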

在一个具体实施例中,本发明招募了6名放射科医生和2名全科医生来评估卵巢肿块,包括2名初级放射科医生(≥3年临床经验)、2名高级放射科医生(≥5年临床经验)、2名专家(≥10年临床经验)和2名非放射科医生(≥3年临床经验)。八位医生独立评估了外部数据集B中患者水平的所有匿名和随机视频,并为每个肿块分配了良性和恶性标签。两周后,将OvaMTA结果提供给放射科医生进行重新评估。由OvaMTA系统分割的肿瘤边缘和具有恶性肿瘤概率的边界框,将与原始超声视频并排显示给医生。OvaMTA的视频级诊断结果也将在视频末尾呈现给医生,所有阅片者都不知道病理诊断。In a specific embodiment, the present invention recruited 6 radiologists and 2 general practitioners to evaluate ovarian masses, including 2 junior radiologists (≥3 years of clinical experience), 2 senior radiologists (≥5 years of clinical experience), 2 experts (≥10 years of clinical experience) and 2 non-radiologists (≥3 years of clinical experience). The eight doctors independently evaluated all anonymous and random videos at the patient level in the external dataset B and assigned benign and malignant labels to each mass. Two weeks later, the OvaMTA results were provided to the radiologists for re-evaluation. The tumor margins and bounding boxes with the probability of malignant tumors segmented by the OvaMTA system will be displayed to the doctors side by side with the original ultrasound video. The video-level diagnostic results of OvaMTA will also be presented to the doctors at the end of the video, and all readers are unaware of the pathological diagnosis.

在一个具体实施例中,对于基于图像的个体水平预测,将每个人的多个图像的预测概率的加权平均值合并为一个分数。对于基于视频的预测,将视频逐帧归一化,然后输入到OvaMTA中,OvaMTA实时输出当前帧的诊断结果,最后输出整个视频的良恶诊断。为了考虑上下文信息,将前6帧的特征图平均为当前帧的特征图,并输出阈值为0.5的阈值的二值化分割区域。将当前输出分割区域扩展25像素以获取一个补丁,然后输出相应帧的健康、良性和恶性概率。输出每帧的概率后,对视频的帧级恶性概率进行平均加权计算,得到整个视频的良恶概率。In a specific embodiment, for image-based individual-level prediction, the weighted average of the predicted probabilities of multiple images for each person is combined into a single score. For video-based prediction, the video is normalized frame by frame and then input into OvaMTA, which outputs the diagnosis result of the current frame in real time and finally outputs the benign and malignant diagnosis of the entire video. In order to take into account contextual information, the feature maps of the previous 6 frames are averaged as the feature map of the current frame, and the binary segmentation region with a threshold of 0.5 is output. The current output segmentation region is expanded by 25 pixels to obtain a patch, and then the healthy, benign, and malignant probabilities of the corresponding frame are output. After outputting the probability of each frame, the frame-level malignant probability of the video is averaged and weighted to obtain the benign and malignant probability of the entire video.
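The frame-level inference loop described above can be sketched roughly as follows. The 6-frame feature-map averaging, the 0.5 binarization threshold and the 25-pixel patch expansion come from the text, while the helper functions `segment_features` and `classify_patch`, the grayscale frame format and the uniform frame weights are assumptions introduced only to keep the sketch self-contained.

```python
from collections import deque
import numpy as np

def run_video_inference(frames, segment_features, classify_patch,
                        context=6, seg_threshold=0.5, margin=25):
    """Frame-by-frame inference with temporal context.

    frames:            iterable of 2D grayscale frames (H x W, uint8).
    segment_features:  assumed to map a normalized frame to a per-pixel
                       probability map of the same size.
    classify_patch:    assumed to return (p_healthy, p_benign, p_malignant).
    Returns the video-level malignancy probability (uniform frame weights).
    """
    history = deque(maxlen=context)   # feature maps of the most recent frames
    frame_probs = []                  # per-frame class probabilities

    for frame in frames:
        frame = frame.astype(np.float32) / 255.0            # per-frame normalization
        feat = segment_features(frame)                       # per-pixel probability map
        history.append(feat)
        feat_avg = np.stack(list(history)).mean(axis=0)      # average over context window
        mask = feat_avg > seg_threshold                      # binarized segmentation

        if mask.any():
            ys, xs = np.nonzero(mask)
            y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin, frame.shape[0])
            x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin, frame.shape[1])
            patch = frame[y0:y1, x0:x1]                      # lesion patch expanded by 25 px
            frame_probs.append(classify_patch(patch))

    if not frame_probs:
        return None                                          # no lesion found in the clip
    return float(np.mean([p[2] for p in frame_probs]))       # video-level malignancy prob.
```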

在一个具体实施例中,本发明对本发明方法、放射科医师和非放射科全科医师之间的诊断性能进行了比较,比较结果如图6所示。图6中的(a)为外部测试视频的接收者操作特征曲线,显示了本发明方法的诊断性能,并与不同经验水平的放射科医师进行比较。大红星代表放射科医师的平均值。灰色圆代表初级放射科医师的表现,灰色菱形代表中级放射科医师的表现,灰色三角形代表专家的表现,灰色正方形代表全科医生的表现。图6中的(b)为8名医生的表现,通过配对McNemar检验的p值进行组间的比较;图6中的(c)为归一化混淆矩阵,分析本发明方法对外部测试视频中卵巢肿瘤良恶性的判别性能。图6中的(d)为外部测试视频中8名医生评估的Kappa矩阵。图6中的(e)为8名医生基于本发明的系统辅助评估外部测试视频的Kappa矩阵。矩阵中的每个值代表两名医生之间的观察者间一致性。在kappa矩阵的坐标轴上,非放射科全科医生1-2和放射科医生1-6在(d)、(e)中按从下到上、从左到右的顺序排列,*代表p<0.05。In a specific embodiment, the diagnostic performance of the method of the present invention is compared with that of radiologists and non-radiologist general practitioners, and the comparison results are shown in Figure 6. Panel (a) of Figure 6 is the receiver operating characteristic curve on the external test videos, showing the diagnostic performance of the method of the present invention compared with radiologists of different experience levels. The large red star represents the average performance of the radiologists. The gray circles represent the performance of junior radiologists, the gray diamonds represent intermediate radiologists, the gray triangles represent experts, and the gray squares represent general practitioners. Panel (b) shows the performance of the 8 doctors, with inter-group comparisons made using the p values of the paired McNemar test; panel (c) is a normalized confusion matrix analyzing the discrimination performance of the method of the present invention for benign and malignant ovarian tumors in the external test videos. Panel (d) is the Kappa matrix of the 8 doctors' evaluations of the external test videos. Panel (e) is the Kappa matrix of the 8 doctors' evaluations of the external test videos with the assistance of the system of the present invention. Each value in the matrix represents the inter-observer agreement between two doctors. On the axes of the kappa matrices, non-radiologist general practitioners 1-2 and radiologists 1-6 are arranged from bottom to top and from left to right in (d) and (e), and * represents p < 0.05.

在一个具体实施例中,在外部测试集B中,高级放射科医生的评估准确率为88.1(95% CI:84.3,91.5),敏感性为88.6(95% CI:82.9,93.9),特异性为87.6(95% CI:82.6,92.3)。初级放射科医生的评估准确率为74.8(95% CI:70.1,79.6),敏感性为68.9(95% CI:61.0,76.9),特异性为86.0(95% CI:80.9,90.9)。中级放射科医生评估的准确度为75.8(95% CI:71.1,80.5),敏感性为80.3(95% CI:73.1,86.9),特异性为72.6(95% CI:66.3,79.2)。OvaMTA系统给出的评估与高级放射科医生相当,准确度为86.2(137/159;95% CI:80.5,91.2;p=0.504),灵敏度为81.8(54/66;95% CI:72.3,91.0;p=0.163),特异性为89.2(83/93;95% CI:82.8,95.2;p=0.678)。OvaMTA系统的准确性优于中级放射科医生(p=0.0002)、初级放射科医生(p<0.0001)和全科医生(p<0.0001)。OvaMTA系统的灵敏度优于初级放射科医生(p=0.005)。OvaMTA系统的特异性高于中级放射科医生(p<0.0001)、初级放射科医生(p=0.002)和全科医生(p<0.0001)。总体而言,OvaMTA系统的诊断能力优于中低资历医生,可与高资历放射科医生相媲美。In a specific embodiment, in the external test set B, the accuracy of the senior radiologists' evaluation was 88.1 (95% CI: 84.3, 91.5), the sensitivity was 88.6 (95% CI: 82.9, 93.9), and the specificity was 87.6 (95% CI: 82.6, 92.3). The accuracy of the junior radiologists' evaluation was 74.8 (95% CI: 70.1, 79.6), the sensitivity was 68.9 (95% CI: 61.0, 76.9), and the specificity was 86.0 (95% CI: 80.9, 90.9). The accuracy of the intermediate radiologists' evaluation was 75.8 (95% CI: 71.1, 80.5), the sensitivity was 80.3 (95% CI: 73.1, 86.9), and the specificity was 72.6 (95% CI: 66.3, 79.2). The OvaMTA system gave an assessment comparable to that of the senior radiologists, with an accuracy of 86.2 (137/159; 95% CI: 80.5, 91.2; p=0.504), a sensitivity of 81.8 (54/66; 95% CI: 72.3, 91.0; p=0.163), and a specificity of 89.2 (83/93; 95% CI: 82.8, 95.2; p=0.678). The accuracy of the OvaMTA system was better than that of the intermediate radiologists (p=0.0002), junior radiologists (p<0.0001), and general practitioners (p<0.0001). The sensitivity of the OvaMTA system was better than that of the junior radiologists (p=0.005). The specificity of the OvaMTA system was higher than that of the intermediate radiologists (p<0.0001), junior radiologists (p=0.002), and general practitioners (p<0.0001). Overall, the diagnostic ability of the OvaMTA system was better than that of junior and mid-level doctors and comparable to that of highly experienced radiologists.

在OvaMTA的协助下,全科医生的准确率从69.2显著提高到74.8,p<0.0001。初级放射科医师的准确率从74.8提高到81.1,p=0.004,中级放射科医师的准确率从75.8提高到80.5,p=0.024。OvaMTA系统大大增强了初级医生的特异性,包括全科医生从59.1提高至81.2,p<0.0001,初级放射科医生从79.0提高至86.0,p=0.024。这些在OvaMTA辅助下的初级医生的特异性达到了与高级放射科医生相当的水平,分别为86.0 vs 87.6,p=0.690。非放射科医生在OvaMTA辅助下的敏感性达到了与高级医生相当的水平,84.8 vs 88.6,p=0.383。在OvaMTA的协助下,全科医生已经达到了放射科医生的平均水平(ACC:82.7 vs 81.8,p=0.804;灵敏度:84.8 vs 82.6,p=0.720;特异性:81.2 vs 81.2,p=1.0)。从图6(d)、(e)中计算和可视化的Kappa矩阵可以看出,在OvaMTA辅助下,不同医生之间的诊断变得更加一致。因此,OvaMTA系统不仅提高了医生的诊断水平,而且提高了他们的观察者间一致性。With the assistance of OvaMTA, the accuracy of general practitioners improved significantly from 69.2 to 74.8, p<0.0001. The accuracy of junior radiologists improved from 74.8 to 81.1, p=0.004, and the accuracy of intermediate radiologists improved from 75.8 to 80.5, p=0.024. The OvaMTA system substantially enhanced the specificity of junior physicians, with general practitioners improving from 59.1 to 81.2, p<0.0001, and junior radiologists from 79.0 to 86.0, p=0.024. With OvaMTA assistance, the specificity of these junior physicians reached a level comparable to that of senior radiologists, 86.0 vs 87.6, p=0.690. The sensitivity of non-radiologists assisted by OvaMTA reached a level comparable to that of senior physicians, 84.8 vs 88.6, p=0.383. With the assistance of OvaMTA, general practitioners reached the average level of radiologists (ACC: 82.7 vs 81.8, p=0.804; sensitivity: 84.8 vs 82.6, p=0.720; specificity: 81.2 vs 81.2, p=1.0). From the Kappa matrices calculated and visualized in Figure 6 (d) and (e), it can be seen that with the assistance of OvaMTA, the diagnoses of different doctors became more consistent. Therefore, the OvaMTA system not only improves doctors' diagnostic performance but also their inter-observer agreement.
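The paired comparisons and agreement analysis reported above rely on standard statistical tools. The sketch below shows how a paired McNemar test on reader correctness and a pairwise Cohen's kappa could be computed with statsmodels and scikit-learn; the toy arrays are placeholders, not study data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.contingency_tables import mcnemar

truth    = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # 1 = malignant, 0 = benign (toy labels)
reader_a = np.array([1, 1, 0, 1, 1, 0, 0, 0])
reader_b = np.array([1, 0, 0, 0, 1, 0, 1, 1])

# Paired McNemar test on the correctness of the two readers
correct_a = reader_a == truth
correct_b = reader_b == truth
table = np.array([
    [np.sum(correct_a & correct_b),  np.sum(correct_a & ~correct_b)],
    [np.sum(~correct_a & correct_b), np.sum(~correct_a & ~correct_b)],
])
print(mcnemar(table, exact=True).pvalue)

# Inter-observer agreement between the two readers
print(cohen_kappa_score(reader_a, reader_b))
```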

在一个具体实施例中,模型的临床应用:通过向建立的模型提交附件超声视频,实现卵巢肿块的筛查,并应用在临床卵巢肿块诊断中,如图7所示,显示了OvaMTA-Seg对超声图像的分割结果示例,以及与手动注释的比较。人工智能框架可以执行精确的超声图像分割。在视频测试中,该模型可以完成从肿瘤出现到消失的整个检测和跟踪过程。因此,分割系统可以单独用作放射科医生突出显示病变区域的可视化工具。In a specific embodiment, the clinical application of the model: by submitting the attached ultrasound video to the established model, the screening of ovarian masses is achieved and applied in the clinical diagnosis of ovarian masses, as shown in Figure 7, which shows an example of the segmentation result of OvaMTA-Seg on the ultrasound image and the comparison with manual annotation. The artificial intelligence framework can perform accurate ultrasound image segmentation. In the video test, the model can complete the entire detection and tracking process from the appearance to the disappearance of the tumor. Therefore, the segmentation system can be used alone as a visualization tool for radiologists to highlight the lesion area.

具体地,图7及所附视频所示,OvaMTA不仅能准确定位肿瘤,还能发现肿瘤的边缘卵巢组织,对视频的每一帧进行合理分析。本发明的模型可以帮助医生诊断良性和恶性卵巢肿块。从视频1和2中可以看出,OvaMTA系统稳定、连续地逐帧分割和诊断结果。图7(a)显示了一名64岁女性的灰度超声视频(视频1)的关键帧,该女性在多房囊肿内有实性成分。OvaMTA系统可以检测和绘制肿块的边缘,实时进行恶性分类和概率。OvaMTA系统提供的最终诊断结果与术后病理结果一致(子宫内膜样癌)。OvaMTA系统实时提供可解释的热图,使医生能够更加关注不规则的分隔和实性部分。图7(b)显示了一名有单房无回声囊肿的52岁女性的灰度超声视频(视频2)的关键帧。OvaMTA系统可以检测和绘制肿块的边缘,并实时进行良性分类和概率。OvaMTA系统提供的最终诊断结果与术后病理结果一致(浆液性囊腺纤维瘤)。OvaMTA系统实时提供可解释的热图,让医生对无回声囊性部分给予关注。Specifically, as shown in FIG7 and the attached video, OvaMTA can not only accurately locate the tumor, but also find the marginal ovarian tissue of the tumor, and reasonably analyze each frame of the video. The model of the present invention can help doctors diagnose benign and malignant ovarian masses. As can be seen from Videos 1 and 2, the OvaMTA system stably and continuously segments and diagnoses the results frame by frame. FIG7 (a) shows the key frames of a grayscale ultrasound video (Video 1) of a 64-year-old woman with a solid component in a multilocular cyst. The OvaMTA system can detect and draw the edges of the mass, and perform malignant classification and probability in real time. The final diagnosis provided by the OvaMTA system is consistent with the postoperative pathological results (endometrioid carcinoma). The OvaMTA system provides interpretable heat maps in real time, allowing doctors to pay more attention to irregular septa and solid parts. FIG7 (b) shows the key frames of a grayscale ultrasound video (Video 2) of a 52-year-old woman with a unilocular anechoic cyst. The OvaMTA system can detect and draw the edges of the mass, and perform benign classification and probability in real time. The final diagnosis provided by the OvaMTA system was consistent with the postoperative pathological result (serous cystadenofibroma). The OvaMTA system provides interpretable heat maps in real time, allowing doctors to focus on the anechoic cystic part.

在一个具体实施例中,OvaMTA不仅承担了提高准确性的功能,而且在一定程度上提高了可解释性。这是医生与之合作的重要组成部分。另一方面,OvaMTA-Seg模块使OvaMTA系统能够对可疑病变及其周围背景产生更准确的区域关注。该分支多任务模型优于仅使用一种分类模型,同时还保证了诊断的准确性。最后,使用健康的卵巢标签也可以改善整个系统。如果仅使用批量图像来训练DL模型,然后在超声扫描过程中直接使用它们,可能会产生严重的误解。当健康的卵巢出现在超声扫描中时,它将被分割并给出不正确的病理诊断。因此,对于系统开发来说,在训练中添加负样本和正态样本对于模型的功能至关重要,仅区分不同异常的模型在功能上很难在实际场景中泛化。In a specific embodiment, OvaMTA not only assumes the function of improving accuracy, but also improves interpretability to a certain extent. This is an important part of the doctor's cooperation with it. On the other hand, the OvaMTA-Seg module enables the OvaMTA system to generate more accurate regional attention to suspicious lesions and their surrounding background. This branch multi-task model outperforms the use of only one classification model while also ensuring the accuracy of the diagnosis. Finally, the use of healthy ovary labels can also improve the entire system. If only batch images are used to train the DL model and then they are directly used during ultrasound scanning, serious misunderstandings may occur. When a healthy ovary appears in an ultrasound scan, it will be segmented and give an incorrect pathological diagnosis. Therefore, for system development, adding negative and positive samples in training is crucial to the functionality of the model. Models that only distinguish different abnormalities are functionally difficult to generalize in real scenarios.

综上所述,本发明提出的基于整合超声图像的卵巢多任务注意力模型(OvaMTA)可以对正常卵巢、良性和恶性卵巢肿块进行检测和分类,其诊断性能可与专家主观评估相当。人工智能系统可以显著提高初级和中级放射科医生的诊断水平。而全科医生在AI系统的辅助下可以达到放射科医生的平均表现。OvaMTA有可能成为一种有效的和广泛适用的工具,以帮助医生诊断卵巢肿块和卵巢癌筛查。In summary, the ovarian multi-task attention model (OvaMTA) based on integrated ultrasound images proposed in the present invention can detect and classify normal ovaries, benign and malignant ovarian masses, and its diagnostic performance is comparable to that of expert subjective evaluation. The artificial intelligence system can significantly improve the diagnostic level of junior and mid-level radiologists. General practitioners can reach the average performance of radiologists with the assistance of the AI system. OvaMTA has the potential to become an effective and widely applicable tool to help doctors diagnose ovarian masses and ovarian cancer screening.

本公开实施例还提供了一种计算机程序产品或系统,包括计算机程序,该计算机程序被处理器执行时实现上述基于超声的卵巢附件肿块辅助诊断方法的步骤。The embodiments of the present disclosure also provide a computer program product or system, including a computer program, which, when executed by a processor, implements the steps of the above-mentioned ultrasound-based auxiliary diagnosis method for ovarian adnexal masses.

在一个实施例中,本发明提出一种基于超声的附件肿块辅助诊断系统,包括超声图像及临床数据采集,数据整理,数据标注,基于超声图像对卵巢肿块进行检测、分割、分类诊断、可解释性分析的多任务深度学习模型,临床验证,临床应用的步骤设定。通过历史医疗数据综合利用,整合卵巢肿块超声图像信息和临床病理信息,有助于更有效的利用历史医疗数据,节省医疗资源,辅助医生精确诊断,提高诊断效率。在附件肿块的基层筛查及临床诊断中加入了人工智能的方法,利用超声影像建立附件肿块诊断模型,有助于提高诊断效率,节省医生时间,减少误诊、漏诊率,减轻病患痛苦,节省就医时间,更有助于治疗和预后,最终实现附件肿块的筛查和非侵入性分类辅助诊断。In one embodiment, the present invention proposes an ultrasound-based auxiliary diagnosis system for adnexal masses, including ultrasound image and clinical data collection, data collation, data annotation, multi-task deep learning model for detecting, segmenting, classifying and diagnosing ovarian masses based on ultrasound images, and interpretability analysis, clinical verification, and step setting for clinical application. Through the comprehensive utilization of historical medical data, the integration of ovarian mass ultrasound image information and clinical pathological information can help to more effectively utilize historical medical data, save medical resources, assist doctors in accurate diagnosis, and improve diagnostic efficiency. Artificial intelligence methods are added to the primary screening and clinical diagnosis of adnexal masses, and an adnexal mass diagnosis model is established using ultrasound images, which helps to improve diagnostic efficiency, save doctors' time, reduce misdiagnosis and missed diagnosis rates, alleviate patients' pain, save medical time, and is more helpful for treatment and prognosis, ultimately achieving screening and non-invasive classification-assisted diagnosis of adnexal masses.

图2为本发明实施例提供的一种基于超声的卵巢附件肿块辅助诊断系统示意图,具体包括:FIG2 is a schematic diagram of an ultrasound-based auxiliary diagnosis system for ovarian adnexal masses provided by an embodiment of the present invention, which specifically comprises:

获取单元:获取患者妇科附件区超声图像;Acquisition unit: acquiring ultrasound images of the patient's gynecological adnexal area;

第一分类单元:将所述超声图像输至第一神经网络中进行分类得到正常卵巢分类概率和卵巢肿块分类概率;The first classification unit: inputs the ultrasound image into the first neural network for classification to obtain the normal ovary classification probability and the ovarian mass classification probability;

第二分类单元:当图像判定为卵巢肿块时,将所述超声图像输至第二神经网络中进行分类得到良性肿块分类概率和恶性肿块分类概率。Second classification unit: when the image is determined to be an ovarian mass, the ultrasound image is input into the second neural network for classification to obtain the classification probability of a benign mass and the classification probability of a malignant mass.
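The three units above can be wired together roughly as in the following sketch; the class name, interfaces and the 0.5 threshold are assumptions made for illustration and do not reflect the actual implementation of the system.

```python
class OvarianMassDiagnosisSystem:
    """Acquisition unit + first classification unit + second classification unit."""

    def __init__(self, acquire_image, first_net, second_net, mass_threshold=0.5):
        self.acquire_image = acquire_image      # acquisition unit
        self.first_net = first_net              # normal ovary vs. ovarian mass
        self.second_net = second_net            # benign vs. malignant mass
        self.mass_threshold = mass_threshold    # placeholder preset threshold

    def diagnose(self, patient_id):
        image = self.acquire_image(patient_id)              # acquisition unit
        p_normal, p_mass = self.first_net(image)            # first classification unit
        if p_mass <= self.mass_threshold:
            return {"patient": patient_id, "finding": "normal ovary"}
        p_benign, p_malignant = self.second_net(image)      # second classification unit
        return {"patient": patient_id,
                "finding": "benign mass" if p_benign > p_malignant else "malignant mass",
                "p_benign": p_benign, "p_malignant": p_malignant}
```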

图3为本发明实施例提供的一种基于超声的卵巢附件肿块辅助诊断设备示意图,具体包括:FIG3 is a schematic diagram of an ultrasound-based auxiliary diagnosis device for ovarian adnexal masses provided by an embodiment of the present invention, which specifically comprises:

存储器和处理器;所述存储器用于存储程序指令;所述处理器用于调用程序指令,当程序指令被执行时,实现上述任意一项基于超声的卵巢附件肿块辅助诊断方法。A memory and a processor; the memory is used to store program instructions; the processor is used to call the program instructions, and when the program instructions are executed, any one of the above-mentioned ultrasound-based auxiliary diagnosis methods for ovarian adnexal masses is implemented.

本公开实施例还提供了一种计算机可读存储介质,所述计算机可读存储介质存储计算机程序,所述计算机程序被处理器执行时实现上述任意一项基于超声的卵巢附件肿块辅助诊断方法。The embodiments of the present disclosure also provide a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, it implements any one of the above-mentioned ultrasound-based auxiliary diagnosis methods for ovarian adnexal masses.

本验证实施例的验证结果表明,为适应症分配固有权重相对于默认设置来说可以改善本方法的性能。所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。The verification results of this verification example show that assigning inherent weights to indications can improve the performance of the method relative to the default settings. Those skilled in the art can clearly understand that for the convenience and brevity of description, the specific working processes of the systems, devices and units described above can refer to the corresponding processes in the aforementioned method embodiments, and will not be repeated here. In the several embodiments provided in this application, it should be understood that the disclosed systems, devices and methods can be implemented in other ways.

例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:只读存储器(ROM,Read Only Memory)、随机存取存储器(RAM,Random AccessMemory)、磁盘或光盘等。For example, the device embodiments described above are only schematic. For example, the division of the units is only a logical function division. There may be other division methods in actual implementation, such as multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed. Another point is that the coupling or direct coupling or communication connection between each other shown or discussed can be through some interfaces, indirect coupling or communication connection of devices or units, which can be electrical, mechanical or other forms. The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the scheme of this embodiment. In addition, each functional unit in each embodiment of the present invention may be integrated into a processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. The above-mentioned integrated unit can be implemented in the form of hardware or in the form of software functional units. A person skilled in the art may understand that all or part of the steps in the various methods of the above embodiments may be completed by instructing related hardware through a program, and the program may be stored in a computer-readable storage medium, and the storage medium may include: a read-only memory (ROM), a random access memory (RAM), a disk or an optical disk, etc.

本领域普通技术人员可以理解实现上述实施例方法中的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的介质存储可以是只读存储器,磁盘或光盘等。A person skilled in the art will understand that all or part of the steps in the above-mentioned embodiment method can be implemented by instructing related hardware through a program, and the program can be stored in a computer-readable storage medium. The above-mentioned medium storage can be a read-only memory, a disk or an optical disk, etc.

以上对本发明所提供的一种计算机设备进行了详细介绍,对于本领域的一般技术人员,依据本发明实施例的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本发明的限制。The above is a detailed introduction to a computer device provided by the present invention. For a person skilled in the art, according to the concept of the embodiments of the present invention, there may be changes in the specific implementation method and application scope. In summary, the content of this specification should not be understood as limiting the present invention.

Claims (10)

1.一种基于超声的卵巢附件肿块辅助诊断方法,其特征在于,包括:1. An ultrasound-based auxiliary diagnosis method for ovarian adnexal masses, comprising:
S1获取患者妇科附件区超声图像;S1 obtains ultrasound images of the patient's gynecological adnexal area;
S2将所述超声图像输至第一神经网络中进行分类得到正常卵巢分类概率和卵巢肿块分类概率;S2 inputs the ultrasound image into a first neural network for classification to obtain a normal ovary classification probability and an ovarian mass classification probability;
S3当图像判定为卵巢肿块时,将所述超声图像输至第二神经网络中进行分类得到良性肿块分类概率和恶性肿块分类概率。S3: When the image is determined to be an ovarian mass, the ultrasound image is input into a second neural network for classification to obtain a benign mass classification probability and a malignant mass classification probability.

2.根据权利要求1所述的基于超声的卵巢附件肿块辅助诊断方法,其特征在于,所述分类过程包括先进行目标检测确定目标位置,再基于所述目标位置进行目标区域分割得到分割区域图像,再基于分割区域图像进行分类得到结果;2. The ultrasound-assisted diagnosis method for ovarian adnexal masses according to claim 1, characterized in that the classification process includes first performing target detection to determine the target position, then performing target region segmentation based on the target position to obtain a segmented region image, and then performing classification based on the segmented region image to obtain a result;
可选地,所述第一神经网络的分类过程包括先对超声图像进行卵巢检测得到卵巢位置,再基于所述卵巢位置进行图像分割得到卵巢图像;再基于所述卵巢图像进行分类得到正常卵巢或卵巢肿块;Optionally, the classification process of the first neural network includes first performing ovarian detection on the ultrasound image to obtain the ovarian position, then performing image segmentation based on the ovarian position to obtain the ovarian image; and then classifying based on the ovarian image to obtain a normal ovary or an ovarian mass;
可选地,所述第二神经网络的分类过程包括先对卵巢肿块进行肿块检测得到肿块位置,再基于肿块位置进行图像分割得到肿块区域图像,再基于所述肿块区域图像进行分类得到良性肿块或恶性肿块;Optionally, the classification process of the second neural network includes first performing mass detection on the ovarian mass to obtain the mass location, then performing image segmentation based on the mass location to obtain a mass area image, and then performing classification based on the mass area image to obtain a benign mass or a malignant mass;
可选地,所述S1替换为S11,获取患者妇科附件区超声图像组,对所述超声图像组中的各个图像依次进行S2-S3的步骤得到各个图像的分类概率,对各个图像的分类概率进行加权平均计算得到患者的诊断预测结果。Optionally, S1 is replaced by S11, obtaining an ultrasound image group of the patient's gynecological adnexal area, performing steps S2-S3 on each image in the ultrasound image group in sequence to obtain a classification probability of each image, and performing a weighted average calculation on the classification probability of each image to obtain a diagnosis prediction result for the patient.

3.根据权利要求1所述的基于超声的卵巢附件肿块辅助诊断方法,其特征在于,所述S1替换为S12,获取患者妇科附件区实时超声视频,并将超声视频转换为视频帧图像集;所述S2替换为S21,将所述视频帧图像集逐帧输入第一神经网络进行分类得到逐帧图像分类概率,对逐帧图像分类概率进行加权平均计算得到实时视频是正常卵巢或卵巢肿块的概率;3. The ultrasound-assisted diagnosis method for ovarian adnexal masses according to claim 1 is characterized in that S1 is replaced by S12, a real-time ultrasound video of the patient's gynecological adnexal area is obtained, and the ultrasound video is converted into a video frame image set; S2 is replaced by S21, the video frame image set is input into the first neural network frame by frame for classification to obtain a frame-by-frame image classification probability, and the frame-by-frame image classification probability is weighted averaged to obtain the probability that the real-time video is a normal ovary or an ovarian mass;
所述S3替换为S31,当实时视频判定为卵巢肿块,将所述视频帧图像集逐帧输至第二神经网络中进行分类得到逐帧分类概率结果,对各图像帧的分类概率结果进行加权平均计算得到实时视频的恶性或良性肿块概率;The S3 is replaced by S31, when the real-time video is determined to be an ovarian mass, the video frame image set is input into the second neural network frame by frame for classification to obtain a frame-by-frame classification probability result, and the classification probability result of each image frame is weighted averaged to obtain the probability of a malignant or benign mass in the real-time video;
基于所述恶性或良性肿块概率得到患者实时视频的良恶性诊断结果;Obtaining a benign or malignant diagnosis result of the patient's real-time video based on the probability of the malignant or benign mass;
可选地,将所述S12替换为S13;获取患者妇科附件区实时超声视频,并将超声视频转换为视频帧图像集,对所述视频帧图像集进行连续分组并进行平均计算得到N个平均图像,将所述视频帧图像集逐帧输入第一神经网络得到分类概率结果,N为大于等于6的自然数。Optionally, S12 is replaced by S13; real-time ultrasound video of the patient's gynecological adnexal area is obtained, and the ultrasound video is converted into a video frame image set, the video frame image set is continuously grouped and averaged to obtain N average images, and the video frame image set is input into the first neural network frame by frame to obtain a classification probability result, where N is a natural number greater than or equal to 6.

4.根据权利要求1所述的基于超声的卵巢附件肿块辅助诊断方法,其特征在于,所述第一神经网络的训练步骤为:4. The ultrasound-assisted diagnosis method for ovarian adnexal masses according to claim 1, characterized in that the training steps of the first neural network are:
获取卵巢及卵巢肿块图像集;Obtaining an image set of ovaries and ovarian masses;
对所述图像集进行数据标注得到标注后的图像;所述数据标注通过病例-肿块-单切面图像的标注层级进行标注;Performing data annotation on the image set to obtain annotated images; the data annotation is performed through an annotation hierarchy of case-mass-single-section image;
将所述标注后的图像输至神经网络中进行训练得到第一神经网络;Inputting the labeled image into a neural network for training to obtain a first neural network;
可选地,所述病例-肿块-单切面图像中的所述病例包括多肿块结果,所述肿块包括多切面图像结果;Optionally, the case in the case-mass-single-section image includes multiple mass results, and the mass includes multiple-section image results;
可选地,所述标注的标签包括正常卵巢标签、卵巢肿块标签;Optionally, the annotated labels include normal ovarian labels and ovarian mass labels;
可选地,所述第一神经网络的训练步骤还包括数据预处理,对所述卵巢及卵巢肿块图像集进行数据预处理得到预处理后的数据,再基于所述预处理后的数据进行数据标注;Optionally, the training step of the first neural network further includes data preprocessing, performing data preprocessing on the ovarian and ovarian mass image set to obtain preprocessed data, and then performing data labeling based on the preprocessed data;
可选地,所述数据预处理包括下列的一种或几种:数据增强、图像翻转、图像旋转。Optionally, the data preprocessing includes one or more of the following: data enhancement, image flipping, and image rotation.

5.根据权利要求1所述的基于超声的卵巢附件肿块辅助诊断方法,其特征在于,所述第二神经网络的训练步骤为:5. The ultrasound-based auxiliary diagnosis method for ovarian adnexal masses according to claim 1, characterized in that the training steps of the second neural network are:
获取卵巢肿块图像集;Obtaining an image set of an ovarian mass;
对所述卵巢肿块图像集进行数据标注得到肿块标注后的图像;Performing data annotation on the ovarian mass image set to obtain an annotated mass image;
将所述肿块标注后的图像输至神经网络中进行训练得到第二神经网络;Inputting the labeled image of the mass into a neural network for training to obtain a second neural network;
可选地,所述第二神经网络的训练步骤还包括数据处理,对所述卵巢肿块进行数据处理得到处理后的图像,再将所述处理后的图像输至神经网络中进行训练得到第二神经网络;Optionally, the training step of the second neural network further includes data processing, performing data processing on the ovarian mass to obtain a processed image, and then inputting the processed image into the neural network for training to obtain the second neural network;
可选地,所述数据处理包括下列的一种或几种:图像裁剪、图像归一化、图像边界扩展;Optionally, the data processing includes one or more of the following: image cropping, image normalization, image boundary expansion;
可选地,所述图像边界扩展是对卵巢图像的各个方向向上扩展L个像素,L为大于等于25的自然数;Optionally, the image boundary extension is to extend the ovarian image upward by L pixels in each direction, where L is a natural number greater than or equal to 25;
可选地,所述标注的标签包括良恶性分类标签;Optionally, the annotated labels include benign or malignant classification labels;
可选地,所述良恶性分类标签通过肿块的临床分期分级、病理亚型、肿块的特征以及图像的完整度的结果进行标注;Optionally, the benign or malignant classification label is annotated by the clinical staging and grading of the mass, the pathological subtype, the characteristics of the mass, and the completeness of the image;
可选地,所述病理亚型的标签通过三维独热码表示。Optionally, the label of the pathological subtype is represented by a three-dimensional one-hot code.

6.根据权利要求4或5所述的基于超声的卵巢附件肿块辅助诊断方法,其特征在于,所述神经网络包括分割模块、分类模块;6. The ultrasound-based auxiliary diagnosis method for ovarian adnexal masses according to claim 4 or 5, characterized in that the neural network comprises a segmentation module and a classification module;
可选地,所述神经网络采用下列的一种或几种:MTANet、Mask R-CNN、U-Net with Classifier、Panoptic FPN、DeepLabV3+、SOLO、SOLOv2、HRNet;Optionally, the neural network adopts one or more of the following: MTANet, Mask R-CNN, U-Net with Classifier, Panoptic FPN, DeepLabV3+, SOLO, SOLOv2, HRNet;
可选地,神经网络包括检测模块、分割模块和分类模块;Optionally, the neural network includes a detection module, a segmentation module and a classification module;
可选地,所述检测模块采用下列的一种或几种:YOLO、Faster R-CNN、SSD、DETR、CornerNet、CenterNet;Optionally, the detection module adopts one or more of the following: YOLO, Faster R-CNN, SSD, DETR, CornerNet, CenterNet;
可选地,所述神经网络还包括可视化模块,卵巢或卵巢肿块图像通过检测模块或分割模块后得到感兴趣的目标区域特征图,通过所述可视化模块对所述目标区域特征图进行可视化得到可视化结果。Optionally, the neural network further includes a visualization module, and the ovarian or ovarian mass image is passed through a detection module or a segmentation module to obtain a feature map of a target region of interest, and the feature map of the target region is visualized by the visualization module to obtain a visualization result.

7.根据权利要求1所述的基于超声的卵巢附件肿块辅助诊断方法,其特征在于,所述超声图像通过第一神经网络进行分类得到正常卵巢或卵巢肿块的分类概率,将所述卵巢肿块的分类概率与第一预设阈值进行比较,当卵巢肿块的分类概率大于第一阈值则判定为卵巢肿块,反之,判定为正常卵巢;7. The ultrasound-assisted diagnosis method for ovarian adnexal masses according to claim 1, characterized in that the ultrasound image is classified by a first neural network to obtain a classification probability of a normal ovary or an ovarian mass, and the classification probability of the ovarian mass is compared with a first preset threshold value, and when the classification probability of the ovarian mass is greater than the first threshold value, it is determined to be an ovarian mass, otherwise, it is determined to be a normal ovary;
可选地,所述第二神经网络进行分类得到良性肿块和恶性肿块的分类概率,将所述分类概率与第二预设阈值进行比较,当良性肿块的分类概率大于第二预设阈值时判定为良性肿块,反之,判定为恶性肿块。Optionally, the second neural network performs classification to obtain classification probabilities of benign masses and malignant masses, and compares the classification probabilities with a second preset threshold value; when the classification probability of the benign mass is greater than the second preset threshold value, it is determined to be a benign mass; otherwise, it is determined to be a malignant mass.

8.一种计算机程序产品,其上有计算机程序/指令,其特征在于,所述计算机程序/指令被处理器执行实现任意一项权利要求1-7所述的基于超声的卵巢附件肿块辅助诊断方法。8. A computer program product having a computer program/instruction thereon, wherein the computer program/instruction is executed by a processor to implement any one of the ultrasound-based auxiliary diagnosis methods for ovarian adnexal masses according to claims 1-7.

9.一种计算机设备,包括存储器与处理器及存储在存储器上的计算机程序/指令,其特征在于,所述计算机程序/指令被所述处理器执行实现任意一项权利要求1-7所述的基于超声的卵巢附件肿块辅助诊断方法。9. A computer device, comprising a memory and a processor and a computer program/instruction stored in the memory, wherein the computer program/instruction is executed by the processor to implement any one of the ultrasound-based auxiliary diagnosis methods for ovarian adnexal masses according to claims 1-7.

10.一种计算机可读存储介质,其上存储有计算机程序/指令,其特征在于,所述计算机程序/指令被处理器执行实现任意一项权利要求1-7所述的基于超声的卵巢附件肿块辅助诊断方法。10. A computer-readable storage medium having a computer program/instruction stored thereon, wherein the computer program/instruction is executed by a processor to implement any one of the ultrasound-based auxiliary diagnosis methods for ovarian adnexal masses according to claims 1-7.
CN202410802050.5A 2024-06-20 2024-06-20 A method, device and program product for auxiliary diagnosis of ovarian adnexal masses based on ultrasound Pending CN118736292A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410802050.5A CN118736292A (en) 2024-06-20 2024-06-20 A method, device and program product for auxiliary diagnosis of ovarian adnexal masses based on ultrasound

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410802050.5A CN118736292A (en) 2024-06-20 2024-06-20 A method, device and program product for auxiliary diagnosis of ovarian adnexal masses based on ultrasound

Publications (1)

Publication Number Publication Date
CN118736292A true CN118736292A (en) 2024-10-01

Family

ID=92852203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410802050.5A Pending CN118736292A (en) 2024-06-20 2024-06-20 A method, device and program product for auxiliary diagnosis of ovarian adnexal masses based on ultrasound

Country Status (1)

Country Link
CN (1) CN118736292A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118941798A (en) * 2024-10-15 2024-11-12 厦门大学附属翔安医院 Identification and prediction system of breast cancer molecular subtypes based on ultrasound images

Similar Documents

Publication Publication Date Title
CN110060774B (en) A Thyroid Nodule Recognition Method Based on Generative Adversarial Network
Song et al. A new xAI framework with feature explainability for tumors decision-making in Ultrasound data: comparing with Grad-CAM
CN108464840B (en) Automatic detection method and system for breast lumps
CN110473186B (en) Detection method based on medical image, model training method and device
CN100484474C (en) Galactophore cancer computer auxiliary diagnosis system based on galactophore X-ray radiography
CN114782307A (en) Deep learning-based enhanced CT image rectal cancer staging auxiliary diagnosis system
CN112529894A (en) Thyroid nodule diagnosis method based on deep learning network
WO2022110525A1 (en) Comprehensive detection apparatus and method for cancerous region
CN119557687A (en) Intelligent processing method and system for multimodal biomedical data
CN114565572A (en) Cerebral hemorrhage CT image classification method based on image sequence analysis
CN112862756A (en) Method for identifying pathological change type and gene mutation in thyroid tumor pathological image
Kumar et al. A Novel Approach for Breast Cancer Detection by Mammograms
CN118919078A (en) Thyroid tumor early diagnosis and risk assessment system based on artificial intelligence
CN112508943A (en) Breast tumor identification method based on ultrasonic image
CN114519705A (en) Ultrasonic standard data processing method and system for medical selection and identification
CN118736292A (en) A method, device and program product for auxiliary diagnosis of ovarian adnexal masses based on ultrasound
CN114708236B (en) Thyroid nodule benign and malignant classification method based on TSN and SSN in ultrasonic image
CN115375632A (en) Lung nodule intelligent detection system and method based on CenterNet model
Wei et al. Multi-feature fusion for ultrasound breast image classification of benign and malignant
CN114764855A (en) Intelligent cystoscope tumor segmentation method, device and equipment based on deep learning
Rani et al. RETRACTED: Accurate artificial intelligence method for abnormality detection of CT liver images
CN117911775A (en) Dual-modality fusion can explain the automated intelligent diagnosis method of breast cancer
Bunnell Early Breast Cancer Diagnosis via Breast Ultrasound and Deep Learning
Alshdaifat et al. (KAUH-BCUSD): Computer-aided breast cancer diagnosis using transformer and CNN using ultrasound dataset
Ramkumar et al. Experimental Analysis on Breast Cancer Using Random Forest Classifier on Histopathological Images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination