
CN116630201A - Image reconstruction model building and applying method and device - Google Patents

Image reconstruction model building and applying method and device

Info

Publication number
CN116630201A
Authority
CN
China
Prior art keywords
loss
image
generator
network
undersampled
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310707533.2A
Other languages
Chinese (zh)
Inventor
郭前进
黄梦圆
马颖
刘武
余卫勇
安翔
刘强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Petrochemical Technology
Original Assignee
Beijing Institute of Petrochemical Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Petrochemical Technology filed Critical Beijing Institute of Petrochemical Technology
Priority to CN202310707533.2A priority Critical patent/CN116630201A/en
Publication of CN116630201A publication Critical patent/CN116630201A/en
Pending legal-status Critical Current

Links

Classifications

    • G06T 5/70: Denoising; Smoothing (G PHYSICS > G06 COMPUTING OR CALCULATING; COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 5/00 Image enhancement or restoration)
    • G06N 3/0464: Convolutional networks [CNN, ConvNet] (G > G06 > G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N 3/00 Computing arrangements based on biological models > G06N 3/02 Neural networks > G06N 3/04 Architecture, e.g. interconnection topology)
    • G06T 5/94: Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement (G > G06 > G06T > G06T 5/00 Image enhancement or restoration > G06T 5/90 Dynamic range modification of images or parts thereof)
    • G06V 10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting (G > G06 > G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING > G06V 10/00 Arrangements for image or video recognition or understanding > G06V 10/70 using pattern recognition or machine learning > G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation)
    • G06V 10/82: Arrangements for image or video recognition or understanding using neural networks (G > G06 > G06V > G06V 10/00 > G06V 10/70 using pattern recognition or machine learning)
    • G06T 2207/20081: Training; Learning (G > G06 > G06T > G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/20 Special algorithmic details)
    • G06T 2207/20084: Artificial neural networks [ANN] (G > G06 > G06T > G06T 2207/00 > G06T 2207/20 Special algorithmic details)
    • Y02T 10/40: Engine management systems (Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS > Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE > Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION > Y02T 10/00 Road transport of goods or passengers > Y02T 10/10 Internal combustion engine [ICE] based vehicles)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a method and device for establishing and applying an image reconstruction model, applied in the technical field of image processing. Building on a conventional GAN network structure, it introduces an L1 loss, a perceptual loss, and a total variation loss. The L1 loss lets the GAN achieve a better imaging effect when training on paired pictures; the perceptual loss brings the deep-feature information of the network-generated image close to that of the real image, enhancing the detail of the output features; and the total variation loss removes unwanted detail while preserving important details such as edges, making the image smoother. Together, these three losses form a new framework for efficiently recovering image quality from sparse photoacoustic data: given images reconstructed from a small amount of undersampled data or from limited-view scans, the trained network enhances the visibility of structures in any direction and thus achieves better results on both the contour edges and the internal details of the image.

Description

Method and device for establishing and applying an image reconstruction model

Technical Field

The present invention relates to the technical field of image processing, and in particular to a method and device for establishing and applying an image reconstruction model.

Background Art

In traditional photoacoustic image reconstruction research, the initial approaches used reconstruction algorithms such as time reversal and the Fourier transform. After the rise of deep convolutional neural networks, Neda Davoudi et al. innovatively combined a deep convolutional neural network with their new scanner and obtained better image reconstruction quality. In photoacoustic imaging, however, transducer count and cost remain a problem to be solved: undersampled images acquired under low-sensor-count imaging conditions (for example 16 or 32 sensors) reconstruct poorly with ordinary convolutional neural networks such as CNN, ResNet, or U-Net, and much information is lost during imaging.

Summary of the Invention

In view of this, the object of the present invention is to provide a method and device for establishing and applying an image reconstruction model, so as to solve the prior-art problem that undersampled images acquired under low-sensor-count imaging conditions (for example 16 or 32 sensors) reconstruct poorly with ordinary convolutional neural networks such as CNN, ResNet, or U-Net, with much information lost during imaging.

According to a first aspect of the embodiments of the present invention, a method for establishing an image reconstruction model is provided, the method comprising:

acquiring multiple groups of undersampled images and the corresponding real images to obtain training set data;

selecting any group of undersampled images and their corresponding real images from the training set data, and inputting the undersampled images into a GAN network structure to obtain a network-generated image;

selecting any pixel in the network-generated image, taking the two pixels adjacent to it in the horizontal and vertical directions, computing that pixel's difference value from those two neighbours, and summing the difference values over all pixels of the network-generated image to obtain the total variation loss;
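The per-pixel computation just described can be sketched in a few lines of NumPy. This is a minimal illustration only; the function name and the exact neighbour alignment at the borders are assumptions, since the patent gives no code:

```python
import numpy as np

def total_variation_loss(img):
    """Total variation of a 2-D image: for each pixel, take the squared
    differences to its horizontal and vertical neighbours, square-root,
    and sum over all pixels."""
    dh = img[:, 1:] - img[:, :-1]   # horizontal neighbour differences
    dv = img[1:, :] - img[:-1, :]   # vertical neighbour differences
    # crop so each interior pixel contributes one sqrt term
    return float(np.sqrt(dh[:-1, :] ** 2 + dv[:, :-1] ** 2).sum())

smooth = np.zeros((8, 8))
noisy = np.zeros((8, 8))
noisy[::2, ::2] = 1.0               # checkerboard-like "noise"
assert total_variation_loss(smooth) == 0.0
assert total_variation_loss(noisy) > total_variation_loss(smooth)
```

As the assertions show, a flat image has zero total variation while a noisy one scores high, which is exactly why the loss acts as a smoothness regulariser.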

expressing the loss value between the network-generated image and the real image through the L1 loss, the perceptual loss, the BCE loss, and the total variation loss; training the GAN network structure on the training set data with the objective of minimising that loss value; and adjusting the parameters of the GAN network structure until the loss value no longer decreases, thereby establishing the image reconstruction model.
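The stopping rule "until the loss value no longer decreases" amounts to a simple plateau check. Below is a minimal, framework-free sketch; all names are hypothetical and the toy quadratic merely stands in for the actual GAN objective:

```python
def train_until_plateau(loss_fn, update_fn, tol=1e-6, max_iters=1000):
    """Run parameter updates until the loss value no longer decreases
    (improvement falls below tol), mirroring the stopping rule above."""
    prev = float("inf")
    cur = loss_fn()
    for _ in range(max_iters):
        update_fn()
        cur = loss_fn()
        if prev - cur <= tol:   # loss stopped decreasing: stop training
            break
        prev = cur
    return cur

# toy stand-in for the GAN objective: minimise (p - 3)^2 by gradient steps
state = {"p": 0.0}
final = train_until_plateau(
    loss_fn=lambda: (state["p"] - 3.0) ** 2,
    update_fn=lambda: state.update(p=state["p"] - 0.4 * (state["p"] - 3.0)),
)
assert final < 1e-3   # converged close to the minimum before plateauing
```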

Preferably,

the training of the GAN network structure on the training set data until the loss value no longer decreases, establishing the image reconstruction model, comprises:

training the generator of the GAN network structure on the training set data until the generator's loss value no longer decreases, obtaining a trained generator;

training the discriminator of the GAN network structure on the training set data until the discriminator's loss value no longer decreases, obtaining a trained discriminator;

establishing the image reconstruction model from the trained generator and discriminator.

Preferably,

during the establishment of the image reconstruction model, the generator and the discriminator are trained alternately: while the generator is being trained, the parameters of the discriminator are fixed, and while the discriminator is being trained, the parameters of the generator are fixed.

Preferably,

the training of the generator of the GAN network structure on the training set data until the generator's loss value no longer decreases, obtaining a trained generator, comprises:

inputting the undersampled image into the generator of the GAN network structure to obtain the network-generated image;

inputting the network-generated image together with the undersampled image into the discriminator of the GAN network structure to obtain a first similarity matrix;

obtaining the perceptual loss from the network-generated image and the real image, obtaining the L1 loss through a preset L1 loss function, and obtaining a first BCE loss from the first similarity matrix and an N-order all-ones matrix;

obtaining the generator loss from the perceptual loss, the L1 loss, the first BCE loss, and the total variation loss; and training the generator on the training set data with the objective of minimising the generator loss, until the generator loss no longer decreases, obtaining the trained generator.
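A hedged sketch of the composite generator objective described above: the first BCE loss pushes the discriminator's similarity matrix towards an N-order all-ones matrix, and the perceptual, L1, and total variation terms are added to it. The function names are assumptions, and a plain sum is used here for brevity (the patent combines the three auxiliary terms via a damping coefficient, covered separately below):

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy between a similarity matrix and a target matrix."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean())

def l1_loss(a, b):
    """Mean absolute difference between two images."""
    return float(np.abs(a - b).mean())

def generator_loss(sim_matrix, generated, real, perceptual, tv):
    """First BCE loss against an all-ones target, plus the perceptual,
    L1 and total variation terms (plain sum; weighting is a simplification)."""
    first_bce = bce_loss(sim_matrix, np.ones_like(sim_matrix))
    return first_bce + perceptual + l1_loss(generated, real) + tv

sim = np.full((3, 3), 0.5)          # an undecided discriminator
img = np.ones((4, 4))
loss = generator_loss(sim, img, img, perceptual=0.0, tv=0.0)
assert abs(loss - 0.6931) < 1e-3    # -ln(0.5) when all other terms vanish
```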

Preferably,

the obtaining of the generator loss from the perceptual loss, the L1 loss, the first BCE loss, and the total variation loss comprises:

introducing a damping coefficient and computing the damping coefficient of each iteration according to a preset damping-coefficient formula;

multiplying the sum of the perceptual loss, L1 loss, and total variation loss of each iteration by that iteration's damping coefficient to obtain a damped loss;

adding the damped loss to that iteration's first BCE loss to obtain the generator loss.
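The damping scheme above can be sketched as follows. The patent leaves the damping-coefficient formula as a preset, so the exponential decay used here is purely an assumed placeholder, as are the names:

```python
def damping_coefficient(iteration, initial=1.0, decay=0.99):
    """Per-iteration damping coefficient. The patent presets its own
    formula; this exponential decay is only an assumed placeholder."""
    return initial * decay ** iteration

def damped_generator_loss(perceptual, l1, tv, first_bce, iteration):
    """(perceptual + L1 + TV) scaled by this iteration's damping
    coefficient, then added to this iteration's first BCE loss."""
    return damping_coefficient(iteration) * (perceptual + l1 + tv) + first_bce

assert damped_generator_loss(1.0, 1.0, 1.0, 0.5, iteration=0) == 3.5
assert damping_coefficient(100) < damping_coefficient(0)  # weight decays
```

With a decaying coefficient, the auxiliary terms dominate early training and the adversarial BCE term dominates later, which is one plausible motivation for such a schedule.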

Preferably,

the training of the discriminator of the GAN network structure on the training set data until the discriminator's loss value no longer decreases, obtaining a trained discriminator, comprises:

inputting the undersampled image together with the real image into the discriminator to obtain a second similarity matrix;

obtaining a second BCE loss from the first similarity matrix and an N-order all-zeros matrix, and a third BCE loss from the second similarity matrix and an N-order all-ones matrix;

obtaining the discriminator loss from the second BCE loss and the third BCE loss; and training the discriminator on the training set data with the objective of minimising the discriminator loss, until the discriminator loss no longer decreases, obtaining the trained discriminator.
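The two BCE terms for the discriminator might look like this in NumPy. Names are assumed: `sim_first` is the first similarity matrix (generated + undersampled pair) compared against an all-zeros target, and `sim_second` is the second similarity matrix (real + undersampled pair) compared against an all-ones target; the two terms are simply summed, since the patent does not spell out the exact combination:

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy between a similarity matrix and a target matrix."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean())

def discriminator_loss(sim_first, sim_second):
    """Second BCE: first similarity matrix vs an all-zeros matrix;
    third BCE: second similarity matrix vs an all-ones matrix; summed."""
    second_bce = bce_loss(sim_first, np.zeros_like(sim_first))
    third_bce = bce_loss(sim_second, np.ones_like(sim_second))
    return second_bce + third_bce

undecided = np.full((2, 2), 0.5)
fake_low, real_high = np.full((2, 2), 0.01), np.full((2, 2), 0.99)
# a discriminator that separates the two pair types incurs a smaller loss
assert discriminator_loss(fake_low, real_high) < discriminator_loss(undecided, undecided)
```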

Preferably,

the obtaining of the perceptual loss from the network-generated image and the real image comprises:

inputting the network-generated image and the real image separately into the feature-extraction module of the generator, the feature-extraction module outputting network features and real features respectively, and obtaining the perceptual loss from the network features and the real features.
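As a toy stand-in for the VGG16-style feature-extraction module (Fig. 4), the sketch below uses a fixed 2x2 average pooling as the "feature map" and compares features with a mean squared error. Everything here is illustrative: the real module is a deep network, and the patent does not specify which distance is taken between the feature tensors:

```python
import numpy as np

def extract_features(img):
    """Stand-in for the generator's feature-extraction module (a
    VGG16-style network in the patent): here just 2x2 average pooling."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def perceptual_loss(generated, real):
    """Mean squared error between the features of the network-generated
    image and those of the real image."""
    return float(((extract_features(generated) - extract_features(real)) ** 2).mean())

x = np.arange(16.0).reshape(4, 4)
assert perceptual_loss(x, x) == 0.0   # identical images, zero feature gap
assert perceptual_loss(np.ones((4, 4)), np.zeros((4, 4))) == 1.0
```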

According to a second aspect of the embodiments of the present invention, a method for applying an image reconstruction model is provided; the method is based on the image reconstruction model established above and comprises:

acquiring the undersampled image to be processed and inputting it into the image reconstruction model;

the image reconstruction model enhancing the internal detail information of the undersampled image to be processed based on the perceptual loss, enhancing its edge contours based on the L1 loss, smoothing the output image based on the total variation loss, and outputting an enhanced image.

According to a third aspect of the embodiments of the present invention, a device for establishing an image reconstruction model is provided, the device comprising:

a data acquisition module, configured to acquire multiple groups of undersampled images and the corresponding real images to obtain training set data;

a network image acquisition module, configured to select any group of undersampled images and their corresponding real images from the training set data, and input the undersampled images into a GAN network structure to obtain a network-generated image;

a total variation loss acquisition module, configured to select any pixel in the network-generated image, take the two pixels adjacent to it in the horizontal and vertical directions, compute that pixel's difference value from those two neighbours, and sum the difference values over all pixels of the network-generated image to obtain the total variation loss;

a model building module, configured to express the loss value between the network-generated image and the real image through the L1 loss, the perceptual loss, the BCE loss, and the total variation loss; train the GAN network structure on the training set data with the objective of minimising that loss value; and adjust the parameters of the GAN network structure until the loss value no longer decreases, thereby establishing the image reconstruction model.

According to a fourth aspect of the embodiments of the present invention, a device for applying an image reconstruction model is provided, the device comprising:

an input module, configured to acquire the undersampled image to be processed and input it into the image reconstruction model;

an output module, configured for the image reconstruction model to enhance the internal detail information of the undersampled image to be processed based on the perceptual loss, enhance its edge contours based on the L1 loss, smooth the output image based on the total variation loss, and output an enhanced image.

The technical solutions provided by the embodiments of the present invention may have the following beneficial effects:

During model training, the present application builds on the conventional GAN network structure and introduces an L1 loss, a perceptual loss, and a total variation loss. The L1 regularisation term lets the GAN obtain a better imaging effect when training on paired pictures; the perceptual loss brings the deep (perceptual) information of the network-generated image close to that of the real image, enhancing the detail of the output features; and the total variation loss removes unwanted detail while preserving important details such as edges, making the image smoother. With these three losses added, a new framework is obtained for efficiently recovering image quality from sparse photoacoustic data: given images reconstructed from a small amount of undersampled data or from limited-view scans, the trained network enhances the visibility of structures in any direction, achieves better results on the contour edges and internal details of the image, and recovers the expected image quality as far as possible.

It should be understood that the above general description and the following detailed description are exemplary and explanatory only and do not limit the present invention.

Brief Description of the Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the description, serve to explain its principles.

Fig. 1 is a schematic flowchart of a method for establishing an image reconstruction model according to an exemplary embodiment;

Fig. 2 is a schematic diagram of the generator training principle according to another exemplary embodiment;

Fig. 3 is a schematic diagram of the discriminator training principle according to another exemplary embodiment;

Fig. 4 is a schematic structural diagram of the VGG16 feature-extraction module according to another exemplary embodiment;

Fig. 5 is a schematic flowchart of a method for applying an image reconstruction model according to another exemplary embodiment;

Fig. 6 is a system schematic diagram of a device for establishing an image reconstruction model according to another exemplary embodiment;

Fig. 7 is a system schematic diagram of a device for applying an image reconstruction model according to another exemplary embodiment;

In the drawings: 101 - data acquisition module; 102 - network image acquisition module; 103 - total variation loss acquisition module; 104 - model building module; 201 - input module; 202 - output module.

Detailed Description of the Embodiments

Exemplary embodiments will now be described in detail, with examples illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of devices and methods consistent with some aspects of the invention as detailed in the appended claims.

Embodiment One

Fig. 1 is a schematic flowchart of a method for establishing an image reconstruction model according to an exemplary embodiment. As shown in Fig. 1, the method includes:

S1: acquire multiple groups of undersampled images and the corresponding real images to obtain training set data;

S2: select any group of undersampled images and their corresponding real images from the training set data, and input the undersampled images into a GAN network structure to obtain a network-generated image;

S3: select any pixel in the network-generated image, take the two pixels adjacent to it in the horizontal and vertical directions, compute that pixel's difference value from those two neighbours, and sum the difference values over all pixels of the network-generated image to obtain the total variation loss;

S4: express the loss value between the network-generated image and the real image through the L1 loss, the perceptual loss, the BCE loss, and the total variation loss; train the GAN network structure on the training set data with the objective of minimising that loss value, and adjust the parameters of the GAN network structure until the loss value no longer decreases, thereby establishing the image reconstruction model.

It can be understood that in the present application, pairs consisting of an undersampled picture and a 512-transducer high-precision sampled picture are input into the GAN network structure for model training. It is worth emphasising that the input images here are photoacoustic images, and the present application proposes a new model for photoacoustic image reconstruction. In practical applications, photoacoustic tomography (PAT) usually involves suboptimal sampling of the tomographic data. During data acquisition, obtaining high-quality images usually requires measurement with a large number of transducers, but more transducers bring higher manufacturing cost, higher hardware complexity, and greater demands on the computing power of the equipment. The transducer count therefore has to be limited and measurements taken under sparse-view conditions. Sparse-view photoacoustic imaging effectively reduces system complexity and cost and reduces the amount of data to be processed, speeding up imaging, but it also degrades the quality of the reconstructed photoacoustic image, introducing artifacts and blurring. A new model is therefore needed that preserves more detail information and contour edges during photoacoustic image reconstruction.

The main structure of an existing GAN (generative adversarial network) comprises a generator G and a discriminator D: the generator produces a new network image from the undersampled picture, and the discriminator judges the similarity between the network-generated image and the high-precision sampled picture. On top of the existing GAN network structure, the present application introduces an L1 loss; the L1 regularisation term lets the GAN obtain a better imaging effect when training on paired pictures. To address the sparse-view and edge-blurring problems in the pictures, the present application also adds a perceptual loss function on top of the GAN network structure. Perceptual loss was first used for real-time super-resolution and style-transfer tasks and was later applied in more fields, including a good deal of work on image denoising. During feature extraction, shallower layers usually extract low-frequency information such as edges, colour, and brightness; deeper layers extract high-frequency information such as fine textures; and still deeper layers extract discriminative key features. In other words, the deeper the layer, the more abstract and high-level the extracted features, and the perceptual loss focuses attention on these deep features, achieving better results on the contour edges and internal details of the image.

The present application further introduces a total variation loss. The most effective applications of total variation in image processing are denoising and restoration: the total variation of a noise-contaminated image is noticeably larger than that of a noise-free image, so limiting the total variation limits the noise. Applied to an image, the total variation loss (TV loss) makes the image smoother. It rests on the principle that a signal with excessive, possibly spurious detail has a high total variation, i.e. the integral of the absolute gradient of the signal is high; according to this principle, reducing the total variation of the signal, while keeping it closely matched to the original signal, removes unwanted detail while preserving important details such as edges. TV loss computes the total variation of the network-generated image and is commonly used as a regularisation term in the overall objective to constrain network learning, effectively promoting the spatial smoothness of the network output. In digital image processing it is usually defined as follows:

L_TV(x) = SUM over i,j of [ (x_{i,j} - x_{i,j-1})^2 + (x_{i,j} - x_{i+1,j})^2 ]^(1/2)

where x_{i,j} denotes any pixel of the network-generated image. The formula means: for each pixel x_{i,j}, compute the squared differences to its next adjacent pixels x_{i,j-1} in the horizontal direction (along the image width W) and x_{i+1,j} in the vertical direction (along the image height H), take the square root, and sum over all pixels to obtain the total variation loss.

With the L1 loss, perceptual loss, and total variation loss added, a new framework is obtained for efficiently recovering image quality from sparse photoacoustic data: given images reconstructed from a small amount of undersampled data or from limited-view scans, the trained network enhances the visibility of structures in any direction, achieves better results on the contour edges and internal details of the image, and recovers the expected image quality as far as possible.

Preferably,

the training of the GAN network structure on the training set data until the loss value no longer decreases, establishing the image reconstruction model, comprises:

training the generator of the GAN network structure on the training set data until the generator's loss value no longer decreases, obtaining a trained generator;

training the discriminator of the GAN network structure on the training set data until the discriminator's loss value no longer decreases, obtaining a trained discriminator;

establishing the image reconstruction model from the trained generator and discriminator;

It can be understood that during model building, that is, during training, the generator and the discriminator of the GAN network structure must each undergo adversarial loop training, using gradient descent until their respective loss values no longer decrease, thereby completing the training of the generator and the discriminator separately. The concepts of L1 loss and perceptual loss are introduced during training, and the trained generator and discriminator together form an LP-GAN network structure that is more advantageous for image reconstruction than the existing GAN network structure.

Preferably,

During the establishment of the image reconstruction model, the generator and the discriminator are trained alternately: when the generator is being trained, the parameters of the discriminator are fixed, and when the discriminator is being trained, the parameters of the generator are fixed;

It can be understood that, because the generator and the discriminator are trained alternately and adversarially, the parameters of the discriminator are fixed during generator training in order to avoid interference caused by changes in the discriminator's parameters. The generator is first trained on one batch of data and its parameters are adjusted; the discriminator is then trained on the same batch and its parameters are adjusted; the next batch is then selected and the generator and the discriminator are trained in turn again, until the respective losses of the generator and the discriminator no longer decrease.
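The alternating scheme can be sketched as a toy loop; the parameter containers and the `grad_fn` callback are hypothetical stand-ins for the actual networks and their gradient computation:

```python
def alternating_gan_training(params, grad_fn, lr=0.1, steps=4):
    """Toy sketch of alternating GAN training: on each batch the generator
    updates while the discriminator's parameters are held fixed, then the
    discriminator updates on the same batch while the generator is frozen.
    `grad_fn(params, player)` is a hypothetical callback returning gradients
    for one player's parameter list; only that player's parameters change."""
    for _ in range(steps):
        for player in ("G", "D"):            # generator first, then discriminator
            grads = grad_fn(params, player)  # the other player's params stay fixed
            params[player] = [p - lr * g for p, g in zip(params[player], grads)]
    return params
```

In a real framework the same effect is usually achieved by toggling which parameters the optimizer updates, but the control flow is the one shown here.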

Preferably,

Training the generator of the GAN network structure with the training set data until the loss value of the generator no longer decreases, to obtain a trained generator, includes:

inputting the undersampled image into the generator of the GAN network structure to obtain a network-generated image;

inputting the network-generated image together with the undersampled image into the discriminator of the GAN network structure to obtain a first similarity matrix;

obtaining the perceptual loss from the network-generated image and the real image, obtaining the L1 loss from a preset L1 loss function, and obtaining the first BCE loss from the first similarity matrix and an N-order all-ones matrix;

obtaining the generator loss from the perceptual loss, the L1 loss, the first BCE loss, and the total variation loss, and training the generator with the training set data, with the goal of minimizing the generator loss, until the generator loss no longer decreases, to obtain a trained generator;

It can be understood that, as shown in the schematic diagram of the generator training process in Figure 2, the generator uses a U-Net with residual connections as the backbone network and uses Fourier loss, perceptual loss, and L1 loss as the gradient descent objectives. The parameter j denotes the feature representation of layer j in the pre-trained neural network, and N is the number of feature layers. The gradient-based updates can use any standard gradient-based learning rule; momentum was used in the experiments. A batch of m samples {x^(1), ..., x^(m)} is drawn from the undersampled images, and their corresponding 512 high-precision images (real images) {z^(1), ..., z^(m)} are obtained. First, the undersampled image is input into the generator of the GAN network structure, which generates a new network-generated image from it; the new network-generated image and the undersampled image are then input together into the discriminator, which produces the first similarity matrix of the network-generated image and the undersampled image. The perceptual loss PerceptualLoss is obtained from the network-generated image and the real image, and the L1 loss is obtained from the preset L1 loss function; the expression of the L1 loss is as follows:

L1Loss = |z^(i) − G(x^(i))|

In the formula, G denotes the generator;

The real-image label is set to 1 and the output is D(G(x^(i))), that is, the first similarity matrix; the first BCE loss is obtained from the first similarity matrix and the N-order all-ones matrix, and its expression is as follows:

BCE loss = −log(D(G(x^(i))))

In the formula, D denotes the discriminator. The generator loss is obtained from the perceptual loss PerceptualLoss, the L1 loss, and the first BCE loss; the expression of the generator loss is as follows:

In the formula, α and β are adjustable weight coefficients. With the goal of minimizing the generator loss, the generator is trained on the training set data by gradient descent until the generator loss no longer decreases, yielding a trained generator.
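Assembling the generator objective from the pieces above can be sketched as follows. Note the simplifications: the perceptual and total variation terms are passed as precomputed scalars, and the weighting scheme with α and β is an illustrative reading of the text, not the patent's exact formula:

```python
import math

def generator_loss(d_out, real, fake, perceptual, tv, alpha=1.0, beta=1.0):
    """Illustrative generator objective: the first BCE loss pushes every
    entry of the N x N patch output D(G(x)) toward 1 (an all-ones matrix),
    and the L1 term |z - G(x)| plus the perceptual and total variation terms
    are added, with adjustable weights alpha and beta."""
    eps = 1e-12
    n_d = sum(len(row) for row in d_out)
    bce = sum(-math.log(v + eps) for row in d_out for v in row) / n_d
    n_r = sum(len(row) for row in real)
    l1 = sum(abs(z - g) for zr, gr in zip(real, fake)
             for z, g in zip(zr, gr)) / n_r
    return bce + alpha * l1 + beta * perceptual + tv
```

When the discriminator outputs all ones for the generated image and the generated image matches the ground truth, every term vanishes, which is the fixed point the generator is trained toward.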

Preferably,

Obtaining the generator loss from the perceptual loss, the L1 loss, the first BCE loss, and the total variation loss includes:

introducing a damping coefficient and computing the damping coefficient of each iteration according to a preset damping-coefficient formula;

multiplying the sum of the perceptual loss, the L1 loss, and the total variation loss of each iteration by the damping coefficient of that iteration to obtain a damped loss;

adding the damped loss to the first BCE loss of that iteration to obtain the generator loss;

It can be understood that, on the basis of the above scheme, the concept of a damping coefficient is introduced. The damping coefficient is an important parameter for adjusting the loss value of the entire LP-GAN network structure, and adjusting its value addresses the convergence of the loss. In common physical application scenarios a fixed damping strategy is usually adopted: if the damping coefficient is set to a large value, the algorithm may oscillate and fail to converge; if it is set to a small value, the image generated by the algorithm has poor detail and easily falls into a local optimum. In this embodiment the damping coefficient is introduced into the model training process, and the overall training loss value is limited to below 0.1, in order to compensate for the lost detail information of the image and highlight its detailed parts. The damping coefficient is defined as follows:

In the formula, x denotes the number of iterations; that is, the damping coefficient y changes with each iteration, and when the number of iterations reaches 100 epochs, y is 0.5. After the damping coefficient y of each iteration is computed, it is multiplied by the sum of that iteration's total variation loss, L1 loss, and perceptual loss, and the loss is made to converge through y. At the very start of training the damping coefficient is 0, so only the BCE loss participates in the computation, which better captures the overall contour information of the image; as the damping coefficient gradually grows and the BCE loss is combined with the sum of the total variation loss, the L1 loss, and the perceptual loss, more internal detail of the image is captured. Therefore, after the damping coefficient is added, it can balance the influence of different factors during processing and thereby improve the quality of image processing. The application of the damping coefficient in image processing is to balance the trade-offs among different factors during processing: by adjusting it, the degree of smoothing, the image quality, the segmentation results, or the sparsity can be controlled to meet the needs of a specific task and improve the quality of the results. The advantages of the model after the damping coefficient is added are:

Balancing smoothing and detail preservation:

The damping coefficient helps balance the trade-off between smoothing and detail preservation. In image processing tasks it is sometimes necessary to smooth an image to remove noise or reduce detail, and sometimes necessary to preserve important detail information; by adjusting the damping coefficient, the degree of smoothing can be controlled more precisely, so that smoothing and detail preservation are better balanced;

Controlling image quality and visual effect:

The damping coefficient can be used to control the quality and visual effect of the image. In tasks such as image compression and enhancement it affects the quality and perceived effect of the result; by adjusting it, the best balance between compression ratio and image quality can be found, or a good compromise between enhancement effect and image fidelity can be reached;

Controlling sparsity and reconstruction quality:

In image processing methods based on sparse representation, the damping coefficient can be used to balance the trade-off between sparsity and reconstruction quality; by adjusting it, the stability of the sparse representation and the quality of the reconstructed image can be controlled to obtain a better image processing result;

In summary, the application of the damping coefficient in image processing provides a better trade-off between smoothing and detail preservation, more precise control of image quality and visual effect, improved continuity of segmentation results, and a better balance between sparsity and reconstruction quality. These advantages make the damping coefficient a valuable tool in the model training process that can improve the quality and effectiveness of image processing tasks.
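As an illustration only: the patent's exact damping formula is not reproduced in this text, so the schedule below assumes a linear ramp consistent with the two values the description does state (y = 0 at the first step and y = 0.5 at 100 epochs), capped at 1; the second function shows how the damped loss combines the terms as described:

```python
def damping_coefficient(epoch, half_epoch=100):
    """Hypothetical damping schedule: 0 at the start, 0.5 at `half_epoch`
    epochs, rising linearly and capped at 1. The patent defines its own
    formula, which is not available here; this ramp merely matches the two
    stated values."""
    return min(epoch / (2.0 * half_epoch), 1.0)

def damped_total_loss(bce, l1, perceptual, tv, epoch):
    """Damped generator loss as described in the text: only the BCE loss
    contributes at the start, and the detail terms (L1, perceptual, total
    variation) are phased in as the damping coefficient grows."""
    y = damping_coefficient(epoch)
    return bce + y * (l1 + perceptual + tv)
```

This reproduces the stated behavior that early training is driven by the BCE term (overall contours) while the detail-oriented terms gain weight later.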

Preferably,

Training the discriminator of the GAN network structure with the training set data until the loss value of the discriminator no longer decreases, to obtain a trained discriminator, includes:

inputting the undersampled image together with the real image into the discriminator to obtain a second similarity matrix;

obtaining a second BCE loss from the first similarity matrix and an N-order all-zeros matrix, and obtaining a third BCE loss from the second similarity matrix and an N-order all-ones matrix;

obtaining the discriminator loss from the second BCE loss and the third BCE loss, and training the discriminator with the training set data, with the goal of minimizing the discriminator loss, until the discriminator loss no longer decreases, to obtain a trained discriminator;

It can be understood that the discriminator uses a simple PatchGAN: the input image passes through multiple convolutional layers to produce an N×N matrix, rather than the binary classification score of the original GAN, and the BCE loss is then taken against the label. When training the discriminator, the BCE loss between the N×N matrix obtained by passing the network-generated image through the discriminator and the N-order all-zeros matrix is required to be as small as possible, and the BCE loss between the N×N matrix obtained by passing the label through the discriminator and the N-order all-ones matrix is likewise required to be as small as possible. As shown in Figure 3, based on the same batch of data used to train the generator, the undersampled image and the real image are input together into the discriminator to obtain the second similarity matrix; the second BCE loss is obtained from the first similarity matrix and the N-order all-zeros matrix, and the third BCE loss is obtained from the second similarity matrix and the N-order all-ones matrix. For the second BCE loss the real-image label is set to 0 and the output is D(x^(i)); the expression of the second BCE loss is as follows:

BCE loss = −log(1 − D(x^(i)))

For the third BCE loss the real-image label is set to 1 and the output is D(z^(i)); the expression of the third BCE loss is as follows:

BCE loss = −log(D(z^(i)))

The discriminator loss is obtained by adding the second BCE loss and the third BCE loss; its expression is as follows:

Discriminator loss = −log(1 − D(x^(i))) − log(D(z^(i)))

With the goal of minimizing the discriminator loss, the discriminator is trained on the training set data by gradient descent until the discriminator loss no longer decreases, yielding a trained discriminator.
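The two-term discriminator objective above can be sketched as follows; averaging over the entries of the N×N patch output is an illustrative choice, since the patent does not state the reduction explicitly:

```python
import math

def discriminator_loss(d_fake, d_real):
    """Illustrative PatchGAN discriminator objective: the N x N output for
    the generated/undersampled branch is pushed toward an all-zeros matrix
    (second BCE loss, -log(1 - D(x))), and the output for the real image
    toward an all-ones matrix (third BCE loss, -log(D(z))); the discriminator
    loss is their sum, averaged over matrix entries."""
    eps = 1e-12
    n_f = sum(len(row) for row in d_fake)
    n_r = sum(len(row) for row in d_real)
    second_bce = sum(-math.log(1.0 - v + eps) for row in d_fake for v in row) / n_f
    third_bce = sum(-math.log(v + eps) for row in d_real for v in row) / n_r
    return second_bce + third_bce
```

A perfect discriminator (zeros for fake, ones for real) drives both terms to zero, while an undecided one (0.5 everywhere) pays 2·log 2.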

Preferably,

Obtaining the perceptual loss from the network-generated image and the real image includes:

inputting the network-generated image and the real image separately into the feature extraction module of the generator, the feature extraction module outputting network features and real features respectively, and obtaining the perceptual loss from the network features and the real features;

It can be understood that the perceptual loss is computed through a fixed network (usually a pre-trained VGG16 or VGG19): the real image (Ground Truth) and the network-generated image (Prediction) output by the generator are each fed into it as input, yielding the corresponding output features feature_gt and feature_pre. A loss (usually an L2 loss) is then constructed from feature_gt and feature_pre to approximate the deep information, that is, the perceptual information, between the real image and the network-generated result; compared with an ordinary L2 loss, this enhances the detail information of the output features. (This can be understood simply as follows: the fixed network here is regarded as a function f, with feature_gt = f(Ground Truth) and feature_pre = f(Prediction); the goal is to minimize the difference between feature_gt and feature_pre, that is, to minimize the perceptual loss formed by feature_gt and feature_pre.)

A fixed network is set up (this embodiment uses a VGG16 pre-trained on ImageNet); its parameters are fixed and are not updated;

With the real image (Ground Truth) and the network-generated image (Prediction) as its inputs, the corresponding output features feature_gt and feature_pre are obtained, and the loss is constructed from feature_gt and feature_pre. Usually features are not extracted from a single layer of the fixed network (e.g., VGG16) alone; instead, features from a combination of shallow, deeper, and deepest layers of its network structure are used to construct the loss, for example by accumulating the output features of the 3rd, 5th, and 7th convolutional layers of the VGG16 feature extraction module. The structure of the VGG16 feature extraction module is shown in Figure 4; in this application, the outputs of the four activation layers shown in the box are used to construct the perceptual loss, whose expression is as follows:

PerceptualLoss = Σ_{j=1}^{N} ‖F_j(z^(i)) − F_j(G(x^(i)))‖₂²

In the formula, F denotes the feature extraction module, F_j the j-th selected feature layer, and N the number of feature layers.
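The layer-wise accumulation can be sketched framework-free; the `feature_layers` callables stand in for the selected VGG16 activation outputs, which are assumptions here rather than the patent's exact implementation:

```python
def perceptual_loss(real_img, fake_img, feature_layers):
    """Illustrative perceptual loss: a fixed, frozen network (VGG16 in the
    text) maps the ground-truth image and the generated image to features at
    several chosen layers, and the per-layer squared L2 distances between
    feature_gt and feature_pre are accumulated over all layers."""
    loss = 0.0
    for f in feature_layers:
        feat_gt = f(real_img)    # features of the real image (Ground Truth)
        feat_pre = f(fake_img)   # features of the network-generated image
        loss += sum((g - p) ** 2 for g, p in zip(feat_gt, feat_pre))
    return loss
```

Because the extractor is frozen, gradients flow only into the generator, which is what lets the loss shape the generated image's deep features without disturbing the pre-trained network.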

Embodiment Two

Fig. 5 is a schematic flowchart of an image reconstruction model application method according to another exemplary embodiment, including:

S201: acquiring the undersampled image to be processed, and inputting the undersampled image to be processed into the image reconstruction model;

S202: the image reconstruction model enhancing the internal detail information of the undersampled image to be processed based on the perceptual loss, enhancing the edge contours of the undersampled image to be processed based on the L1 loss, smoothing the output image based on the total variation loss, and outputting an enhanced image;

It can be understood that the present application also provides an image reconstruction model application method. After the model is obtained through the model-building method described above, a low-sampled image can be input into the model; the model enhances the internal detail information of the undersampled image to be processed based on the perceptual loss, enhances its edge contours based on the L1 loss, smooths the output image spatially based on the total variation loss, and outputs an enhanced image. The model of the present application is used to enhance images: it can illuminate arbitrarily oriented structures and improve image reconstruction even with undersampled data or limited-view scans. As confirmed by qualitative analysis and quantitative evaluation, the solution of the present application can achieve a structural similarity index of 0.7 under extremely low sampling conditions, highlighting the network's ability to generate higher-resolution photoacoustic images from low-quality data sets. This research method has potential for medical imaging applications, because it can improve the quality and accuracy of image reconstruction while minimizing experimental requirements. The method retains the key information in the original data and shows high adaptability and scalability when handling various sparse data types; its practical significance lies in its ability to facilitate photoacoustic imaging in a variety of clinical situations.

Embodiment Three

Fig. 6 is a system schematic diagram of an image reconstruction model building apparatus according to another exemplary embodiment, the apparatus including:

Data acquisition module 101: configured to acquire multiple groups of undersampled images and corresponding real images to obtain training set data;

Network image acquisition module 102: configured to select any group of undersampled images and their corresponding real images from the training set data, and input the undersampled images into the GAN network structure to obtain a network-generated image;

Total variation loss acquisition module 103: configured to select any pixel in the network-generated image, acquire the two pixels adjacent to that pixel in the horizontal and vertical directions, compute the difference value of that pixel from the two adjacent pixels, and sum the difference values of all pixels in the network-generated image to obtain the total variation loss;

Model building module 104: configured to express the loss value between the network-generated image and the real image through the L1 loss, the perceptual loss, the BCE loss, and the total variation loss, and, with the goal of minimizing that loss value, train the GAN network structure with the training set data and adjust the parameters of the GAN network structure until the loss value no longer decreases, thereby establishing the image reconstruction model;

It can be understood that the present application also provides an image reconstruction model building apparatus for executing the image reconstruction model building method, including: the data acquisition module 101, configured to acquire multiple groups of undersampled images and corresponding real images to obtain training set data; the network image acquisition module 102, configured to select any group of undersampled images and their corresponding real images from the training set data and input the undersampled images into the GAN network structure to obtain a network-generated image; the total variation loss acquisition module 103, configured to select any pixel in the network-generated image, acquire the two pixels adjacent to that pixel in the horizontal and vertical directions, compute the difference value of that pixel from the two adjacent pixels, and sum the difference values of all pixels in the network-generated image to obtain the total variation loss; and the model building module 104, configured to express the loss value between the network-generated image and the real image through the L1 loss, the perceptual loss, the BCE loss, and the total variation loss, and, with the goal of minimizing that loss value, train the GAN network structure with the training set data and adjust its parameters until the loss value no longer decreases, thereby establishing the image reconstruction model.

Embodiment Four

Fig. 7 is a system schematic diagram of an image reconstruction model application apparatus according to another exemplary embodiment, the apparatus including:

Input module 201: configured to acquire the undersampled image to be processed and input the undersampled image to be processed into the image reconstruction model;

Output module 202: configured for the image reconstruction model to enhance the internal detail information of the undersampled image to be processed based on the perceptual loss, enhance the edge contours of the undersampled image to be processed based on the L1 loss, smooth the output image based on the total variation loss, and output an enhanced image;

It can be understood that the present application also provides an image reconstruction model application apparatus for executing the image reconstruction model application method, including: the input module 201, configured to acquire the undersampled image to be processed and input it into the image reconstruction model; and the output module 202, configured for the image reconstruction model to enhance the internal detail information of the undersampled image to be processed based on the perceptual loss, enhance its edge contours based on the L1 loss, smooth the output image based on the total variation loss, and output an enhanced image.

It can be understood that the same or similar parts of the above embodiments may be referred to one another, and content not described in detail in some embodiments may refer to the same or similar content in other embodiments.

It should be noted that in the description of the present invention, the terms "first", "second", and the like are used for descriptive purposes only and should not be understood as indicating or implying relative importance. In addition, in the description of the present invention, unless otherwise specified, "a plurality of" means at least two.

Any process or method description in the flowcharts or otherwise described herein may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.

It should be understood that the various parts of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: discrete logic circuits having logic gate circuits for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gate circuits, programmable gate arrays (PGA), field programmable gate arrays (FPGA), and the like.

Those of ordinary skill in the art will understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing related hardware through a program, and the program can be stored in a computer-readable storage medium; when executed, the program includes one of the steps of the method embodiments or a combination thereof.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.

The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

In the description of this specification, a description referring to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.

Although the embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.

Claims (10)

1. A method for building an image reconstruction model, the method comprising:
acquiring a plurality of groups of undersampled images and corresponding real images to obtain training set data;
selecting any group of undersampled images and corresponding real images in the training set data, and inputting the undersampled images into a GAN network structure to obtain network generated images;
selecting any pixel point from the network generated image, acquiring two pixel points adjacent to the pixel point in the horizontal direction and the vertical direction, calculating the difference value of the pixel point through the two pixel points adjacent to the pixel point in the horizontal direction and the vertical direction, and summing the difference values of all the pixel points in the network generated image to obtain total variation loss;
and representing the loss values of the network generated image and the real image through L1 loss, perception loss, BCE loss and total variation loss, taking the minimum loss value of the network generated image and the real image as a target, training the GAN network structure through the training set data, and adjusting parameters of the GAN network structure until the loss value is not reduced any more, and establishing an image reconstruction model.
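The per-pixel neighbour-difference sum described in claim 1 corresponds to an anisotropic total variation loss. A minimal sketch of that computation (the function name and the use of NumPy are illustrative assumptions, not part of the claims):

```python
import numpy as np

def total_variation_loss(img: np.ndarray) -> float:
    """Anisotropic TV: for each pixel, take the absolute differences to its
    horizontal and vertical neighbours, then sum over the whole image."""
    horiz = np.abs(img[:, 1:] - img[:, :-1]).sum()  # horizontal neighbours
    vert = np.abs(img[1:, :] - img[:-1, :]).sum()   # vertical neighbours
    return float(horiz + vert)
```

A flat image has zero total variation while a noisy one has a large value, so minimizing this term smooths the generated image, as the application method in claim 8 notes.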
2. The method of claim 1, wherein
training the GAN network structure through the training set data until the loss value no longer decreases, and establishing the image reconstruction model includes:
training the generator of the GAN network structure through the training set data until the loss value of the generator is no longer reduced, so as to obtain a trained generator;
training the discriminators of the GAN network structure through the training set data until the loss value of the discriminators is no longer reduced, so as to obtain trained discriminators;
and establishing an image reconstruction model through the trained generator and the discriminator.
3. The method of claim 2, wherein
in the image reconstruction model building process, the generator and the discriminator perform training alternately, when the generator is trained, the parameters of the discriminator are fixed, and when the discriminator is trained, the parameters of the generator are fixed.
4. The method of claim 2, wherein
training the generator of the GAN network structure through the training set data until the loss value of the generator is no longer reduced, and obtaining the trained generator includes:
inputting the undersampled image into a generator of a GAN network structure to obtain a network generated image;
the network generated image and the undersampled image are input into a discriminator of the GAN network structure together to obtain a first similarity matrix;
obtaining a perceptual loss through the network generated image and the real image, obtaining an L1 loss through a preset L1 loss function, and obtaining a first BCE loss through the first similarity matrix and an N-order 1 matrix;
and obtaining generator loss through the perception loss, the L1 loss, the first BCE loss and the total variation loss, and training the generator through the training set data by taking the minimum generator loss as a target until the generator loss is not reduced any more, so as to obtain a trained generator.
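The "first BCE loss through the first similarity matrix and an N-order 1 matrix" step of claim 4 means the discriminator's similarity matrix is scored against an all-ones target of the same order, pushing the generator toward outputs the discriminator labels real. A hedged sketch (function names are assumptions):

```python
import numpy as np

def bce_loss(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Binary cross-entropy averaged over all matrix entries."""
    p = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    return float(-np.mean(target * np.log(p) + (1.0 - target) * np.log(1.0 - p)))

def first_bce_loss(similarity: np.ndarray) -> float:
    """Score the similarity matrix against an all-ones matrix of the same order."""
    return bce_loss(similarity, np.ones_like(similarity))
```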
5. The method of claim 4, wherein
the obtaining the generator loss through the perceptual loss, the L1 loss, the first BCE loss, and the total variation loss comprises:
introducing a damping coefficient, and calculating the damping coefficient of each iteration according to a preset damping coefficient calculation formula;
multiplying the sum of the perception loss, the L1 loss and the total variation loss of each iteration by the damping coefficient of the iteration to obtain damping loss;
and adding the damping loss to the first BCE loss of the iteration to obtain a generator loss.
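Claim 5's combination can be read as: generator loss = damping coefficient × (perceptual + L1 + total variation) + first BCE. A sketch under that reading; note the claim only says the damping coefficient follows "a preset damping coefficient calculation formula", so the exponential-decay schedule below is purely an illustrative placeholder:

```python
def generator_loss(perceptual: float, l1: float, tv: float,
                   first_bce: float, damping: float) -> float:
    """Damped generator loss per claim 5: the damping coefficient scales the
    pixel/feature terms, while the adversarial BCE term is added unscaled."""
    return damping * (perceptual + l1 + tv) + first_bce

def damping_coefficient(iteration: int, decay: float = 0.99) -> float:
    """Illustrative schedule only; the patent's preset formula is not
    given in the claims."""
    return decay ** iteration
```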
6. The method of claim 5, wherein
training the discriminators of the GAN network structure through the training set data until the loss value of the discriminators is no longer reduced, and obtaining the trained discriminators comprises the following steps:
inputting the undersampled image and the real image into the discriminator together to obtain a second similarity matrix;
obtaining a second BCE loss through the first similarity matrix and an N-order 0 matrix, and obtaining a third BCE loss through the second similarity matrix and an N-order 1 matrix;
and obtaining a discriminator loss through the second BCE loss and the third BCE loss, and training the discriminator through the training set data by taking the minimum discriminator loss as a target until the discriminator loss is not reduced any more, so as to obtain a trained discriminator.
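Claim 6's discriminator objective pairs two BCE terms: the generated-image similarity matrix is scored against an all-zeros target (second BCE), and the real-image similarity matrix against an all-ones target (third BCE). A self-contained sketch (names are assumptions):

```python
import numpy as np

def bce_loss(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    p = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(p) + (1.0 - target) * np.log(1.0 - p)))

def discriminator_loss(fake_similarity: np.ndarray,
                       real_similarity: np.ndarray) -> float:
    """Second BCE: generated-image similarity vs. an all-zeros matrix.
    Third BCE: real-image similarity vs. an all-ones matrix."""
    second_bce = bce_loss(fake_similarity, np.zeros_like(fake_similarity))
    third_bce = bce_loss(real_similarity, np.ones_like(real_similarity))
    return second_bce + third_bce
```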
7. The method of claim 4, wherein
the obtaining the perceptual loss through the network generated image and the real image comprises:
inputting the network generated image and the real image respectively into a feature extraction module of the generator, wherein the feature extraction module outputs network features and real features respectively, and obtaining the perceptual loss through the network features and the real features.
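Claim 7 obtains the perceptual loss from feature maps rather than raw pixels. The claims do not specify the distance metric, so the mean-squared distance below is an assumption; the feature extraction itself would come from the generator's feature extraction module:

```python
import numpy as np

def perceptual_loss(network_features: np.ndarray,
                    real_features: np.ndarray) -> float:
    """Mean squared distance between the feature maps of the generated and
    real images. MSE is an illustrative choice: the claims only say the loss
    is obtained 'through the network features and the real features'."""
    return float(np.mean((network_features - real_features) ** 2))
```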
8. A method of applying an image reconstruction model, the method being based on an image reconstruction model established according to any one of claims 1-6, the method comprising:
acquiring the undersampled image to be processed, and inputting the undersampled image to be processed into an image reconstruction model;
the image reconstruction model enhances internal detail information of the undersampled image to be processed based on a perceptual loss, enhances an edge contour of the undersampled image to be processed based on an L1 loss, smooths the output image based on a total variation loss, and outputs an enhanced image.
9. An image reconstruction model building apparatus, characterized in that the apparatus comprises:
a data acquisition module: for acquiring a plurality of groups of undersampled images and corresponding real images to obtain training set data;
a network image acquisition module: for selecting any group of undersampled images and corresponding real images in the training set data, and inputting the undersampled images into a GAN network structure to obtain network generated images;
a total variation loss acquisition module: for selecting any pixel point from the network generated image, acquiring two pixel points adjacent to the pixel point in the horizontal direction and the vertical direction, calculating the difference value of the pixel point through the two adjacent pixel points, and summing the difference values of all the pixel points in the network generated image to obtain a total variation loss;
and a model building module: for representing the loss values of the network generated image and the real image through L1 loss, perceptual loss, BCE loss, and total variation loss, training the GAN network structure through the training set data with the minimum loss value as a target, and adjusting parameters of the GAN network structure until the loss value no longer decreases, to establish an image reconstruction model.
10. An image reconstruction model application apparatus, the apparatus comprising:
an input module: for acquiring an undersampled image to be processed, and inputting the undersampled image to be processed into an image reconstruction model;
and an output module: wherein the image reconstruction model enhances internal detail information of the undersampled image to be processed based on a perceptual loss, enhances an edge contour of the undersampled image to be processed based on an L1 loss, smooths the output image based on a total variation loss, and outputs an enhanced image.
CN202310707533.2A 2023-06-14 2023-06-14 Image reconstruction model building and applying method and device Pending CN116630201A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310707533.2A CN116630201A (en) 2023-06-14 2023-06-14 Image reconstruction model building and applying method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310707533.2A CN116630201A (en) 2023-06-14 2023-06-14 Image reconstruction model building and applying method and device

Publications (1)

Publication Number Publication Date
CN116630201A true CN116630201A (en) 2023-08-22

Family

ID=87617127

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310707533.2A Pending CN116630201A (en) 2023-06-14 2023-06-14 Image reconstruction model building and applying method and device

Country Status (1)

Country Link
CN (1) CN116630201A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118297836A (en) * 2024-04-22 2024-07-05 上海大学 Rapid image generation method and device based on space sparse diffusion model
CN118297836B (en) * 2024-04-22 2024-09-27 上海大学 Rapid image generation method and device based on space sparse diffusion model

Similar Documents

Publication Publication Date Title
Østvik et al. Myocardial function imaging in echocardiography using deep learning
CN110461228B (en) Improving the quality of medical images using multi-contrast and deep learning
WO2022267641A1 (en) Image defogging method and system based on cyclic generative adversarial network
WO2022047625A1 (en) Image processing method and system, and computer storage medium
CN112085677A (en) An image processing method, system and computer storage medium
CN106408524A (en) Two-dimensional image-assisted depth image enhancement method
CN112581378B (en) Image blind deblurring method and device based on significance strength and gradient prior
CN110717956B (en) A Finite Angle Projection Superpixel Guided L0 Norm Optimal Reconstruction Method
CN111462012A (en) SAR image simulation method for generating countermeasure network based on conditions
CN110428385A (en) SD-OCT (secure digital-optical coherence tomography) denoising method based on unsupervised antagonistic neural network
CN106780338B (en) Rapid super-resolution reconstruction method based on anisotropy
CN107146228A (en) A method for supervoxel generation of brain magnetic resonance images based on prior knowledge
CN110992292A (en) Enhanced low-rank sparse decomposition model medical CT image denoising method
CN110211193B (en) 3D CT inter-slice image interpolation restoration and super-resolution processing method and device
CN106157249A (en) Based on the embedded single image super-resolution rebuilding algorithm of optical flow method and sparse neighborhood
CN111986216A (en) RSG liver CT image interactive segmentation algorithm based on neural network improvement
CN116563146A (en) Image Enhancement Method and System Based on Learnable Curvature Map
CN110047075A (en) A kind of CT image partition method based on confrontation network
Iddrisu et al. 3D reconstructions of brain from MRI scans using neural radiance fields
CN109658464B (en) Sparse angle CT image reconstruction method based on minimum weighted nuclear norm
CN116630201A (en) Image reconstruction model building and applying method and device
CN117132638A (en) A volume data acquisition method based on image scanning
CN107292316A (en) A kind of method of the improving image definition based on rarefaction representation
CN116468763B (en) Electron microscope image registration method based on cost volume
Wang et al. Fast convergence strategy for multi-image superresolution via adaptive line search

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination