
CN112184550A - Neural network training method, image fusion method, apparatus, equipment and medium - Google Patents


Info

Publication number
CN112184550A
CN112184550A
Authority
CN
China
Prior art keywords
network
sub
low
resolution image
exposed
Prior art date
Legal status
Granted
Application number
CN202010986245.1A
Other languages
Chinese (zh)
Other versions
CN112184550B (en)
Inventor
邓欣
张雨童
徐迈
段一平
关振宇
李大伟
Current Assignee
Tsinghua University
Beihang University
Original Assignee
Tsinghua University
Beihang University
Priority date
Filing date
Publication date
Application filed by Tsinghua University and Beihang University
Priority to CN202010986245.1A
Publication of CN112184550A
Application granted
Publication of CN112184550B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053 - Scaling of whole images or parts thereof, e.g. expanding or contracting, based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06N 3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4046 - Scaling of whole images or parts thereof, e.g. expanding or contracting, using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to the technical field of image processing, and discloses a neural network training method, an image fusion method, an apparatus, a device and a medium. The neural network comprises a first sub-network and a second sub-network with the same network structure, each sub-network containing a primary feature extraction module, a high-level feature extraction module and a coupled feedback module. The primary feature extraction module extracts low-level features from an underexposed low-resolution image and an overexposed low-resolution image; the high-level feature extraction module further extracts high-level features of the two images from their respective low-level features; and the coupled feedback module cross-fuses the low-level and high-level features corresponding to the underexposed and overexposed low-resolution images. With this technical solution, multi-exposure fusion and super-resolution of an image are performed simultaneously by a single neural network, improving both image processing speed and processing accuracy.

Description

Neural network training method, image fusion method, apparatus, equipment and medium

Technical Field

The present disclosure relates to the technical field of image processing, and in particular to a neural network training method, an image fusion method, an apparatus, a device and a medium.

Background

With the development of technology, people are increasingly accustomed to recording their lives with photos. However, due to the hardware limitations of camera sensors, captured images usually exhibit various distortions that make them differ considerably from the real natural scene. Compared with the real scene, images captured by a camera tend to have low dynamic range (LDR) and low resolution (LR). To narrow the gap between a captured image and the real shooting scene, the image needs to be processed.

At present, multi-exposure image fusion (MEF) is mainly used to correct the low dynamic range of an image, and image super-resolution (ISR) is used to correct its low resolution. Multi-exposure image fusion aims to fuse several LDR images with different exposure levels into a single image with high dynamic range (HDR). Image super-resolution aims to reconstruct an LR image into a high-resolution (HR) image.

In practice, however, a captured image has both LDR and LR characteristics, while multi-exposure image fusion and image super-resolution are two independent image processing techniques. This means a captured image must undergo multi-exposure fusion processing and super-resolution processing one after the other, and the order in which the two techniques are applied can affect the final result. Existing image processing approaches are therefore not only cumbersome but also yield unsatisfactory results.

Summary of the Invention

In order to solve the above technical problems, or at least partially solve them, the present disclosure provides a neural network training method, an image fusion method, an apparatus, a device and a medium.

In a first aspect, the present disclosure provides a neural network training method. The neural network includes a first sub-network and a second sub-network with the same network structure, and each sub-network contains a primary feature extraction module, a high-level feature extraction module and a coupled feedback module. The method includes:

acquiring an underexposed low-resolution image and an overexposed low-resolution image;

inputting the underexposed low-resolution image and the overexposed low-resolution image into the primary feature extraction modules of the first sub-network and the second sub-network, respectively, to generate underexposed low-level features and overexposed low-level features;

inputting the underexposed low-level features and the overexposed low-level features into the high-level feature extraction modules of the first sub-network and the second sub-network, respectively, to generate underexposed high-level features and overexposed high-level features;

inputting the underexposed low-level features, the underexposed high-level features and the overexposed high-level features into the coupled feedback module of the first sub-network to generate a coupled feedback result corresponding to the first sub-network;

inputting the overexposed low-level features, the overexposed high-level features and the underexposed high-level features into the coupled feedback module of the second sub-network to generate a coupled feedback result corresponding to the second sub-network;

adjusting the parameters of the neural network based on the underexposed low-resolution image, the underexposed high-level features and the coupled feedback result corresponding to the first sub-network, as well as the overexposed low-resolution image, the overexposed high-level features and the coupled feedback result corresponding to the second sub-network.
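To make the data flow of these steps concrete, the following is a minimal PyTorch-style sketch of one training iteration. The module handles (net1.feb, net1.srb, net1.cfb), the loss_fn signature and the tensor shapes are assumptions introduced for illustration only, not the reference implementation of the disclosure.

```python
# Sketch of one training iteration; assumes net1/net2 expose .feb/.srb/.cfb
# sub-modules and that loss_fn implements the loss described further below.
def training_step(net1, net2, I_u, I_o, refs, optimizer, loss_fn):
    # Steps 1-2: acquire images and extract low-level (primary) features
    F_u0 = net1.feb(I_u)          # underexposed LR image -> low-level features
    F_o0 = net2.feb(I_o)          # overexposed LR image  -> low-level features
    # Step 3: high-level features from the super-resolution blocks
    G_u = net1.srb(F_u0)
    G_o = net2.srb(F_o0)
    # Steps 4-5: coupled feedback; each branch also receives the *other*
    # branch's high-level features for cross fusion
    cf_u = net1.cfb(F_u0, G_u, G_o)
    cf_o = net2.cfb(F_o0, G_o, G_u)
    # Step 6: adjust parameters from both branches' outputs and references
    loss = loss_fn(I_u, I_o, G_u, G_o, cf_u, cf_o, refs)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```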

In some embodiments, the neural network contains a plurality of the coupled feedback modules, and the coupled feedback modules do not share model parameters.

In some embodiments, the coupled feedback modules are processed serially;

inputting the underexposed low-level features, the underexposed high-level features and the overexposed high-level features into the coupled feedback module of the first sub-network to generate the coupled feedback result corresponding to the first sub-network then includes:

inputting the underexposed low-level features, the underexposed high-level features and the overexposed high-level features into the first coupled feedback module of the first sub-network to generate a coupled feedback result corresponding to the first sub-network;

for any subsequent coupled feedback module of the first sub-network other than the first coupled feedback module, inputting the underexposed low-level features, the coupled feedback result of the immediately preceding coupled feedback module, and the coupled feedback result of the coupled feedback module of the second sub-network corresponding to that preceding module, into the subsequent coupled feedback module, to generate the coupled feedback result corresponding to the first sub-network;

and inputting the overexposed low-level features, the overexposed high-level features and the underexposed high-level features into the coupled feedback module of the second sub-network to generate the coupled feedback result corresponding to the second sub-network includes:

inputting the overexposed low-level features, the overexposed high-level features and the underexposed high-level features into the first coupled feedback module of the second sub-network to generate a coupled feedback result corresponding to the second sub-network;

for any subsequent coupled feedback module of the second sub-network other than the first coupled feedback module, inputting the overexposed low-level features, the coupled feedback result of the immediately preceding coupled feedback module, and the coupled feedback result of the coupled feedback module of the first sub-network corresponding to that preceding module, into the subsequent coupled feedback module, to generate the coupled feedback result corresponding to the second sub-network.

In some embodiments, the coupled feedback module contains at least two concatenation sub-modules and at least two feature map groups, where each feature map group contains a filter, a deconvolution layer and a convolution layer;

the first concatenation sub-module is located before all the feature map groups;

each concatenation sub-module other than the first is located between two adjacent feature map groups, and no two of these other concatenation sub-modules are located in the same position.

In some embodiments, adjusting the parameters of the neural network based on the underexposed low-resolution image, the underexposed high-level features and the coupled feedback result corresponding to the first sub-network, as well as the overexposed low-resolution image, the overexposed high-level features and the coupled feedback result corresponding to the second sub-network, includes:

performing an upsampling operation on the underexposed low-resolution image and on the overexposed low-resolution image, respectively;

adding the image corresponding to the underexposed high-level features and the image corresponding to the coupled feedback result of the first sub-network, respectively, to the upsampled underexposed low-resolution image, to generate an underexposed high-resolution image and a fused-exposure high-resolution image corresponding to the first sub-network;

adding the image corresponding to the overexposed high-level features and the image corresponding to the coupled feedback result of the second sub-network, respectively, to the upsampled overexposed low-resolution image, to generate an overexposed high-resolution image and a fused-exposure high-resolution image corresponding to the second sub-network;

adjusting the parameters of the neural network based on the underexposed high-resolution image, the fused-exposure high-resolution image corresponding to the first sub-network, the overexposed high-resolution image and the fused-exposure high-resolution image corresponding to the second sub-network.

In some embodiments, the parameters of the neural network are adjusted through a loss function of the following form:

$$L_{total} = \lambda_o L_{SRB}^{o} + \lambda_u L_{SRB}^{u} + \lambda_f \left( L_{CFB}^{u} + L_{CFB}^{o} \right)$$

$$L_{SRB}^{u} = L_{MS}\big(I_{HR}^{u}, \hat{I}_{HR}^{u}\big), \qquad L_{SRB}^{o} = L_{MS}\big(I_{HR}^{o}, \hat{I}_{HR}^{o}\big)$$

$$L_{CFB}^{u} = \sum_{t=1}^{T} L_{MS}\big(I_{F}^{u,t}, I_{gt}\big), \qquad L_{CFB}^{o} = \sum_{t=1}^{T} L_{MS}\big(I_{F}^{o,t}, I_{gt}\big)$$

where $L_{total}$ denotes the total loss value; $\lambda_o$, $\lambda_u$ and $\lambda_f$ denote the weights of the respective parts of the loss; $L_{SRB}^{u}$ and $L_{CFB}^{u}$ denote the loss values corresponding to the high-level feature extraction module and the coupled feedback module of the first sub-network; $L_{SRB}^{o}$ and $L_{CFB}^{o}$ denote those of the second sub-network; $L_{MS}$ denotes the loss value between two images determined from their structural similarity index; $I_{HR}^{o}$ and $\hat{I}_{HR}^{o}$ denote the overexposed high-resolution image and the overexposed high-resolution reference image; $I_{HR}^{u}$ and $\hat{I}_{HR}^{u}$ denote the underexposed high-resolution image and the underexposed high-resolution reference image; $I_{F}^{u,t}$, $I_{F}^{o,t}$ and $I_{gt}$ denote the fused-exposure high-resolution image produced by the $t$-th coupled feedback module of the first sub-network, that produced by the $t$-th coupled feedback module of the second sub-network, and the fused-exposure high-resolution reference image; and $T$ denotes the number of coupled feedback modules.
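A compact sketch of this combination follows. Since the exact structural-similarity loss is not spelled out here, $L_{MS}$ is passed in as a callable (for instance 1 minus MS-SSIM); all argument names are assumptions for illustration.

```python
def total_loss(l_ms, lam_o, lam_u, lam_f,
               I_hr_u, ref_u, I_hr_o, ref_o, fused_u, fused_o, I_gt):
    # l_ms: callable computing L_MS between two image batches (assumed).
    # fused_u / fused_o: lists of the T fused-exposure HR images produced by
    # the T coupled feedback modules of the first / second sub-network.
    loss_srb = lam_u * l_ms(I_hr_u, ref_u) + lam_o * l_ms(I_hr_o, ref_o)
    loss_cfb = sum(l_ms(f_u, I_gt) + l_ms(f_o, I_gt)
                   for f_u, f_o in zip(fused_u, fused_o))
    return loss_srb + lam_f * loss_cfb
```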

In a second aspect, the present disclosure provides an image fusion method, which includes:

acquiring an underexposed low-resolution image and an overexposed low-resolution image;

inputting the underexposed low-resolution image and the overexposed low-resolution image into a pre-trained neural network to generate a first fused-exposure high-resolution image and a second fused-exposure high-resolution image, where the neural network is trained by the neural network training method of any embodiment of the present disclosure;

generating an image fusion result based on the first fused-exposure high-resolution image and the second fused-exposure high-resolution image.

In some embodiments, generating the image fusion result based on the first fused-exposure high-resolution image and the second fused-exposure high-resolution image includes:

performing a weighted summation of the first fused-exposure high-resolution image and the second fused-exposure high-resolution image using a first weight and a second weight, respectively, to generate the image fusion result.
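As a sketch, the weighted summation could look as follows; the equal default weights are an assumption, since the disclosure leaves the two weights unspecified:

```python
def fuse_images(I_f1, I_f2, w1=0.5, w2=0.5):
    # Weighted sum of the two fused-exposure high-resolution images.
    # w1 / w2 are the first and second weights; 0.5/0.5 is an assumed default.
    return w1 * I_f1 + w2 * I_f2
```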

In a third aspect, the present disclosure provides a neural network training apparatus. The neural network includes a first sub-network and a second sub-network with the same network structure, and each sub-network contains a primary feature extraction module, a high-level feature extraction module and a coupled feedback module. The apparatus includes:

an image acquisition unit, configured to acquire an underexposed low-resolution image and an overexposed low-resolution image;

a low-level feature generation unit, configured to input the underexposed low-resolution image and the overexposed low-resolution image into the primary feature extraction modules of the first sub-network and the second sub-network, respectively, to generate underexposed low-level features and overexposed low-level features;

a high-level feature generation unit, configured to input the underexposed low-level features and the overexposed low-level features into the high-level feature extraction modules of the first sub-network and the second sub-network, respectively, to generate underexposed high-level features and overexposed high-level features;

a first coupled feedback result generation unit, configured to input the underexposed low-level features, the underexposed high-level features and the overexposed high-level features into the coupled feedback module of the first sub-network to generate a coupled feedback result corresponding to the first sub-network;

a second coupled feedback result generation unit, configured to input the overexposed low-level features, the overexposed high-level features and the underexposed high-level features into the coupled feedback module of the second sub-network to generate a coupled feedback result corresponding to the second sub-network;

a parameter adjustment unit, configured to adjust the parameters of the neural network based on the underexposed low-resolution image, the underexposed high-level features and the coupled feedback result corresponding to the first sub-network, as well as the overexposed low-resolution image, the overexposed high-level features and the coupled feedback result corresponding to the second sub-network.

In some embodiments, the neural network contains multiple coupled feedback modules, and the coupled feedback modules do not share model parameters.

In some embodiments, the coupled feedback modules are processed serially;

correspondingly, the first coupled feedback result generation unit is specifically configured to:

input the underexposed low-level features, the underexposed high-level features and the overexposed high-level features into the first coupled feedback module of the first sub-network to generate a coupled feedback result corresponding to the first sub-network;

for any subsequent coupled feedback module of the first sub-network other than the first coupled feedback module, input the underexposed low-level features, the coupled feedback result of the immediately preceding coupled feedback module, and the coupled feedback result of the coupled feedback module of the second sub-network corresponding to that preceding module, into the subsequent coupled feedback module, to generate the coupled feedback result corresponding to the first sub-network;

correspondingly, the second coupled feedback result generation unit is specifically configured to:

input the overexposed low-level features, the overexposed high-level features and the underexposed high-level features into the first coupled feedback module of the second sub-network to generate a coupled feedback result corresponding to the second sub-network;

for any subsequent coupled feedback module of the second sub-network other than the first coupled feedback module, input the overexposed low-level features, the coupled feedback result of the immediately preceding coupled feedback module, and the coupled feedback result of the coupled feedback module of the first sub-network corresponding to that preceding module, into the subsequent coupled feedback module, to generate the coupled feedback result corresponding to the second sub-network.

In some embodiments, the coupled feedback module contains at least two concatenation sub-modules and at least two feature map groups, where each feature map group contains a filter, a deconvolution layer and a convolution layer;

the first concatenation sub-module is located before all the feature map groups;

each concatenation sub-module other than the first is located between two adjacent feature map groups, and no two of these other concatenation sub-modules are located in the same position.

In some embodiments, the parameter adjustment unit is specifically configured to:

perform an upsampling operation on the underexposed low-resolution image and on the overexposed low-resolution image, respectively;

add the image corresponding to the underexposed high-level features and the image corresponding to the coupled feedback result of the first sub-network, respectively, to the upsampled underexposed low-resolution image, to generate an underexposed high-resolution image and a fused-exposure high-resolution image corresponding to the first sub-network;

add the image corresponding to the overexposed high-level features and the image corresponding to the coupled feedback result of the second sub-network, respectively, to the upsampled overexposed low-resolution image, to generate an overexposed high-resolution image and a fused-exposure high-resolution image corresponding to the second sub-network;

adjust the parameters of the neural network based on the underexposed high-resolution image, the fused-exposure high-resolution image corresponding to the first sub-network, the overexposed high-resolution image and the fused-exposure high-resolution image corresponding to the second sub-network.

Further, the parameter adjustment unit is specifically configured to:

adjust the parameters of the neural network through a loss function of the following form:

$$L_{total} = \lambda_o L_{SRB}^{o} + \lambda_u L_{SRB}^{u} + \lambda_f \left( L_{CFB}^{u} + L_{CFB}^{o} \right)$$

$$L_{SRB}^{u} = L_{MS}\big(I_{HR}^{u}, \hat{I}_{HR}^{u}\big), \qquad L_{SRB}^{o} = L_{MS}\big(I_{HR}^{o}, \hat{I}_{HR}^{o}\big)$$

$$L_{CFB}^{u} = \sum_{t=1}^{T} L_{MS}\big(I_{F}^{u,t}, I_{gt}\big), \qquad L_{CFB}^{o} = \sum_{t=1}^{T} L_{MS}\big(I_{F}^{o,t}, I_{gt}\big)$$

where $L_{total}$ denotes the total loss value; $\lambda_o$, $\lambda_u$ and $\lambda_f$ denote the weights of the respective parts of the loss; $L_{SRB}^{u}$ and $L_{CFB}^{u}$ denote the loss values corresponding to the high-level feature extraction module and the coupled feedback module of the first sub-network; $L_{SRB}^{o}$ and $L_{CFB}^{o}$ denote those of the second sub-network; $L_{MS}$ denotes the loss value between two images determined from their structural similarity index; $I_{HR}^{o}$ and $\hat{I}_{HR}^{o}$ denote the overexposed high-resolution image and the overexposed high-resolution reference image; $I_{HR}^{u}$ and $\hat{I}_{HR}^{u}$ denote the underexposed high-resolution image and the underexposed high-resolution reference image; $I_{F}^{u,t}$, $I_{F}^{o,t}$ and $I_{gt}$ denote the fused-exposure high-resolution image produced by the $t$-th coupled feedback module of the first sub-network, that produced by the $t$-th coupled feedback module of the second sub-network, and the fused-exposure high-resolution reference image; and $T$ denotes the number of coupled feedback modules.

In a fourth aspect, the present disclosure provides an image fusion apparatus, which includes:

an image acquisition unit, configured to acquire an underexposed low-resolution image and an overexposed low-resolution image;

a fused-exposure high-resolution image generation unit, configured to input the underexposed low-resolution image and the overexposed low-resolution image into a pre-trained neural network to generate a first fused-exposure high-resolution image and a second fused-exposure high-resolution image, where the neural network is trained by any embodiment of the neural network training method of the present disclosure;

an image fusion result generation unit, configured to generate an image fusion result based on the first fused-exposure high-resolution image and the second fused-exposure high-resolution image.

In some embodiments, the image fusion result generation unit is specifically configured to:

perform a weighted summation of the first fused-exposure high-resolution image and the second fused-exposure high-resolution image using a first weight and a second weight, respectively, to generate the image fusion result.

In a fifth aspect, the present disclosure provides an electronic device, which includes:

one or more processors;

a storage apparatus, configured to store one or more programs,

where, when the one or more programs are executed by the one or more processors, the one or more processors implement any embodiment of the above neural network training method or image fusion method.

In a sixth aspect, the present disclosure provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements any embodiment of the above neural network training method or image fusion method.

In the technical solution provided by the embodiments of the present disclosure, the neural network is designed to include a first sub-network and a second sub-network with the same network structure, each containing a primary feature extraction module, a high-level feature extraction module and a coupled feedback module. The primary feature extraction module extracts the low-level features of the underexposed and overexposed low-resolution images; the high-level feature extraction module further extracts high-level features from the respective low-level features, preliminarily mapping the low-resolution images to high-resolution features. The coupled feedback module cross-fuses the low-level and high-level features corresponding to the underexposed and overexposed low-resolution images, realizing multi-exposure fusion of the overexposed and underexposed images while further increasing resolution, so that a single image with both high resolution and high dynamic range is obtained. Performing multi-exposure fusion and super-resolution simultaneously not only simplifies the processing pipeline for captured images and increases processing speed, but also exploits the complementarity between multi-exposure fusion and super-resolution to further improve image processing accuracy.

Brief Description of the Drawings

The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its principles.

To describe the technical solutions of the embodiments of the present disclosure or of the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, a person of ordinary skill in the art can derive other drawings from these drawings without creative effort.

FIG. 1 is a network architecture diagram of a neural network provided by an embodiment of the present disclosure;

FIG. 2 is a network architecture diagram of a high-level feature extraction module of a neural network provided by an embodiment of the present disclosure;

FIG. 3 is a network architecture diagram of a coupled feedback module of a neural network provided by an embodiment of the present disclosure;

FIG. 4 is a network architecture diagram of a neural network used for neural network training provided by an embodiment of the present disclosure;

FIG. 5 is a flowchart of a neural network training method provided by an embodiment of the present disclosure;

FIG. 6 is a flowchart of an image fusion method provided by an embodiment of the present disclosure;

FIG. 7 is a schematic structural diagram of a neural network training apparatus provided by an embodiment of the present disclosure;

FIG. 8 is a schematic structural diagram of an image fusion apparatus provided by an embodiment of the present disclosure;

FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.

Detailed Description

In order that the above objects, features and advantages of the present disclosure can be understood more clearly, the solutions of the present disclosure are described in further detail below. It should be noted that, unless they conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with each other.

Many specific details are set forth in the following description to facilitate a full understanding of the present disclosure, but the present disclosure can also be implemented in ways other than those described here. Obviously, the embodiments in this specification are only a part, rather than all, of the embodiments of the present disclosure.

The neural network training solution provided by the embodiments of the present disclosure can be applied in scenarios where images with low dynamic range and low resolution are fused, and is especially suitable for fusing an overexposed low-resolution image (overexposed LR image for short) and an underexposed low-resolution image (underexposed LR image for short).

FIG. 1 is a block diagram of the network structure of a neural network for image fusion provided by an embodiment of the present application. As shown in FIG. 1, the neural network includes a first sub-network 110 and a second sub-network 120 that have the same network structure but do not share model parameters. The first sub-network 110 contains an initial feature extraction block (FEB) 111, a high-level feature extraction module, namely a super-resolution block (SRB), 112 and a coupled feedback block (CFB) 113. The second sub-network 120 contains a primary feature extraction module 121, a high-level feature extraction module 122 and a coupled feedback module 123. The numbers of coupled feedback modules 113 in the first sub-network 110 and of coupled feedback modules 123 in the second sub-network 120 are the same, and this number is at least one. The inputs of the neural network are an overexposed low-resolution image and an underexposed low-resolution image; the two input images only need to be fed into different sub-networks, one each, and no particular input assignment is required. The embodiments of the present disclosure are described with the underexposed low-resolution image input into the first sub-network 110 and the overexposed low-resolution image input into the second sub-network 120. The FEB and SRB extract high-level features from the input image, which helps enhance image resolution; the CFB, located after the SRB, absorbs the features learned by the SRBs of the two sub-networks, thereby fusing an image with both high resolution (HR) and high dynamic range (HDR).

The primary feature extraction module 111 and the primary feature extraction module 121 extract the basic features of the input underexposed low-resolution image $I_{LR}^{u}$ and overexposed low-resolution image $I_{LR}^{o}$, respectively, obtaining the corresponding underexposed low-level features $F_{0}^{u}$ and overexposed low-level features $F_{0}^{o}$. The primary feature extraction process is expressed as $F_{0}^{u} = f_{FEB}(I_{LR}^{u})$ and $F_{0}^{o} = f_{FEB}(I_{LR}^{o})$, where $f_{FEB}(\cdot)$ denotes the operation of the primary feature extraction module. In some embodiments, $f_{FEB}(\cdot)$ consists of a series of convolutional layers with 3×3 and 1×1 kernels.
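A minimal sketch of such a module is shown below, assuming PyTorch; the channel widths are hypothetical, and "a series of 3×3 and 1×1 convolutions" is read here as one 3×3 layer followed by one 1×1 layer.

```python
import torch.nn as nn

class FEB(nn.Module):
    """Primary feature extraction: a 3x3 conv followed by a 1x1 conv
    (one possible reading of 'a series of 3x3 and 1x1 convolutions')."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 4 * feat_ch, kernel_size=3, padding=1),
            nn.PReLU(),
            nn.Conv2d(4 * feat_ch, feat_ch, kernel_size=1),
            nn.PReLU(),
        )

    def forward(self, x):          # x: (B, 3, H, W) LR image
        return self.body(x)        # F_0: (B, feat_ch, H, W)
```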

The high-level feature extraction module 112 and the high-level feature extraction module 122 perform further feature extraction on the input underexposed low-level features $F_{0}^{u}$ and overexposed low-level features $F_{0}^{o}$, respectively, to extract the high-level features of the underexposed low-resolution image $I_{LR}^{u}$ and of the overexposed low-resolution image $I_{LR}^{o}$, obtaining underexposed high-level features $G_{u}$ and overexposed high-level features $G_{o}$. Because high-level features contain higher-level semantic information, they better represent small, complex objects in the image and thus enrich its details, so $G_{u}$ and $G_{o}$ can increase the resolution of the corresponding images and achieve a super-resolution effect. In some embodiments, referring to FIG. 2, the feedback module of the SRFBN network is used as the main structure of the high-level feature extraction module 112 (122). It contains several feature map groups 210 connected consecutively with dense connections. Each feature map group 210 contains at least one upsampling operation (Deconv) and one downsampling operation (Conv). Through repeated up- and downsampling, and while keeping the feature size unchanged, higher-level features $G$ are gradually extracted from the low-level features $F_{in}$ to improve the resolution of the image. The high-level feature extraction process is expressed as $G_{u} = f_{SRB}(F_{0}^{u})$ and $G_{o} = f_{SRB}(F_{0}^{o})$, where $f_{SRB}(\cdot)$ denotes the operations in the high-level feature extraction module.
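The following sketch illustrates one possible reading of this structure: densely connected feature map groups, each performing one Deconv upsampling and one Conv downsampling so that the feature size is preserved. The kernel/stride/padding values and the group count are assumptions, not the SRFBN reference settings.

```python
import torch
import torch.nn as nn

class ProjectionGroup(nn.Module):
    """One feature map group: upsample (Deconv) then downsample (Conv),
    so the output keeps the spatial size of the input."""
    def __init__(self, ch=64, scale=4):
        super().__init__()
        k, s, p = scale + 4, scale, 2   # assumed SRFBN-style settings
        self.up = nn.ConvTranspose2d(ch, ch, kernel_size=k, stride=s, padding=p)
        self.down = nn.Conv2d(ch, ch, kernel_size=k, stride=s, padding=p)

    def forward(self, x):
        h = torch.relu(self.up(x))      # HR feature
        return torch.relu(self.down(h)) # back to LR size

class SRB(nn.Module):
    """High-level feature extraction: densely connected projection groups;
    each group consumes a 1x1-compressed concat of all previous outputs."""
    def __init__(self, ch=64, n_groups=6, scale=4):
        super().__init__()
        self.groups = nn.ModuleList(
            ProjectionGroup(ch, scale) for _ in range(n_groups))
        self.compress = nn.ModuleList(
            nn.Conv2d(ch * (i + 1), ch, kernel_size=1) for i in range(n_groups))

    def forward(self, F_in):
        outs = [F_in]
        for g, c in zip(self.groups, self.compress):
            outs.append(g(c(torch.cat(outs, dim=1))))   # dense connection
        return outs[-1]                                  # high-level feature G
```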

The coupled feedback module CFB is the core component of the neural network; its purpose is to achieve super-resolution and multi-exposure image fusion simultaneously through a complex network structure. The coupled feedback module CFB has three inputs: the low-level features and high-level features from its own sub-network, and the high-level features from the other sub-network. The two inputs from the same sub-network serve to further increase image resolution and strengthen the super-resolution effect; the input from the other sub-network serves to improve the fusion effect and realize multi-exposure image fusion.

In some embodiments, each sub-network of the neural network contains one coupled feedback module CFB. The coupled feedback module 113 then fuses the input underexposed low-level features $F_{0}^{u}$, underexposed high-level features $G_{u}$ and overexposed high-level features $G_{o}$ to generate the coupled feedback result corresponding to the first sub-network 110. The coupled feedback module 123 fuses the input overexposed low-level features $F_{0}^{o}$, overexposed high-level features $G_{o}$ and underexposed high-level features $G_{u}$ to generate the coupled feedback result corresponding to the second sub-network 120. Both coupled feedback results are image features that realize multi-exposure fusion and super-resolution at the same time.

In some embodiments, each sub-network of the neural network contains multiple coupled feedback modules CFB that are processed in parallel. In this case the CFBs of the same sub-network receive the same input data, and their output coupled feedback results are further fused (e.g. by weighted summation) into a single coupled feedback result.

In some embodiments, each sub-network of the neural network contains multiple coupled feedback modules CFB that are connected serially in a recurrent fashion, as shown in FIG. 1. Assuming each sub-network has $T$ CFBs, the coupled feedback result of the first sub-network 110 is generated as follows. The underexposed low-level features $F_{0}^{u}$, the underexposed high-level features $G_{u}$ and the overexposed high-level features $G_{o}$ are input into the first coupled feedback module 113 of the first sub-network 110, generating the coupled feedback result $F_{CF}^{u,1}$. For any subsequent coupled feedback module 113 (with index $t$) of the first sub-network, the underexposed low-level features $F_{0}^{u}$, the coupled feedback result $F_{CF}^{u,t-1}$ of the immediately preceding coupled feedback module (with index $t-1$), and the coupled feedback result $F_{CF}^{o,t-1}$ of the corresponding coupled feedback module in the second sub-network 120 are input into that module, generating the coupled feedback result $F_{CF}^{u,t}$. Following this process through all CFBs yields the final coupled feedback result $F_{CF}^{u,T}$ of the first sub-network 110. Likewise, the coupled feedback result of the second sub-network 120 is generated as follows. The overexposed low-level features $F_{0}^{o}$, the overexposed high-level features $G_{o}$ and the underexposed high-level features $G_{u}$ are input into the first coupled feedback module 123 of the second sub-network 120, generating the coupled feedback result $F_{CF}^{o,1}$. For any subsequent coupled feedback module 123 (with index $t$), the overexposed low-level features $F_{0}^{o}$, the coupled feedback result $F_{CF}^{o,t-1}$ of the immediately preceding coupled feedback module, and the coupled feedback result $F_{CF}^{u,t-1}$ of the corresponding coupled feedback module in the first sub-network 110 are input into that module, generating $F_{CF}^{o,t}$; after all CFBs, the final coupled feedback result $F_{CF}^{o,T}$ of the second sub-network 120 is obtained. This process is expressed as $F_{CF}^{u,t} = f_{CFB}(F_{0}^{u}, F_{CF}^{u,t-1}, F_{CF}^{o,t-1})$ and $F_{CF}^{o,t} = f_{CFB}(F_{0}^{o}, F_{CF}^{o,t-1}, F_{CF}^{u,t-1})$, where $f_{CFB}(\cdot)$ denotes the operation of the coupled feedback module.
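A sketch of this lock-step recursion over the $T$ CFBs of both sub-networks follows, with the high-level features seeding the first step; the function and argument names are assumptions for illustration.

```python
def coupled_feedback(cfbs_u, cfbs_o, F_u0, F_o0, G_u, G_o):
    """Run the T serially connected CFBs of both sub-networks in lock-step.
    cfbs_u / cfbs_o: lists of T coupled feedback modules (no shared params).
    Returns the per-step feedback results of each branch."""
    res_u, res_o = [], []
    # t = 1: the high-level features play the role of the previous results
    prev_u, prev_o = G_u, G_o
    for cfb_u, cfb_o in zip(cfbs_u, cfbs_o):
        out_u = cfb_u(F_u0, prev_u, prev_o)  # own low-level + own/other feedback
        out_o = cfb_o(F_o0, prev_o, prev_u)
        res_u.append(out_u)
        res_o.append(out_o)
        prev_u, prev_o = out_u, out_o
    return res_u, res_o
```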

In some embodiments, the numbers of coupled feedback modules 113 and of coupled feedback modules 123 are both three, which better balances the computation speed and model accuracy of the neural network.

In some embodiments, every coupled feedback module CFB has the same internal network structure but does not share model parameters. Referring to FIG. 3, the $t$-th coupled feedback module 123 of the second sub-network 120 is taken as an example to illustrate the internal structure and the interconnection with other modules. The coupled feedback module 123 contains at least two concatenation sub-modules 310 and at least two feature map groups 320. As in FIG. 2, the feature map groups 320 are connected consecutively with dense connections, and each feature map group 320 contains a filter, a deconvolution layer Deconv and a convolution layer Conv, realizing repeated up- and downsampling. The first concatenation sub-module 310 is located before all the feature map groups 320; each concatenation sub-module 310 other than the first is located between two adjacent feature map groups 320, and no two of these other concatenation sub-modules 310 are located in the same position.

The $t$-th CFB has three inputs: the overexposed low-level features $F_{0}^{o}$, the coupled feedback result $F_{CF}^{o,t-1}$ extracted by the $(t-1)$-th CFB, and the coupled feedback result $F_{CF}^{u,t-1}$ extracted by the $(t-1)$-th CFB of the first sub-network 110. The feedback feature $F_{CF}^{o,t-1}$ is feedback information obtained from the same sub-network, so its main function is to correct the overexposed low-level features $F_{0}^{o}$ and further improve the super-resolution effect; the feedback feature $F_{CF}^{u,t-1}$ is feedback information obtained from the other sub-network, and its main function is to bring in complementary information to improve the effect of multi-exposure image fusion.

The $t$-th coupled feedback module 123 processes its inputs as follows. First, the concatenation sub-module 310 concatenates the three input features along the channel dimension. Then the concatenated result is fused with a series of 1×1 filters: $F_{in}^{o,t} = M_{in}\big([F_{0}^{o}, F_{CF}^{o,t-1}, F_{CF}^{u,t-1}]\big)$, where $F_{in}^{o,t}$ denotes the low-resolution feature obtained by filtering and fusing the three input features, $M_{in}$ denotes a series of 1×1 filters, and $[\cdot]$ denotes concatenation of the enclosed elements. Afterwards, starting from the fused feature $F_{in}^{o,t}$, a series of feature map groups 320 repeatedly performs the upsampling operation Deconv and the downsampling operation Conv; each upsampling yields a high-resolution feature $H_{n}^{t}$ and each downsampling a low-resolution feature $L_{n}^{t}$, so that increasingly effective high-level features are extracted progressively.

During the operation of the feature map groups 320, recall that the main function of $F_{CF}^{u,t-1}$ is to bring in complementary information to improve multi-exposure fusion. As the number of feature map groups increases, the module's internal memory of the feature $F_{CF}^{u,t-1}$ gradually fades, its influence steadily declines, and the subsequent fusion deteriorates. Therefore, in this embodiment, to strengthen the influence of $F_{CF}^{u,t-1}$ on the network, besides using $F_{CF}^{u,t-1}$ as an input of each CFB, it is also injected between feature map groups 320 to re-activate the CFB module's memory of it; that is, concatenation sub-modules 310 are added between the feature map groups of the CFB, and the inputs of every added concatenation sub-module 310 include $F_{CF}^{u,t-1}$. In a specific implementation, at least two concatenation sub-modules 310 are provided in each CFB, and the concatenation sub-modules other than the first are placed between different feature map groups 320. If there is no requirement on the running speed of the neural network, more than two concatenation sub-modules 310 can be provided; a concatenation sub-module 310 can even be added between every two feature map groups 320, which improves the fusion effect to a greater extent. If both the running speed and the accuracy of the neural network matter, then, to balance speed and accuracy, only two concatenation sub-modules 310 may be provided, with the second concatenation sub-module 310 placed in the middle of the feature map groups 320. For example, assuming the total number of feature map groups is $N$, the feedback features $L_{\lfloor N/2 \rfloor}^{t}$ and $F_{CF}^{u,t-1}$ are concatenated to form a new low-resolution (LR) feature map $\tilde{L}_{\lfloor N/2 \rfloor}^{t} = \big[L_{\lfloor N/2 \rfloor}^{t}, F_{CF}^{u,t-1}\big]$, where $\lfloor \cdot \rfloor$ denotes the floor (rounding-down) operation. This new low-resolution feature map $\tilde{L}_{\lfloor N/2 \rfloor}^{t}$ replaces $L_{\lfloor N/2 \rfloor}^{t}$ as the input feature of the subsequent feature map group.

Finally, after the operations of the N feature map groups 320, the LR feature maps of all the feature map groups 320 are aggregated and fused by a series of 1×1 filters to obtain the final output of this CFB:

F^t = M_out([L_1^t, L_2^t, …, L_N^t])

where M_out(·) denotes the operation of convolution with a series of 1×1 filters.
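Combining the steps above, the t-th CFB can be sketched as below, building on the FeatureMapGroup sketch. This is a sketch under stated assumptions, not the disclosure's exact implementation: the class and parameter names are illustrative, the 1×1 fusion after the mid-way re-injection (m_mid) is an added detail to keep channel counts consistent, and the re-injection position follows the ⌊N/2⌋ placement described above.

```python
import torch
import torch.nn as nn

class CoupledFeedbackBlock(nn.Module):
    """Sketch of one CFB: concatenate the three inputs, fuse with 1x1
    filters (M_in), run N feature map groups with a mid-way re-injection
    of the cross-network feedback, then fuse all LR maps (M_out)."""
    def __init__(self, channels: int = 32, n_groups: int = 6, scale: int = 4):
        super().__init__()
        self.m_in = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.groups = nn.ModuleList(
            [FeatureMapGroup(channels, scale) for _ in range(n_groups)])
        self.mid = n_groups // 2  # position of the second concatenation sub-module
        # after re-injection the LR map has extra channels; fuse them back (assumption)
        self.m_mid = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.m_out = nn.Conv2d(n_groups * channels, channels, kernel_size=1)

    def forward(self, f_in, f_prev, f_prev_other):
        # first concatenation sub-module + 1x1 fusion: L_0^t
        l = self.m_in(torch.cat([f_in, f_prev, f_prev_other], dim=1))
        lr_maps = []
        for g, grp in enumerate(self.groups):
            _, l = grp(l)
            if g + 1 == self.mid:
                # re-activate the memory of the cross-network feedback F'^{t-1}
                l = self.m_mid(torch.cat([l, f_prev_other], dim=1))
            lr_maps.append(l)
        return self.m_out(torch.cat(lr_maps, dim=1))  # F^t
```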

In some embodiments, the first sub-network 110 and the second sub-network 120 respectively include an image reconstruction module (reconstruction block, REC) 114 and an image reconstruction module 124, each used to reconstruct the coupling feedback result (feature) produced by at least one CFB into an image; multiple CFBs thus yield multiple reconstructed images. On this basis, the original input images of the neural network can be fused in further, yielding the first fused-exposure high-resolution image I_{F,u} and the second fused-exposure high-resolution image I_{F,o}; either fused-exposure high-resolution image has both high dynamic range (HDR) and high resolution (HR). It should be noted that, in embodiments where a plurality of CFBs are connected recurrently, every CFB outputs a coupling feedback result; but since the serial feedback processing of successive CFBs progressively improves the effects of image fusion and super-resolution, the coupling feedback result of the last CFB has the best overall quality. Accordingly, when obtaining I_{F,u} and I_{F,o}, the reconstructed image corresponding to the coupling feedback result of the last CFB of each sub-network is used as one of the inputs. In addition, the feature size of the coupling feedback result is larger than the image size of the original input image, so an up-sampling operation such as bicubic interpolation is first used to enlarge the original input image, and the up-sampled result serves as the other input for obtaining the fused-exposure high-resolution image. The above process is formulated as:

I_{F,u} = f_REC(F_u^T) + f_UP(I_u^{LR}) and I_{F,o} = f_REC(F_o^T) + f_UP(I_o^{LR})

where f_UP(·) and f_REC(·) denote the up-sampling operation and the image reconstruction operation, respectively.
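A minimal sketch of this reconstruction step follows, assuming f_REC consists of a deconvolution that enlarges the low-resolution feedback feature plus a convolution to a 3-channel residual image (one plausible realization, not fixed by the disclosure), and using bicubic interpolation for f_UP as stated:

```python
import torch.nn as nn
import torch.nn.functional as F

class ReconstructionBlock(nn.Module):
    """Sketch of REC plus the global skip: I_F = f_REC(F^T) + f_UP(I^LR)."""
    def __init__(self, channels: int = 32, scale: int = 4):
        super().__init__()
        # assumed realization of f_REC: deconv to HR size, then map to RGB
        self.up = nn.ConvTranspose2d(channels, channels, kernel_size=8,
                                     stride=scale, padding=2)
        self.to_rgb = nn.Conv2d(channels, 3, kernel_size=3, padding=1)
        self.scale = scale

    def forward(self, feedback_feat, lr_image):
        residual = self.to_rgb(self.up(feedback_feat))             # f_REC(F^T)
        base = F.interpolate(lr_image, scale_factor=self.scale,
                             mode='bicubic', align_corners=False)  # f_UP(I^LR)
        return residual + base
```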

Based on the above description, the parameter settings of each part of the neural network provided by the embodiments of the present disclosure can be exemplified as follows:

(Table of example parameter settings; rendered as an image in the original publication and not recoverable as text.)

FIG. 4 is a network architecture diagram of a neural network used for neural network training provided by an embodiment of the present disclosure. Building on the architecture of the neural network in FIG. 1, the network contains multi-level features, such as low-level features, high-level features, and at least one coupling feedback result (feature), all of which serve to achieve multi-exposure image fusion and super-resolution simultaneously. Therefore, to ensure the validity of each resulting feature, a hierarchical loss-function constraint is adopted during neural network training. Since the hierarchical loss function requires the images of each layer for its computation, the network architecture used for training in FIG. 4 adds several image-output branches relative to the architecture used for image-fusion prediction in FIG. 1: for example, branches that output the over-exposed high-resolution image I_o^{HR} and the under-exposed high-resolution image I_u^{HR} corresponding to the high-level features, branches that output the fused-exposure high-resolution images I_{F,u}^1, …, I_{F,u}^{T-1} corresponding to the other coupling feedback modules of the first sub-network 110, and branches that output the fused-exposure high-resolution images I_{F,o}^1, …, I_{F,o}^{T-1} corresponding to the other coupling feedback modules of the second sub-network 120.

FIG. 5 is a flowchart of a neural network training method provided by an embodiment of the present disclosure. The method is implemented based on the neural network architecture in FIG. 4; explanations of content identical or corresponding to the above embodiments are not repeated here. The neural network training method provided by the embodiments of the present disclosure may be performed by a neural network training apparatus, which may be implemented in software and/or hardware and integrated into an electronic device with certain computing capabilities, such as a notebook computer, desktop computer, server, or supercomputer. Referring to FIG. 5, the neural network training method specifically includes:

S110. Acquire an under-exposed low-resolution image and an over-exposed low-resolution image.

Specifically, the whole neural network training process requires many rounds of network training, and each round requires one training image group, so the whole process requires multiple training image groups; the training procedure is the same for every group, and only one round of training is described in this embodiment. One training image group contains one under-exposed low-resolution image I_u^{LR} and one over-exposed low-resolution image I_o^{LR}. An under-exposed low-resolution image is an image whose shooting exposure is lower than a first preset exposure threshold and whose image resolution is lower than a preset resolution threshold. An over-exposed low-resolution image is an image whose shooting exposure is higher than a second preset exposure threshold and whose image resolution is lower than the same preset resolution threshold. Here, the first preset exposure threshold is smaller than the second preset exposure threshold, and the first preset exposure threshold, the second preset exposure threshold, and the preset resolution threshold are predetermined exposure and image-resolution values, respectively.

S120. Input the under-exposed low-resolution image and the over-exposed low-resolution image into the initial feature extraction modules of the first sub-network and the second sub-network, respectively, to generate under-exposed low-level features and over-exposed low-level features.

Specifically, the under-exposed low-resolution image I_u^{LR} is input into the initial feature extraction module FEB of the first sub-network to obtain the under-exposed low-level feature F_u^{in}, and the over-exposed low-resolution image I_o^{LR} is input into the initial feature extraction module FEB of the second sub-network to obtain the over-exposed low-level feature F_o^{in}.

S130. Input the under-exposed low-level features and the over-exposed low-level features into the high-level feature extraction modules of the first sub-network and the second sub-network, respectively, to generate under-exposed high-level features and over-exposed high-level features.

Specifically, the under-exposed low-level feature F_u^{in} is input into the high-level feature extraction module SRB of the first sub-network to obtain the under-exposed high-level feature G_u, and the over-exposed low-level feature F_o^{in} is input into the high-level feature extraction module SRB of the second sub-network to obtain the over-exposed high-level feature G_o.

S140. Input the under-exposed low-level feature, the under-exposed high-level feature, and the over-exposed high-level feature into the coupling feedback module of the first sub-network to generate the coupling feedback result corresponding to the first sub-network.

Specifically, taking the under-exposed low-level feature F_u^{in}, the under-exposed high-level feature G_u, and the over-exposed high-level feature G_o as basic input features, at least one coupling feedback result corresponding to the first sub-network is generated through the processing of at least one coupling feedback module CFB of the first sub-network.

In some embodiments, S140 may be implemented as follows: input the under-exposed low-level feature, the under-exposed high-level feature, and the over-exposed high-level feature into the first coupling feedback module of the first sub-network to generate the corresponding coupling feedback result; then, for any subsequent coupling feedback module of the first sub-network other than the first one, input the under-exposed low-level feature, the coupling feedback result of the preceding adjacent coupling feedback module, and the coupling feedback result of the coupling feedback module of the second sub-network corresponding to that preceding adjacent module, into the subsequent coupling feedback module, to generate the coupling feedback result corresponding to the first sub-network. In this embodiment, the neural network contains a plurality of coupling feedback modules CFB (T of them, say), and the coupling feedback modules are processed serially.

Referring to FIG. 4, the process of generating the at least one coupling feedback result corresponding to the first sub-network is as follows. First, for the first CFB, the under-exposed low-level feature F_u^{in}, the under-exposed high-level feature G_u, and the over-exposed high-level feature G_o are input into this CFB, which outputs the first coupling feedback result F_u^1 corresponding to the first sub-network. Then, for a subsequent CFB of the first sub-network other than the first one (say the t-th, with 1 < t ≤ T), the under-exposed low-level feature F_u^{in}, the coupling feedback result F_u^{t-1} of its preceding adjacent CFB (i.e., the (t-1)-th CFB), and the coupling feedback result F_o^{t-1} of the (t-1)-th CFB of the second sub-network are input into the t-th CFB, which outputs the t-th coupling feedback result F_u^t corresponding to the first sub-network. By analogy, through iterative feedback, the coupling feedback result output by any subsequent CFB of the first sub-network can be obtained.

S150. Input the over-exposed low-level feature, the over-exposed high-level feature, and the under-exposed high-level feature into the coupling feedback module of the second sub-network to generate the coupling feedback result corresponding to the second sub-network.

Specifically, taking the over-exposed low-level feature F_o^{in}, the over-exposed high-level feature G_o, and the under-exposed high-level feature G_u as basic input features, at least one coupling feedback result corresponding to the second sub-network is generated through the processing of at least one coupling feedback module CFB of the second sub-network.

In some embodiments, S150 may be implemented as follows: input the over-exposed low-level feature, the over-exposed high-level feature, and the under-exposed high-level feature into the first coupling feedback module of the second sub-network to generate the corresponding coupling feedback result; then, for any subsequent coupling feedback module of the second sub-network other than the first one, input the over-exposed low-level feature, the coupling feedback result of the preceding adjacent coupling feedback module, and the coupling feedback result of the coupling feedback module of the first sub-network corresponding to that preceding adjacent module, into the subsequent coupling feedback module, to generate the coupling feedback result corresponding to the second sub-network. In this embodiment, the neural network contains a plurality of coupling feedback modules CFB (T of them, say), and the coupling feedback modules are processed serially.

Referring to FIG. 4, the process of generating the at least one coupling feedback result corresponding to the second sub-network is as follows. First, for the first CFB, the over-exposed low-level feature F_o^{in}, the over-exposed high-level feature G_o, and the under-exposed high-level feature G_u are input into this CFB, which outputs the first coupling feedback result F_o^1 corresponding to the second sub-network. Then, for a subsequent CFB of the second sub-network other than the first one (say the t-th, with 1 < t ≤ T), the over-exposed low-level feature F_o^{in}, the coupling feedback result F_o^{t-1} of the (t-1)-th CFB, and the coupling feedback result F_u^{t-1} of the (t-1)-th CFB of the first sub-network are input into the t-th CFB, which outputs the t-th coupling feedback result F_o^t corresponding to the second sub-network. By analogy, through iterative feedback, the coupling feedback result output by any subsequent CFB of the second sub-network can be obtained.
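Since S140 and S150 mirror each other, the whole iterative feedback of both sub-networks can be summarized in one loop. The sketch below assumes the CoupledFeedbackBlock sketch given earlier and T coupling feedback blocks per sub-network that do not share parameters:

```python
def coupled_feedback_forward(cfbs_u, cfbs_o, f_u_in, f_o_in, g_u, g_o):
    """Run T coupled feedback steps; cfbs_u / cfbs_o are lists of T
    CoupledFeedbackBlock instances of the first / second sub-network."""
    results_u, results_o = [], []
    f_u, f_o = g_u, g_o          # for t = 1, the SRB features act as feedback
    for cfb_u, cfb_o in zip(cfbs_u, cfbs_o):
        f_u_next = cfb_u(f_u_in, f_u, f_o)  # F_u^t from (F_u^in, F_u^{t-1}, F_o^{t-1})
        f_o_next = cfb_o(f_o_in, f_o, f_u)  # F_o^t from (F_o^in, F_o^{t-1}, F_u^{t-1})
        f_u, f_o = f_u_next, f_o_next
        results_u.append(f_u)
        results_o.append(f_o)
    return results_u, results_o
```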

S160. Adjust the parameters of the neural network based on the under-exposed low-resolution image, the under-exposed high-level feature, and the coupling feedback results of the first sub-network, as well as the over-exposed low-resolution image, the over-exposed high-level feature, and the coupling feedback results of the second sub-network.

Specifically, as explained above, a hierarchical loss function is adopted in the embodiments of the present disclosure to train the neural network. The images output by the layers of the network are therefore determined from the under-exposed low-resolution image I_u^{LR}, the under-exposed high-level feature G_u, and the coupling feedback results F_u^1, …, F_u^T of the first sub-network, as well as the over-exposed low-resolution image I_o^{LR}, the over-exposed high-level feature G_o, and the coupling feedback results F_o^1, …, F_o^T of the second sub-network; these output images are used to compute the loss value of this training round, and that loss value is then used to adjust the model parameters of the neural network.

In some embodiments, S160 may be implemented as follows:

A. Perform an up-sampling operation on the under-exposed low-resolution image and on the over-exposed low-resolution image, respectively.

Specifically, to further improve the image fusion effect, the image output by each layer of the neural network in the embodiments of the present disclosure is fused with the original input image, i.e., the under-exposed low-resolution image I_u^{LR} or the over-exposed low-resolution image I_o^{LR}. However, because both the high-level features and the coupling feedback results add extra super-resolved image detail, their feature sizes are larger than those of the original input images, so the original input images must be up-sampled to enlarge their size. For example, bicubic-interpolation up-sampling is applied to I_u^{LR} and I_o^{LR} respectively, yielding the up-sampled under-exposed low-resolution image f_UP(I_u^{LR}) and the up-sampled over-exposed low-resolution image f_UP(I_o^{LR}).

B. Add the image corresponding to the under-exposed high-level feature and the images corresponding to the coupling feedback results of the first sub-network, respectively, to the up-sampled under-exposed low-resolution image, to generate the under-exposed high-resolution image and the fused-exposure high-resolution images corresponding to the first sub-network.

Specifically, the operation of the image reconstruction module REC is first applied to the under-exposed high-level feature G_u and to each coupling feedback result F_u^t of the first sub-network to obtain the corresponding images. Then, the image corresponding to G_u is added to the up-sampled under-exposed low-resolution image f_UP(I_u^{LR}) to obtain the under-exposed high-resolution image I_u^{HR}, and f_UP(I_u^{LR}) is added to the image corresponding to each coupling feedback result F_u^t of the first sub-network to obtain the fused-exposure high-resolution images I_{F,u}^1, …, I_{F,u}^T corresponding to the first sub-network.

C. Add the image corresponding to the over-exposed high-level feature and the images corresponding to the coupling feedback results of the second sub-network, respectively, to the up-sampled over-exposed low-resolution image, to generate the over-exposed high-resolution image and the fused-exposure high-resolution images corresponding to the second sub-network.

Specifically, the operation of the image reconstruction module REC is first applied to the over-exposed high-level feature G_o and to each coupling feedback result F_o^t of the second sub-network to obtain the corresponding images. Then, the image corresponding to G_o is added to the up-sampled over-exposed low-resolution image f_UP(I_o^{LR}) to obtain the over-exposed high-resolution image I_o^{HR}, and f_UP(I_o^{LR}) is added to the image corresponding to each coupling feedback result F_o^t of the second sub-network to obtain the fused-exposure high-resolution images I_{F,o}^1, …, I_{F,o}^T corresponding to the second sub-network.

D. Adjust the parameters of the neural network based on the under-exposed high-resolution image, the fused-exposure high-resolution images corresponding to the first sub-network, the over-exposed high-resolution image, and the fused-exposure high-resolution images corresponding to the second sub-network.

Specifically, the under-exposed high-resolution image I_u^{HR}, the fused-exposure high-resolution images I_{F,u}^1, …, I_{F,u}^T of the first sub-network, the over-exposed high-resolution image I_o^{HR}, and the fused-exposure high-resolution images I_{F,o}^1, …, I_{F,o}^T of the second sub-network obtained above are used to compute the loss value of this training round, and back-propagation of that loss value is used to adjust the model parameters of the neural network.

In some embodiments, step D may be implemented as adjusting the parameters of the neural network through the loss function shown in the following formula (1):

L_total = λ_o · L_SR^o + λ_u · L_SR^u + λ_F · (L_CF^o + L_CF^u)    (1)

with L_SR^o = L_MS(I_o^{HR}, \hat{I}_o^{HR}), L_SR^u = L_MS(I_u^{HR}, \hat{I}_u^{HR}), L_CF^o = \sum_{t=1}^{T} L_MS(I_{F,o}^t, I_{gt}), and L_CF^u = \sum_{t=1}^{T} L_MS(I_{F,u}^t, I_{gt}),

where L_total denotes the total loss value; λ_o, λ_u, and λ_F denote the weights of the corresponding parts of the loss; L_SR^u and L_CF^u denote the loss values corresponding to the high-level feature extraction module and the coupling feedback modules of the first sub-network, respectively; L_SR^o and L_CF^o denote the loss values corresponding to the high-level feature extraction module and the coupling feedback modules of the second sub-network, respectively; L_MS denotes the loss value between two images determined from the structural similarity index of the images; I_o^{HR} and \hat{I}_o^{HR} denote the over-exposed high-resolution image and the over-exposed high-resolution reference image, respectively; I_u^{HR} and \hat{I}_u^{HR} denote the under-exposed high-resolution image and the under-exposed high-resolution reference image, respectively; I_{F,o}^t, I_{F,u}^t, and I_{gt} denote the fused-exposure high-resolution image of the t-th coupling feedback module of the second sub-network, the fused-exposure high-resolution image of the t-th coupling feedback module of the first sub-network, and the fused-exposure high-resolution reference image, respectively; and T denotes the number of coupling feedback modules.

Each of the above reference images is the ground-truth image corresponding to the respective network output, i.e., the target image that the image generated by the neural network should approach as closely as possible. The loss value L_MS, which measures the image-level structural similarity index (SSIM) between two images X and Y, can be determined as follows:

L_MS(X, Y) = 1 − SSIM(X, Y) = 1 − (2 μ_X μ_Y + C_1)(2 σ_{XY} + C_2) / ((μ_X^2 + μ_Y^2 + C_1)(σ_X^2 + σ_Y^2 + C_2))

where μ_X, μ_Y, σ_X^2, σ_Y^2, and σ_{XY} denote the means, variances, and covariance of X and Y, and C_1 and C_2 are small stabilizing constants.

The loss function in formula (1) can be divided into two parts. The first two loss terms, λ_o L_SR^o and λ_u L_SR^u, are used to guarantee the effectiveness of the high-level feature extraction modules SRB, while the last part, λ_F (L_CF^o + L_CF^u), is used to guarantee the effectiveness of the coupling feedback modules CFB. That is, the first two terms ensure the super-resolution effect, whereas the last part is constructed to ensure the effects of super-resolution and multi-exposure image fusion simultaneously; at the same time, the first two terms are an important foundation for the last part. The whole neural network is trained in an end-to-end manner by minimizing the loss function defined in formula (1).
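A sketch of the hierarchical loss of formula (1) follows. The ms_ssim_loss helper is an assumed stand-in (a global 1 − SSIM; a faithful implementation would use a sliding-window or multi-scale SSIM), and the default weight values are illustrative, not values fixed by the disclosure:

```python
import torch

def ms_ssim_loss(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Assumed stand-in for L_MS: one minus a global (single-window) SSIM."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1 - ssim

def total_loss(i_o_hr, ref_o, i_u_hr, ref_u, fused_o, fused_u, i_gt,
               lam_o=1.0, lam_u=1.0, lam_f=1.0):  # weight values are illustrative
    """Formula (1): two SRB terms plus one L_MS term per CFB output
    (t = 1..T) of each sub-network, all measured against references."""
    loss = lam_o * ms_ssim_loss(i_o_hr, ref_o) + lam_u * ms_ssim_loss(i_u_hr, ref_u)
    for f_o_t, f_u_t in zip(fused_o, fused_u):
        loss = loss + lam_f * (ms_ssim_loss(f_o_t, i_gt) + ms_ssim_loss(f_u_t, i_gt))
    return loss
```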

It should be noted that the execution order of S140 and S150 is not fixed: S140 may be executed before S150, S150 may be executed before S140, or S140 and S150 may be executed in parallel.

In the above technical solutions of the embodiments of the present disclosure, the acquired under-exposed and over-exposed low-resolution images are input into the initial feature extraction modules of the first and second sub-networks to generate under-exposed and over-exposed low-level features; these low-level features are input into the high-level feature extraction modules of the two sub-networks to generate under-exposed and over-exposed high-level features; the under-exposed low-level feature, the under-exposed high-level feature, and the over-exposed high-level feature are input into the coupling feedback module of the first sub-network to generate its coupling feedback result; the over-exposed low-level feature, the over-exposed high-level feature, and the under-exposed high-level feature are input into the coupling feedback module of the second sub-network to generate its coupling feedback result; and the parameters of the neural network are adjusted based on the under-exposed low-resolution image, the under-exposed high-level feature, and the coupling feedback result of the first sub-network, together with the over-exposed low-resolution image, the over-exposed high-level feature, and the coupling feedback result of the second sub-network. This realizes end-to-end training of a neural network that couples multi-exposure fusion and super-resolution, and yields a network whose module parameters are more accurate. Such a network not only simplifies the processing flow of captured images and increases image-processing speed, but also exploits the complementary characteristics of multi-exposure fusion and super-resolution to further improve image-processing accuracy.

FIG. 6 is a flowchart of an image fusion method provided by an embodiment of the present disclosure. The image fusion method is implemented based on the neural network architecture in FIG. 1; explanations of content identical or corresponding to the above embodiments are not repeated here. The image fusion method provided by the embodiments of the present disclosure may be performed by an image fusion apparatus, which may be implemented in software and/or hardware and integrated into an electronic device with certain computing capabilities, such as a mobile phone, tablet computer, notebook computer, desktop computer, server, or supercomputer. Referring to FIG. 6, the image fusion method includes:

S210. Acquire an under-exposed low-resolution image and an over-exposed low-resolution image.

Specifically, two extreme-exposure images of the same shooting scene and the same subject are acquired: an under-exposed low-resolution image and an over-exposed low-resolution image.

S220. Input the under-exposed low-resolution image and the over-exposed low-resolution image into a pre-trained neural network to generate a first fused-exposure high-resolution image and a second fused-exposure high-resolution image.

Specifically, in application, the neural network only needs the under-exposed low-resolution image and the over-exposed low-resolution image as inputs; after the network's processing, two images are output, namely the first fused-exposure high-resolution image I_{F,u} corresponding to the under-exposed low-resolution image I_u^{LR}, and the second fused-exposure high-resolution image I_{F,o} corresponding to the over-exposed low-resolution image I_o^{LR}.

S230. Generate an image fusion result based on the first fused-exposure high-resolution image and the second fused-exposure high-resolution image.

Specifically, although the first fused-exposure high-resolution image I_{F,u} and the second fused-exposure high-resolution image I_{F,o} are both super-resolved, multi-exposure-fused images, they still differ from each other because their corresponding input images differ. To further improve the fusion accuracy, the embodiments of the present disclosure therefore process I_{F,u} and I_{F,o} further to obtain the final output image, i.e., the image fusion result.

In some embodiments, S230 may be implemented as performing weighted summation of the first fused-exposure high-resolution image and the second fused-exposure high-resolution image with a first weight and a second weight, respectively, to generate the image fusion result. Specifically, since I_{F,u} and I_{F,o} are weighted in this embodiment, two weights need to be determined in advance, namely the first weight and the second weight. The values of these two weights are related to the exposures of the under-exposed and over-exposed low-resolution images and to the shooting scene; for example, 0.5 may be used as the default value of both the first weight and the second weight. The image fusion result can then be generated according to the following formula (2):

I_out = w_o · I_{F,o} + w_u · I_{F,u}    (2)

where I_out, w_o, and w_u denote the image fusion result, the second weight, and the first weight, respectively.
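Formula (2) amounts to a one-line weighted sum; a minimal sketch, assuming the two fused-exposure outputs are tensors of the same shape:

```python
def fuse_outputs(i_f_u, i_f_o, w_u=0.5, w_o=0.5):
    """Formula (2): I_out = w_o * I_F,o + w_u * I_F,u; 0.5/0.5 is the
    default weighting mentioned above."""
    return w_o * i_f_o + w_u * i_f_u
```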

In the above technical solutions of the embodiments of the present disclosure, the captured under-exposed low-resolution image and over-exposed low-resolution image are input into a pre-trained neural network to generate a first fused-exposure high-resolution image and a second fused-exposure high-resolution image, and an image fusion result is generated based on these two images. A neural network coupling multi-exposure fusion and super-resolution is thereby used to process two extreme-exposure low-resolution images and produce a single image fusion result with high resolution (HR) and high dynamic range (HDR), which simplifies the processing flow of captured images and improves image-processing speed and accuracy.

FIG. 7 is a schematic structural diagram of a neural network training apparatus provided by an embodiment of the present disclosure. The neural network includes a first sub-network and a second sub-network with the same network structure, and each sub-network contains an initial feature extraction module, a high-level feature extraction module, and a coupling feedback module. Referring to FIG. 7, the apparatus specifically includes:

an image acquisition unit 710, configured to acquire an under-exposed low-resolution image and an over-exposed low-resolution image;

a low-level feature generation unit 720, configured to input the under-exposed low-resolution image and the over-exposed low-resolution image into the initial feature extraction modules of the first sub-network and the second sub-network, respectively, to generate under-exposed low-level features and over-exposed low-level features;

a high-level feature generation unit 730, configured to input the under-exposed low-level features and the over-exposed low-level features into the high-level feature extraction modules of the first sub-network and the second sub-network, respectively, to generate under-exposed high-level features and over-exposed high-level features;

a first coupling feedback result generation unit 740, configured to input the under-exposed low-level feature, the under-exposed high-level feature, and the over-exposed high-level feature into the coupling feedback module of the first sub-network to generate the coupling feedback result corresponding to the first sub-network;

a second coupling feedback result generation unit 750, configured to input the over-exposed low-level feature, the over-exposed high-level feature, and the under-exposed high-level feature into the coupling feedback module of the second sub-network to generate the coupling feedback result corresponding to the second sub-network;

a parameter adjustment unit 760, configured to adjust the parameters of the neural network based on the under-exposed low-resolution image, the under-exposed high-level feature, and the coupling feedback result of the first sub-network, as well as the over-exposed low-resolution image, the over-exposed high-level feature, and the coupling feedback result of the second sub-network.

In some embodiments, the neural network contains a plurality of coupling feedback modules, and the coupling feedback modules do not share model parameters.

In some embodiments, the coupling feedback modules are processed serially.

Correspondingly, the first coupling feedback result generation unit 740 is specifically configured to: input the under-exposed low-level feature, the under-exposed high-level feature, and the over-exposed high-level feature into the first coupling feedback module of the first sub-network to generate the corresponding coupling feedback result; and, for any subsequent coupling feedback module of the first sub-network other than the first one, input the under-exposed low-level feature, the coupling feedback result of the preceding adjacent coupling feedback module, and the coupling feedback result of the coupling feedback module of the second sub-network corresponding to that preceding adjacent module, into the subsequent coupling feedback module, to generate the coupling feedback result corresponding to the first sub-network.

Correspondingly, the second coupling feedback result generation unit 750 is specifically configured to: input the over-exposed low-level feature, the over-exposed high-level feature, and the under-exposed high-level feature into the first coupling feedback module of the second sub-network to generate the corresponding coupling feedback result; and, for any subsequent coupling feedback module of the second sub-network other than the first one, input the over-exposed low-level feature, the coupling feedback result of the preceding adjacent coupling feedback module, and the coupling feedback result of the coupling feedback module of the first sub-network corresponding to that preceding adjacent module, into the subsequent coupling feedback module, to generate the coupling feedback result corresponding to the second sub-network.

In some embodiments, the coupling feedback module contains at least two concatenation sub-modules and at least two feature map groups, where each feature map group contains a filter, a deconvolution layer, and a convolution layer; the first concatenation sub-module is located before all the feature map groups; each of the other concatenation sub-modules is located between two adjacent feature map groups, and any two of these other concatenation sub-modules are located at different positions.

In some embodiments, the parameter adjustment unit 760 is specifically configured to: perform an up-sampling operation on the under-exposed low-resolution image and on the over-exposed low-resolution image, respectively; add the image corresponding to the under-exposed high-level feature and the images corresponding to the coupling feedback results of the first sub-network, respectively, to the up-sampled under-exposed low-resolution image, to generate the under-exposed high-resolution image and the fused-exposure high-resolution images corresponding to the first sub-network; add the image corresponding to the over-exposed high-level feature and the images corresponding to the coupling feedback results of the second sub-network, respectively, to the up-sampled over-exposed low-resolution image, to generate the over-exposed high-resolution image and the fused-exposure high-resolution images corresponding to the second sub-network; and adjust the parameters of the neural network based on the under-exposed high-resolution image, the fused-exposure high-resolution images corresponding to the first sub-network, the over-exposed high-resolution image, and the fused-exposure high-resolution images corresponding to the second sub-network.

Further, the parameter adjustment unit 760 is specifically configured to adjust the parameters of the neural network through the loss function shown in the following formula:

L_total = λ_o · L_SR^o + λ_u · L_SR^u + λ_F · (L_CF^o + L_CF^u)    (1)

with L_SR^o = L_MS(I_o^{HR}, \hat{I}_o^{HR}), L_SR^u = L_MS(I_u^{HR}, \hat{I}_u^{HR}), L_CF^o = \sum_{t=1}^{T} L_MS(I_{F,o}^t, I_{gt}), and L_CF^u = \sum_{t=1}^{T} L_MS(I_{F,u}^t, I_{gt}),

where L_total denotes the total loss value; λ_o, λ_u, and λ_F denote the weights of the corresponding parts of the loss; L_SR^u and L_CF^u denote the loss values corresponding to the high-level feature extraction module and the coupling feedback modules of the first sub-network, respectively; L_SR^o and L_CF^o denote the loss values corresponding to the high-level feature extraction module and the coupling feedback modules of the second sub-network, respectively; L_MS denotes the loss value between two images determined from the structural similarity index of the images; I_o^{HR} and \hat{I}_o^{HR} denote the over-exposed high-resolution image and the over-exposed high-resolution reference image, respectively; I_u^{HR} and \hat{I}_u^{HR} denote the under-exposed high-resolution image and the under-exposed high-resolution reference image, respectively; I_{F,o}^t, I_{F,u}^t, and I_{gt} denote the fused-exposure high-resolution image of the t-th coupling feedback module of the second sub-network, the fused-exposure high-resolution image of the t-th coupling feedback module of the first sub-network, and the fused-exposure high-resolution reference image, respectively; and T denotes the number of coupling feedback modules.

The neural network training apparatus provided by the embodiments of the present disclosure uses a single neural network to perform multi-exposure fusion and super-resolution of images simultaneously, which not only simplifies the processing flow of captured images and increases image-processing speed, but also exploits the complementary characteristics of multi-exposure fusion and super-resolution to further improve image-processing accuracy.

The neural network training apparatus provided by the embodiments of the present disclosure can execute the neural network training method provided by any embodiment of the present disclosure, and has the functional modules and beneficial effects corresponding to the executed method.

FIG. 8 is a schematic structural diagram of an image fusion apparatus provided by an embodiment of the present disclosure. Referring to FIG. 8, the apparatus specifically includes:

an image acquisition unit 810, configured to acquire an under-exposed low-resolution image and an over-exposed low-resolution image;

a fused-exposure high-resolution image generation unit 820, configured to input the under-exposed low-resolution image and the over-exposed low-resolution image into a pre-trained neural network to generate a first fused-exposure high-resolution image and a second fused-exposure high-resolution image, where the neural network is trained by the neural network training method of any embodiment of the present disclosure;

an image fusion result generation unit 830, configured to generate an image fusion result based on the first fused-exposure high-resolution image and the second fused-exposure high-resolution image.

In some embodiments, the image fusion result generation unit 830 is specifically configured to perform weighted summation of the first fused-exposure high-resolution image and the second fused-exposure high-resolution image with a first weight and a second weight, respectively, to generate the image fusion result.

The image fusion apparatus provided by the embodiments of the present disclosure uses a single neural network to perform multi-exposure fusion and super-resolution of images simultaneously, which not only simplifies the processing flow of captured images and increases image-processing speed, but also exploits the complementary characteristics of multi-exposure fusion and super-resolution to further improve image-processing accuracy.

The image fusion apparatus provided by the embodiments of the present disclosure can execute the image fusion method provided by any embodiment of the present disclosure, and has the functional modules and beneficial effects corresponding to the executed method.

It should be noted that, in the above embodiments of the neural network training apparatus, the included units are divided only according to functional logic, but the division is not limited to the above as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for ease of mutual distinction and are not used to limit the protection scope of the present disclosure.

Referring to FIG. 9, this embodiment provides an electronic device, including: one or more processors 920; and a storage device 910 for storing one or more programs which, when executed by the one or more processors 920, cause the one or more processors 920 to implement the neural network training method provided by the embodiments of the present invention, where the neural network includes a first sub-network and a second sub-network with the same network structure, and each sub-network contains an initial feature extraction module, a high-level feature extraction module, and a coupling feedback module; the method includes:

acquiring an under-exposed low-resolution image and an over-exposed low-resolution image;

inputting the under-exposed low-resolution image and the over-exposed low-resolution image into the initial feature extraction modules of the first sub-network and the second sub-network, respectively, to generate under-exposed low-level features and over-exposed low-level features;

inputting the under-exposed low-level features and the over-exposed low-level features into the high-level feature extraction modules of the first sub-network and the second sub-network, respectively, to generate under-exposed high-level features and over-exposed high-level features;

inputting the under-exposed low-level feature, the under-exposed high-level feature, and the over-exposed high-level feature into the coupling feedback module of the first sub-network to generate the coupling feedback result corresponding to the first sub-network;

inputting the over-exposed low-level feature, the over-exposed high-level feature, and the under-exposed high-level feature into the coupling feedback module of the second sub-network to generate the coupling feedback result corresponding to the second sub-network;

adjusting the parameters of the neural network based on the under-exposed low-resolution image, the under-exposed high-level feature, and the coupling feedback result of the first sub-network, as well as the over-exposed low-resolution image, the over-exposed high-level feature, and the coupling feedback result of the second sub-network.

Of course, those skilled in the art can understand that the processor 920 can also implement the technical solution of the neural network training method provided by any embodiment of the present invention.

The electronic device shown in FIG. 9 is merely an example and should not impose any limitation on the function and scope of use of the embodiments of the present invention.

As shown in FIG. 9, the electronic device includes a processor 920, a storage device 910, an input device 930, and an output device 940; the number of processors 920 in the electronic device may be one or more, and one processor 920 is taken as an example in FIG. 9; the processor 920, the storage device 910, the input device 930, and the output device 940 in the electronic device may be connected by a bus or in other ways, and connection by a bus 950 is taken as an example in FIG. 9.

As a computer-readable storage medium, the storage device 910 may be used to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the neural network training method in the embodiments of the present invention.

The storage device 910 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the terminal, and the like. In addition, the storage device 910 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. In some examples, the storage device 910 may further include memories remotely located relative to the processor 920, and these remote memories may be connected to the electronic device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks and combinations thereof.

The input device 930 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. The output device 940 may include a display device such as a display screen.

An embodiment of the present invention further provides another electronic device, which includes: one or more processors; and a storage device for storing one or more programs. When the one or more programs are executed by the one or more processors, the one or more processors implement the image fusion method provided by the embodiments of the present invention, which includes:

acquiring an under-exposed low-resolution image and an over-exposed low-resolution image;

inputting the under-exposed low-resolution image and the over-exposed low-resolution image into a pre-trained neural network to generate a first fused-exposure high-resolution image and a second fused-exposure high-resolution image, where the neural network is trained by the neural network training method in any embodiment of the present disclosure;

generating an image fusion result based on the first fused-exposure high-resolution image and the second fused-exposure high-resolution image.

Of course, those skilled in the art will appreciate that the processor may also implement the technical solution of the image fusion method provided by any embodiment of the present invention. For the hardware structure and functions of this electronic device, reference may be made to the description of FIG. 9.

An embodiment of the present disclosure further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform a neural network training method, where the neural network includes a first sub-network and a second sub-network with the same network structure, and each sub-network includes a primary feature extraction module, a high-level feature extraction module and a coupled feedback module. The method includes:

acquiring an under-exposed low-resolution image and an over-exposed low-resolution image;

inputting the under-exposed low-resolution image and the over-exposed low-resolution image into the primary feature extraction modules in the first sub-network and the second sub-network, respectively, to generate under-exposed low-level features and over-exposed low-level features;

inputting the under-exposed low-level features and the over-exposed low-level features into the high-level feature extraction modules in the first sub-network and the second sub-network, respectively, to generate under-exposed high-level features and over-exposed high-level features;

inputting the under-exposed low-level features, the under-exposed high-level features and the over-exposed high-level features into the coupled feedback module in the first sub-network to generate a coupled feedback result corresponding to the first sub-network;

inputting the over-exposed low-level features, the over-exposed high-level features and the under-exposed high-level features into the coupled feedback module in the second sub-network to generate a coupled feedback result corresponding to the second sub-network;

adjusting parameters of the neural network based on the under-exposed low-resolution image, the under-exposed high-level features and the coupled feedback result corresponding to the first sub-network, as well as the over-exposed low-resolution image, the over-exposed high-level features and the coupled feedback result corresponding to the second sub-network.

Of course, in the storage medium containing computer-executable instructions provided by the embodiments of the present invention, the computer-executable instructions are not limited to the above method operations, and may also perform related operations in the neural network training method provided by any embodiment of the present invention.

The computer storage medium of the embodiments of the present invention may adopt any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus or device.

A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device.

Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.

Computer program code for carrying out the operations of the present invention may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).

An embodiment of the present invention further provides another computer-readable storage medium whose computer-executable instructions, when executed by a computer processor, are used to perform an image fusion method, which includes:

acquiring an under-exposed low-resolution image and an over-exposed low-resolution image;

inputting the under-exposed low-resolution image and the over-exposed low-resolution image into a pre-trained neural network to generate a first fused-exposure high-resolution image and a second fused-exposure high-resolution image, where the neural network is trained by the neural network training method in any embodiment of the present disclosure;

generating an image fusion result based on the first fused-exposure high-resolution image and the second fused-exposure high-resolution image.

Of course, in the storage medium containing computer-executable instructions provided by the embodiments of the present invention, the computer-executable instructions are not limited to the above method operations, and may also perform related operations in the image fusion method provided by any embodiment of the present invention. For an introduction to the storage medium, reference may be made to the description in the above embodiments.

It should be noted that the terminology used in the present disclosure is only for describing specific embodiments, rather than limiting the scope of the present application. As used in the specification and claims of the present disclosure, unless the context clearly indicates otherwise, the words "a", "an", "one" and/or "the" do not specifically denote the singular and may also include the plural. The term "and/or" includes any and all combinations of one or more of the associated listed items. Relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. The terms "comprise", "include" or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method or device comprising a list of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method or device that includes the element.

The above descriptions are only specific embodiments of the present disclosure, enabling those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure is not limited to the embodiments described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

1. A neural network training method, characterized in that the neural network comprises a first sub-network and a second sub-network having the same network structure, and each sub-network comprises a primary feature extraction module, a high-level feature extraction module and a coupled feedback module; the method comprises the following steps:
acquiring an under-exposed low-resolution image and an over-exposed low-resolution image;
inputting the under-exposed low-resolution image and the over-exposed low-resolution image into the primary feature extraction modules in the first sub-network and the second sub-network, respectively, to generate under-exposed low-level features and over-exposed low-level features;
inputting the under-exposed low-level features and the over-exposed low-level features into the high-level feature extraction modules in the first sub-network and the second sub-network, respectively, to generate under-exposed high-level features and over-exposed high-level features;
inputting the under-exposed low-level features, the under-exposed high-level features and the over-exposed high-level features into the coupled feedback module in the first sub-network to generate a coupled feedback result corresponding to the first sub-network;
inputting the over-exposed low-level features, the over-exposed high-level features and the under-exposed high-level features into the coupled feedback module in the second sub-network to generate a coupled feedback result corresponding to the second sub-network; and
adjusting parameters of the neural network based on the under-exposed low-resolution image, the under-exposed high-level features and the coupled feedback result corresponding to the first sub-network, as well as the over-exposed low-resolution image, the over-exposed high-level features and the coupled feedback result corresponding to the second sub-network.
2. The method of claim 1, wherein the neural network comprises a plurality of the coupled feedback modules, and the coupled feedback modules do not share model parameters with one another.
3. The method of claim 2, wherein the coupled feedback modules process serially;
the inputting the under-exposed low-level features, the under-exposed high-level features and the over-exposed high-level features into the coupled feedback module in the first sub-network to generate a coupled feedback result corresponding to the first sub-network comprises:
inputting the under-exposed low-level features, the under-exposed high-level features and the over-exposed high-level features into the first coupled feedback module in the first sub-network to generate a coupled feedback result corresponding to the first sub-network; and
for any subsequent coupled feedback module in the first sub-network other than the first coupled feedback module, inputting the under-exposed low-level features, the coupled feedback result of the preceding adjacent coupled feedback module of the subsequent coupled feedback module, and the coupled feedback result of the coupled feedback module in the second sub-network corresponding to that preceding adjacent coupled feedback module into the subsequent coupled feedback module, to generate a coupled feedback result corresponding to the first sub-network;
the inputting the over-exposed low-level features, the over-exposed high-level features and the under-exposed high-level features into the coupled feedback module in the second sub-network to generate a coupled feedback result corresponding to the second sub-network comprises:
inputting the over-exposed low-level features, the over-exposed high-level features and the under-exposed high-level features into the first coupled feedback module in the second sub-network to generate a coupled feedback result corresponding to the second sub-network; and
for any subsequent coupled feedback module in the second sub-network other than the first coupled feedback module, inputting the over-exposed low-level features, the coupled feedback result of the preceding adjacent coupled feedback module of the subsequent coupled feedback module, and the coupled feedback result of the coupled feedback module in the first sub-network corresponding to that preceding adjacent coupled feedback module into the subsequent coupled feedback module, to generate a coupled feedback result corresponding to the second sub-network.
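The serial scheme of claim 3 can be illustrated with the following sketch; the module containers `feedback_u`/`feedback_o` and their input layout are hypothetical stand-ins, assuming each module takes its three inputs as a channel-wise concatenation.

```python
# Hypothetical sketch of the serial coupled-feedback chain of claim 3.
# feedback_u[t] / feedback_o[t] stand for the t-th coupled feedback modules
# of the first (under-exposed) and second (over-exposed) sub-networks; they
# share no parameters, and inputs are assumed to be channel-concatenated.
import torch

def run_chain(feedback_u, feedback_o, f_u, f_o, h_u, h_o):
    # first module: (own low-level, own high-level, other branch's high-level)
    out_u = feedback_u[0](torch.cat([f_u, h_u, h_o], dim=1))
    out_o = feedback_o[0](torch.cat([f_o, h_o, h_u], dim=1))
    results_u, results_o = [out_u], [out_o]
    for t in range(1, len(feedback_u)):
        # subsequent modules: (own low-level, own previous result,
        # the other branch's previous result)
        nxt_u = feedback_u[t](torch.cat([f_u, results_u[-1], results_o[-1]], dim=1))
        nxt_o = feedback_o[t](torch.cat([f_o, results_o[-1], results_u[-1]], dim=1))
        results_u.append(nxt_u)
        results_o.append(nxt_o)
    # one coupled feedback result per module and branch, used later in the loss
    return results_u, results_o
```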
4. The method of any one of claims 1 to 3, wherein the coupled feedback module comprises at least two coupling sub-modules and at least two feature map sets, and each feature map set comprises a filter, a deconvolution layer and a convolution layer;
the first coupling sub-module precedes all of the feature map sets; and
each coupling sub-module other than the first coupling sub-module is located between two adjacent feature map sets, and no two of these other coupling sub-modules are located at the same position.
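One possible reading of the layout in claim 4 is sketched below; the choice of a 1x1 convolution as the "filter", the 4x4 deconvolution/convolution pair, and all channel counts are assumptions made for illustration, not details taken from the patent.

```python
# Hypothetical layout of one coupled feedback module per claim 4: the first
# coupling sub-module precedes all feature map sets, and every other coupling
# sub-module sits between two adjacent sets. The 1x1-convolution "filter",
# the 4x4 deconvolution/convolution pair and the channel counts are assumptions.
import torch
import torch.nn as nn

def feature_map_set(c):
    return nn.Sequential(
        nn.Conv2d(c, c, 1),                                # filter
        nn.ConvTranspose2d(c, c, 4, stride=2, padding=1),  # deconvolution layer
        nn.Conv2d(c, c, 4, stride=2, padding=1))           # convolution layer

class CoupledFeedbackModule(nn.Module):
    def __init__(self, c=64, n_sets=3):
        super().__init__()
        self.first_coupling = nn.Conv2d(3 * c, c, 1)  # fuses the three inputs
        self.sets = nn.ModuleList(feature_map_set(c) for _ in range(n_sets))
        self.mid_couplings = nn.ModuleList(
            nn.Conv2d(2 * c, c, 1) for _ in range(n_sets - 1))

    def forward(self, x):
        x = self.first_coupling(x)
        out = self.sets[0](x)
        for coupling, fset in zip(self.mid_couplings, self.sets[1:]):
            # re-inject the fused input at every intermediate coupling point
            out = fset(coupling(torch.cat([x, out], dim=1)))
        return out
```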
5. The method of claim 1, wherein the adjusting parameters of the neural network based on the under-exposed low-resolution image, the under-exposed high-level features and the coupled feedback result corresponding to the first sub-network, as well as the over-exposed low-resolution image, the over-exposed high-level features and the coupled feedback result corresponding to the second sub-network comprises:
performing an up-sampling operation on the under-exposed low-resolution image and on the over-exposed low-resolution image, respectively;
adding the image corresponding to the under-exposed high-level features and the image corresponding to the coupled feedback result of the first sub-network to the up-sampled under-exposed low-resolution image, respectively, to generate an under-exposed high-resolution image and a fused-exposure high-resolution image corresponding to the first sub-network;
adding the image corresponding to the over-exposed high-level features and the image corresponding to the coupled feedback result of the second sub-network to the up-sampled over-exposed low-resolution image, respectively, to generate an over-exposed high-resolution image and a fused-exposure high-resolution image corresponding to the second sub-network; and
adjusting parameters of the neural network based on the under-exposed high-resolution image, the fused-exposure high-resolution image corresponding to the first sub-network, the over-exposed high-resolution image and the fused-exposure high-resolution image corresponding to the second sub-network.
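The upsample-and-add reconstruction of claim 5 might look like the following sketch, assuming bicubic upsampling and that the high-level features and coupled feedback results have already been decoded into high-resolution residual images by layers not shown here.

```python
# Hypothetical sketch of the reconstruction step of claim 5, assuming bicubic
# upsampling; high_level_residual and feedback_residual stand for the images
# already decoded from the high-level features and the coupled feedback result.
import torch.nn.functional as F

def reconstruct(lr_img, high_level_residual, feedback_residual, scale=4):
    up = F.interpolate(lr_img, scale_factor=scale, mode='bicubic',
                       align_corners=False)
    sr_img = up + high_level_residual   # e.g. the under-exposed high-resolution image
    fused_img = up + feedback_residual  # the branch's fused-exposure high-resolution image
    return sr_img, fused_img
```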
6. The method of claim 5, wherein the adjusting parameters of the neural network based on the under-exposed high-resolution image, the fused-exposure high-resolution image corresponding to the first sub-network, the over-exposed high-resolution image and the fused-exposure high-resolution image corresponding to the second sub-network comprises:
adjusting parameters of the neural network by the loss function shown in the following equation:

$$L_{total} = \lambda_u L_{MS}\left(\hat{I}_u,\, I_u^{gt}\right) + \lambda_o L_{MS}\left(\hat{I}_o,\, I_o^{gt}\right) + \sum_{t=1}^{T} \lambda_f^{t}\left[L_{MS}\left(\hat{I}_{f,1}^{t},\, I_{gt}\right) + L_{MS}\left(\hat{I}_{f,2}^{t},\, I_{gt}\right)\right]$$

wherein $L_{total}$ represents the value of the total loss function; $\lambda_o$, $\lambda_u$ and $\lambda_f^{t}$ represent the weights corresponding to the respective partial loss function values; $L_{MS}(\hat{I}_u, I_u^{gt})$ and $L_{MS}(\hat{I}_{f,1}^{t}, I_{gt})$ represent the loss function values corresponding to the high-level feature extraction module and the coupled feedback module in the first sub-network, respectively; $L_{MS}(\hat{I}_o, I_o^{gt})$ and $L_{MS}(\hat{I}_{f,2}^{t}, I_{gt})$ represent the loss function values corresponding to the high-level feature extraction module and the coupled feedback module in the second sub-network, respectively; $L_{MS}$ represents a loss value between two images determined based on the structural similarity index of the images; $\hat{I}_o$ and $I_o^{gt}$ represent the over-exposed high-resolution image and the over-exposed high-resolution reference image, respectively; $\hat{I}_u$ and $I_u^{gt}$ represent the under-exposed high-resolution image and the under-exposed high-resolution reference image, respectively; $\hat{I}_{f,2}^{t}$, $\hat{I}_{f,1}^{t}$ and $I_{gt}$ represent the fused-exposure high-resolution image corresponding to the t-th coupled feedback module of the second sub-network, the fused-exposure high-resolution image corresponding to the t-th coupled feedback module of the first sub-network, and the fused-exposure high-resolution reference image, respectively; and $T$ represents the number of the coupled feedback modules.
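A direct transcription of this loss into code could look like the sketch below; `l_ms` is a placeholder for the structural-similarity-based loss $L_{MS}$ (for example, 1 minus MS-SSIM), and the default weights are arbitrary assumptions.

```python
# Hypothetical transcription of the total loss of claim 6. l_ms is a
# placeholder for the structural-similarity-based loss L_MS (e.g. 1 - MS-SSIM);
# all default weights are arbitrary.
def total_loss(l_ms, sr_u, gt_u, sr_o, gt_o, fused_u, fused_o, gt_fused,
               lam_u=1.0, lam_o=1.0, lam_f=None):
    T = len(fused_u)                 # number of coupled feedback modules
    lam_f = lam_f if lam_f is not None else [1.0] * T
    # super-resolution terms from the two high-level feature extraction modules
    loss = lam_u * l_ms(sr_u, gt_u) + lam_o * l_ms(sr_o, gt_o)
    # one fusion term per coupled feedback step and branch
    for t in range(T):
        loss = loss + lam_f[t] * (l_ms(fused_u[t], gt_fused)
                                  + l_ms(fused_o[t], gt_fused))
    return loss
```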
7. An image fusion method, comprising:
acquiring an under-exposed low-resolution image and an over-exposed low-resolution image;
inputting the under-exposed low-resolution image and the over-exposed low-resolution image into a pre-trained neural network to generate a first fused-exposure high-resolution image and a second fused-exposure high-resolution image, wherein the neural network is trained by the neural network training method of any one of claims 1-6; and
generating an image fusion result based on the first fused-exposure high-resolution image and the second fused-exposure high-resolution image.
8. The method of claim 7, wherein the generating an image fusion result based on the first fused-exposure high-resolution image and the second fused-exposure high-resolution image comprises:
performing weighted summation on the first fused-exposure high-resolution image and the second fused-exposure high-resolution image using a first weight and a second weight, respectively, to generate the image fusion result.
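A minimal sketch of the weighted summation in claim 8, with placeholder weights (the patent does not fix their values):

```python
# Minimal sketch of the weighted summation of claim 8; the default 0.5/0.5
# weights are placeholders, not values fixed by the patent.
def fuse(img1, img2, w1=0.5, w2=0.5):
    return w1 * img1 + w2 * img2  # image fusion result
```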
9. A neural network training device, characterized in that the neural network comprises a first sub-network and a second sub-network having the same network structure, and each sub-network comprises a primary feature extraction module, a high-level feature extraction module and a coupled feedback module; the device comprises:
an image acquisition unit, configured to acquire an under-exposed low-resolution image and an over-exposed low-resolution image;
a low-level feature generation unit, configured to input the under-exposed low-resolution image and the over-exposed low-resolution image into the primary feature extraction modules in the first sub-network and the second sub-network, respectively, to generate under-exposed low-level features and over-exposed low-level features;
a high-level feature generation unit, configured to input the under-exposed low-level features and the over-exposed low-level features into the high-level feature extraction modules in the first sub-network and the second sub-network, respectively, to generate under-exposed high-level features and over-exposed high-level features;
a first coupled feedback result generation unit, configured to input the under-exposed low-level features, the under-exposed high-level features and the over-exposed high-level features into the coupled feedback module in the first sub-network to generate a coupled feedback result corresponding to the first sub-network;
a second coupled feedback result generation unit, configured to input the over-exposed low-level features, the over-exposed high-level features and the under-exposed high-level features into the coupled feedback module in the second sub-network to generate a coupled feedback result corresponding to the second sub-network; and
a parameter adjustment unit, configured to adjust parameters of the neural network based on the under-exposed low-resolution image, the under-exposed high-level features and the coupled feedback result corresponding to the first sub-network, as well as the over-exposed low-resolution image, the over-exposed high-level features and the coupled feedback result corresponding to the second sub-network.
10. An image fusion apparatus, comprising:
an image acquisition unit, configured to acquire an under-exposed low-resolution image and an over-exposed low-resolution image;
a fused-exposure high-resolution image generation unit, configured to input the under-exposed low-resolution image and the over-exposed low-resolution image into a pre-trained neural network to generate a first fused-exposure high-resolution image and a second fused-exposure high-resolution image, wherein the neural network is trained by the neural network training method of any one of claims 1-6; and
an image fusion result generation unit, configured to generate an image fusion result based on the first fused-exposure high-resolution image and the second fused-exposure high-resolution image.
11. An electronic device, characterized in that the electronic device comprises:
one or more processors; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the neural network training method of any one of claims 1-6 or the image fusion method of any one of claims 7-8.
12. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the neural network training method of any one of claims 1-6 or the image fusion method of any one of claims 7-8.
CN202010986245.1A 2020-09-18 2020-09-18 Neural network training method, image fusion method, apparatus, equipment and medium Active CN112184550B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010986245.1A CN112184550B (en) 2020-09-18 2020-09-18 Neural network training method, image fusion method, apparatus, equipment and medium

Publications (2)

Publication Number Publication Date
CN112184550A true CN112184550A (en) 2021-01-05
CN112184550B CN112184550B (en) 2022-11-01

Family

ID=73921653

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010986245.1A Active CN112184550B (en) 2020-09-18 2020-09-18 Neural network training method, image fusion method, apparatus, equipment and medium

Country Status (1)

Country Link
CN (1) CN112184550B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190130545A1 (en) * 2017-11-01 2019-05-02 Google Llc Digital image auto exposure adjustment
CN108805836A (en) * 2018-05-31 2018-11-13 Dalian University of Technology Method for correcting image based on the reciprocating HDR transformation of depth
US20200111194A1 (en) * 2018-10-08 2020-04-09 Rensselaer Polytechnic Institute Ct super-resolution gan constrained by the identical, residual and cycle learning ensemble (gan-circle)
CN110728633A (en) * 2019-09-06 2020-01-24 Shanghai Jiao Tong University Method and device for constructing multi-exposure high dynamic range inverse tone mapping model
CN111246091A (en) * 2020-01-16 2020-06-05 Beijing Megvii Technology Co., Ltd. Dynamic automatic exposure control method and device and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHI ZHENWEI et al.: "A Survey of Image Super-Resolution Reconstruction Algorithms", Journal of Data Acquisition and Processing *
CHEN WEN et al.: "Research on Reconstructing HDR Images from LDR Images Based on Convolutional Neural Networks", Packaging Engineering *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115103118A (en) * 2022-06-20 2022-09-23 Beihang University High dynamic range image generation method, device, equipment and readable storage medium
CN115103118B (en) * 2022-06-20 2023-04-07 Beihang University High dynamic range image generation method, device, equipment and readable storage medium
CN115100043A (en) * 2022-08-25 2022-09-23 Tianjin University A Deep Learning-Based HDR Image Reconstruction Method
CN116228578A (en) * 2023-02-22 2023-06-06 Beihang University An exposure level-guided low-light and low-resolution image quality optimization method
CN116228578B (en) * 2023-02-22 2025-12-09 Beihang University Low-light low-resolution image quality optimization method guided by exposure level
CN119741737A (en) * 2025-03-05 2025-04-01 Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou) Fish identification method, system, terminal and computer-readable storage medium

Also Published As

Publication number Publication date
CN112184550B (en) 2022-11-01

Similar Documents

Publication Publication Date Title
CN112184550B (en) Neural network training method, image fusion method, apparatus, equipment and medium
CN112602088B (en) Methods, systems and computer-readable media for improving the quality of low-light images
CN112488923B (en) Image super-resolution reconstruction method and device, storage medium and electronic equipment
TWI769725B (en) Image processing method, electronic device and computer readable storage medium
RU2706891C1 (en) Method of generating a common loss function for training a convolutional neural network for converting an image into an image with drawn parts and a system for converting an image into an image with drawn parts
CN110428362A (en) Image HDR conversion method and device, storage medium
Xu et al. Exploiting raw images for real-scene super-resolution
CN111951165A (en) Image processing method, apparatus, computer device, and computer-readable storage medium
CN113379600A (en) Short video super-resolution conversion method, device and medium based on deep learning
CN116152128A (en) High dynamic range multi-exposure image fusion model and method based on attention mechanism
CN113658050A (en) Image denoising method, denoising device, mobile terminal and storage medium
CN111372006A (en) A mobile terminal-oriented high dynamic range imaging method and system
CN116934615A (en) Image restoration method and device, electronic equipment and storage medium
CN115471417B (en) Image noise reduction processing method, device, equipment, storage medium and program product
CN112651911A (en) High dynamic range imaging generation method based on polarization image
CN112150363B (en) Convolutional neural network-based image night scene processing method, computing module for operating method and readable storage medium
CN115115518B (en) Method, device, equipment, medium and product for generating high dynamic range image
US20250252537A1 (en) Enhancing images from a mobile device to give a professional camera effect
CN115103118B (en) High dynamic range image generation method, device, equipment and readable storage medium
CN113792862B (en) Design method for generating countermeasure network based on correction chart of cascade attention mechanism
CN115829878A (en) A method and device for image enhancement
Huang et al. A two-stage HDR reconstruction pipeline for extreme dark-light RGGB images
CN115937044A (en) Image processing method, image processing apparatus, storage medium, and electronic device
Que et al. Residual dense U‐Net for abnormal exposure restoration from single images
US20250131528A1 (en) Dynamic resizing of audiovisual data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant