CN112184550A - Neural network training method, image fusion method, apparatus, equipment and medium
- Publication number
- CN112184550A (application number CN202010986245.1A)
- Authority
- CN
- China
- Prior art keywords
- network
- sub
- low
- resolution image
- exposed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4046—Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Image Processing (AREA)
Abstract
Description
Technical Field
The present disclosure relates to the technical field of image processing, and in particular to a neural network training method, an image fusion method, an apparatus, a device, and a medium.
Background
With the development of technology, people are increasingly accustomed to using photos to record moments of their lives. However, owing to hardware limitations of camera sensors, captured images usually exhibit various distortions, which make them differ considerably from the real natural scene. Compared with the real scene, images captured by a camera tend to have a low dynamic range (LDR) and a low resolution (LR). To narrow the gap between a captured image and the real shooting scene, the image needs to be processed.
At present, multi-exposure image fusion (MEF) is mainly used to correct the low dynamic range of an image, and image super-resolution (ISR) is used to correct its low resolution. Multi-exposure image fusion aims to fuse several LDR images with different exposure levels into a single high dynamic range (HDR) image. Image super-resolution aims to reconstruct an LR image into a high-resolution (HR) image.
In practice, however, a captured image has both LDR and LR characteristics at the same time, while multi-exposure image fusion and image super-resolution are two independent image processing techniques. This means a captured image must undergo multi-exposure image fusion and image super-resolution one after the other. Moreover, the order in which the two techniques are executed can affect the final result. Existing image processing approaches are therefore not only cumbersome but also yield unsatisfactory results.
Summary of the Invention
In order to solve the above technical problems, or at least partially solve them, the present disclosure provides a neural network training method, an image fusion method, an apparatus, a device, and a medium.
In a first aspect, the present disclosure provides a neural network training method. The neural network includes a first sub-network and a second sub-network with the same network structure, and each sub-network includes a primary feature extraction module, a high-level feature extraction module, and a coupled feedback module. The method includes:
acquiring an under-exposed low-resolution image and an over-exposed low-resolution image;
inputting the under-exposed low-resolution image and the over-exposed low-resolution image into the primary feature extraction modules of the first sub-network and the second sub-network, respectively, to generate under-exposed low-level features and over-exposed low-level features;
inputting the under-exposed low-level features and the over-exposed low-level features into the high-level feature extraction modules of the first sub-network and the second sub-network, respectively, to generate under-exposed high-level features and over-exposed high-level features;
inputting the under-exposed low-level features, the under-exposed high-level features, and the over-exposed high-level features into the coupled feedback module of the first sub-network to generate a coupled feedback result corresponding to the first sub-network;
inputting the over-exposed low-level features, the over-exposed high-level features, and the under-exposed high-level features into the coupled feedback module of the second sub-network to generate a coupled feedback result corresponding to the second sub-network;
adjusting the parameters of the neural network based on the under-exposed low-resolution image, the under-exposed high-level features, and the coupled feedback result corresponding to the first sub-network, as well as the over-exposed low-resolution image, the over-exposed high-level features, and the coupled feedback result corresponding to the second sub-network.
In some embodiments, the neural network includes a plurality of the coupled feedback modules, and the coupled feedback modules do not share model parameters.
In some embodiments, the coupled feedback modules process data serially;
inputting the under-exposed low-level features, the under-exposed high-level features, and the over-exposed high-level features into the coupled feedback module of the first sub-network to generate the coupled feedback result corresponding to the first sub-network includes:
inputting the under-exposed low-level features, the under-exposed high-level features, and the over-exposed high-level features into the first coupled feedback module of the first sub-network to generate a coupled feedback result corresponding to the first sub-network;
for any subsequent coupled feedback module in the first sub-network other than the first coupled feedback module, inputting the under-exposed low-level features, the coupled feedback result of the immediately preceding coupled feedback module, and the coupled feedback result of the coupled feedback module in the second sub-network corresponding to that preceding module, into the subsequent coupled feedback module to generate a coupled feedback result corresponding to the first sub-network;
inputting the over-exposed low-level features, the over-exposed high-level features, and the under-exposed high-level features into the coupled feedback module of the second sub-network to generate the coupled feedback result corresponding to the second sub-network includes:
inputting the over-exposed low-level features, the over-exposed high-level features, and the under-exposed high-level features into the first coupled feedback module of the second sub-network to generate a coupled feedback result corresponding to the second sub-network;
for any subsequent coupled feedback module in the second sub-network other than the first coupled feedback module, inputting the over-exposed low-level features, the coupled feedback result of the immediately preceding coupled feedback module, and the coupled feedback result of the coupled feedback module in the first sub-network corresponding to that preceding module, into the subsequent coupled feedback module to generate a coupled feedback result corresponding to the second sub-network.
In some embodiments, the coupled feedback module includes at least two concatenation sub-modules and at least two feature mapping groups, where each feature mapping group includes a filter, a deconvolution layer, and a convolution layer;
the first concatenation sub-module is located before all of the feature mapping groups;
each concatenation sub-module other than the first is located between two adjacent feature mapping groups, and any two of these other concatenation sub-modules are located at different positions.
In some embodiments, adjusting the parameters of the neural network based on the under-exposed low-resolution image, the under-exposed high-level features, and the coupled feedback result corresponding to the first sub-network, as well as the over-exposed low-resolution image, the over-exposed high-level features, and the coupled feedback result corresponding to the second sub-network includes:
performing an upsampling operation on the under-exposed low-resolution image and the over-exposed low-resolution image, respectively;
adding the image corresponding to the under-exposed high-level features and the image corresponding to the coupled feedback result of the first sub-network, respectively, to the upsampled under-exposed low-resolution image, to generate an under-exposed high-resolution image and a fused exposure high-resolution image corresponding to the first sub-network;
adding the image corresponding to the over-exposed high-level features and the image corresponding to the coupled feedback result of the second sub-network, respectively, to the upsampled over-exposed low-resolution image, to generate an over-exposed high-resolution image and a fused exposure high-resolution image corresponding to the second sub-network;
adjusting the parameters of the neural network based on the under-exposed high-resolution image, the fused exposure high-resolution image corresponding to the first sub-network, the over-exposed high-resolution image, and the fused exposure high-resolution image corresponding to the second sub-network.
In some embodiments, the parameters of the neural network are adjusted through a loss function of the following form:
$$L_{total} = \lambda_o L_{SR}^{o} + \lambda_u L_{SR}^{u} + \lambda_F \left( L_{CF}^{o} + L_{CF}^{u} \right)$$
$$L_{SR}^{o} = L_{MS}\!\left(I_o^{HR}, I_o^{gt}\right), \quad L_{SR}^{u} = L_{MS}\!\left(I_u^{HR}, I_u^{gt}\right)$$
$$L_{CF}^{o} = \sum_{t=1}^{T} L_{MS}\!\left(I_{F,o}^{t}, I_{gt}\right), \quad L_{CF}^{u} = \sum_{t=1}^{T} L_{MS}\!\left(I_{F,u}^{t}, I_{gt}\right)$$
where $L_{total}$ denotes the total loss value; $\lambda_o$, $\lambda_u$, and $\lambda_F$ denote the weights of the respective parts of the loss; $L_{SR}^{u}$ and $L_{CF}^{u}$ denote the loss values corresponding to the high-level feature extraction module and the coupled feedback module of the first sub-network; $L_{SR}^{o}$ and $L_{CF}^{o}$ denote the loss values corresponding to the high-level feature extraction module and the coupled feedback module of the second sub-network; $L_{MS}$ denotes the loss value between two images determined based on their structural similarity index; $I_o^{HR}$ and $I_o^{gt}$ denote the over-exposed high-resolution image and the over-exposed high-resolution reference image, respectively; $I_u^{HR}$ and $I_u^{gt}$ denote the under-exposed high-resolution image and the under-exposed high-resolution reference image, respectively; $I_{F,o}^{t}$, $I_{F,u}^{t}$, and $I_{gt}$ denote the fused exposure high-resolution image corresponding to the t-th coupled feedback module of the second sub-network, the fused exposure high-resolution image corresponding to the t-th coupled feedback module of the first sub-network, and the fused exposure high-resolution reference image, respectively; and $T$ denotes the number of coupled feedback modules.
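As an illustration of how such a layered loss could be composed, the sketch below is an assumption rather than the patented formulation: the combination weights, the per-term metric l_ms, and all tensor names are placeholders.

```python
import torch
from typing import Callable, List

def total_loss(l_ms: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],
               hr_over: torch.Tensor, ref_over: torch.Tensor,
               hr_under: torch.Tensor, ref_under: torch.Tensor,
               fused_over: List[torch.Tensor], fused_under: List[torch.Tensor],
               ref_fused: torch.Tensor,
               lambda_o: float = 1.0, lambda_u: float = 1.0,
               lambda_f: float = 1.0) -> torch.Tensor:
    """Layered loss: the SRB outputs are compared with the single-exposure HR
    references, and every CFB output with the fused-exposure HR reference."""
    loss = lambda_o * l_ms(hr_over, ref_over) + lambda_u * l_ms(hr_under, ref_under)
    for f_o, f_u in zip(fused_over, fused_under):  # one pair per coupled feedback module
        loss = loss + lambda_f * (l_ms(f_o, ref_fused) + l_ms(f_u, ref_fused))
    return loss
```

Here l_ms stands for any structural-similarity-based image loss supplied by the caller.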
In a second aspect, the present disclosure provides an image fusion method. The method includes:
acquiring an under-exposed low-resolution image and an over-exposed low-resolution image;
inputting the under-exposed low-resolution image and the over-exposed low-resolution image into a pre-trained neural network to generate a first fused exposure high-resolution image and a second fused exposure high-resolution image, where the neural network is trained by the neural network training method of any embodiment of the present disclosure;
generating an image fusion result based on the first fused exposure high-resolution image and the second fused exposure high-resolution image.
In some embodiments, generating an image fusion result based on the first fused exposure high-resolution image and the second fused exposure high-resolution image includes:
performing a weighted summation of the first fused exposure high-resolution image and the second fused exposure high-resolution image using a first weight and a second weight, respectively, to generate the image fusion result.
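A minimal sketch of this weighted summation, assuming equal default weights and PyTorch tensors for the two network outputs (the function and argument names are illustrative only):

```python
import torch

def weighted_fusion(hr_fused_1: torch.Tensor, hr_fused_2: torch.Tensor,
                    w1: float = 0.5, w2: float = 0.5) -> torch.Tensor:
    """Weighted summation of the two fused exposure high-resolution images.

    The disclosure only states that a first and a second weight are used;
    the 0.5/0.5 defaults here are an assumption.
    """
    return w1 * hr_fused_1 + w2 * hr_fused_2
```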
In a third aspect, the present disclosure provides a neural network training apparatus. The neural network includes a first sub-network and a second sub-network with the same network structure, and each sub-network includes a primary feature extraction module, a high-level feature extraction module, and a coupled feedback module. The apparatus includes:
an image acquisition unit, configured to acquire an under-exposed low-resolution image and an over-exposed low-resolution image;
a low-level feature generation unit, configured to input the under-exposed low-resolution image and the over-exposed low-resolution image into the primary feature extraction modules of the first sub-network and the second sub-network, respectively, to generate under-exposed low-level features and over-exposed low-level features;
a high-level feature generation unit, configured to input the under-exposed low-level features and the over-exposed low-level features into the high-level feature extraction modules of the first sub-network and the second sub-network, respectively, to generate under-exposed high-level features and over-exposed high-level features;
a first coupled feedback result generation unit, configured to input the under-exposed low-level features, the under-exposed high-level features, and the over-exposed high-level features into the coupled feedback module of the first sub-network to generate a coupled feedback result corresponding to the first sub-network;
a second coupled feedback result generation unit, configured to input the over-exposed low-level features, the over-exposed high-level features, and the under-exposed high-level features into the coupled feedback module of the second sub-network to generate a coupled feedback result corresponding to the second sub-network;
a parameter adjustment unit, configured to adjust the parameters of the neural network based on the under-exposed low-resolution image, the under-exposed high-level features, and the coupled feedback result corresponding to the first sub-network, as well as the over-exposed low-resolution image, the over-exposed high-level features, and the coupled feedback result corresponding to the second sub-network.
In some embodiments, the neural network includes a plurality of coupled feedback modules, and the coupled feedback modules do not share model parameters.
In some embodiments, the coupled feedback modules process data serially;
accordingly, the first coupled feedback result generation unit is specifically configured to:
input the under-exposed low-level features, the under-exposed high-level features, and the over-exposed high-level features into the first coupled feedback module of the first sub-network to generate a coupled feedback result corresponding to the first sub-network;
for any subsequent coupled feedback module in the first sub-network other than the first coupled feedback module, input the under-exposed low-level features, the coupled feedback result of the immediately preceding coupled feedback module, and the coupled feedback result of the coupled feedback module in the second sub-network corresponding to that preceding module, into the subsequent coupled feedback module to generate a coupled feedback result corresponding to the first sub-network;
accordingly, the second coupled feedback result generation unit is specifically configured to:
input the over-exposed low-level features, the over-exposed high-level features, and the under-exposed high-level features into the first coupled feedback module of the second sub-network to generate a coupled feedback result corresponding to the second sub-network;
for any subsequent coupled feedback module in the second sub-network other than the first coupled feedback module, input the over-exposed low-level features, the coupled feedback result of the immediately preceding coupled feedback module, and the coupled feedback result of the coupled feedback module in the first sub-network corresponding to that preceding module, into the subsequent coupled feedback module to generate a coupled feedback result corresponding to the second sub-network.
In some embodiments, the coupled feedback module includes at least two concatenation sub-modules and at least two feature mapping groups, where each feature mapping group includes a filter, a deconvolution layer, and a convolution layer;
the first concatenation sub-module is located before all of the feature mapping groups;
each concatenation sub-module other than the first is located between two adjacent feature mapping groups, and any two of these other concatenation sub-modules are located at different positions.
In some embodiments, the parameter adjustment unit is specifically configured to:
perform an upsampling operation on the under-exposed low-resolution image and the over-exposed low-resolution image, respectively;
add the image corresponding to the under-exposed high-level features and the image corresponding to the coupled feedback result of the first sub-network, respectively, to the upsampled under-exposed low-resolution image, to generate an under-exposed high-resolution image and a fused exposure high-resolution image corresponding to the first sub-network;
add the image corresponding to the over-exposed high-level features and the image corresponding to the coupled feedback result of the second sub-network, respectively, to the upsampled over-exposed low-resolution image, to generate an over-exposed high-resolution image and a fused exposure high-resolution image corresponding to the second sub-network;
adjust the parameters of the neural network based on the under-exposed high-resolution image, the fused exposure high-resolution image corresponding to the first sub-network, the over-exposed high-resolution image, and the fused exposure high-resolution image corresponding to the second sub-network.
Further, the parameter adjustment unit is specifically configured to:
adjust the parameters of the neural network through a loss function of the following form:
$$L_{total} = \lambda_o L_{SR}^{o} + \lambda_u L_{SR}^{u} + \lambda_F \left( L_{CF}^{o} + L_{CF}^{u} \right)$$
$$L_{SR}^{o} = L_{MS}\!\left(I_o^{HR}, I_o^{gt}\right), \quad L_{SR}^{u} = L_{MS}\!\left(I_u^{HR}, I_u^{gt}\right)$$
$$L_{CF}^{o} = \sum_{t=1}^{T} L_{MS}\!\left(I_{F,o}^{t}, I_{gt}\right), \quad L_{CF}^{u} = \sum_{t=1}^{T} L_{MS}\!\left(I_{F,u}^{t}, I_{gt}\right)$$
where $L_{total}$ denotes the total loss value; $\lambda_o$, $\lambda_u$, and $\lambda_F$ denote the weights of the respective parts of the loss; $L_{SR}^{u}$ and $L_{CF}^{u}$ denote the loss values corresponding to the high-level feature extraction module and the coupled feedback module of the first sub-network; $L_{SR}^{o}$ and $L_{CF}^{o}$ denote the loss values corresponding to the high-level feature extraction module and the coupled feedback module of the second sub-network; $L_{MS}$ denotes the loss value between two images determined based on their structural similarity index; $I_o^{HR}$ and $I_o^{gt}$ denote the over-exposed high-resolution image and the over-exposed high-resolution reference image, respectively; $I_u^{HR}$ and $I_u^{gt}$ denote the under-exposed high-resolution image and the under-exposed high-resolution reference image, respectively; $I_{F,o}^{t}$, $I_{F,u}^{t}$, and $I_{gt}$ denote the fused exposure high-resolution image corresponding to the t-th coupled feedback module of the second sub-network, the fused exposure high-resolution image corresponding to the t-th coupled feedback module of the first sub-network, and the fused exposure high-resolution reference image, respectively; and $T$ denotes the number of coupled feedback modules.
In a fourth aspect, the present disclosure provides an image fusion apparatus. The apparatus includes:
an image acquisition unit, configured to acquire an under-exposed low-resolution image and an over-exposed low-resolution image;
a fused exposure high-resolution image generation unit, configured to input the under-exposed low-resolution image and the over-exposed low-resolution image into a pre-trained neural network to generate a first fused exposure high-resolution image and a second fused exposure high-resolution image, where the neural network is trained by any embodiment of the neural network training method of the present disclosure;
an image fusion result generation unit, configured to generate an image fusion result based on the first fused exposure high-resolution image and the second fused exposure high-resolution image.
In some embodiments, the image fusion result generation unit is specifically configured to:
perform a weighted summation of the first fused exposure high-resolution image and the second fused exposure high-resolution image using a first weight and a second weight, respectively, to generate the image fusion result.
In a fifth aspect, the present disclosure provides an electronic device, including:
one or more processors;
a storage device, configured to store one or more programs,
where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any embodiment of the above neural network training method or image fusion method.
In a sixth aspect, the present disclosure provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements any embodiment of the above neural network training method or image fusion method.
In the technical solutions provided by the embodiments of the present disclosure, a neural network is designed that includes a first sub-network and a second sub-network with the same network structure, where each sub-network includes a primary feature extraction module, a high-level feature extraction module, and a coupled feedback module. The primary feature extraction module extracts low-level features from the under-exposed low-resolution image and the over-exposed low-resolution image, and the high-level feature extraction module further extracts high-level features from the low-level features of each image, preliminarily mapping the low-resolution images to high-resolution features. The coupled feedback module fuses, in a crosswise manner, the low-level and high-level features of the under-exposed and over-exposed low-resolution images, realizing multi-exposure fusion of the over-exposed and under-exposed images while further improving image resolution, so that a single image with both high resolution and high dynamic range is obtained. This achieves multi-exposure fusion and super-resolution of images simultaneously, which not only simplifies the processing pipeline for captured images and increases image processing speed, but also exploits the complementary characteristics of multi-exposure fusion and super-resolution to further improve image processing accuracy.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure or in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a network architecture diagram of a neural network provided by an embodiment of the present disclosure;
FIG. 2 is a network architecture diagram of a high-level feature extraction module in a neural network provided by an embodiment of the present disclosure;
FIG. 3 is a network architecture diagram of a coupled feedback module in a neural network provided by an embodiment of the present disclosure;
FIG. 4 is a network architecture diagram of a neural network used for neural network training provided by an embodiment of the present disclosure;
FIG. 5 is a flowchart of a neural network training method provided by an embodiment of the present disclosure;
FIG. 6 is a flowchart of an image fusion method provided by an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a neural network training apparatus provided by an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an image fusion apparatus provided by an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features, and advantages of the present disclosure can be more clearly understood, the solutions of the present disclosure are described in further detail below. It should be noted that, unless they conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with one another.
Many specific details are set forth in the following description to facilitate a full understanding of the present disclosure; however, the present disclosure can also be implemented in ways other than those described herein. Obviously, the embodiments in this specification are only a part of the embodiments of the present disclosure, not all of them.
The neural network training solution provided by the embodiments of the present disclosure can be applied to scenarios in which images with low dynamic range and low resolution are fused, and is especially suitable for scenarios in which an over-exposed low-resolution image (referred to as an over-exposed low-resolution image for short) and an under-exposed low-resolution image (referred to as an under-exposed low-resolution image for short) are subjected to image fusion.
FIG. 1 is a block diagram of the network structure of a neural network for image fusion provided by an embodiment of the present application. As shown in FIG. 1, the neural network includes a first sub-network 110 and a second sub-network 120 that have the same network structure but do not share model parameters. The first sub-network 110 includes a primary feature extraction module (Feature Extraction Block, FEB) 111, a high-level feature extraction module (super-resolution block, SRB) 112, and a coupled feedback module (coupled feedback block, CFB) 113. The second sub-network 120 includes a primary feature extraction module 121, a high-level feature extraction module 122, and a coupled feedback module 123. The number of coupled feedback modules 113 in the first sub-network 110 equals the number of coupled feedback modules 123 in the second sub-network 120, and this number is one or more. The input data of the neural network are an over-exposed low-resolution image and an under-exposed low-resolution image; each of the two input images only needs to be fed into one of the sub-networks, and no particular input correspondence is required. In the embodiments of the present disclosure, the case where the under-exposed low-resolution image is input into the first sub-network 110 and the over-exposed low-resolution image is input into the second sub-network 120 is taken as an example. The FEB and SRB are used to extract high-level features from the input image, which helps enhance image resolution; the CFB, located after the SRB, absorbs the features learned by the SRBs of the two sub-networks, thereby fusing them into an image with high resolution (HR) and high dynamic range (HDR).
The primary feature extraction modules 111 and 121 extract the basic features of the input under-exposed low-resolution image and over-exposed low-resolution image, respectively, to obtain the corresponding under-exposed low-level features and over-exposed low-level features. Denoting the under-exposed and over-exposed low-resolution inputs as $I_u^{LR}$ and $I_o^{LR}$, the primary feature extraction process can be expressed as $F_u^0 = f_{FEB}(I_u^{LR})$ and $F_o^0 = f_{FEB}(I_o^{LR})$, where $f_{FEB}(\cdot)$ denotes the operation of the primary feature extraction module. In some embodiments, $f_{FEB}(\cdot)$ consists of a series of convolutional layers with 3×3 and 1×1 kernels.
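As a rough illustration of such a primary feature extraction module, the sketch below is an assumption rather than the patented implementation; the channel counts and layer depth are made up, and only the use of stacked 3×3 and 1×1 convolutions follows the description above.

```python
import torch
import torch.nn as nn

class FEB(nn.Module):
    """Primary feature extraction block: a small stack of 3x3 and 1x1 convs.

    The channel counts (3 -> 64) and the number of layers are illustrative only.
    """
    def __init__(self, in_channels: int = 3, num_features: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, num_features, kernel_size=3, padding=1),
            nn.PReLU(),
            nn.Conv2d(num_features, num_features, kernel_size=1),
            nn.PReLU(),
        )

    def forward(self, lr_image: torch.Tensor) -> torch.Tensor:
        # lr_image: (B, C, H, W) low-resolution under- or over-exposed input
        return self.body(lr_image)
```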
The high-level feature extraction modules 112 and 122 perform further feature extraction on the input under-exposed low-level features and over-exposed low-level features, respectively, to extract higher-level features of the under-exposed and over-exposed low-resolution images, obtaining the under-exposed high-level feature $G_u$ and the over-exposed high-level feature $G_o$. Because higher-level features contain higher-level semantic information, they better represent small and complex objects in an image and enrich its detail, so the under-exposed high-level feature $G_u$ and the over-exposed high-level feature $G_o$ can improve the resolution of the corresponding images and achieve a super-resolution effect. In some embodiments, referring to FIG. 2, the feedback module of the SRFBN network is used as the main structure of the high-level feature extraction module 112 (122); it contains several feature mapping groups 210 connected successively by dense connections. Each feature mapping group 210 contains at least one upsampling operation (Deconv) and one downsampling operation (Conv). Through successive up- and down-sampling, while keeping the feature size unchanged, higher-level features $G$ are gradually extracted from the low-level feature $F_{in}$ to improve image resolution. The high-level features output by the SRB can be expressed as $G_u = f_{SRB}(F_u^0)$ and $G_o = f_{SRB}(F_o^0)$, where $f_{SRB}(\cdot)$ denotes the operation of the high-level feature extraction module.
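A highly simplified sketch of such a feedback-style feature extractor follows; it is not the SRFBN module itself, and the group count, channel widths, and the stride-2 up/down sampling are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class FeatureMappingGroup(nn.Module):
    """One projection group: upsample (Deconv) then downsample (Conv)."""
    def __init__(self, channels: int = 64, scale: int = 2):
        super().__init__()
        self.up = nn.ConvTranspose2d(channels, channels, kernel_size=2 * scale,
                                     stride=scale, padding=scale // 2)
        self.down = nn.Conv2d(channels, channels, kernel_size=2 * scale,
                              stride=scale, padding=scale // 2)
        self.act = nn.PReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        hr = self.act(self.up(x))       # high-resolution intermediate feature
        return self.act(self.down(hr))  # back to the low-resolution feature size

class SRB(nn.Module):
    """High-level feature extraction block built from densely connected groups."""
    def __init__(self, channels: int = 64, num_groups: int = 4):
        super().__init__()
        self.groups = nn.ModuleList(
            [FeatureMappingGroup(channels) for _ in range(num_groups)])
        # 1x1 convs compress the densely concatenated states before each group
        self.compress_in = nn.ModuleList(
            [nn.Conv2d(channels * (i + 1), channels, kernel_size=1) for i in range(num_groups)])
        self.fuse = nn.Conv2d(channels * num_groups, channels, kernel_size=1)

    def forward(self, f_in: torch.Tensor) -> torch.Tensor:
        states = [f_in]
        outputs = []
        for i, group in enumerate(self.groups):
            x = self.compress_in[i](torch.cat(states, dim=1))
            y = group(x)
            states.append(y)
            outputs.append(y)
        return self.fuse(torch.cat(outputs, dim=1))  # high-level feature G
```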
The coupled feedback module CFB is the core component of the neural network; its purpose is to achieve super-resolution and multi-exposure image fusion simultaneously through a composite network structure. The CFB has three inputs: the low-level features and high-level features from the same sub-network, and the high-level features from the other sub-network. The role of the two inputs from the same sub-network is to further increase image resolution and enhance the super-resolution effect; the role of the input from the other sub-network is to improve the fusion effect and realize multi-exposure image fusion.
In some embodiments, each sub-network of the neural network contains one coupled feedback module CFB. In that case, the coupled feedback module 113 fuses the input under-exposed low-level feature $F_u^0$, under-exposed high-level feature $G_u$, and over-exposed high-level feature $G_o$ to generate the coupled feedback result corresponding to the first sub-network 110, and the coupled feedback module 123 fuses the input over-exposed low-level feature $F_o^0$, over-exposed high-level feature $G_o$, and under-exposed high-level feature $G_u$ to generate the coupled feedback result corresponding to the second sub-network 120. Each of these coupled feedback results is an image feature that realizes multi-exposure fusion and super-resolution at the same time.
In some embodiments, each sub-network of the neural network contains multiple coupled feedback modules CFB that are processed in parallel. In this case, the CFBs in the same sub-network have the same input data, and their output coupled feedback results need to be further fused (for example, by weighted summation) to obtain a single coupled feedback result.
In some embodiments, each sub-network of the neural network contains multiple coupled feedback modules CFB, and the CFBs are connected serially in a recurrent manner, as shown in FIG. 1. In this embodiment, assuming there are T CFBs in each sub-network, the coupled feedback results corresponding to the first sub-network 110 are generated as follows: the under-exposed low-level feature $F_u^0$, the under-exposed high-level feature $G_u$, and the over-exposed high-level feature $G_o$ are input into the first coupled feedback module 113 of the first sub-network 110, which outputs the first coupled feedback result $F_u^1$ of the first sub-network; for any subsequent coupled feedback module 113 (with index t) other than the first one, the under-exposed low-level feature $F_u^0$, the coupled feedback result $F_u^{t-1}$ of the immediately preceding coupled feedback module (with index t-1), and the coupled feedback result $F_o^{t-1}$ of the corresponding coupled feedback module in the second sub-network 120 are input into that subsequent coupled feedback module 113, which outputs the coupled feedback result $F_u^{t}$ of the first sub-network 110. Following this process through all CFBs, the final coupled feedback result $F_u^{T}$ of the first sub-network 110 is obtained. Likewise, the coupled feedback results corresponding to the second sub-network 120 are generated as follows: the over-exposed low-level feature $F_o^0$, the over-exposed high-level feature $G_o$, and the under-exposed high-level feature $G_u$ are input into the first coupled feedback module 123 of the second sub-network 120, which outputs the first coupled feedback result $F_o^1$ of the second sub-network; for any subsequent coupled feedback module 123 (with index t), the over-exposed low-level feature $F_o^0$, the coupled feedback result $F_o^{t-1}$ of the immediately preceding coupled feedback module, and the coupled feedback result $F_u^{t-1}$ of the corresponding coupled feedback module in the first sub-network 110 are input into that subsequent coupled feedback module 123, which outputs the coupled feedback result $F_o^{t}$ of the second sub-network 120. Following this process through all CFBs, the final coupled feedback result $F_o^{T}$ of the second sub-network 120 is obtained. This can be expressed as $F_u^{t} = f_{CFB}(F_u^0, F_u^{t-1}, F_o^{t-1})$ and $F_o^{t} = f_{CFB}(F_o^0, F_o^{t-1}, F_u^{t-1})$ for $t \ge 2$, where $f_{CFB}(\cdot)$ denotes the operation of the coupled feedback module.
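The cross-coupled iteration over T serial CFBs could be sketched roughly as follows. This is a minimal sketch under the assumption that each sub-network holds T placeholder CFB modules whose internals are elided behind a callable; all names are illustrative.

```python
import torch
import torch.nn as nn

def run_coupled_feedback(cfbs_u: nn.ModuleList, cfbs_o: nn.ModuleList,
                         f_u0: torch.Tensor, f_o0: torch.Tensor,
                         g_u: torch.Tensor, g_o: torch.Tensor):
    """Serial cross-coupled feedback between the two sub-networks.

    cfbs_u / cfbs_o: the T coupled feedback modules of the first (under-exposed)
    and second (over-exposed) sub-network; each takes (low-level feature,
    same-branch feedback, other-branch feedback) and returns a new feedback.
    Returns the per-step coupled feedback results of both branches.
    """
    results_u, results_o = [], []
    prev_u, prev_o = g_u, g_o          # step 1 is fed the high-level features
    for cfb_u, cfb_o in zip(cfbs_u, cfbs_o):
        cur_u = cfb_u(f_u0, prev_u, prev_o)  # same-branch low-level + cross feedback
        cur_o = cfb_o(f_o0, prev_o, prev_u)
        results_u.append(cur_u)
        results_o.append(cur_o)
        prev_u, prev_o = cur_u, cur_o
    return results_u, results_o
```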
In some embodiments, the number of coupled feedback modules 113 and the number of coupled feedback modules 123 are both three. This better balances the computational speed and the model accuracy of the neural network.
In some embodiments, every coupled feedback module CFB has the same internal network structure, but the modules do not share model parameters. Referring to FIG. 3, the t-th coupled feedback module 123 of the second sub-network 120 is taken as an example to illustrate its internal structure and its relationship with other modules. The coupled feedback module 123 includes at least two concatenation sub-modules 310 and at least two feature mapping groups 320. As in FIG. 2, the feature mapping groups 320 are connected successively in a dense manner, and each feature mapping group 320 includes a filter, a deconvolution layer (Deconv), and a convolution layer (Conv), realizing successive up- and down-sampling. The first concatenation sub-module 310 is located before all of the feature mapping groups 320; each concatenation sub-module 310 other than the first is located between two adjacent feature mapping groups 320, and any two of these other concatenation sub-modules 310 are located at different positions.
The t-th CFB has three inputs: the over-exposed low-level feature $F_o^0$, the coupled feedback result $F_o^{t-1}$ extracted by the (t-1)-th CFB, and the coupled feedback result $F_u^{t-1}$ extracted by the (t-1)-th CFB of the first sub-network 110. The feedback feature $F_o^{t-1}$ is feedback information obtained from the same sub-network, so its main function is to refine the over-exposed low-level feature and further improve the super-resolution effect; the feedback feature $F_u^{t-1}$ is feedback information obtained from the other sub-network, and its main function is to bring in complementary information to improve the effect of multi-exposure image fusion.
The t-th coupled feedback module 123 works as follows. First, the concatenation sub-module 310 concatenates the three input features along the channel dimension. Then, a series of 1×1 filters fuses the concatenated result: $L_0^t = M_{in}\big([F_o^0, F_o^{t-1}, F_u^{t-1}]\big)$, where $L_0^t$ denotes the low-resolution feature obtained by filtering and fusing the three input features, $M_{in}$ denotes a series of 1×1 filters, and $[\cdot]$ denotes concatenation of the enclosed elements. Based on the fused feature $L_0^t$, a series of feature mapping groups 320 then repeatedly performs upsampling (Deconv) and downsampling (Conv); each upsampling yields a high-resolution feature $H_i^t$ and each downsampling yields a low-resolution feature $L_i^t$, so that more effective high-level features are progressively extracted.
During the operation of the feature mapping groups 320, recall that the main function of $F_u^{t-1}$ is to bring in complementary information to improve multi-exposure image fusion. As the number of feature mapping groups increases, the module's internal memory of this feature gradually fades, its influence declines, and the subsequent fusion effect deteriorates. Therefore, in this embodiment, to strengthen the influence of $F_u^{t-1}$ on the network, besides using it as an input of each CFB, it is also injected between the feature mapping groups 320 to reactivate the CFB module's memory of it; that is, concatenation sub-modules 310 are added between the feature mapping groups of the CFB, and the input data of each added concatenation sub-module 310 includes $F_u^{t-1}$. In a specific implementation, at least two concatenation sub-modules 310 are provided in each CFB, and the concatenation sub-modules other than the first are placed between different feature mapping groups 320. If there is no requirement on the computation speed of the neural network, more than two concatenation sub-modules 310 can be provided, and a concatenation sub-module 310 can even be added between every two feature mapping groups 320, which improves the fusion effect to a greater extent. If both the computation speed and the accuracy of the neural network matter, then to balance speed and accuracy only two concatenation sub-modules 310 may be used, with the second concatenation sub-module 310 placed at the middle of the sequence of feature mapping groups 320. For example, assuming the total number of feature mapping groups is N, the feedback features $L_{\lfloor N/2 \rfloor}^{t}$ and $F_u^{t-1}$ are concatenated to form a new low-resolution (LR) feature map, where $\lfloor \cdot \rfloor$ denotes the floor operation. This new low-resolution feature map replaces $L_{\lfloor N/2 \rfloor}^{t}$ as the input feature of the subsequent feature mapping group.
Finally, after the operations of the N feature mapping groups 320, the LR feature maps of all the feature mapping groups 320 are aggregated together and fused by a series of 1×1 filters to obtain the final output of this CFB: $F_o^{t} = M_{out}\big([L_1^t, L_2^t, \ldots, L_N^t]\big)$, where $M_{out}(\cdot)$ denotes convolution with a series of 1×1 filters.
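Putting the pieces above together, a rough CFB sketch follows. It is an interpretation of the description, not the patented implementation; the channel widths, group count, and the single mid-sequence re-injection point are assumptions.

```python
import torch
import torch.nn as nn

class CFB(nn.Module):
    """Coupled feedback block: concatenate the three inputs, fuse with 1x1
    convs, run N up/down feature mapping groups, re-inject the cross-branch
    feedback mid-way, then fuse all LR outputs with 1x1 convs."""
    def __init__(self, channels: int = 64, num_groups: int = 4, scale: int = 2):
        super().__init__()
        self.m_in = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.mid = num_groups // 2
        # the mid-sequence re-injection concat doubles the channel count
        self.m_mid = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.ups = nn.ModuleList(
            [nn.ConvTranspose2d(channels, channels, 2 * scale, stride=scale,
                                padding=scale // 2) for _ in range(num_groups)])
        self.downs = nn.ModuleList(
            [nn.Conv2d(channels, channels, 2 * scale, stride=scale,
                       padding=scale // 2) for _ in range(num_groups)])
        self.m_out = nn.Conv2d(num_groups * channels, channels, kernel_size=1)
        self.act = nn.PReLU()

    def forward(self, f_low: torch.Tensor, fb_same: torch.Tensor,
                fb_other: torch.Tensor) -> torch.Tensor:
        # f_low: low-level feature of this branch; fb_same / fb_other:
        # feedback from this branch and from the other sub-network.
        x = self.m_in(torch.cat([f_low, fb_same, fb_other], dim=1))
        lr_maps = []
        for i in range(len(self.ups)):
            if i == self.mid:
                # re-inject the cross-branch feedback to refresh its memory
                x = self.m_mid(torch.cat([x, fb_other], dim=1))
            hr = self.act(self.ups[i](x))
            x = self.act(self.downs[i](hr))
            lr_maps.append(x)
        return self.m_out(torch.cat(lr_maps, dim=1))
```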
In some embodiments, the first sub-network 110 and the second sub-network 120 respectively include an image reconstruction module (reconstruction block, REC) 114 and an image reconstruction module 124, which reconstruct the coupled feedback results (features) obtained by at least one CFB into images. Multiple CFBs thus yield multiple reconstructed images. On this basis, the original input images of the neural network can be further fused in to obtain the first fused exposure high-resolution image $I_{F,u}$ and the second fused exposure high-resolution image $I_{F,o}$, each of which has both high dynamic range (HDR) and high resolution (HR). It should be noted that, in the embodiment where multiple CFBs are connected recurrently, every CFB outputs a coupled feedback result; however, since the serial feedback processing of the CFBs gradually improves the fusion and super-resolution effects, the coupled feedback result of the last CFB has the best overall effect. Therefore, when generating the first and second fused exposure high-resolution images, the reconstructed image corresponding to the coupled feedback result of the last CFB of each sub-network is used as one of the inputs. In addition, the feature size of a coupled feedback result is larger than the image size of the original input image, so an upsampling operation such as bicubic interpolation can first be used to enlarge the original input image, and the upsampled result then serves as the other input for generating the fused exposure high-resolution image. This can be expressed as $I_{F,u} = f_{REC}(F_u^{T}) + f_{UP}(I_u^{LR})$ and $I_{F,o} = f_{REC}(F_o^{T}) + f_{UP}(I_o^{LR})$, where $f_{UP}(\cdot)$ and $f_{REC}(\cdot)$ denote the upsampling operation and the image reconstruction operation, respectively.
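A minimal sketch of this reconstruction step follows, assuming a ×4 super-resolution factor, a reconstruction head that itself performs the ×4 enlargement with a transposed convolution before projecting to RGB, and bicubic upsampling of the LR input; all of these are assumptions rather than the patented design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class REC(nn.Module):
    """Image reconstruction block: map a coupled feedback feature to an RGB
    residual at the target (high) resolution and add the upsampled LR input."""
    def __init__(self, channels: int = 64, out_channels: int = 3, scale: int = 4):
        super().__init__()
        self.scale = scale
        self.up = nn.ConvTranspose2d(channels, channels, kernel_size=2 * scale,
                                     stride=scale, padding=scale // 2)
        self.to_rgb = nn.Conv2d(channels, out_channels, kernel_size=3, padding=1)

    def forward(self, feedback: torch.Tensor, lr_image: torch.Tensor) -> torch.Tensor:
        residual = self.to_rgb(self.up(feedback))
        upsampled = F.interpolate(lr_image, scale_factor=self.scale,
                                  mode="bicubic", align_corners=False)
        return residual + upsampled  # fused exposure high-resolution image
```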
Based on the above description, the parameter settings of each part of the neural network provided by the embodiments of the present disclosure can be exemplified as follows:
FIG. 4 is a network architecture diagram of a neural network used for neural network training provided by an embodiment of the present disclosure. Based on the architecture of the neural network in FIG. 1, multi-level features exist in this neural network, such as low-level features, high-level features, and at least one coupled feedback result (feature), all of which serve to realize multi-exposure image fusion and super-resolution simultaneously. Therefore, to ensure the validity of each obtained feature, a layered loss function constraint is adopted during neural network training. Since the layered loss function needs the images of each layer for its computation, the network architecture used for training in FIG. 4 adds several image output branches compared with the architecture used for image fusion prediction in FIG. 1, for example branches that output the over-exposed high-resolution image and the under-exposed high-resolution image corresponding to the high-level features, the fused exposure high-resolution images corresponding to the other coupled feedback modules of the first sub-network 110, and the fused exposure high-resolution images corresponding to the other coupled feedback modules of the second sub-network 120.
FIG. 5 is a flowchart of a neural network training method provided by an embodiment of the present disclosure. The neural network training method is implemented based on the neural network architecture in FIG. 4; explanations of content that is the same as or corresponds to the above embodiments are not repeated here. The neural network training method provided by the embodiments of the present disclosure may be executed by a neural network training apparatus, which may be implemented in software and/or hardware and may be integrated into an electronic device with sufficient computing capability, such as a laptop, a desktop computer, a server, or a supercomputer. Referring to FIG. 5, the neural network training method specifically includes:
S110. Acquire an under-exposed low-resolution image and an over-exposed low-resolution image.
Specifically, the whole neural network training process requires many rounds of network training, and each round requires one training image group, so the whole process requires multiple training image groups, with the same training procedure for each group. Only one training round is described in this embodiment. A training image group contains an under-exposed low-resolution image $I_u^{LR}$ and an over-exposed low-resolution image $I_o^{LR}$. An under-exposed low-resolution image is an image whose capture exposure is lower than a first preset exposure threshold and whose image resolution is lower than a preset resolution threshold. An over-exposed low-resolution image is an image whose capture exposure is higher than a second preset exposure threshold and whose image resolution is lower than the above preset resolution threshold. The first preset exposure threshold is smaller than the second preset exposure threshold, and the first preset exposure threshold, the second preset exposure threshold, and the preset resolution threshold are predetermined exposure and image resolution values, respectively.
S120. Input the under-exposed low-resolution image and the over-exposed low-resolution image into the primary feature extraction modules of the first sub-network and the second sub-network, respectively, to generate under-exposed low-level features and over-exposed low-level features.
Specifically, the under-exposed low-resolution image $I_u^{LR}$ is input into the primary feature extraction module FEB of the first sub-network to obtain the under-exposed low-level feature $F_u^0$, and the over-exposed low-resolution image $I_o^{LR}$ is input into the primary feature extraction module FEB of the second sub-network to obtain the over-exposed low-level feature $F_o^0$.
S130. Input the under-exposed low-level features and the over-exposed low-level features into the high-level feature extraction modules of the first sub-network and the second sub-network, respectively, to generate under-exposed high-level features and over-exposed high-level features.
Specifically, the under-exposed low-level feature $F_u^0$ is input into the high-level feature extraction module SRB of the first sub-network to obtain the under-exposed high-level feature $G_u$, and the over-exposed low-level feature $F_o^0$ is input into the high-level feature extraction module SRB of the second sub-network to obtain the over-exposed high-level feature $G_o$.
S140. Input the under-exposed low-level features, the under-exposed high-level features, and the over-exposed high-level features into the coupled feedback module of the first sub-network to generate a coupled feedback result corresponding to the first sub-network.
Specifically, with the under-exposed low-level feature $F_u^0$, the under-exposed high-level feature $G_u$, and the over-exposed high-level feature $G_o$ as the base input features, at least one coupled feedback result corresponding to the first sub-network is generated through the processing of at least one coupled feedback module CFB of the first sub-network.
In some embodiments, S140 may be implemented as: inputting the under-exposed low-level features, the under-exposed high-level features, and the over-exposed high-level features into the first coupled feedback module of the first sub-network to generate a coupled feedback result corresponding to the first sub-network; and, for any subsequent coupled feedback module in the first sub-network other than the first coupled feedback module, inputting the under-exposed low-level features, the coupled feedback result of the immediately preceding coupled feedback module, and the coupled feedback result of the coupled feedback module in the second sub-network corresponding to that preceding module, into the subsequent coupled feedback module to generate a coupled feedback result corresponding to the first sub-network. In this embodiment, the neural network contains multiple coupled feedback modules CFB (T modules are taken as an example), and the coupled feedback modules process data serially.
Referring to FIG. 4, the process of generating the at least one coupled feedback result corresponding to the first sub-network is as follows. First, for the first CFB, the under-exposed low-level feature $F_u^0$, the under-exposed high-level feature $G_u$, and the over-exposed high-level feature $G_o$ are input into this first CFB, which outputs the first coupled feedback result $F_u^1$ of the first sub-network. Then, for a subsequent CFB in the first sub-network other than the first one (say the t-th, with 1 < t ≤ T), the under-exposed low-level feature $F_u^0$, the coupled feedback result $F_u^{t-1}$ of the immediately preceding CFB (the (t-1)-th CFB), and the coupled feedback result $F_o^{t-1}$ of the (t-1)-th CFB of the second sub-network are input into the t-th CFB, which outputs the t-th coupled feedback result $F_u^{t}$ of the first sub-network. By analogy, through iterative feedback, the coupled feedback result output by any subsequent CFB of the first sub-network is obtained.
S150. Input the over-exposed low-level features, the over-exposed high-level features, and the under-exposed high-level features into the coupled feedback module of the second sub-network to generate a coupled feedback result corresponding to the second sub-network.
Specifically, with the over-exposed low-level feature $F_o^0$, the over-exposed high-level feature $G_o$, and the under-exposed high-level feature $G_u$ as the base input features, at least one coupled feedback result corresponding to the second sub-network is generated through the processing of at least one coupled feedback module CFB of the second sub-network.
In some embodiments, S150 may be implemented as: inputting the over-exposed low-level features, the over-exposed high-level features, and the under-exposed high-level features into the first coupled feedback module of the second sub-network to generate a coupled feedback result corresponding to the second sub-network; and, for any subsequent coupled feedback module in the second sub-network other than the first coupled feedback module, inputting the over-exposed low-level features, the coupled feedback result of the immediately preceding coupled feedback module, and the coupled feedback result of the coupled feedback module in the first sub-network corresponding to that preceding module, into the subsequent coupled feedback module to generate a coupled feedback result corresponding to the second sub-network. In this embodiment, the neural network contains multiple coupled feedback modules CFB (T modules are taken as an example), and the coupled feedback modules process data serially.
Referring to FIG. 4, the process of generating the at least one coupled feedback result corresponding to the second sub-network is as follows. First, for the first CFB, the over-exposed low-level feature $F_o^0$, the over-exposed high-level feature $G_o$, and the under-exposed high-level feature $G_u$ are input into this first CFB, which outputs the first coupled feedback result $F_o^1$ of the second sub-network. Then, for a subsequent CFB in the second sub-network other than the first one (say the t-th, with 1 < t ≤ T), the over-exposed low-level feature $F_o^0$, the coupled feedback result $F_o^{t-1}$ of the (t-1)-th CFB, and the coupled feedback result $F_u^{t-1}$ of the (t-1)-th CFB of the first sub-network are input into the t-th CFB, which outputs the t-th coupled feedback result $F_o^{t}$ of the second sub-network. By analogy, through iterative feedback, the coupled feedback result output by any subsequent CFB of the second sub-network is obtained.
S160、基于欠曝低分辨率图像、欠曝高层次特征和第一子网络对应的耦合反馈结果,以及过曝低分辨率图像、过曝高层次特征和第二子网络对应的耦合反馈结果,调整神经网络的参数。S160. Based on the coupling feedback result corresponding to the underexposed low-resolution image, the underexposed high-level feature and the first sub-network, and the coupling feedback result corresponding to the over-exposed low-resolution image, the overexposed high-level feature and the second sub-network, Adjust the parameters of the neural network.
具体地,根据上述说明,本公开实施例中采用了分层的损失函数来训练上述神经网络,故需要基于欠曝低分辨率图像欠曝高层次特征Gu、第一子网络对应的各个耦合反馈结果过曝低分辨率图像过曝高层次特征Go和第二子网络对应的各个耦合反馈结果来确定神经网络各层输出的图像,并利用这些输出的图像来计算该次训练的损失值,进而利用该损失值来调整神经网络中的模型参数。Specifically, according to the above description, in the embodiment of the present disclosure, a layered loss function is used to train the above neural network, so it needs to be based on the underexposed low-resolution image Under-exposed high-level feature Gu and each coupling feedback result corresponding to the first sub-network Overexposed low-resolution images Overexposure high-level feature G o and each coupling feedback result corresponding to the second sub-network To determine the images output by each layer of the neural network, and use these output images to calculate the loss value of this training, and then use the loss value to adjust the model parameters in the neural network.
在一些实施例中,S160可实现为:In some embodiments, S160 may be implemented as:
A、分别对欠曝低分辨率图像和过曝低分辨率图像进行上采样操作。A. Perform up-sampling operations on the under-exposed low-resolution image and the over-exposed low-resolution image respectively.
具体地，为了进一步提高图像融合效果，本公开实施例中神经网络的各层输出的图像均需融合原始输入图像，即欠曝低分辨率图像和过曝低分辨率图像但是，因为高层次特征和耦合反馈结果均增加了超分辨的更多图像细节信息，其特征大小均大于原始输入图像，故需要将原始输入图像上采样以扩大图像尺寸。例如，分别对欠曝低分辨率图像和过曝低分辨率图像进行双三次插值上采样的操作，获得上采样后的欠曝低分辨率图像和上采样后的过曝低分辨率图像Specifically, in order to further improve the image fusion effect, the images output by each layer of the neural network in the embodiments of the present disclosure all need to be fused with the original input images, i.e., the under-exposed low-resolution image and the over-exposed low-resolution image. However, since both the high-level features and the coupling feedback results add more image detail for super-resolution, their feature sizes are larger than the original input images, so the original input images need to be up-sampled to enlarge their size. For example, bicubic interpolation up-sampling is applied to the under-exposed low-resolution image and the over-exposed low-resolution image respectively, to obtain the up-sampled under-exposed low-resolution image and the up-sampled over-exposed low-resolution image.
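As a minimal sketch of this up-sampling step (the 4x super-resolution factor is an illustrative assumption; it should equal the network's actual scale):

```python
import torch
import torch.nn.functional as F

def upsample_inputs(i_u_lr: torch.Tensor, i_o_lr: torch.Tensor, scale: int = 4):
    """Bicubic up-sampling of the two low-resolution inputs so that they match the
    spatial size of the reconstructed features."""
    i_u_up = F.interpolate(i_u_lr, scale_factor=scale, mode="bicubic", align_corners=False)
    i_o_up = F.interpolate(i_o_lr, scale_factor=scale, mode="bicubic", align_corners=False)
    return i_u_up, i_o_up
```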
B、将欠曝高层次特征对应的图像和第一子网络对应的耦合反馈结果对应的图像，分别与上采样后的欠曝低分辨率图像相加，生成欠曝高分辨率图像和第一子网络对应的融合曝光高分辨率图像。B. Add the image corresponding to the under-exposed high-level feature and the images corresponding to the coupling feedback results of the first sub-network, respectively, to the up-sampled under-exposed low-resolution image, to generate the under-exposed high-resolution image and the fused-exposure high-resolution images corresponding to the first sub-network.
具体地，首先，对欠曝高层次特征Gu和第一子网络对应的各个耦合反馈结果均施加图像重建模块REC的操作，获得相应的图像。然后，将欠曝高层次特征Gu对应的图像和上采样后的欠曝低分辨率图像相加，得到欠曝高分辨率图像并且，将上采样后的欠曝低分辨率图像分别与第一子网络中的每个耦合反馈结果对应的图像相加，得到第一子网络对应的各个融合曝光高分辨率图像Specifically, first, the image reconstruction module REC is applied to the under-exposed high-level feature Gu and to each coupling feedback result corresponding to the first sub-network, to obtain the corresponding images. Then, the image corresponding to the under-exposed high-level feature Gu is added to the up-sampled under-exposed low-resolution image to obtain the under-exposed high-resolution image. In addition, the up-sampled under-exposed low-resolution image is added to the image corresponding to each coupling feedback result in the first sub-network, to obtain each fused-exposure high-resolution image corresponding to the first sub-network.
C、将过曝高层次特征对应的图像和第二子网络对应的耦合反馈结果对应的图像，分别与上采样后的过曝低分辨率图像相加，生成过曝高分辨率图像和第二子网络对应的融合曝光高分辨率图像。C. Add the image corresponding to the over-exposed high-level feature and the images corresponding to the coupling feedback results of the second sub-network, respectively, to the up-sampled over-exposed low-resolution image, to generate the over-exposed high-resolution image and the fused-exposure high-resolution images corresponding to the second sub-network.
具体地，首先，对过曝高层次特征Go和第二子网络对应的各个耦合反馈结果均施加图像重建模块REC的操作，获得相应的图像。然后，将过曝高层次特征Go对应的图像和上采样后的过曝低分辨率图像相加，得到过曝高分辨率图像并且，将上采样后的过曝低分辨率图像分别与第二子网络中的每个耦合反馈结果对应的图像相加，得到第二子网络对应的各个融合曝光高分辨率图像Specifically, first, the image reconstruction module REC is applied to the over-exposed high-level feature Go and to each coupling feedback result corresponding to the second sub-network, to obtain the corresponding images. Then, the image corresponding to the over-exposed high-level feature Go is added to the up-sampled over-exposed low-resolution image to obtain the over-exposed high-resolution image. In addition, the up-sampled over-exposed low-resolution image is added to the image corresponding to each coupling feedback result in the second sub-network, to obtain each fused-exposure high-resolution image corresponding to the second sub-network.
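Steps B and C could be sketched as follows. The single-convolution REC module and the assumption that the features already have the high-resolution spatial size are illustrative simplifications, not the exact structure of the embodiment.

```python
import torch.nn as nn

class Reconstructor(nn.Module):
    """Illustrative REC module: a 3x3 convolution mapping an (assumed already
    high-resolution) feature map to a 3-channel residual image."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.to_rgb = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, feat):
        return self.to_rgb(feat)

def reconstruct_outputs(rec, g_feat, cfb_results, i_up):
    """Applies REC to the high-level feature and to every CFB result, then adds
    each reconstructed image to the up-sampled input image."""
    sr_image = rec(g_feat) + i_up                         # under/over-exposed HR image
    fused_images = [rec(r) + i_up for r in cfb_results]   # one fused-exposure HR image per CFB
    return sr_image, fused_images
```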
D、基于欠曝高分辨率图像、第一子网络对应的融合曝光高分辨率图像、过曝高分辨率图像和第二子网络对应的融合曝光高分辨率图像,调整神经网络的参数。D. Adjust the parameters of the neural network based on the under-exposed high-resolution image, the fused-exposure high-resolution image corresponding to the first sub-network, the over-exposed high-resolution image, and the fused-exposure high-resolution image corresponding to the second sub-network.
具体地，利用上述过程所得的欠曝高分辨率图像第一子网络对应的各融合曝光高分辨率图像过曝高分辨率图像和第二子网络对应的各融合曝光高分辨率图像来计算该次训练的损失值，并利用该损失值的反向传播来调整神经网络中的模型参数。Specifically, the under-exposed high-resolution image, the fused-exposure high-resolution images corresponding to the first sub-network, the over-exposed high-resolution image and the fused-exposure high-resolution images corresponding to the second sub-network obtained by the above process are used to compute the loss value of this training iteration, and back-propagation of this loss value is used to adjust the model parameters of the neural network.
在一些实施例中,步骤D可实现为:通过如下公式(1)所示的损失函数,调整神经网络的参数:In some embodiments, step D can be implemented as: adjusting the parameters of the neural network through the loss function shown in the following formula (1):
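One plausible form of such a hierarchical loss, consistent with the description of its terms below (the symbols for the super-resolved images, the fused images and the per-module weight are placeholder notation chosen here), is:

$$
L_{total}=\lambda_{u}\,L_{MS}\!\left(\hat I_{u}^{SR},\,I_{u}^{gt}\right)+\lambda_{o}\,L_{MS}\!\left(\hat I_{o}^{SR},\,I_{o}^{gt}\right)+\sum_{t=1}^{T}\lambda_{F}^{t}\left[L_{MS}\!\left(\hat I_{u}^{F,t},\,I^{gt}\right)+L_{MS}\!\left(\hat I_{o}^{F,t},\,I^{gt}\right)\right]\tag{1}
$$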
其中，Ltotal表示总损失函数值，λo、λu和分别表示每一部分损失函数值对应的权重，和分别表示第一子网络中的高层特征提取模块和耦合反馈模块对应的损失函数值，和分别表示第二子网络中的高层特征提取模块和耦合反馈模块对应的损失函数值，LMS表示基于图像的结构相似性指数确定的两个图像之间的损失值，和分别表示过曝高分辨率图像和过曝高分辨率参考图像，和分别表示欠曝高分辨率图像和欠曝高分辨率参考图像，和Igt分别表示第t个第二子网络对应的融合曝光高分辨率图像、第t个第一子网络对应的融合曝光高分辨率图像和融合曝光高分辨率参考图像，T表示耦合反馈模块的数量。where Ltotal denotes the total loss function value; λo, λu and the remaining coefficients denote the weights of the respective loss terms; one pair of terms denotes the loss values corresponding to the high-level feature extraction module and the coupling feedback modules in the first sub-network, and another pair denotes those in the second sub-network; LMS denotes the loss value between two images determined based on the structural similarity index of the images; the remaining symbols denote the over-exposed high-resolution image and the over-exposed high-resolution reference image, the under-exposed high-resolution image and the under-exposed high-resolution reference image, the fused-exposure high-resolution image corresponding to the t-th coupling feedback module of the second sub-network, the fused-exposure high-resolution image corresponding to the t-th coupling feedback module of the first sub-network, and the fused-exposure high-resolution reference image Igt, respectively; and T denotes the number of coupling feedback modules.
上述各参考图像是相应的神经网络输出图像对应的真值图像,是希望上述神经网络生成的图像尽可能接近的目标图像。上述表示两个图像(X和Y)之间的图像层面的结构相似性指数(structural similarity index,SSIM)的损失值LMS可通过如下方式确定:Each of the above reference images is the true value image corresponding to the corresponding neural network output image, and is a target image that is expected to be as close as possible to the image generated by the above neural network. The above loss value L MS representing the image-level structural similarity index (SSIM) between two images (X and Y) can be determined as follows:
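A common SSIM-based formulation consistent with this description is shown below as an assumption (the subscript MS may refer to a multi-scale variant; the single-scale form is given here, with C1 and C2 the usual stabilizing constants and μ, σ the means, variances and covariance of X and Y):

$$
L_{MS}(X,Y)=1-\mathrm{SSIM}(X,Y),\qquad
\mathrm{SSIM}(X,Y)=\frac{\left(2\mu_{X}\mu_{Y}+C_{1}\right)\left(2\sigma_{XY}+C_{2}\right)}{\left(\mu_{X}^{2}+\mu_{Y}^{2}+C_{1}\right)\left(\sigma_{X}^{2}+\sigma_{Y}^{2}+C_{2}\right)}
$$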
上述公式(1)中的损失函数可以被分为两个部分。前两个损失函数和用于保证高层特征提取模块SRB的有效性，最后一部分损失函数用来确保耦合反馈模块CFB的有效性。即，前两个损失函数是为了确保超分辨的效果，而后一部分的构建用于同时保证超分辨和多曝光图像融合的效果。同时，前两个损失函数对于最后一部分的损失函数来说也是重要的基础。整个神经网络通过最小化公式(1)中定义的损失函数，以端到端的方式进行训练。The loss function in the above formula (1) can be divided into two parts. The first two loss terms are used to ensure the effectiveness of the high-level feature extraction module SRB, and the last part of the loss is used to ensure the effectiveness of the coupling feedback modules CFB. That is, the first two terms ensure the super-resolution effect, while the last part is constructed to ensure the effects of super-resolution and multi-exposure image fusion simultaneously. Meanwhile, the first two terms also form an important foundation for the last part of the loss. The entire neural network is trained in an end-to-end fashion by minimizing the loss function defined in formula (1).
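A compact sketch of how such a hierarchical loss could be assembled is given below; ssim_loss stands for any SSIM-based image loss (for example 1 - SSIM), and the weight arguments and their default values are assumptions for illustration.

```python
def total_loss(ssim_loss, sr_u, sr_o, fused_u, fused_o, ref_u, ref_o, ref_fused,
               lam_u=1.0, lam_o=1.0, lam_f=1.0):
    """Hierarchical loss in the spirit of formula (1): two super-resolution terms
    plus one fusion term per coupling feedback module of each sub-network."""
    loss = lam_u * ssim_loss(sr_u, ref_u) + lam_o * ssim_loss(sr_o, ref_o)
    for f_u, f_o in zip(fused_u, fused_o):   # one pair of fused outputs per CFB
        loss = loss + lam_f * (ssim_loss(f_u, ref_fused) + ssim_loss(f_o, ref_fused))
    return loss
```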
需要说明的是,S140和S150的执行顺序不限定,可以先执行S140后执行S150,也可以先执行S150后执行S140,还可以并行执行S140和S150。It should be noted that the execution order of S140 and S150 is not limited. S140 may be executed first and then S150 may be executed, or S150 may be executed first and then S140 may be executed, or S140 and S150 may be executed in parallel.
本公开实施例的上述技术方案，通过将获得的欠曝低分辨率图像和过曝低分辨率图像，分别输入第一子网络和第二子网络中的初始特征提取模块，生成欠曝低层次特征和过曝低层次特征；将欠曝低层次特征和过曝低层次特征，分别输入第一子网络和第二子网络中的高层特征提取模块，生成欠曝高层次特征和过曝高层次特征；将欠曝低层次特征、欠曝高层次特征和过曝高层次特征，输入第一子网络中的耦合反馈模块，生成第一子网络对应的耦合反馈结果；将过曝低层次特征、过曝高层次特征和欠曝高层次特征，输入第二子网络中的耦合反馈模块，生成第二子网络对应的耦合反馈结果；基于欠曝低分辨率图像、欠曝高层次特征和第一子网络对应的耦合反馈结果，以及过曝低分辨率图像、过曝高层次特征和第二子网络对应的耦合反馈结果，调整神经网络的参数。实现了对耦合了多曝光融合技术和超分辨技术的神经网络的端到端的训练，得到了各部分模块参数更为精确的神经网络，该神经网络不仅能够简化拍摄图像的处理流程，提高图像处理速度，而且利用多曝光融合和超分辨之间的互补特性，进一步提高图像处理精确度。According to the above technical solutions of the embodiments of the present disclosure, the obtained under-exposed low-resolution image and over-exposed low-resolution image are input into the initial feature extraction modules of the first sub-network and the second sub-network, respectively, to generate the under-exposed low-level feature and the over-exposed low-level feature; the under-exposed low-level feature and the over-exposed low-level feature are input into the high-level feature extraction modules of the first sub-network and the second sub-network, respectively, to generate the under-exposed high-level feature and the over-exposed high-level feature; the under-exposed low-level feature, the under-exposed high-level feature and the over-exposed high-level feature are input into the coupling feedback module of the first sub-network to generate the coupling feedback result corresponding to the first sub-network; the over-exposed low-level feature, the over-exposed high-level feature and the under-exposed high-level feature are input into the coupling feedback module of the second sub-network to generate the coupling feedback result corresponding to the second sub-network; and the parameters of the neural network are adjusted based on the under-exposed low-resolution image, the under-exposed high-level feature and the coupling feedback result corresponding to the first sub-network, as well as the over-exposed low-resolution image, the over-exposed high-level feature and the coupling feedback result corresponding to the second sub-network. End-to-end training of a neural network that couples multi-exposure fusion and super-resolution is thereby realized, yielding a neural network whose module parameters are more accurate. Such a neural network not only simplifies the processing flow of captured images and improves the image processing speed, but also exploits the complementary characteristics between multi-exposure fusion and super-resolution to further improve image processing accuracy.
图6是本公开实施例提供的一种图像融合方法的流程图。该图像融合方法基于图1中的神经网络架构来实现,其中与上述各实施例相同或相应的内容的解释在此不再赘述。本公开实施例提供的图像融合方法可以由图像融合装置来执行,该装置可以由软件和/或硬件的方式实现,该装置可以集成在具有一定计算能力的电子设备中,例如手机、平板电脑、笔记本电脑、台式电脑、服务器或超级计算机等。参见图6,该图像融合方法包括:FIG. 6 is a flowchart of an image fusion method provided by an embodiment of the present disclosure. The image fusion method is implemented based on the neural network architecture in FIG. 1 , and the explanations of the same or corresponding contents as those of the above-mentioned embodiments are not repeated here. The image fusion method provided in the embodiments of the present disclosure may be performed by an image fusion apparatus, which may be implemented in software and/or hardware, and the apparatus may be integrated into an electronic device with certain computing capabilities, such as a mobile phone, a tablet computer, a Laptops, desktops, servers or supercomputers, etc. Referring to Figure 6, the image fusion method includes:
S210、获取一张欠曝低分辨率图像和一张过曝低分辨率图像。S210. Acquire an under-exposed low-resolution image and an over-exposed low-resolution image.
具体地,获取针对同一拍摄场景和相同拍摄对象的两个极端曝光的图像,即欠曝低分辨率图像和过曝低分辨率图像。Specifically, two extreme exposure images for the same shooting scene and the same shooting object, that is, an underexposed low-resolution image and an overexposed low-resolution image are acquired.
S220、将欠曝低分辨率图像和过曝低分辨率图像输入预先训练的神经网络,生成第一融合曝光高分辨率图像和第二融合曝光高分辨率图像。S220: Input the under-exposed low-resolution image and the over-exposed low-resolution image into a pre-trained neural network to generate a first fused-exposure high-resolution image and a second fused-exposure high-resolution image.
具体地，神经网络在应用过程中，只需输入欠曝低分辨率图像和过曝低分辨率图像，经过网络的处理，可输出两个图像，即欠曝低分辨率图像对应的第一融合曝光高分辨率图像过曝低分辨率图像对应的第二融合曝光高分辨率图像Specifically, during application of the neural network, it is only necessary to input the under-exposed low-resolution image and the over-exposed low-resolution image; after processing by the network, two images are output, namely the first fused-exposure high-resolution image corresponding to the under-exposed low-resolution image and the second fused-exposure high-resolution image corresponding to the over-exposed low-resolution image.
S230、基于第一融合曝光高分辨率图像和第二融合曝光高分辨率图像,生成图像融合结果。S230. Generate an image fusion result based on the first fusion exposure high-resolution image and the second fusion exposure high-resolution image.
具体地，第一融合曝光高分辨率图像和第二融合曝光高分辨率图像虽然都是超分辨和多曝光融合的图像，但是因其对应的输入图像不同，两个输出图像之间也有差异。为了进一步提高融合精度，本公开实施例中需要对和进行进一步的综合处理，获得最终的输出图像，即图像融合结果。Specifically, although the first fused-exposure high-resolution image and the second fused-exposure high-resolution image are both super-resolved, multi-exposure-fused images, they differ from each other because their corresponding input images are different. In order to further improve the fusion accuracy, in the embodiments of the present disclosure the two images need to be further processed jointly to obtain the final output image, i.e., the image fusion result.
在一些实施例中，S230可实现为：分别利用第一权重和第二权重，对第一融合曝光高分辨率图像和第二融合曝光高分辨率图像进行加权求和处理，生成图像融合结果。具体地，本实施例中对和进行加权处理，故需要预先确定两个加权权重，即第一权重和第二权重。这两个权重的取值与欠曝低分辨率图像和过曝低分辨率图像的曝光度和拍摄场景等有关。例如，可将0.5作为第一权重和第二权重的默认值。那么，可按照如下公式(2)来生成图像融合结果：In some embodiments, S230 may be implemented as: performing weighted summation on the first fused-exposure high-resolution image and the second fused-exposure high-resolution image using a first weight and a second weight respectively, to generate the image fusion result. Specifically, since the two images are weighted in this embodiment, two weights need to be determined in advance, namely the first weight and the second weight. The values of these two weights are related to the exposure of the under-exposed and over-exposed low-resolution images, the shooting scene, and so on. For example, 0.5 may be used as the default value for both the first weight and the second weight. Then, the image fusion result can be generated according to the following formula (2):
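Writing the first and second fused-exposure high-resolution images with placeholder symbols, formula (2) is the weighted sum:

$$
I_{out}=w_{u}\,\hat I_{u}^{F}+w_{o}\,\hat I_{o}^{F}\tag{2}
$$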
其中，Iout、wo和wu分别表示图像融合结果、第二权重和第一权重。where Iout, wo and wu denote the image fusion result, the second weight and the first weight, respectively.
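A minimal inference sketch combining S220 and S230 is given below, assuming the trained model returns the two fused-exposure high-resolution images for the two inputs; this interface is an assumption made for illustration.

```python
import torch

@torch.no_grad()
def fuse(model, i_u_lr: torch.Tensor, i_o_lr: torch.Tensor,
         w_u: float = 0.5, w_o: float = 0.5) -> torch.Tensor:
    """Runs a trained network on the under- and over-exposed inputs and combines
    the two fused-exposure outputs according to formula (2)."""
    model.eval()
    fused_u, fused_o = model(i_u_lr, i_o_lr)
    return w_u * fused_u + w_o * fused_o
```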
本公开实施例的上述技术方案，通过将拍摄所得的欠曝低分辨率图像和过曝低分辨率图像输入预先训练的神经网络，生成第一融合曝光高分辨率图像和第二融合曝光高分辨率图像；并基于第一融合曝光高分辨率图像和第二融合曝光高分辨率图像，生成图像融合结果。实现了利用耦合了多曝光融合技术和超分辨技术的神经网络来处理极端曝光的两张低分辨率图像，生成一张具有高分辨率HR和高动态范围HDR的图像融合结果，简化了拍摄图像的处理流程，提高了图像处理速度和处理精确度。According to the above technical solutions of the embodiments of the present disclosure, the captured under-exposed low-resolution image and over-exposed low-resolution image are input into a pre-trained neural network to generate a first fused-exposure high-resolution image and a second fused-exposure high-resolution image, and an image fusion result is generated based on the two. A neural network coupling multi-exposure fusion and super-resolution is thus used to process two extremely exposed low-resolution images and produce a single image fusion result with high resolution (HR) and high dynamic range (HDR), which simplifies the processing flow of captured images and improves image processing speed and accuracy.
图7是本公开实施例提供的一种神经网络训练装置的结构示意图。该神经网络包括网络结构相同的第一子网络和第二子网络,且任一子网络中包含初级特征提取模块、高层特征提取模块和耦合反馈模块。参见图7,该装置具体包括:FIG. 7 is a schematic structural diagram of a neural network training apparatus provided by an embodiment of the present disclosure. The neural network includes a first sub-network and a second sub-network with the same network structure, and any sub-network includes a primary feature extraction module, a high-level feature extraction module and a coupled feedback module. Referring to Figure 7, the device specifically includes:
图像获取单元710，用于获取一张欠曝低分辨率图像和一张过曝低分辨率图像；An image acquisition unit 710, configured to acquire one under-exposed low-resolution image and one over-exposed low-resolution image;
低层次特征生成单元720，用于将欠曝低分辨率图像和过曝低分辨率图像，分别输入第一子网络和第二子网络中的初始特征提取模块，生成欠曝低层次特征和过曝低层次特征；A low-level feature generation unit 720, configured to input the under-exposed low-resolution image and the over-exposed low-resolution image into the initial feature extraction modules of the first sub-network and the second sub-network, respectively, to generate an under-exposed low-level feature and an over-exposed low-level feature;
高层次特征生成单元730，用于将欠曝低层次特征和过曝低层次特征，分别输入第一子网络和第二子网络中的高层特征提取模块，生成欠曝高层次特征和过曝高层次特征；A high-level feature generation unit 730, configured to input the under-exposed low-level feature and the over-exposed low-level feature into the high-level feature extraction modules of the first sub-network and the second sub-network, respectively, to generate an under-exposed high-level feature and an over-exposed high-level feature;
第一耦合反馈结果生成单元740，用于将欠曝低层次特征、欠曝高层次特征和过曝高层次特征，输入第一子网络中的耦合反馈模块，生成第一子网络对应的耦合反馈结果；A first coupling feedback result generation unit 740, configured to input the under-exposed low-level feature, the under-exposed high-level feature and the over-exposed high-level feature into the coupling feedback module of the first sub-network, to generate a coupling feedback result corresponding to the first sub-network;
第二耦合反馈结果生成单元750，用于将过曝低层次特征、过曝高层次特征和欠曝高层次特征，输入第二子网络中的耦合反馈模块，生成第二子网络对应的耦合反馈结果；A second coupling feedback result generation unit 750, configured to input the over-exposed low-level feature, the over-exposed high-level feature and the under-exposed high-level feature into the coupling feedback module of the second sub-network, to generate a coupling feedback result corresponding to the second sub-network;
参数调整单元760，用于基于欠曝低分辨率图像、欠曝高层次特征和第一子网络对应的耦合反馈结果，以及过曝低分辨率图像、过曝高层次特征和第二子网络对应的耦合反馈结果，调整神经网络的参数。A parameter adjustment unit 760, configured to adjust the parameters of the neural network based on the under-exposed low-resolution image, the under-exposed high-level feature and the coupling feedback result corresponding to the first sub-network, as well as the over-exposed low-resolution image, the over-exposed high-level feature and the coupling feedback result corresponding to the second sub-network.
在一些实施例中,神经网络包含多个耦合反馈模块,且各耦合反馈模块不共享模型参数。In some embodiments, the neural network includes multiple coupled feedback modules, and each coupled feedback module does not share model parameters.
在一些实施例中,各耦合反馈模块串行处理;In some embodiments, the coupled feedback modules are processed serially;
相应地，第一耦合反馈结果生成单元740具体用于：Correspondingly, the first coupling feedback result generation unit 740 is specifically configured to:
将欠曝低层次特征、欠曝高层次特征和过曝高层次特征,输入第一子网络中的第一个耦合反馈模块,生成第一子网络对应的耦合反馈结果;Input the under-exposed low-level features, under-exposed high-level features and over-exposed high-level features into the first coupling feedback module in the first sub-network to generate the coupling feedback result corresponding to the first sub-network;
针对第一子网络中除了第一个耦合反馈模块之外的任一个后续耦合反馈模块，将欠曝低层次特征、该后续耦合反馈模块的前一相邻耦合反馈模块的耦合反馈结果、以及第二子网络中与该前一相邻耦合反馈模块对应的耦合反馈模块的耦合反馈结果，输入该后续耦合反馈模块，生成第一子网络对应的耦合反馈结果；For any subsequent coupling feedback module in the first sub-network other than the first coupling feedback module, input the under-exposed low-level feature, the coupling feedback result of the immediately preceding coupling feedback module of that subsequent module, and the coupling feedback result of the coupling feedback module in the second sub-network corresponding to that preceding module, into that subsequent module, to generate the coupling feedback result corresponding to the first sub-network;
相应地，第二耦合反馈结果生成单元750具体用于：Correspondingly, the second coupling feedback result generation unit 750 is specifically configured to:
将过曝低层次特征、过曝高层次特征和欠曝高层次特征,输入第二子网络中的第一个耦合反馈模块,生成第二子网络对应的耦合反馈结果;Input the over-exposed low-level features, over-exposed high-level features and under-exposed high-level features into the first coupling feedback module in the second sub-network to generate the coupling feedback result corresponding to the second sub-network;
针对第二子网络中除了第一个耦合反馈模块之外的任一个后续耦合反馈模块，将过曝低层次特征、该后续耦合反馈模块的前一相邻耦合反馈模块的耦合反馈结果、以及第一子网络中与该前一相邻耦合反馈模块对应的耦合反馈模块的耦合反馈结果，输入该后续耦合反馈模块，生成第二子网络对应的耦合反馈结果。For any subsequent coupling feedback module in the second sub-network other than the first coupling feedback module, input the over-exposed low-level feature, the coupling feedback result of the immediately preceding coupling feedback module of that subsequent module, and the coupling feedback result of the coupling feedback module in the first sub-network corresponding to that preceding module, into that subsequent module, to generate the coupling feedback result corresponding to the second sub-network.
在一些实施例中,耦合反馈模块中包含至少两个联结子模块和至少两个特征映射组,其中,每个特征映射组包含一个滤波器、一个反卷积层和一个卷积层;In some embodiments, the coupled feedback module includes at least two connection sub-modules and at least two feature map groups, wherein each feature map group includes a filter, a deconvolution layer, and a convolution layer;
第一个联结子模块位于各特征映射组之前;The first link submodule is located before each feature map group;
除了第一个联结子模块外的任一其他联结子模块位于任意两个相邻的特征映射组之间,且任意两个其他联结子模块位于不同位置。Any other linking submodules except the first linking submodule are located between any two adjacent feature map groups, and any two other linking submodules are located at different positions.
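For illustration only, the layout described above could be sketched as follows; the number of feature map groups, kernel sizes and channel counts are assumptions, and concatenation followed by a 1x1 convolution stands in for the connection sub-modules' fusion step.

```python
import torch
import torch.nn as nn

class CoupledFeedbackModule(nn.Module):
    """Sketch of the CFB layout: a connection sub-module in front of the feature map
    groups, each group being filter -> deconvolution -> convolution, with further
    connection sub-modules between adjacent groups."""
    def __init__(self, channels: int = 64, num_groups: int = 2, scale: int = 4):
        super().__init__()
        self.first_concat = nn.Conv2d(3 * channels, channels, 1)  # fuses the three inputs
        self.groups = nn.ModuleList()
        self.mid_concats = nn.ModuleList()
        for g in range(num_groups):
            self.groups.append(nn.ModuleDict({
                "filter": nn.Conv2d(channels, channels, 1),
                "deconv": nn.ConvTranspose2d(channels, channels, scale, stride=scale),
                "conv": nn.Conv2d(channels, channels, scale, stride=scale),
            }))
            if g < num_groups - 1:
                self.mid_concats.append(nn.Conv2d(2 * channels, channels, 1))

    def forward(self, low_level, feedback_same, feedback_other):
        x = self.first_concat(torch.cat([low_level, feedback_same, feedback_other], dim=1))
        prev = x
        for g, group in enumerate(self.groups):
            h = group["conv"](group["deconv"](group["filter"](prev)))
            if g < len(self.mid_concats):
                prev = self.mid_concats[g](torch.cat([x, h], dim=1))
            else:
                prev = h
        return prev
```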
在一些实施例中，参数调整单元760具体用于：In some embodiments, the parameter adjustment unit 760 is specifically configured to:
分别对欠曝低分辨率图像和过曝低分辨率图像进行上采样操作;Perform up-sampling operations on the under-exposed low-resolution image and the over-exposed low-resolution image respectively;
将欠曝高层次特征对应的图像和第一子网络对应的耦合反馈结果对应的图像，分别与上采样后的欠曝低分辨率图像相加，生成欠曝高分辨率图像和第一子网络对应的融合曝光高分辨率图像；Add the image corresponding to the under-exposed high-level feature and the images corresponding to the coupling feedback results of the first sub-network, respectively, to the up-sampled under-exposed low-resolution image, to generate the under-exposed high-resolution image and the fused-exposure high-resolution images corresponding to the first sub-network;
将过曝高层次特征对应的图像和第二子网络对应的耦合反馈结果对应的图像，分别与上采样后的过曝低分辨率图像相加，生成过曝高分辨率图像和第二子网络对应的融合曝光高分辨率图像；Add the image corresponding to the over-exposed high-level feature and the images corresponding to the coupling feedback results of the second sub-network, respectively, to the up-sampled over-exposed low-resolution image, to generate the over-exposed high-resolution image and the fused-exposure high-resolution images corresponding to the second sub-network;
基于欠曝高分辨率图像、第一子网络对应的融合曝光高分辨率图像、过曝高分辨率图像和第二子网络对应的融合曝光高分辨率图像,调整神经网络的参数。The parameters of the neural network are adjusted based on the under-exposed high-resolution image, the fused-exposure high-resolution image corresponding to the first sub-network, the over-exposed high-resolution image, and the fused-exposure high-resolution image corresponding to the second sub-network.
进一步地，参数调整单元760具体用于：Further, the parameter adjustment unit 760 is specifically configured to:
通过如下公式所示的损失函数,调整神经网络的参数:Adjust the parameters of the neural network through the loss function shown in the following formula:
其中，Ltotal表示总损失函数值，λo、λu和分别表示每一部分损失函数值对应的权重，和分别表示第一子网络中的高层特征提取模块和耦合反馈模块对应的损失函数值，和分别表示第二子网络中的高层特征提取模块和耦合反馈模块对应的损失函数值，LMS表示基于图像的结构相似性指数确定的两个图像之间的损失值，和分别表示过曝高分辨率图像和过曝高分辨率参考图像，和分别表示欠曝高分辨率图像和欠曝高分辨率参考图像，和Igt分别表示第t个第二子网络对应的融合曝光高分辨率图像、第t个第一子网络对应的融合曝光高分辨率图像和融合曝光高分辨率参考图像，T表示耦合反馈模块的数量。where Ltotal denotes the total loss function value; λo, λu and the remaining coefficients denote the weights of the respective loss terms; one pair of terms denotes the loss values corresponding to the high-level feature extraction module and the coupling feedback modules in the first sub-network, and another pair denotes those in the second sub-network; LMS denotes the loss value between two images determined based on the structural similarity index of the images; the remaining symbols denote the over-exposed high-resolution image and the over-exposed high-resolution reference image, the under-exposed high-resolution image and the under-exposed high-resolution reference image, the fused-exposure high-resolution image corresponding to the t-th coupling feedback module of the second sub-network, the fused-exposure high-resolution image corresponding to the t-th coupling feedback module of the first sub-network, and the fused-exposure high-resolution reference image Igt, respectively; and T denotes the number of coupling feedback modules.
通过本公开实施例提供的一种神经网络训练装置，实现了利用一个神经网络同时进行图像的多曝光融合处理和超分辨处理，不仅简化了拍摄图像的处理流程，提高了图像处理速度，而且利用多曝光融合和超分辨之间的互补特性，进一步提高了图像处理精确度。The neural network training apparatus provided by the embodiments of the present disclosure uses a single neural network to perform multi-exposure fusion processing and super-resolution processing of images simultaneously, which not only simplifies the processing flow of captured images and improves the image processing speed, but also exploits the complementary characteristics between multi-exposure fusion and super-resolution to further improve image processing accuracy.
本公开实施例所提供的神经网络训练装置可执行本公开任意实施例所提供的神经网络训练方法,具备执行方法相应的功能模块和有益效果。The neural network training apparatus provided by the embodiment of the present disclosure can execute the neural network training method provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the execution method.
图8是本公开实施例提供的一种图像融合装置的结构示意图。参见图8,该装置具体包括:FIG. 8 is a schematic structural diagram of an image fusion apparatus provided by an embodiment of the present disclosure. Referring to Figure 8, the device specifically includes:
图像获取单元810，用于获取一张欠曝低分辨率图像和一张过曝低分辨率图像；An image acquisition unit 810, configured to acquire one under-exposed low-resolution image and one over-exposed low-resolution image;
融合曝光高分辨率图像生成单元820，用于将欠曝低分辨率图像和过曝低分辨率图像输入预先训练的神经网络，生成第一融合曝光高分辨率图像和第二融合曝光高分辨率图像；其中，神经网络通过本公开任一实施例中的神经网络训练方法训练得到；A fused-exposure high-resolution image generation unit 820, configured to input the under-exposed low-resolution image and the over-exposed low-resolution image into a pre-trained neural network to generate a first fused-exposure high-resolution image and a second fused-exposure high-resolution image, where the neural network is obtained by training with the neural network training method in any embodiment of the present disclosure;
图像融合结果生成单元830，用于基于第一融合曝光高分辨率图像和第二融合曝光高分辨率图像，生成图像融合结果。An image fusion result generation unit 830, configured to generate an image fusion result based on the first fused-exposure high-resolution image and the second fused-exposure high-resolution image.
在一些实施例中，图像融合结果生成单元830具体用于：In some embodiments, the image fusion result generation unit 830 is specifically configured to:
分别利用第一权重和第二权重,对第一融合曝光高分辨率图像和第二融合曝光高分辨率图像进行加权求和处理,生成图像融合结果。Using the first weight and the second weight respectively, the first fusion exposure high-resolution image and the second fusion exposure high-resolution image are weighted and summed to generate an image fusion result.
通过本公开实施例提供的一种图像融合装置，实现了利用一个神经网络同时进行图像的多曝光融合处理和超分辨处理，不仅简化了拍摄图像的处理流程，提高了图像处理速度，而且利用多曝光融合和超分辨之间的互补特性，进一步提高了图像处理精确度。The image fusion apparatus provided by the embodiments of the present disclosure uses a single neural network to perform multi-exposure fusion processing and super-resolution processing of images simultaneously, which not only simplifies the processing flow of captured images and improves the image processing speed, but also exploits the complementary characteristics between multi-exposure fusion and super-resolution to further improve image processing accuracy.
本公开实施例所提供的图像融合装置可执行本公开任意实施例所提供的图像融合方法,具备执行方法相应的功能模块和有益效果。The image fusion apparatus provided by the embodiment of the present disclosure can execute the image fusion method provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the execution method.
值得注意的是，上述神经网络训练装置的实施例中，所包括的各个单元只是按照功能逻辑进行划分的，但并不局限于上述的划分，只要能够实现相应的功能即可；另外，各功能单元的具体名称也只是为了便于相互区分，并不用于限制本公开的保护范围。It is worth noting that, in the above embodiments of the neural network training apparatus, the included units are only divided according to functional logic, but the division is not limited to the above as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other and are not used to limit the protection scope of the present disclosure.
参见图9，本实施例提供了一种电子设备，其包括：一个或多个处理器920；存储装置910，用于存储一个或多个程序，当一个或多个程序被一个或多个处理器920执行，使得一个或多个处理器920实现本发明实施例所提供的神经网络训练方法，该神经网络包括网络结构相同的第一子网络和第二子网络，且任一子网络中包含初级特征提取模块、高层特征提取模块和耦合反馈模块；该方法包括：Referring to FIG. 9, this embodiment provides an electronic device, which includes: one or more processors 920; and a storage device 910 configured to store one or more programs which, when executed by the one or more processors 920, cause the one or more processors 920 to implement the neural network training method provided by the embodiments of the present invention, the neural network including a first sub-network and a second sub-network with the same network structure, each sub-network containing a primary feature extraction module, a high-level feature extraction module and a coupling feedback module; the method includes:
获取一张欠曝低分辨率图像和一张过曝低分辨率图像;Obtain an underexposed low-resolution image and an overexposed low-resolution image;
将欠曝低分辨率图像和过曝低分辨率图像,分别输入第一子网络和第二子网络中的初始特征提取模块,生成欠曝低层次特征和过曝低层次特征;Input the under-exposed low-resolution image and the over-exposed low-resolution image into the initial feature extraction modules in the first sub-network and the second sub-network, respectively, to generate under-exposed low-level features and over-exposed low-level features;
将欠曝低层次特征和过曝低层次特征,分别输入第一子网络和第二子网络中的高层特征提取模块,生成欠曝高层次特征和过曝高层次特征;Input the under-exposed low-level features and over-exposed low-level features into the high-level feature extraction modules in the first sub-network and the second sub-network, respectively, to generate under-exposed high-level features and over-exposed high-level features;
将欠曝低层次特征、欠曝高层次特征和过曝高层次特征,输入第一子网络中的耦合反馈模块,生成第一子网络对应的耦合反馈结果;Input the under-exposed low-level features, under-exposed high-level features and over-exposed high-level features into the coupling feedback module in the first sub-network to generate the coupling feedback result corresponding to the first sub-network;
将过曝低层次特征、过曝高层次特征和欠曝高层次特征,输入第二子网络中的耦合反馈模块,生成第二子网络对应的耦合反馈结果;Input the over-exposed low-level features, over-exposed high-level features and under-exposed high-level features into the coupling feedback module in the second sub-network to generate the coupling feedback result corresponding to the second sub-network;
基于欠曝低分辨率图像、欠曝高层次特征和第一子网络对应的耦合反馈结果,以及过曝低分辨率图像、过曝高层次特征和第二子网络对应的耦合反馈结果,调整神经网络的参数。Based on the coupled feedback results corresponding to the underexposed low-resolution image, the underexposed high-level feature and the first sub-network, and the coupled feedback results corresponding to the over-exposed low-resolution image, the overexposed high-level feature and the second sub-network, adjust the neural parameters of the network.
当然，本领域技术人员可以理解，处理器920还可以实现本发明任意实施例所提供的神经网络训练方法的技术方案。Of course, those skilled in the art can understand that the processor 920 may also implement the technical solutions of the neural network training method provided by any embodiment of the present invention.
图9显示的电子设备仅仅是一个示例,不应对本发明实施例的功能和使用范围带来任何限制。The electronic device shown in FIG. 9 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present invention.
如图9所示，该电子设备包括处理器920、存储装置910、输入装置930和输出装置940；电子设备中处理器920的数量可以是一个或多个，图9中以一个处理器920为例；电子设备中的处理器920、存储装置910、输入装置930和输出装置940可以通过总线或其他方式连接，图9中以通过总线950连接为例。As shown in FIG. 9, the electronic device includes a processor 920, a storage device 910, an input device 930 and an output device 940; the number of processors 920 in the electronic device may be one or more, and one processor 920 is taken as an example in FIG. 9; the processor 920, the storage device 910, the input device 930 and the output device 940 in the electronic device may be connected by a bus or in other ways, and connection through a bus 950 is taken as an example in FIG. 9.
存储装置910作为一种计算机可读存储介质，可用于存储软件程序、计算机可执行程序以及模块，如本发明实施例中的神经网络训练方法对应的程序指令/模块。As a computer-readable storage medium, the storage device 910 may be used to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the neural network training method in the embodiments of the present invention.
存储装置910可主要包括存储程序区和存储数据区，其中，存储程序区可存储操作系统、至少一个功能所需的应用程序；存储数据区可存储根据终端的使用所创建的数据等。此外，存储装置910可以包括高速随机存取存储器，还可以包括非易失性存储器，例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。在一些实例中，存储装置910可进一步包括相对于处理器920远程设置的存储器，这些远程存储器可以通过网络连接至电子设备。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。The storage device 910 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required by at least one function, and the data storage area may store data created according to the use of the terminal, and the like. In addition, the storage device 910 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices. In some examples, the storage device 910 may further include memories remotely disposed relative to the processor 920, and these remote memories may be connected to the electronic device through a network. Examples of the network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
输入装置930可用于接收输入的数字或字符信息，以及产生与电子设备的用户设置以及功能控制有关的键信号输入。输出装置940可包括显示屏等显示设备。The input device 930 may be configured to receive input digital or character information and to generate key signal inputs related to user settings and function control of the electronic device. The output device 940 may include a display device such as a display screen.
本发明实施例还提供了另一电子设备，其包括：一个或多个处理器；存储装置，用于存储一个或多个程序，当一个或多个程序被一个或多个处理器执行，使得一个或多个处理器实现本发明实施例所提供的图像融合方法，包括：The embodiments of the present invention further provide another electronic device, which includes: one or more processors; and a storage device configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image fusion method provided by the embodiments of the present invention, including:
获取一张欠曝低分辨率图像和一张过曝低分辨率图像;Obtain an underexposed low-resolution image and an overexposed low-resolution image;
将欠曝低分辨率图像和过曝低分辨率图像输入预先训练的神经网络，生成第一融合曝光高分辨率图像和第二融合曝光高分辨率图像；其中，神经网络通过本公开任一实施例中的神经网络训练方法训练得到；Input the under-exposed low-resolution image and the over-exposed low-resolution image into a pre-trained neural network to generate a first fused-exposure high-resolution image and a second fused-exposure high-resolution image, where the neural network is obtained by training with the neural network training method in any embodiment of the present disclosure;
基于第一融合曝光高分辨率图像和第二融合曝光高分辨率图像,生成图像融合结果。An image fusion result is generated based on the first fused exposure high-resolution image and the second fused exposure high-resolution image.
当然,本领域技术人员可以理解,处理器还可以实现本发明任意实施例所提供的图像融合方法的技术方案。该电子设备的硬件结构以及功能可参见图9中的内容解释。Of course, those skilled in the art can understand that the processor can also implement the technical solution of the image fusion method provided by any embodiment of the present invention. The hardware structure and function of the electronic device can be explained with reference to the content in FIG. 9 .
本公开实施例还提供一种包含计算机可执行指令的存储介质,该计算机可执行指令在由计算机处理器执行时用于执行一种神经网络训练方法,该神经网络包括网络结构相同的第一子网络和第二子网络,且任一子网络中包含初级特征提取模块、高层特征提取模块和耦合反馈模块;该方法包括:Embodiments of the present disclosure also provide a storage medium containing computer-executable instructions, the computer-executable instructions, when executed by a computer processor, are used to execute a neural network training method, where the neural network includes a first sub-system with the same network structure network and a second sub-network, and any sub-network includes a primary feature extraction module, a high-level feature extraction module and a coupled feedback module; the method includes:
获取一张欠曝低分辨率图像和一张过曝低分辨率图像;Obtain an underexposed low-resolution image and an overexposed low-resolution image;
将欠曝低分辨率图像和过曝低分辨率图像,分别输入第一子网络和第二子网络中的初始特征提取模块,生成欠曝低层次特征和过曝低层次特征;Input the under-exposed low-resolution image and the over-exposed low-resolution image into the initial feature extraction modules in the first sub-network and the second sub-network, respectively, to generate under-exposed low-level features and over-exposed low-level features;
将欠曝低层次特征和过曝低层次特征,分别输入第一子网络和第二子网络中的高层特征提取模块,生成欠曝高层次特征和过曝高层次特征;Input the under-exposed low-level features and over-exposed low-level features into the high-level feature extraction modules in the first sub-network and the second sub-network, respectively, to generate under-exposed high-level features and over-exposed high-level features;
将欠曝低层次特征、欠曝高层次特征和过曝高层次特征,输入第一子网络中的耦合反馈模块,生成第一子网络对应的耦合反馈结果;Input the under-exposed low-level features, under-exposed high-level features and over-exposed high-level features into the coupling feedback module in the first sub-network to generate the coupling feedback result corresponding to the first sub-network;
将过曝低层次特征、过曝高层次特征和欠曝高层次特征,输入第二子网络中的耦合反馈模块,生成第二子网络对应的耦合反馈结果;Input the over-exposed low-level features, over-exposed high-level features and under-exposed high-level features into the coupling feedback module in the second sub-network to generate the coupling feedback result corresponding to the second sub-network;
基于欠曝低分辨率图像、欠曝高层次特征和第一子网络对应的耦合反馈结果,以及过曝低分辨率图像、过曝高层次特征和第二子网络对应的耦合反馈结果,调整神经网络的参数。Based on the coupled feedback results corresponding to the underexposed low-resolution image, the underexposed high-level feature and the first sub-network, and the coupled feedback results corresponding to the over-exposed low-resolution image, the overexposed high-level feature and the second sub-network, adjust the neural parameters of the network.
当然，本发明实施例所提供的一种包含计算机可执行指令的存储介质，其计算机可执行指令不限于如上的方法操作，还可以执行本发明任意实施例所提供的神经网络训练方法中的相关操作。Of course, in the storage medium containing computer-executable instructions provided by the embodiments of the present invention, the computer-executable instructions are not limited to the above method operations, and may also perform relevant operations in the neural network training method provided by any embodiment of the present invention.
本发明实施例的计算机存储介质,可以采用一个或多个计算机可读的介质的任意组合。计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本文件中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。The computer storage medium of the embodiments of the present invention may adopt any combination of one or more computer-readable mediums. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above. More specific examples (a non-exhaustive list) of computer readable storage media include: electrical connections having one or more wires, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), Erasable programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing. In this document, a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。A computer-readable signal medium may include a propagated data signal in baseband or as part of a carrier wave, with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing. A computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device .
计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括——但不限于无线、电线、光缆、RF等等,或者上述的任意合适的组合。Program code embodied on a computer readable medium may be transmitted using any suitable medium, including - but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
可以以一种或多种程序设计语言或其组合来编写用于执行本发明操作的计算机程序代码,程序设计语言包括面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。Computer program code for carrying out operations of the present invention may be written in one or more programming languages, including object-oriented programming languages—such as Java, Smalltalk, C++, but also conventional procedural languages, or a combination thereof. Programming Language - such as the "C" language or similar programming language. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (eg, using an Internet service provider through Internet connection).
本发明实施例还提供了另一种计算机可读存储介质,计算机可执行指令在由计算机处理器执行时用于执行一种图像融合方法,包括:The embodiment of the present invention also provides another computer-readable storage medium, and the computer-executable instructions are used to execute an image fusion method when executed by a computer processor, including:
获取一张欠曝低分辨率图像和一张过曝低分辨率图像;Obtain an underexposed low-resolution image and an overexposed low-resolution image;
将欠曝低分辨率图像和过曝低分辨率图像输入预先训练的神经网络，生成第一融合曝光高分辨率图像和第二融合曝光高分辨率图像；其中，神经网络通过本公开任一实施例中的神经网络训练方法训练得到；Input the under-exposed low-resolution image and the over-exposed low-resolution image into a pre-trained neural network to generate a first fused-exposure high-resolution image and a second fused-exposure high-resolution image, where the neural network is obtained by training with the neural network training method in any embodiment of the present disclosure;
基于第一融合曝光高分辨率图像和第二融合曝光高分辨率图像,生成图像融合结果。An image fusion result is generated based on the first fused exposure high-resolution image and the second fused exposure high-resolution image.
当然,本发明实施例所提供的一种包含计算机可执行指令的存储介质,其计算机可执行指令不限于如上的方法操作,还可以执行本发明任意实施例所提供的图像融合方法中的相关操作。对存储介质的介绍可参见上述实施例中的内容解释。Of course, a storage medium containing computer-executable instructions provided by an embodiment of the present invention, the computer-executable instructions of which are not limited to the above method operations, and can also perform related operations in the image fusion method provided by any embodiment of the present invention . For the introduction of the storage medium, please refer to the content explanation in the above-mentioned embodiment.
需要说明的是,本公开所用术语仅为了描述特定实施例,而非限制本申请范围。如本公开说明书和权利要求书中所示,除非上下文明确提示例外情形,“一”、“一个”、“一种”和/或“该”等词并非特指单数,也可包括复数。术语“和/或”包括一个或多个相关所列条目的任何一个和所有组合。诸如“第一”和“第二”等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法或者设备中还存在另外的相同要素。It should be noted that the terminology used in the present disclosure is only for describing specific embodiments, rather than limiting the scope of the present application. As shown in this disclosure and the claims, unless the context clearly dictates otherwise, the words "a", "an", "an" and/or "the" are not intended to be specific in the singular and may include the plural. The term "and/or" includes any and all combinations of one or more of the associated listed items. Relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply that any such entity or operation exists between them. The actual relationship or sequence. The terms "comprising", "comprising" or any other variation thereof are intended to encompass non-exclusive inclusion such that a process, method or device comprising a list of elements includes not only those elements, but also other elements not expressly listed, Alternatively, elements inherent to such a process, method or apparatus may also be included. Without further limitation, an element qualified by the phrase "comprising a..." does not preclude the presence of additional identical elements in the process, method, or device that includes the element.
以上所述仅是本公开的具体实施方式,使本领域技术人员能够理解或实现本公开。对这些实施例的多种修改对本领域的技术人员来说将是显而易见的,本文中所定义的一般原理可以在不脱离本公开的精神或范围的情况下,在其它实施例中实现。因此,本公开将不会被限制于本文所述的这些实施例,而是要符合与本文所公开的原理和新颖特点相一致的最宽的范围。The above descriptions are only specific embodiments of the present disclosure, so that those skilled in the art can understand or implement the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure is not intended to be limited to the embodiments described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (12)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010986245.1A CN112184550B (en) | 2020-09-18 | 2020-09-18 | Neural network training method, image fusion method, apparatus, equipment and medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010986245.1A CN112184550B (en) | 2020-09-18 | 2020-09-18 | Neural network training method, image fusion method, apparatus, equipment and medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN112184550A true CN112184550A (en) | 2021-01-05 |
| CN112184550B CN112184550B (en) | 2022-11-01 |
Family
ID=73921653
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010986245.1A Active CN112184550B (en) | 2020-09-18 | 2020-09-18 | Neural network training method, image fusion method, apparatus, equipment and medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN112184550B (en) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115100043A (en) * | 2022-08-25 | 2022-09-23 | 天津大学 | A Deep Learning-Based HDR Image Reconstruction Method |
| CN115103118A (en) * | 2022-06-20 | 2022-09-23 | 北京航空航天大学 | High dynamic range image generation method, device, equipment and readable storage medium |
| CN116228578A (en) * | 2023-02-22 | 2023-06-06 | 北京航空航天大学 | An exposure level-guided low-light and low-resolution image quality optimization method |
| CN119741737A (en) * | 2025-03-05 | 2025-04-01 | 南方海洋科学与工程广东省实验室(广州) | Fish identification method, system, terminal and computer-readable storage medium |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108805836A (en) * | 2018-05-31 | 2018-11-13 | 大连理工大学 | Method for correcting image based on the reciprocating HDR transformation of depth |
| US20190130545A1 (en) * | 2017-11-01 | 2019-05-02 | Google Llc | Digital image auto exposure adjustment |
| CN110728633A (en) * | 2019-09-06 | 2020-01-24 | 上海交通大学 | Method and device for constructing multi-exposure high dynamic range inverse tone mapping model |
| US20200111194A1 (en) * | 2018-10-08 | 2020-04-09 | Rensselaer Polytechnic Institute | Ct super-resolution gan constrained by the identical, residual and cycle learning ensemble (gan-circle) |
| CN111246091A (en) * | 2020-01-16 | 2020-06-05 | 北京迈格威科技有限公司 | Dynamic automatic exposure control method and device and electronic equipment |
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20190130545A1 (en) * | 2017-11-01 | 2019-05-02 | Google Llc | Digital image auto exposure adjustment |
| CN108805836A (en) * | 2018-05-31 | 2018-11-13 | 大连理工大学 | Method for correcting image based on the reciprocating HDR transformation of depth |
| US20200111194A1 (en) * | 2018-10-08 | 2020-04-09 | Rensselaer Polytechnic Institute | Ct super-resolution gan constrained by the identical, residual and cycle learning ensemble (gan-circle) |
| CN110728633A (en) * | 2019-09-06 | 2020-01-24 | 上海交通大学 | Method and device for constructing multi-exposure high dynamic range inverse tone mapping model |
| CN111246091A (en) * | 2020-01-16 | 2020-06-05 | 北京迈格威科技有限公司 | Dynamic automatic exposure control method and device and electronic equipment |
Non-Patent Citations (2)
| Title |
|---|
| 史振威等: "图像超分辨重建算法综述", 《数据采集与处理》 * |
| 陈文等: "基于卷积神经网络的LDR图像重建HDR图像的方法研究", 《包装工程》 * |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115103118A (en) * | 2022-06-20 | 2022-09-23 | 北京航空航天大学 | High dynamic range image generation method, device, equipment and readable storage medium |
| CN115103118B (en) * | 2022-06-20 | 2023-04-07 | 北京航空航天大学 | High dynamic range image generation method, device, equipment and readable storage medium |
| CN115100043A (en) * | 2022-08-25 | 2022-09-23 | 天津大学 | A Deep Learning-Based HDR Image Reconstruction Method |
| CN116228578A (en) * | 2023-02-22 | 2023-06-06 | 北京航空航天大学 | An exposure level-guided low-light and low-resolution image quality optimization method |
| CN116228578B (en) * | 2023-02-22 | 2025-12-09 | 北京航空航天大学 | Low-light low-resolution image quality optimization method guided by exposure level |
| CN119741737A (en) * | 2025-03-05 | 2025-04-01 | 南方海洋科学与工程广东省实验室(广州) | Fish identification method, system, terminal and computer-readable storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| CN112184550B (en) | 2022-11-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN112184550B (en) | Neural network training method, image fusion method, apparatus, equipment and medium | |
| CN112602088B (en) | Methods, systems and computer-readable media for improving the quality of low-light images | |
| CN112488923B (en) | Image super-resolution reconstruction method and device, storage medium and electronic equipment | |
| TWI769725B (en) | Image processing method, electronic device and computer readable storage medium | |
| RU2706891C1 (en) | Method of generating a common loss function for training a convolutional neural network for converting an image into an image with drawn parts and a system for converting an image into an image with drawn parts | |
| CN110428362A (en) | Image HDR conversion method and device, storage medium | |
| Xu et al. | Exploiting raw images for real-scene super-resolution | |
| CN111951165A (en) | Image processing method, apparatus, computer device, and computer-readable storage medium | |
| CN113379600A (en) | Short video super-resolution conversion method, device and medium based on deep learning | |
| CN116152128A (en) | High dynamic range multi-exposure image fusion model and method based on attention mechanism | |
| CN113658050A (en) | Image denoising method, denoising device, mobile terminal and storage medium | |
| CN111372006A (en) | A mobile terminal-oriented high dynamic range imaging method and system | |
| CN116934615A (en) | Image restoration method and device, electronic equipment and storage medium | |
| CN115471417B (en) | Image noise reduction processing method, device, equipment, storage medium and program product | |
| CN112651911A (en) | High dynamic range imaging generation method based on polarization image | |
| CN112150363B (en) | Convolutional neural network-based image night scene processing method, computing module for operating method and readable storage medium | |
| CN115115518B (en) | Method, device, equipment, medium and product for generating high dynamic range image | |
| US20250252537A1 (en) | Enhancing images from a mobile device to give a professional camera effect | |
| CN115103118B (en) | High dynamic range image generation method, device, equipment and readable storage medium | |
| CN113792862B (en) | Design method for generating countermeasure network based on correction chart of cascade attention mechanism | |
| CN115829878A (en) | A method and device for image enhancement | |
| Huang et al. | A two-stage HDR reconstruction pipeline for extreme dark-light RGGB images | |
| CN115937044A (en) | Image processing method, image processing apparatus, storage medium, and electronic device | |
| Que et al. | Residual dense U‐Net for abnormal exposure restoration from single images | |
| US20250131528A1 (en) | Dynamic resizing of audiovisual data |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |