CN119316737A - Image processing method, device, electronic device and computer readable storage medium

Info

Publication number: CN119316737A
Application number: CN202310873802.2A
Authority: CN (China)
Prior art keywords: image, motion, pixel, moving, area
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN119316737B
Inventor: 权威
Current and original assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority: CN202310873802.2A, filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Events: application filed; publication of CN119316737A; application granted; publication of CN119316737B

Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04N: Pictorial communication, e.g. television
    • H04N 25/00: Circuitry of solid-state image sensors [SSIS]; control thereof
    • H04N 25/50: Control of the SSIS exposure
    • H04N 25/57: Control of the dynamic range
    • H04N 25/58: Control of the dynamic range involving two or more exposures
    • H04N 5/00: Details of television systems
    • H04N 5/14: Picture signal circuitry for video frequency region
    • H04N 5/144: Movement detection
    • H04N 5/222: Studio circuitry; studio devices; studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
    • H04N 5/265: Mixing


Abstract

The present application relates to an image processing method, an apparatus, an electronic device, a storage medium, and a computer program product. The method comprises: performing motion-region identification on at least two captured adjacent images to determine the motion regions of at least one group of moving objects, where the at least two adjacent images include a first image whose exposure parameter is greater than a preset exposure threshold and a second image whose exposure parameter is less than the preset exposure threshold; for the motion regions of each group of moving objects, determining first pixels whose pixel values are greater than a preset pixel threshold in the motion region of the first image, and determining a reference region from among the motion regions of that moving object based on statistical parameters of the first pixels in the motion region of the first image; and fusing the at least two adjacent images to obtain a target image, where the motion regions of each group of moving objects are fused with the corresponding reference region as the reference. The method can improve the image quality and definition of the fused image.

Description

Image processing method, apparatus, electronic device, and computer-readable storage medium
Technical Field
The present application relates to the field of image technology, and in particular, to an image processing method, an image processing device, an electronic device, and a computer readable storage medium.
Background
Exposure fusion is a technique for generating high dynamic range (High Dynamic Range, HDR) images by combining multiple images captured at different exposure levels, fusing their luminance and detail information to produce an image with a wider luminance range.
Conventional image processing methods typically use the same camera to capture a series of images at different exposure levels, usually including a low-exposure, a standard-exposure, and a high-exposure image, and then fuse the multiple images to obtain a high dynamic range image. However, because moving objects may be present in the scene, one of the exposure images must be designated as the reference for exposure fusion when the images are captured.
In conventional image processing methods, however, the designated reference image is generally fixed, which can result in a low signal-to-noise ratio in the fused image and a decrease in overall image quality.
Disclosure of Invention
The embodiments of the application provide an image processing method, an image processing apparatus, an electronic device, a computer-readable storage medium, and a computer program product, which can determine the reference region for motion-region fusion more accurately, improve the signal-to-noise ratio of the fused image, and thereby improve the image quality and definition of the fused image.
In a first aspect, the present application provides an image processing method. The method comprises the following steps:
performing motion-region identification on at least two captured adjacent images and determining the motion regions of at least one group of moving objects, where the at least two adjacent images include a first image whose exposure parameter is greater than a preset exposure threshold and a second image whose exposure parameter is less than the preset exposure threshold, and the motion regions of each group of moving objects include the motion region of the moving object in each image;
for the motion regions of each group of moving objects, determining first pixels whose pixel values are greater than a preset pixel threshold in the motion region of the first image, and determining a reference region from among the motion regions of the moving object based on statistical parameters of the first pixels in the motion region of the first image; and
fusing the at least two adjacent images to obtain a target image, where the motion regions of each group of moving objects are fused with the corresponding reference region as the reference.
In a second aspect, the present application also provides an image processing apparatus. The apparatus comprises:
a motion-region identification module, configured to perform motion-region identification on at least two captured adjacent images and determine the motion regions of at least one group of moving objects, where the at least two adjacent images include a first image whose exposure parameter is greater than a preset exposure threshold and a second image whose exposure parameter is less than the preset exposure threshold, and the motion regions of each group of moving objects include the motion region of the moving object in each image;
a reference-region determination module, configured to determine, for the motion regions of each group of moving objects, first pixels in the motion region of the first image whose pixel values are greater than a preset pixel threshold, and determine a reference region from among the motion regions of the moving object based on statistical parameters of the first pixels in the motion region of the first image; and
a fusion module, configured to fuse the at least two adjacent images to obtain a target image, where the motion regions of each group of moving objects are fused with the corresponding reference region as the reference.
In a third aspect, the present application further provides an electronic device. The electronic device comprises a memory and a processor, the memory stores a computer program, and the processor executes the computer program to implement the following steps:
performing motion-region identification on at least two captured adjacent images and determining the motion regions of at least one group of moving objects, where the at least two adjacent images include a first image whose exposure parameter is greater than a preset exposure threshold and a second image whose exposure parameter is less than the preset exposure threshold, and the motion regions of each group of moving objects include the motion region of the moving object in each image;
for the motion regions of each group of moving objects, determining first pixels whose pixel values are greater than a preset pixel threshold in the motion region of the first image, and determining a reference region from among the motion regions of the moving object based on statistical parameters of the first pixels in the motion region of the first image; and
fusing the at least two adjacent images to obtain a target image, where the motion regions of each group of moving objects are fused with the corresponding reference region as the reference.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the following steps:
performing motion-region identification on at least two captured adjacent images and determining the motion regions of at least one group of moving objects, where the at least two adjacent images include a first image whose exposure parameter is greater than a preset exposure threshold and a second image whose exposure parameter is less than the preset exposure threshold, and the motion regions of each group of moving objects include the motion region of the moving object in each image;
for the motion regions of each group of moving objects, determining first pixels whose pixel values are greater than a preset pixel threshold in the motion region of the first image, and determining a reference region from among the motion regions of the moving object based on statistical parameters of the first pixels in the motion region of the first image; and
fusing the at least two adjacent images to obtain a target image, where the motion regions of each group of moving objects are fused with the corresponding reference region as the reference.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the following steps:
performing motion-region identification on at least two captured adjacent images and determining the motion regions of at least one group of moving objects, where the at least two adjacent images include a first image whose exposure parameter is greater than a preset exposure threshold and a second image whose exposure parameter is less than the preset exposure threshold, and the motion regions of each group of moving objects include the motion region of the moving object in each image;
for the motion regions of each group of moving objects, determining first pixels whose pixel values are greater than a preset pixel threshold in the motion region of the first image, and determining a reference region from among the motion regions of the moving object based on statistical parameters of the first pixels in the motion region of the first image; and
fusing the at least two adjacent images to obtain a target image, where the motion regions of each group of moving objects are fused with the corresponding reference region as the reference.
With the image processing method, apparatus, electronic device, computer-readable storage medium, and computer program product, motion-region identification is performed on at least two captured adjacent images to determine the motion regions of at least one group of moving objects, where the at least two adjacent images include a first image whose exposure parameter is greater than a preset exposure threshold and a second image whose exposure parameter is less than the preset exposure threshold, and the motion regions of each group of moving objects include the motion region of the moving object in each image. For the motion regions of each group of moving objects, first pixels whose pixel values are greater than a preset pixel threshold are determined in the motion region of the first image, and the reference region of that group is determined from among the motion regions of the moving object based on statistical parameters of the first pixels in the motion region of the first image. The motion regions of each group are then fused with the corresponding reference region as the reference, so that a target image with a higher signal-to-noise ratio, better image quality, and higher definition is obtained.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the application, and that other drawings can be obtained from these drawings without inventive effort by a person skilled in the art.
FIG. 1 is a flow chart of an image processing method in one embodiment;
FIG. 2 is a schematic diagram of a clustering process performed on a motion mask region in one embodiment;
FIG. 3 is a block diagram showing the structure of an image processing apparatus in one embodiment;
FIG. 4 is an internal structure diagram of an electronic device in one embodiment.
Detailed Description
The present application will be described in further detail below with reference to the drawings and embodiments, in order to make the objects, technical solutions, and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In one embodiment, as shown in FIG. 1, an image processing method is provided. The method is applied to an electronic device, which may be a terminal or a server; it is understood that the method may also be applied to a system comprising a terminal and a server and implemented through interaction between the terminal and the server. The terminal may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, an Internet-of-Things device, or a portable wearable device; the Internet-of-Things device may be a smart speaker, a smart television, a smart air conditioner, a smart vehicle-mounted device, a smart automobile, or the like, and the portable wearable device may be a smart watch, a smart bracelet, a headset, or the like. The server may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
In this embodiment, the image processing method includes the steps of:
Step S102: perform motion-region identification on at least two captured adjacent images to determine the motion regions of at least one group of moving objects, where the at least two adjacent images include a first image whose exposure parameter is greater than a preset exposure threshold and a second image whose exposure parameter is less than the preset exposure threshold, and the motion regions of each group of moving objects include the motion region of the moving object in each image.
The moving object may be a person, an animal, or another object. The exposure parameters may include exposure duration, aperture, and so on. The preset exposure threshold may be set as required.
It can be understood that the larger the exposure parameter, the brighter the image. The exposure parameter of the first image is greater than the preset exposure threshold, i.e. the first image is a long-exposure image; the exposure parameter of the second image is less than the preset exposure threshold, i.e. the second image is a short-exposure image. There may be one or more first images, and one or more second images.
The motion regions of each group of moving objects include the motion regions of the same moving object in each image.
Optionally, the electronic device comprises a camera module; the at least two adjacent images are obtained through the camera module, the at least two adjacent images are registered to obtain at least two registered adjacent images, and motion-region identification is performed on the registered images to determine the motion regions of at least one group of moving objects.
The electronic device performs global registration on the at least two adjacent images. It will be appreciated that global registration substantially aligns the globally moving portions of the at least two adjacent images, ensuring that only local motion remains after registration.
In other alternative embodiments, the electronic device may also locally register at least two adjacent images.
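The registration step can be illustrated with a short sketch. The patent does not name a registration algorithm, so the ECC-based affine alignment from OpenCV used below, along with all function and parameter names, is an illustrative assumption rather than the patented method:

```python
# Illustrative sketch only: the patent does not specify a registration algorithm.
# ECC-based affine alignment (OpenCV) is assumed here as one concrete choice.
import cv2
import numpy as np

def globally_register(ref_gray: np.ndarray, mov_gray: np.ndarray) -> np.ndarray:
    """Globally align mov_gray to ref_gray and return the warped image."""
    ref32 = ref_gray.astype(np.float32)
    mov32 = mov_gray.astype(np.float32)
    warp = np.eye(2, 3, dtype=np.float32)  # affine warp, identity initialization
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    _, warp = cv2.findTransformECC(ref32, mov32, warp, cv2.MOTION_AFFINE,
                                   criteria, None, 5)
    h, w = ref_gray.shape
    return cv2.warpAffine(mov_gray, warp, (w, h), flags=cv2.INTER_LINEAR)
```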
Step S104: for the motion regions of each group of moving objects, determine first pixels in the motion region of the first image whose pixel values are greater than a preset pixel threshold, and determine a reference region from among the motion regions of the moving object based on statistical parameters of the first pixels in the motion region of the first image.
The preset pixel threshold can be set as required. If a first pixel whose pixel value is greater than the preset pixel threshold exists in the motion region of the first image, that pixel has high brightness and is an overexposed point; a pixel whose value is less than or equal to the preset pixel threshold has lower brightness.
The statistical parameter is a statistically derived parameter that characterizes the overexposure of the motion region of the first image. The larger the statistical parameter, the larger the overexposure proportion of the motion region of the first image.
Alternatively, the statistical parameter may be the number of first pixels in the motion region of the first image, the area occupied by the first pixels in the motion region of the first image, the ratio of the number of first pixels to the total number of pixels in the motion region of the first image, or the like, without limitation.
The reference region is the region, among a group of motion regions of a moving object, that serves as the reference during fusion.
Optionally, for the motion regions of each group of moving objects, the electronic device detects the pixel value of each pixel in the moving object's motion region in each first image, determines the first pixels whose pixel values are greater than the preset pixel threshold, computes the statistical parameter over the first pixels of that motion region, and determines the reference region from among the motion regions of the moving object based on the statistical parameter of the first pixels in the motion region of the first image.
Optionally, for the motion regions of each group of moving objects: if the statistical parameter in the motion region of each first image is greater than a preset statistical threshold, the motion region of the moving object in the second image is determined as the reference region; if the statistical parameter in the motion region of the first image is less than or equal to the preset statistical threshold, the motion region of the moving object in the first image is determined as the reference region.
Step S106: fuse the at least two adjacent images to obtain a target image, where the motion regions of each group of moving objects are fused with the corresponding reference region as the reference.
Optionally, for the motion regions of each group of moving objects, each motion region of the moving object is fused against the corresponding reference region to obtain a target motion region; the non-motion regions in the at least two adjacent images are fused to obtain a target non-motion region; and the target motion region and the target non-motion region are stitched together to obtain the target image.
The target image may be a high dynamic range (HDR) image.
Optionally, the electronic device forms a reference image from the reference regions of the motion regions of each group of moving objects together with the non-motion region, and fuses the at least two adjacent images with this reference image as the reference to obtain the target image.
According to this image processing method, the electronic device performs motion-region identification on at least two captured adjacent images and determines the motion regions of at least one group of moving objects, where the at least two adjacent images include a first image whose exposure parameter is greater than a preset exposure threshold and a second image whose exposure parameter is less than the preset exposure threshold, and the motion regions of each group of moving objects include the motion region of the moving object in each image. For the motion regions of each group of moving objects, first pixels whose pixel values are greater than a preset pixel threshold are determined in the motion region of the first image, and, based on statistical parameters of the first pixels in the motion region of the first image, the reference region of that group can be determined accurately from among the motion regions of the moving object. The motion regions of each group of moving objects are then fused with the corresponding reference region as the reference, so that a target image with a higher signal-to-noise ratio, better image quality, and higher definition is obtained by fusion.
With this image processing method, merging motion regions and determining the reference region from the motion regions of the moving object more accurately based on the statistical parameters of the first pixels in the motion region of the first image reduces, as far as possible, the image-quality degradation caused by referencing a dark frame, guarantees image quality to a certain extent, and suppresses ghosting at the same time.
In one embodiment, performing motion-region identification on at least two adjacent images and determining the motion regions of at least one group of moving objects includes: performing motion-pixel identification on the at least two adjacent images and determining the motion mask regions in each image; and clustering the motion mask regions in each image to obtain the motion regions of at least one group of moving objects.
A motion mask region (mask) is a region used to control image processing by masking the processed image, fully or partially, with a selected image, graphic, or object. The at least two adjacent images may be RAW images. A RAW image is the raw image data acquired by the image sensor; its values are proportional to the light intensity, so it is said to be linear.
Optionally, the electronic device performs motion-pixel identification on the at least two adjacent images using a preset motion-pixel detection algorithm, determines the motion pixels in each image, and takes the area where the motion pixels are located as a motion mask region.
Optionally, performing motion-pixel identification on the at least two adjacent images and determining the motion mask regions in each image includes: performing motion-pixel identification on the at least two adjacent images and determining the pixels where moving objects are located in each image; and determining the motion mask regions in each image based on the pixels where the moving objects are located.
The motion pixels are the pixels where moving objects are located in an image.
Optionally, the electronic device detects the image difference (temporal diff) between pixels at corresponding positions of the at least two adjacent images; if the image difference satisfies a preset image-difference condition, the pixel is a pixel where a moving object is located, and if it does not, the pixel is a non-motion-region pixel.
Optionally, the preset image-difference condition may be temporal_diff > c * σ, where c is a coefficient of the motion-judgment model and σ is the noise standard deviation obtained from the measured noise model. The motion-judgment model and the noise model can be obtained by training in advance.
Optionally, the electronic device performs noise-level estimation on the at least two adjacent images using a noise model, determines the non-noise pixels, performs motion-pixel identification on the non-noise pixels, and determines the pixels where moving objects are located in each image.
It will be appreciated that the temporal differences of noise pixels and of true motion pixels are usually not on the same level, so noise can be distinguished according to a preset noise-judgment condition. For example, under a normal distribution the probability that the temporal difference of a pure-noise pixel exceeds 3σ is very low, so the coefficient c may be set to 3 to distinguish noise pixels from motion pixels.
It can be understood that the at least two adjacent images are RAW images, and RAW images are linear; the at least two adjacent images can therefore be brightness-aligned by an x-gain operation, after which motion-region identification is performed on the brightness-aligned images. The gain is the exposure ratio: motion-region identification presupposes that the brightness of the images is approximately the same, and this can be achieved by multiplying by the exposure ratio.
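A minimal sketch of this detection step is given below, assuming linear RAW frames, a known exposure ratio for the x-gain brightness alignment, and a noise standard deviation supplied by a calibrated noise model; the function name and defaults are illustrative:

```python
import numpy as np

def detect_motion_pixels(short_raw: np.ndarray, long_raw: np.ndarray,
                         exposure_ratio: float, sigma: np.ndarray,
                         c: float = 3.0) -> np.ndarray:
    """Flag pixels whose temporal difference exceeds c * sigma after alignment."""
    # RAW values are linear in light intensity, so brightness alignment
    # reduces to multiplying the short exposure by the exposure ratio.
    short_aligned = short_raw.astype(np.float64) * exposure_ratio
    temporal_diff = np.abs(long_raw.astype(np.float64) - short_aligned)
    # c = 3 follows the 3-sigma rule discussed above; sigma comes from the
    # measured noise model and may be per-pixel (signal dependent).
    return temporal_diff > c * sigma
```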
Optionally, after determining the pixels where moving objects are located in each image, the electronic device performs smoothing operations on those pixels to obtain the motion mask regions (motion masks) in each image. The smoothing operations may include smoothing, dilation, erosion, and the like.
Optionally, the electronic device performs an erosion operation on each motion mask region to obtain eroded motion mask regions, and clusters the eroded motion mask regions to obtain the motion regions of at least one group of moving objects. It will be appreciated that the electronic device may apply erosion to each motion mask region to remove isolated noise-like points.
Optionally, clustering the motion mask regions in each image to obtain the motion regions of at least one group of moving objects includes: if adjacent motion mask regions in an image are connected, fusing the adjacent motion mask regions into the motion region of one moving object; and if adjacent motion mask regions in an image are not connected, taking each motion mask region as the motion region of a separate moving object.
Optionally, the electronic device judges whether adjacent motion mask regions in each image are connected and labels each motion mask region; connected motion mask regions receive the same label, and the motion mask regions with the same label are merged as the motion region of one moving object.
It can be appreciated that motion mask regions that are not connected with each other receive different labels, which ensures that different moving objects are distinguished.
Optionally, in each image, if the distance between adjacent motion mask regions is less than a preset distance threshold, the adjacent motion mask regions are judged to be connected; if the distance is greater than or equal to the preset distance threshold, they are judged not to be connected. The preset distance threshold can be set as required.
As shown in FIG. 2, the electronic device clusters the motion mask regions in the image, obtaining the motion regions of three groups of moving objects.
The distance between adjacent motion mask regions may be the distance between their center positions, the distance between their centroid positions, or the distance between other positions of the adjacent motion mask regions, which is not limited here.
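One possible implementation of this clustering is sketched below under stated assumptions: connected components label the mask, and a union-find merge joins regions whose centers fall within the preset distance threshold. Neither the labeling routine nor the merge strategy is specified by the patent:

```python
import cv2
import numpy as np

def cluster_motion_masks(mask: np.ndarray, dist_thresh: float) -> np.ndarray:
    """Group motion mask regions; returns a per-pixel group id (0 = background)."""
    mask_u8 = (mask > 0).astype(np.uint8)
    # Erode first to remove isolated noise-like points, as described above.
    mask_u8 = cv2.erode(mask_u8, np.ones((3, 3), np.uint8))
    n, labels, _, centers = cv2.connectedComponentsWithStats(mask_u8)
    parent = list(range(n))                      # union-find over region labels
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(1, n):                        # label 0 is the background
        for j in range(i + 1, n):
            if np.linalg.norm(centers[i] - centers[j]) < dist_thresh:
                parent[find(i)] = find(j)        # regions are connected: merge
    groups = np.array([find(i) for i in range(n)])
    return groups[labels]
```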
In this embodiment, the electronic device performs motion-pixel identification on at least two captured adjacent images, determines the motion mask regions in each image, and clusters the motion mask regions in each image, thereby accurately obtaining the motion regions of at least one group of moving objects.
Further, by performing motion-pixel identification on the image, the electronic device can accurately determine the motion mask regions in the image, and by judging whether adjacent motion mask regions are connected, it accurately determines the motion region of each moving object.
In one embodiment, determining the reference region from among the motion regions of the moving object based on the statistical parameters of the first pixels in the motion region of the first image includes: if the statistical parameter of the first pixels in the motion region of the first image is less than or equal to a preset statistical threshold, determining the motion region of the moving object in the first image as the reference region; and if the statistical parameter of the first pixels in the motion region of the first image is greater than the preset statistical threshold, determining the motion region of the moving object in the second image as the reference region.
It can be understood that if the statistical parameter of the first pixels in the motion region of the first image is less than or equal to the preset statistical threshold, the motion region of the moving object in the first image, which has the larger exposure parameter, is determined as the reference region; if the statistical parameter is greater than the preset statistical threshold, the motion region of the moving object in the second image, which has the smaller exposure parameter, is determined as the reference region.
In this embodiment, the electronic device can determine whether the motion region of the first image is overexposed based on the statistical parameter of that region, and thus determine the reference region more accurately. On the premise of avoiding ghosting and loss of dynamic range, the motion region in the first image, which has the larger exposure parameter, is used as the reference region whenever possible, which improves the signal-to-noise ratio of the fused target image.
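A compact sketch of this reference-selection rule follows, assuming the ratio of overexposed pixels is used as the statistical parameter (one of the options listed earlier); the threshold names are illustrative:

```python
import numpy as np

def choose_reference(long_motion_region: np.ndarray,
                     pixel_thresh: float, stat_thresh: float) -> str:
    """Pick the reference between the long- and short-exposure motion regions."""
    first_pixels = long_motion_region > pixel_thresh   # overexposed points
    ratio = first_pixels.mean()                        # statistical parameter
    # Overexposed long-exposure region -> fall back to the short exposure.
    return "second" if ratio > stat_thresh else "first"
```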
In one embodiment, fusing the at least two adjacent images to obtain the target image includes: determining a first weight for each pixel in each image based on the brightness of the pixel; for the motion regions of each group of moving objects, adjusting the first weight of each pixel in each motion region based on the corresponding reference region to obtain the target weight of each pixel in each motion region; generating a weight map corresponding to each image based on the target weights of the pixels in the motion regions of that image and the first weights of the pixels in its non-motion region; and fusing the at least two adjacent images based on the weight maps of the images to obtain the target image.
The first weight is the weight obtained in the first calculation and is also the fusion weight used for the non-motion regions. The target weight is the weight finally used when fusing the motion regions.
Optionally, the electronic device detects the pixel value of each pixel in each image, takes the pixel value as the brightness of the pixel, inputs the brightness of each pixel into a preset first-weight formula, and determines the first weight of the pixel.
The first weight may be designed using a Gaussian function, as follows:

    weight_fusion = e^(-(x - mid)^2 / (2σ^2))

where weight_fusion is the first weight of the pixel, σ is a preset parameter, x is the brightness of the pixel, mid is the preset optimal exposure brightness, and e denotes the exponential function; the weight is independent of the noise itself.
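A minimal sketch of this first-weight computation follows; mid and sigma are the preset optimal exposure brightness and spread parameter named above:

```python
import numpy as np

def fusion_weight(luminance: np.ndarray, mid: float, sigma: float) -> np.ndarray:
    """Gaussian first weight: highest near the optimal exposure brightness."""
    return np.exp(-((luminance - mid) ** 2) / (2.0 * sigma ** 2))
```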
Optionally, adjusting the first weight of each pixel in each motion region based on the corresponding reference region to obtain the target weight of each pixel in each motion region includes: for the motion regions of each group of moving objects, determining the pixel difference between a second pixel in the reference region and the corresponding third pixel in a motion region other than the reference region, and determining a second weight of the third pixel based on the pixel difference; adjusting the first weight based on the second weight of the third pixel to obtain the target weight of the third pixel; and adjusting the first weight of the second pixel based on the target weight of the third pixel to obtain the target weight of the second pixel.
For the motion regions of each group of moving objects, the pixels in the reference region are second pixels and the pixels in the motion regions other than the reference region are third pixels; the pixel difference between a second pixel and the corresponding third pixel is determined, and the pixel difference is input into a preset second-weight formula to obtain the second weight of the third pixel.
The second-weight formula is as follows:

    weight_deghost = e^(-(diff - offset)^2 / (2σ^2))

where weight_deghost is the second weight of the third pixel, e denotes the exponential function, diff is the pixel difference between the second pixel and the corresponding third pixel after brightness alignment, and offset and σ are preset parameters independent of the noise; these parameters need to be tuned in advance to obtain a good ghost-removal effect.
Optionally, after determining the first weight of each pixel in each image based on its brightness, the electronic device normalizes the first weights of the pixels at corresponding positions across the images, and then, for the motion regions of each group of moving objects, adjusts the normalized first weights of the pixels in each motion region based on the corresponding reference region to obtain the target weights of the pixels in each motion region. The normalized first weights of the pixels at corresponding positions in the images sum to 1.
Optionally, the electronic device multiplies the first weight of the third pixel by the second weight of the third pixel to obtain the target weight of the third pixel.
It can be understood that the sum of the target weight of the second pixel and the target weight of the third pixel is 1; therefore, the electronic device subtracts the target weight of the third pixel from 1 to obtain the target weight of the second pixel.
For example, when the motion region of a moving object in the first image is determined as the reference region, let the first weight of a third pixel in that moving object's motion region in the second image be w2 and its second weight be deghost_weight; the target weight of the third pixel is then w2 * deghost_weight, and the target weight of the corresponding second pixel in the reference region of the first image is w1 = 1 - w2 * deghost_weight.
Similarly, when the motion region of a moving object in the second image is determined as the reference region, let the first weight of a third pixel in that moving object's motion region in the first image be w1 and its second weight be deghost_weight; the target weight of the third pixel is then w1 * deghost_weight, and the target weight of the corresponding second pixel in the reference region of the second image is w2 = 1 - w1 * deghost_weight.
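The two mirrored cases above can be written as one helper. The sketch below assumes the Gaussian form of the second-weight formula as reconstructed above and that the first weights have already been normalized to sum to 1 at each position; the function names are illustrative:

```python
import numpy as np

def deghost_weight(diff: np.ndarray, offset: float, sigma: float) -> np.ndarray:
    """Second weight from the (brightness-aligned) pixel difference."""
    return np.exp(-((diff - offset) ** 2) / (2.0 * sigma ** 2))

def motion_region_weights(w_other: np.ndarray, diff: np.ndarray,
                          offset: float, sigma: float):
    """Target weights for a motion region given the non-reference first weight."""
    w_other_target = w_other * deghost_weight(diff, offset, sigma)
    w_ref_target = 1.0 - w_other_target   # weights at each position sum to 1
    return w_ref_target, w_other_target
```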
Optionally, the first weights are kept unchanged for the non-motion regions in the images.
It can be understood that the electronic device obtains the first weight of each pixel in the non-motion regions and the target weight of each pixel in the motion regions of each image, so that a weight map corresponding to each image can be generated; each weight in the weight map is the first weight or the target weight of the corresponding pixel in the image.
Optionally, the electronic device multiplies the pixel value of each pixel in each image by the corresponding weight in the weight map to obtain the intermediate pixel value of each pixel in each image, adds the intermediate pixel values at corresponding positions to obtain the target pixels, and composes the target image from the target pixels.
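A sketch of this final blend for brightness-aligned images follows; the small epsilon guard against all-zero weights is an implementation detail, not part of the patent:

```python
import numpy as np

def fuse_images(images: list, weight_maps: list) -> np.ndarray:
    """Weighted per-pixel blend of brightness-aligned images."""
    total = np.sum(weight_maps, axis=0)
    norm = [w / np.maximum(total, 1e-12) for w in weight_maps]  # guard zeros
    return np.sum([img * w for img, w in zip(images, norm)], axis=0)
```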
In this embodiment, the electronic device can accurately determine the first weight of each pixel in each image based on the brightness of the pixel, further determine the target weight of each pixel in each motion region, generate a weight map corresponding to each image based on the target weights of the pixels in the motion regions of that image and the first weights of the pixels in its non-motion region, and fuse the at least two adjacent images based on the weight maps, thereby obtaining the target image through accurate fusion.
Furthermore, the electronic device adjusts each pixel in the motion region with the reference region as the reference, obtaining more accurate target weights and performing fusion more accurately, so that a target image with better image quality and higher definition is obtained.
In one embodiment, another image processing method applied to an electronic device is also provided. The image processing method includes the following steps:
Step A1: perform motion-pixel identification on at least two adjacent images to determine the pixels where moving objects are located in each image, where the at least two adjacent images include a first image whose exposure parameter is greater than a preset exposure threshold and a second image whose exposure parameter is less than the preset exposure threshold.
Step A2: determine the motion mask regions in each image based on the pixels where the moving objects are located.
Step A3: if adjacent motion mask regions in an image are connected, fuse them into the motion region of one moving object; if they are not connected, take each motion mask region as the motion region of a separate moving object; the motion regions of each group of moving objects include the motion region of the moving object in each image.
Step A4: for the motion regions of each group of moving objects, determine the first pixels whose pixel values are greater than a preset pixel threshold in the motion region of the first image.
Step A5: if the statistical parameter of the first pixels in the motion region of the first image is less than or equal to a preset statistical threshold, determine the motion region of the moving object in the first image as the reference region; if it is greater than the preset statistical threshold, determine the motion region of the moving object in the second image as the reference region.
Step A6: determine a first weight for each pixel in each image based on the brightness of the pixel.
Step A7: for the motion regions of each group of moving objects, determine the pixel difference between a second pixel in the reference region and the corresponding third pixel in a motion region other than the reference region, and determine a second weight of the third pixel based on the pixel difference.
Step A8: adjust the first weight based on the second weight of the third pixel to obtain the target weight of the third pixel.
Step A9: adjust the first weight of the second pixel based on the target weight of the third pixel to obtain the target weight of the second pixel.
Step A10: generate a weight map corresponding to each image based on the target weights of the pixels in the motion regions of that image and the first weights of the pixels in its non-motion region.
Step A11: fuse the at least two adjacent images based on the weight maps of the images to obtain the target image.
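Putting steps A1 to A11 together, the following end-to-end sketch composes the illustrative helpers from the earlier sketches (detect_motion_pixels, cluster_motion_masks, choose_reference, fusion_weight, motion_region_weights, fuse_images); the two-image setup and all parameter names are assumptions for illustration only:

```python
import numpy as np

def process_pair(short_raw, long_raw, exposure_ratio, noise_sigma,
                 pixel_thresh, stat_thresh, mid, w_sigma, offset, d_sigma,
                 dist_thresh=30.0):
    short_aligned = short_raw * exposure_ratio                    # x-gain align
    mask = detect_motion_pixels(short_raw, long_raw,
                                exposure_ratio, noise_sigma)      # steps A1-A2
    groups = cluster_motion_masks(mask, dist_thresh)              # step A3
    w_long = fusion_weight(long_raw, mid, w_sigma)                # step A6
    w_short = fusion_weight(short_aligned, mid, w_sigma)
    total = w_long + w_short + 1e-12                              # normalize
    w_long, w_short = w_long / total, w_short / total
    for g in np.unique(groups):                                   # steps A4-A9
        if g == 0:
            continue                                              # background
        region = groups == g
        diff = np.abs(long_raw[region] - short_aligned[region])
        if choose_reference(long_raw[region], pixel_thresh, stat_thresh) == "first":
            w_long[region], w_short[region] = motion_region_weights(
                w_short[region], diff, offset, d_sigma)
        else:
            w_short[region], w_long[region] = motion_region_weights(
                w_long[region], diff, offset, d_sigma)
    return fuse_images([long_raw, short_aligned],
                       [w_long, w_short])                         # steps A10-A11
```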
It should be understood that although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts of the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; the order of execution of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps, sub-steps, or stages.
Based on the same inventive concept, an embodiment of the application further provides an image processing apparatus for implementing the above image processing method. The implementation of the solution provided by the apparatus is similar to that described for the method above, so for the specific limitations of the one or more image processing apparatus embodiments provided below, reference may be made to the limitations of the image processing method above, which are not repeated here.
In one embodiment, as shown in FIG. 3, an image processing apparatus is provided, comprising a motion region identification module 302, a reference region determination module 304, and a fusion module 306, wherein:
the motion region identification module 302 is configured to perform motion-region identification on at least two captured adjacent images and determine the motion regions of at least one group of moving objects, where the at least two adjacent images include a first image whose exposure parameter is greater than a preset exposure threshold and a second image whose exposure parameter is less than the preset exposure threshold, and the motion regions of each group of moving objects include the motion region of the moving object in each image;
the reference region determination module 304 is configured to determine, for the motion regions of each group of moving objects, first pixels in the motion region of the first image whose pixel values are greater than a preset pixel threshold, and determine a reference region from among the motion regions of the moving object based on statistical parameters of the first pixels in the motion region of the first image; and
the fusion module 306 is configured to fuse the at least two adjacent images to obtain a target image, where the motion regions of each group of moving objects are fused with the corresponding reference region as the reference.
With the image processing apparatus, the electronic device performs motion-region identification on at least two captured adjacent images and determines the motion regions of at least one group of moving objects, where the at least two adjacent images include a first image whose exposure parameter is greater than a preset exposure threshold and a second image whose exposure parameter is less than the preset exposure threshold, and the motion regions of each group of moving objects include the motion region of the moving object in each image. For the motion regions of each group of moving objects, first pixels whose pixel values are greater than a preset pixel threshold are determined in the motion region of the first image, and, based on statistical parameters of the first pixels in the motion region of the first image, the reference region of that group can be determined accurately from among the motion regions of the moving object. The motion regions of each group are then fused with the corresponding reference region as the reference, improving the signal-to-noise ratio, image quality, and definition of the fused target image.
In one embodiment, the motion region identification module 302 is further configured to perform motion-pixel identification on the at least two captured adjacent images to determine the motion mask regions in each image, and to cluster the motion mask regions in each image to obtain the motion regions of at least one group of moving objects.
In one embodiment, the motion region identification module 302 is further configured to perform motion-pixel identification on the at least two captured adjacent images to determine the pixels where moving objects are located in each image, and to determine the motion mask regions in each image based on the pixels where the moving objects are located.
In one embodiment, the motion region identification module 302 is further configured to fuse adjacent motion mask regions into the motion region of one moving object if the adjacent motion mask regions in an image are connected, and to take each motion mask region as the motion region of a separate moving object if the adjacent motion mask regions in an image are not connected.
In one embodiment, the fusion module 306 is further configured to determine a first weight for each pixel in each image based on the brightness of the pixel; adjust, for the motion regions of each group of moving objects, the first weight of each pixel in each motion region based on the corresponding reference region to obtain the target weight of each pixel in each motion region; generate a weight map corresponding to each image based on the target weights of the pixels in the motion regions of that image and the first weights of the pixels in its non-motion region; and fuse the at least two adjacent images based on the weight maps to obtain the target image.
In one embodiment, the fusion module 306 is further configured to determine, for the motion regions of each group of moving objects, a pixel difference between a second pixel in the reference region and the corresponding third pixel in a motion region other than the reference region; determine a second weight of the third pixel based on the pixel difference; adjust the first weight based on the second weight of the third pixel to obtain the target weight of the third pixel; and adjust the first weight of the second pixel based on the target weight of the third pixel to obtain the target weight of the second pixel.
In one embodiment, the reference region determination module 304 is further configured to determine the motion region of the moving object in the first image as the reference region if the statistical parameter of the first pixels in the motion region of the first image is less than or equal to a preset statistical threshold, and to determine the motion region of the moving object in the second image as the reference region if the statistical parameter is greater than the preset statistical threshold.
The modules in the above image processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the electronic device in hardware form, or stored in a memory of the electronic device in software form, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, an electronic device is provided, which may be a terminal, and an internal structure diagram thereof may be as shown in fig. 4. The electronic device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. Wherein the processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The input/output interface of the electronic device is used to exchange information between the processor and the external device. The communication interface of the electronic device is used for conducting wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement an image processing method. The display unit of the electronic device is used for forming a visual picture, and can be a display screen, a projection device or a virtual reality imaging device. The display screen can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the electronic equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 4 is merely a block diagram of a portion of the structure associated with the present inventive arrangements and is not limiting of the electronic device to which the present inventive arrangements are applied, and that a particular electronic device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
An embodiment of the application also provides a computer-readable storage medium: one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the image processing method.
Embodiments of the present application also provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform an image processing method.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region.
Those skilled in the art will appreciate that implementing all or part of the above methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the procedures of the embodiments of the methods above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take various forms such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, data processing logic units based on quantum computing, and the like, without being limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combinations of these technical features, they should be considered within the scope of this specification.
The foregoing examples illustrate only a few embodiments of the application and are described in detail, but they should not be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, and these all fall within the protection scope of the application. Accordingly, the protection scope of the application shall be subject to the appended claims.

Claims (10)

1. An image processing method, comprising:
performing motion-region identification on at least two captured adjacent images and determining the motion regions of at least one group of moving objects, wherein the at least two adjacent images include a first image whose exposure parameter is greater than a preset exposure threshold and a second image whose exposure parameter is less than the preset exposure threshold, and the motion regions of each group of moving objects include the motion region of the moving object in each image;
for the motion regions of each group of moving objects, determining first pixels whose pixel values are greater than a preset pixel threshold in the motion region of the first image, and determining a reference region from among the motion regions of the moving object based on statistical parameters of the first pixels in the motion region of the first image; and
fusing the at least two adjacent images to obtain a target image, wherein the motion regions of each group of moving objects are fused with the corresponding reference region as the reference.
2. The method of claim 1, wherein performing motion-region identification on the at least two captured adjacent images and determining the motion regions of the at least one group of moving objects comprises:
performing motion-pixel identification on the at least two captured adjacent images to determine the motion mask regions in each image; and
clustering the motion mask regions in each image to obtain the motion regions of at least one group of moving objects.
3. The method according to claim 2, wherein performing motion-pixel identification on the at least two captured adjacent images to determine the motion mask regions in each image comprises:
performing motion-pixel identification on the at least two adjacent images to determine the pixels where moving objects are located in each image; and
determining the motion mask regions in each image based on the pixels where the moving objects are located.
4. The method according to claim 3, wherein clustering the motion mask regions in each image to obtain the motion regions of the at least one group of moving objects comprises:
if adjacent motion mask regions in an image are connected, fusing the adjacent motion mask regions into the motion region of one moving object; and
if adjacent motion mask regions in an image are not connected, taking each motion mask region as the motion region of a separate moving object.
5. The method of claim 1, wherein fusing the at least two adjacent images to obtain the target image comprises:
determining a first weight of each pixel in each image based on the brightness of each pixel in each image;
for the motion region of each group of moving objects, adjusting the first weight of each pixel in each motion region based on the corresponding reference region to obtain a target weight of each pixel in each motion region;
generating a weight map corresponding to each image based on the target weight of each pixel in the motion region of each image and the first weight of each pixel in the non-motion region; and
fusing the at least two adjacent images based on the weight map of each image to obtain the target image.
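Claim 5 turns fusion into a per-pixel weighted average: brightness-derived first weights everywhere, overridden inside motion regions by reference-adjusted target weights, all collected into one weight map per image. A sketch of that assembly for an arbitrary number of brightness-aligned frames follows; the mid-gray "well-exposedness" weight is a common choice borrowed from exposure-fusion practice, not a formula stated in the claims.

```python
import numpy as np

def build_weight_maps(frames, sigma=0.2):
    """First weights from brightness: peak at mid-gray, frames in [0, 1]."""
    return [np.exp(-((f - 0.5) ** 2) / (2 * sigma ** 2)) for f in frames]

def fuse(frames, weight_maps):
    """Weighted per-pixel average. weight_maps are assumed to already hold
    the target weights inside motion regions and first weights elsewhere."""
    total = np.sum(weight_maps, axis=0) + 1e-6  # avoid divide-by-zero
    return np.sum([w * f for w, f in zip(weight_maps, frames)], axis=0) / total
```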
6. The method of claim 5, wherein, for the motion region of each group of moving objects, adjusting the first weight of each pixel in each motion region based on the corresponding reference region to obtain the target weight of each pixel in each motion region comprises:
for the motion region of each group of moving objects, determining a pixel difference between a second pixel in the reference region and a corresponding third pixel in a motion region other than the reference region, and determining a second weight of the third pixel based on the pixel difference;
adjusting the first weight of the third pixel based on the second weight of the third pixel to obtain a target weight of the third pixel; and
adjusting the first weight of the second pixel based on the target weight of the third pixel to obtain a target weight of the second pixel.
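Claim 6 makes the motion-region adjustment soft rather than all-or-nothing: each non-reference ("third") pixel keeps weight only to the extent it agrees with the corresponding reference ("second") pixel, and the reference pixel absorbs whatever weight the others give up. A sketch under an assumed Gaussian falloff follows; `sigma_d` and the absorption rule are illustrative choices, since the claims only state that one weight is adjusted based on the other.

```python
import numpy as np

def adjust_region_weights(frames, first_w, region, ref, sigma_d=0.1):
    """Turn first weights into target weights inside one motion region.

    frames: brightness-aligned float images; first_w: their first-weight
    maps; region: boolean mask of this motion region; ref: index of the
    frame holding the reference region.
    """
    w = [wi.copy() for wi in first_w]
    ref_px = frames[ref][region]               # "second" pixels
    absorbed = np.zeros_like(ref_px)
    for i in range(len(frames)):
        if i == ref:
            continue
        diff = frames[i][region] - ref_px      # "third" vs "second" pixels
        w2 = np.exp(-(diff ** 2) / (2 * sigma_d ** 2))  # second weight
        absorbed += w[i][region] * (1.0 - w2)  # weight the third pixel sheds
        w[i][region] *= w2                     # target weight of third pixel
    w[ref][region] += absorbed                 # target weight of second pixel
    return w
```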
7. The method according to any one of claims 1 to 6, wherein determining the reference region from the motion regions of the moving object based on the statistical parameter of the first pixels in the motion region of the first image comprises:
if the statistical parameter of the first pixels in the motion region of the first image is less than or equal to a preset statistical threshold, determining the motion region of the moving object in the first image as the reference region; and
if the statistical parameter of the first pixels in the motion region of the first image is greater than the preset statistical threshold, determining the motion region of the moving object in the second image as the reference region.
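Claim 7 reduces the reference choice to a single threshold test on a statistic of the clipped "first" pixels. The claims leave the statistic and both thresholds as presets; the fraction of over-threshold pixels is used below as one natural choice.

```python
import numpy as np

def choose_reference(long_region_px: np.ndarray,
                     pixel_thresh: int = 235,
                     stat_thresh: float = 0.2) -> str:
    """Pick the exposure that anchors one motion region.

    long_region_px: long-exposure pixel values inside the region.
    Little clipping -> the bright first image is a safe anchor;
    heavy clipping -> fall back to the darker second image.
    """
    first_ratio = float(np.mean(long_region_px > pixel_thresh))
    return "first image" if first_ratio <= stat_thresh else "second image"
```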
8. An image processing apparatus, comprising:
a motion region identification module, configured to perform motion region identification on at least two captured adjacent images and determine a motion region of at least one group of moving objects, wherein the at least two adjacent images comprise a first image whose exposure parameter is greater than a preset exposure threshold and a second image whose exposure parameter is less than the preset exposure threshold, and the motion region of each group of moving objects comprises the motion region of the moving object in each image;
a reference region determining module, configured to determine, for the motion region of each group of moving objects, first pixels whose pixel values are greater than a preset pixel threshold in the motion region of the first image, and determine a reference region from the motion regions of the moving object based on a statistical parameter of the first pixels in the motion region of the first image; and
a fusion module, configured to fuse the at least two adjacent images to obtain a target image, wherein the motion region of each group of moving objects is fused with the corresponding reference region as a reference.
9. An electronic device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to perform the steps of the image processing method according to any one of claims 1 to 7.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
CN202310873802.2A 2023-07-14 2023-07-14 Image processing methods, apparatuses, electronic devices, and computer-readable storage media Active CN119316737B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310873802.2A CN119316737B (en) 2023-07-14 2023-07-14 Image processing methods, apparatuses, electronic devices, and computer-readable storage media

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310873802.2A CN119316737B (en) 2023-07-14 2023-07-14 Image processing methods, apparatuses, electronic devices, and computer-readable storage media

Publications (2)

Publication Number Publication Date
CN119316737A true CN119316737A (en) 2025-01-14
CN119316737B CN119316737B (en) 2025-12-19

Family

ID=94185236

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310873802.2A Active CN119316737B (en) 2023-07-14 2023-07-14 Image processing methods, apparatuses, electronic devices, and computer-readable storage media

Country Status (1)

Country Link
CN (1) CN119316737B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110199501A1 (en) * 2010-02-18 2011-08-18 Canon Kabushiki Kaisha Image input apparatus, image verification apparatus, and control methods therefor
CN108668093A (en) * 2017-03-31 2018-10-16 华为技术有限公司 The generation method and device of HDR image
CN115049572A (en) * 2022-06-28 2022-09-13 西安欧珀通信科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN115272155A (en) * 2022-08-24 2022-11-01 声呐天空资讯顾问有限公司 Image synthesis method, image synthesis device, computer equipment and storage medium
CN115439384A (en) * 2022-09-05 2022-12-06 中国科学院长春光学精密机械与物理研究所 A ghost-free multi-exposure image fusion method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Hongdi: "Research on Defect Detection Methods for Highly Reflective Surfaces Based on HDRI", Master's Thesis, Hubei University of Technology, 15 August 2020 (2020-08-15) *

Also Published As

Publication number Publication date
CN119316737B (en) 2025-12-19

Similar Documents

Publication Publication Date Title
CN114424253B (en) Model training method, device, storage medium and electronic device
CN115115552B (en) Image correction model training method, image correction device and computer equipment
CN110660066A (en) Network training method, image processing method, network, terminal equipment and medium
CN109886864B (en) Privacy mask processing method and device
CN108551552B (en) Image processing method, device, storage medium and mobile terminal
CN118071719A (en) Defect detection method, defect detection device, computer equipment and computer readable storage medium
CN114422721B (en) Imaging method, device, electronic device and storage medium
CN119316737B (en) Image processing methods, apparatuses, electronic devices, and computer-readable storage media
CN118608521B (en) Defect detection method, defect detection device, computer equipment and computer readable storage medium
CN115761239B (en) Semantic segmentation method and related device
CN115550558B (en) Automatic exposure method, device, electronic device and storage medium for photographing device
CN114972100B (en) Noise model estimation method and device, image processing method and device
CN118297994A (en) Image processing method, device, electronic equipment and storage medium
CN116883257A (en) Image defogging method, device, computer equipment and storage medium
CN117576634A (en) Anomaly analysis method, device and storage medium based on density detection
CN117975473A (en) Bill text detection model training and detection method, device, equipment and medium
CN118055335A (en) Reference frame determining method, apparatus, electronic device, and computer-readable storage medium
CN118735831B (en) Image processing method, device, equipment, readable storage medium and program product
CN116630629B (en) Domain adaptation-based semantic segmentation method, device, equipment and storage medium
CN119946249A (en) Camera defect detection method, device, computer equipment and readable storage medium
CN117541525A (en) Image processing method, apparatus, electronic device, and computer-readable storage medium
CN118015102A (en) Image processing method, apparatus, electronic device, and computer-readable storage medium
CN119545167A (en) Photographing method, device, electronic device and computer-readable storage medium
CN120339086A (en) Image processing method, device, electronic device and computer readable storage medium
CN119313712A (en) Image registration method, device, electronic device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant