
CN109685839A - Image alignment method, mobile terminal and computer storage medium - Google Patents

Image alignment method, mobile terminal and computer storage medium Download PDF

Info

Publication number
CN109685839A
CN109685839A (application CN201811565425.1A)
Authority
CN
China
Prior art keywords
image
data
feature
alignment method
feature points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811565425.1A
Other languages
Chinese (zh)
Other versions
CN109685839B (en)
Inventor
彭君
王德才
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huaduo Network Technology Co Ltd
Original Assignee
Guangzhou Huaduo Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huaduo Network Technology Co Ltd filed Critical Guangzhou Huaduo Network Technology Co Ltd
Priority to CN201811565425.1A priority Critical patent/CN109685839B/en
Publication of CN109685839A publication Critical patent/CN109685839A/en
Application granted granted Critical
Publication of CN109685839B publication Critical patent/CN109685839B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

This application provides an image alignment method, a mobile terminal, and a computer storage medium. The image alignment method includes: acquiring a first image and at least one second image; extracting a plurality of first feature points of the first image and a plurality of second feature points of the second image; obtaining the confidence of the second feature points with respect to the first feature points, and screening out the second feature points whose confidence is within a preset confidence range; calculating transformation parameters according to the first feature points and the screened second feature points; and processing the plurality of second feature points according to the transformation parameters to obtain a third image. With this image alignment method, images can be aligned automatically, so as to obtain a plurality of aligned images.

Description

Image alignment method, mobile terminal and computer storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image alignment method, a mobile terminal, and a computer storage medium.
Background
With the continuous progress of image processing technology, apparatuses that capture images are also continuously updated. Current photographic equipment has been able to take a continuous shot of a scene quickly to obtain multiple images.
However, when the user operates the photographing apparatus, a plurality of images acquired of the same scene may be offset from one another to different degrees due to drastic changes in the environment or unintentional body movements of the user. Such shaking and jitter adversely affect a video or moving image composed from the images, for example causing stutter or blur in the combined result.
Disclosure of Invention
The application provides an image alignment method, a mobile terminal and a computer storage medium, and mainly solves the technical problem of preventing a plurality of images acquired in the same scene from being offset to different degrees due to drastic environmental changes or unconscious body movement of the user.
In order to solve the above technical problem, the present application provides an image alignment method, including:
acquiring a first image and at least one second image;
extracting a plurality of first feature points of the first image and a plurality of second feature points of the second image;
obtaining the confidence degrees of the second characteristic points and the first characteristic points, and screening out the second characteristic points with the confidence degrees in a preset confidence degree range;
calculating a transformation parameter according to the first characteristic point and the screened second characteristic point;
and processing the plurality of second feature points according to the transformation parameters to obtain a third image.
In order to solve the above technical problem, the application further provides a mobile terminal, which includes a processor, and a memory and a camera module coupled to the processor;
the camera module is used for acquiring the first image and the second image;
the memory is used for storing program data and the processor is used for executing the program data to realize the image alignment method.
To solve the above technical problem, the present application further provides a computer storage medium for storing program data, which when executed by a processor, is used to implement the image alignment method as described above.
Compared with the prior art, the beneficial effects of this application are as follows. A first image and at least one second image are acquired; a plurality of first feature points of the first image and a plurality of second feature points of the second image are extracted; the confidences of the second feature points with respect to the first feature points are obtained, and the second feature points whose confidence is within a preset confidence range are screened out; transformation parameters are calculated according to the first feature points and the screened second feature points; and the plurality of second feature points are processed according to the transformation parameters to obtain a third image. After the feature points are obtained, the first image is taken as the reference image, and the transformation parameters are calculated from the high-confidence feature points of the first and second images; all the feature points of the second image are then transformed to new coordinates according to the transformation parameters, generating a third image aligned with the first image. Therefore, by the above image alignment method, at least one second image can be adjusted to obtain an image aligned with the first image.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from these drawings without creative effort. Wherein:
FIG. 1 is a schematic flow chart diagram of a first embodiment of an image alignment method according to the present application;
FIG. 2 is a schematic flow chart diagram illustrating a second embodiment of the image alignment method of the present application;
FIG. 3 is a schematic flow chart of a third embodiment of the image alignment method of the present application;
FIG. 4 is a schematic flow chart diagram illustrating a fourth embodiment of the image alignment method of the present application;
FIG. 5 is a schematic flow chart of the third image acquisition in the image alignment method of FIG. 4;
FIG. 6 is a schematic structural diagram of an embodiment of a mobile terminal according to the present application;
FIG. 7 is a schematic structural diagram of an embodiment of a computer storage medium according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art without any inventive work according to the embodiments of the present application are within the scope of the present application.
The application provides an image alignment method, which can realize batch processing of a plurality of images; for example, the image alignment method of the present embodiment can be used for automatic alignment before a plurality of images are combined into a video or a dynamic image. Referring to fig. 1, fig. 1 is a schematic flow chart of a first embodiment of an image alignment method according to the present application.
The image alignment method of this embodiment is applied to a mobile terminal that can be used to shoot images. The mobile terminal can be an intelligent terminal such as a smartphone, a tablet computer or a camera, and may run an Android operating system or an iOS operating system.
As shown in fig. 1, the image alignment method of the present embodiment includes the steps of:
s11: a first image and at least one second image are acquired.
The mobile terminal acquires a plurality of images, and acquires an image with one image as a reference from the plurality of images, wherein the reference image is a first image. And the other images in the plurality of images are second images, and the mobile terminal transforms at least one second image according to the first image, so that the alignment of the plurality of images is realized.
The mobile terminal can acquire the first image and the at least one second image in real time through the camera module; that is, the camera module can be used to continuously shoot a plurality of images, the mobile terminal takes the first of the plurality of images as the reference first image, and the remaining images are taken as second images. Further, the mobile terminal may also preset an image at a preset position in the continuous-shooting sequence as the reference first image, with all the remaining images serving as second images.
The mobile terminal may also acquire the plurality of images from a storage medium, such as a USB flash drive or a portable hard disk, and obtain from them the reference first image and the at least one second image that needs to be transformed. Further, the mobile terminal may also download similar images from the Internet and designate the reference first image and the at least one second image according to the download time.
After the mobile terminal acquires the first image and the at least one second image, the mobile terminal preprocesses the first image and the second image. In the image analysis process, the preprocessing is used to eliminate interference or noise in the first image and the second image, thereby improving the reliability and accuracy of feature extraction in the following steps.
For example, when the mobile terminal acquires a face image, the preprocessing of the face image may include face correction, face image enhancement, normalization, and other processing methods.
S12: a plurality of first feature points of the first image and a plurality of second feature points of the second image are extracted.
The mobile terminal extracts a plurality of first feature points of the first image, and the first feature points can be used as main marks of the first image and also can be used as reference marks of the second image. The first feature point of the present embodiment may include one or more of a color feature, a texture feature, a shape feature, or a spatial relationship feature.
The method used by the mobile terminal to extract the first feature points may be one of the Fourier transform method, the windowed Fourier transform (Gabor), the wavelet transform method, the least-squares method, the boundary direction histogram method, or texture feature extraction based on Tamura texture features.
The mobile terminal extracts a plurality of second feature points of the second image, and the manner of extracting the second feature points is the same as that of extracting the first feature points of the first image, and is not described herein again.
S13: and obtaining the confidence degrees of the second characteristic points and the first characteristic points, and extracting the corresponding second characteristic points when the confidence degrees are within a preset confidence degree range.
The mobile terminal extracts the first feature points and the second feature points, and presets a confidence range according to the first feature points. In this embodiment, the confidence may be the coordinate distance, computed from the coordinate data, between a first feature point on the first image and a second feature point on the second image.
The mobile terminal extracts each first feature point, then traverses all second feature points, obtains the confidence degrees of a plurality of groups of second feature points and the first feature points, and judges whether the confidence degree of each group of second feature points and the first feature points is within a preset confidence degree range. And if so, extracting a second feature point corresponding to the confidence coefficient.
S14: and acquiring transformation parameters according to the first characteristic points and the related second characteristic points.
The mobile terminal extracts a plurality of groups of first characteristic points and corresponding second characteristic points, and processes and calculates the first characteristic points and the corresponding second characteristic points to obtain transformation parameters suitable for the plurality of groups of first characteristic points and the related second characteristic points.
And the mobile terminal transforms the second characteristic points on the second image to the coordinate positions which are the same as or close to the corresponding first characteristic points according to the transformation parameters.
S15: and transforming the plurality of second characteristic points according to the transformation parameters to obtain a third image.
After the mobile terminal obtains the transformation parameters, the second feature points on the second image are transformed according to the transformation parameters, and then the transformed second feature points are combined to obtain a third image.
In this embodiment, the mobile terminal acquires a first image and at least one second image and preprocesses them, so that the feature points of the first image and the second image are easier to extract. After extracting the first and second feature points, the mobile terminal obtains, for each first feature point, the related second feature points whose confidence is within the preset confidence range, and obtains the transformation parameters from the multiple groups of first feature points and their related second feature points, i.e., the parameters that transform the second image toward the first image. Finally, according to the transformation parameters, the mobile terminal transforms the plurality of second feature points to obtain a third image; the third image is the second image transformed according to the first image. Through this image alignment method, the mobile terminal can automatically align the remaining images with the reference image, so as to obtain a plurality of images aligned with the reference image.
The present application further provides another image alignment method, and please refer to fig. 2 specifically, where fig. 2 is a schematic flowchart of a second embodiment of the image alignment method of the present application.
On the basis of steps S11 and S12 of the first embodiment of the image alignment method described above, the image alignment method of the present embodiment further includes the steps of:
s21: a first image and at least one second image are acquired.
S22: and carrying out gray scale processing on the first image and the second image.
And the mobile terminal further performs gray level processing on the first image and the second image so as to improve the accuracy of feature extraction.
Because the images captured by prior-art equipment are mostly color images, and the color of each pixel in a color image is determined by three components R, G and B, each of which can take 256 values, a single pixel can vary over more than 16 million colors; this causes great interference to feature extraction. A grayscale image is a special color image in which the R, G and B components are equal; like a color image, it still reflects the distribution and characteristics of the overall and local chromaticity and brightness levels of the whole image, but each pixel of a grayscale image varies over only 256 levels. Therefore, in the image processing of this embodiment, the mobile terminal converts color images of various formats into grayscale images, reducing the amount of computation required for feature extraction on the first and second images in subsequent processing.
Furthermore, the mobile terminal can also perform image smoothing during the grayscale processing of the first and second images, suppressing noise in the two images and improving image quality. The mobile terminal then performs histogram equalization on the processed first and second images, i.e., it maps their gray-level distributions toward a uniform distribution, so that the details of the two images become clearer and the histogram is more balanced across gray levels. Finally, the mobile terminal applies a gray-level transformation, i.e., contrast stretching, to the first and second images: using a simple piecewise-linear transformation function, it linearly expands the dynamic range of the original brightness values to a specified range or to the full dynamic range.
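As an illustrative sketch (not the patent's implementation), the grayscale conversion, histogram equalization and contrast stretching described above can be expressed with NumPy; the BT.601 luminance weights and the [0, 255] target range are assumptions:

```python
import numpy as np

def to_grayscale(rgb):
    """Luminance-weighted grayscale conversion (assumed ITU-R BT.601 weights)."""
    return (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)

def equalize_histogram(gray):
    """Map the gray-level distribution toward a uniform distribution.
    Assumes the image has at least two distinct gray levels."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]

def contrast_stretch(gray, lo=0, hi=255):
    """Piecewise-linear contrast stretch: expand the brightness range to [lo, hi]."""
    gmin, gmax = int(gray.min()), int(gray.max())
    if gmax == gmin:
        return np.full_like(gray, lo)
    out = (gray.astype(np.float64) - gmin) / (gmax - gmin) * (hi - lo) + lo
    return out.astype(np.uint8)
```

Smoothing (e.g. a small box or Gaussian filter) would normally precede these steps, as the paragraph above notes.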
S23: according to the ORB algorithm, a plurality of first feature points of the first image are extracted.
The mobile terminal extracts a plurality of first feature points in the first image using the ORB algorithm. Specifically, the mobile terminal detects feature points with the FAST (Features from Accelerated Segment Test) algorithm: it selects a pixel in the first image and compares it with a plurality of pixels within a preset surrounding range; if the pixel differs from most of those pixels, the mobile terminal judges that it is a first feature point of the first image.
S24: according to the ORB algorithm, a plurality of second feature points of the second image are extracted.
The mobile terminal also extracts a plurality of second feature points in the second image by using the ORB algorithm, which is not described herein again.
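The segment test described in steps S23 and S24 can be sketched in pure NumPy. This is a simplified illustration of FAST, not the patent's code; the radius-3 circle offsets, the threshold `t` and the contiguous-run length `n` are standard FAST conventions assumed here:

```python
import numpy as np

# (dy, dx) offsets of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def is_fast_corner(img, y, x, t=20, n=12):
    """Segment test: (y, x) is a corner if at least n contiguous circle
    pixels are all brighter than I_p + t or all darker than I_p - t."""
    p = int(img[y, x])
    ring = np.array([int(img[y + dy, x + dx]) for dy, dx in CIRCLE])
    for signs in (ring > p + t, ring < p - t):
        doubled = np.concatenate([signs, signs])  # handle wrap-around runs
        run = 0
        for s in doubled:
            run = run + 1 if s else 0
            if run >= n:
                return True
    return False

def detect_fast(img, t=20, n=12):
    """Scan every interior pixel and collect the ones that pass the test."""
    h, w = img.shape
    return [(y, x) for y in range(3, h - 3) for x in range(3, w - 3)
            if is_fast_corner(img, y, x, t, n)]
```

A production detector would add non-maximum suppression and the ORB orientation step; this sketch only shows the corner test itself.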
S25: and acquiring the definition moment data, the centroid data and the main direction data of the first characteristic point according to the coordinate data of the first characteristic point.
The mobile terminal extracts coordinate data of the first characteristic point from the first image, and calculates definition moment data, centroid data and main direction data of the first characteristic point according to the coordinate data. The definition moment data, the centroid data and the main direction data of the feature points can reflect the properties of the feature points, further reflect the essential features of the image, and can identify the target object in the image. And image processing operations such as matching and transformation of the feature points can be completed through the defined moment data, the centroid data and the main direction data of the feature points.
Specifically, the mobile terminal may obtain the moment data of a first feature point from its coordinate data, where the moments are calculated as:
M_ij = Σ_x Σ_y x^i · y^j · I(x, y)
where M_ij is the (i, j)-order moment of the first feature point's neighborhood, (x, y) are the coordinates of the first feature point, and I(x, y) is the intensity function at coordinates (x, y).
The mobile terminal can obtain the centroid data of the first feature point from its moment data, where the centroid is calculated as:
c_x = M_10 / M_00, c_y = M_01 / M_00
where c_x is the abscissa and c_y the ordinate of the centroid, M_00 is the zeroth-order moment, and M_10, M_01 are the first-order moments.
Further, the mobile terminal may also obtain shape and direction data corresponding to the first feature point from the second-order moments, which is not described herein again.
The mobile terminal can further obtain the main direction data of the first feature point from its moment data, where the main direction is calculated as:
c_ori = arctan2(M_01, M_10)
where c_ori is the main direction data of the first feature point.
S26: and acquiring the definition moment data, the centroid data and the main direction data of the second characteristic point according to the coordinate data of the second characteristic point.
And the mobile terminal extracts the coordinate data of the second characteristic point from the second image and calculates to obtain the definition moment data, the centroid data and the main direction data of the second characteristic point according to the coordinate data.
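A minimal NumPy sketch of the moment, centroid and main-direction computations above, applied to a small intensity patch around a feature point (the patch-based formulation is an assumption made for illustration):

```python
import numpy as np
from math import atan2

def patch_moment(patch, i, j):
    """M_ij = sum_x sum_y x^i * y^j * I(x, y) over a feature-point neighborhood."""
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w]
    return float(((x ** i) * (y ** j) * patch.astype(np.float64)).sum())

def centroid(patch):
    """Centroid (c_x, c_y) = (M_10 / M_00, M_01 / M_00)."""
    m00 = patch_moment(patch, 0, 0)
    return patch_moment(patch, 1, 0) / m00, patch_moment(patch, 0, 1) / m00

def orientation(patch):
    """Main direction c_ori = arctan2(M_01, M_10), as in ORB."""
    return atan2(patch_moment(patch, 0, 1), patch_moment(patch, 1, 0))
```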
In this embodiment, after acquiring a first image and a second image, the mobile terminal performs grayscale processing on them so that the feature points of the first image and the second image can be extracted separately; further, the mobile terminal calculates the moment data, centroid data and main direction data of the first feature points from their coordinate data, and these data further reflect the essential features of the image, thereby improving the accuracy of the image alignment method of this embodiment.
The present application further provides another image alignment method, specifically please refer to fig. 3, and fig. 3 is a schematic flowchart of a third embodiment of the image alignment method of the present application.
On the basis of step S13 of the first embodiment of the image alignment method described above, the image alignment method of the present embodiment further includes the steps of:
s31: according to the first feature point, obtaining preset conditions related to the moment data, the mass center data and the main direction data of the first feature point, and obtaining a plurality of second feature points meeting the preset conditions.
The mobile terminal presets a range condition according to the defined moment data, the centroid data and the main direction data of the first characteristic point. And the mobile terminal acquires a plurality of second characteristic points meeting the range condition according to the first characteristic points and the range condition.
Specifically, the mobile terminal may preset a defined moment range condition according to the defined moment data, a centroid range condition according to the centroid data, and a principal direction range condition according to the principal direction data, and when a certain second feature point simultaneously satisfies the defined moment range condition, the centroid range condition, and the principal direction range condition, the mobile terminal considers that the second feature point satisfies the preset range condition.
S32: and traversing the plurality of second feature points to obtain related second feature points within a preset distance range of the first feature points.
The mobile terminal obtains a plurality of second feature points satisfying the preset range condition in step S31, and calculates a distance between each second feature point and the first feature point on the image. The mobile terminal presets a distance range, and when the distance between a certain second feature point and the first feature point on the image is within the preset distance range, the mobile terminal considers that the second feature point is a feature point with high confidence. By the judging method, the mobile terminal can obtain at least one second feature point with high confidence level according to the first feature point.
Further, the mobile terminal may compare the plurality of sets of distance data, and set a second feature point having the shortest distance from the first feature point on the image as a second feature point with high confidence. By the judging method, the mobile terminal can acquire a second feature point with high confidence level according to the first feature point.
For example, the mobile terminal extracts a first feature point f0-1 and then finds the two nearest feature points f1-a and f1-b among the plurality of second feature points. The mobile terminal calculates the distances L(1a) and L(1b) between f1-a, f1-b and the first feature point f0-1 on the image; when L(1a) > L(1b), the mobile terminal takes f1-b as the second feature point with the highest confidence for the first feature point f0-1 on the second image.
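The nearest-neighbour selection in the f0-1 / f1-a / f1-b example can be sketched as follows (NumPy; the distance threshold `max_dist` is an assumed parameter standing in for the preset distance range):

```python
import numpy as np

def match_by_distance(first_pts, second_pts, max_dist=10.0):
    """For each first feature point, find the nearest second feature point
    (e.g. f1-b when L(1a) > L(1b)) and keep it only if it lies within the
    preset distance range."""
    matches = []
    second = np.asarray(second_pts, dtype=np.float64)
    for i, p in enumerate(np.asarray(first_pts, dtype=np.float64)):
        d = np.linalg.norm(second - p, axis=1)
        nearest = int(np.argmin(d))
        if d[nearest] <= max_dist:
            matches.append((i, nearest))
    return matches
```

A real ORB pipeline would compare binary descriptors (Hamming distance) rather than raw coordinates; coordinate distance is used here because it is the confidence measure this embodiment describes.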
In this embodiment, the mobile terminal extracts one or more second feature points with high confidence that satisfy the preset range condition according to the first feature points through the preset range condition.
The present application further provides another image alignment method, please refer to fig. 4 specifically, and fig. 4 is a schematic flowchart of a fourth embodiment of the image alignment method of the present application.
On the basis of step S14 of the first embodiment of the image alignment method described above, the image alignment method of the present embodiment further includes the steps of:
s41: and acquiring transformation parameters according to the at least three groups of first characteristic points and the related second characteristic points.
The mobile terminal obtains the transformation parameters according to at least three groups of first feature points and related second feature points, where the transformation parameters satisfy:
x' = a·x + b·y + e
y' = c·x + d·y + f
where a, b, c, d, e, f are the transformation parameters, (x, y) is the coordinate data of a first feature point, and (x', y') is the coordinate data of the matching second feature point.
Among the transformation parameters, the 2×3 matrix [[a, b, e], [c, d, f]] is an affine transformation matrix; the mobile terminal applies this non-rigid transformation matrix to the second image to transform it. [a, c] characterize the angle transformation of the second feature points, [b, d] characterize the size transformation of the second feature points, and [e, f] characterize the displacement transformation of the second feature points.
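With at least three matched pairs, the system above can be solved for (a, b, c, d, e, f) in the least-squares sense. The patent does not specify a solver, so the following NumPy sketch is one illustrative choice:

```python
import numpy as np

def estimate_affine(first_pts, second_pts):
    """Solve x' = a*x + b*y + e, y' = c*x + d*y + f in the least-squares
    sense from at least three matched point pairs.
    Returns the parameters as (a, b, c, d, e, f)."""
    first = np.asarray(first_pts, dtype=np.float64)
    second = np.asarray(second_pts, dtype=np.float64)
    n = len(first)
    A = np.zeros((2 * n, 6))
    b = second.reshape(-1)       # [x0', y0', x1', y1', ...]
    A[0::2, 0] = first[:, 0]     # a * x   (x' equations)
    A[0::2, 1] = first[:, 1]     # b * y
    A[0::2, 4] = 1.0             # e
    A[1::2, 2] = first[:, 0]     # c * x   (y' equations)
    A[1::2, 3] = first[:, 1]     # d * y
    A[1::2, 5] = 1.0             # f
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params
```

With exactly three non-collinear pairs the system is determined; extra pairs are averaged by the least-squares fit, which dampens the effect of slightly mislocated feature points.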
S42: and transforming the plurality of second characteristic points according to the transformation parameters to obtain a third image.
As shown in fig. 5, after obtaining the transformation parameters, the mobile terminal transforms the second feature points on the second image according to the transformation parameters, and then combines the transformed second feature points to obtain a third image.
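A minimal sketch of resampling the second image into the aligned third image, using inverse nearest-neighbour mapping under the affine parameters (an illustrative choice; the patent does not specify the interpolation):

```python
import numpy as np

def warp_affine_nn(img, a, b, c, d, e, f):
    """Produce the third image: for each output pixel (x, y), sample the
    second image at (x', y') = (a*x + b*y + e, c*x + d*y + f) using
    nearest-neighbour lookup; out-of-range pixels are left at 0."""
    h, w = img.shape
    out = np.zeros_like(img)
    y, x = np.mgrid[0:h, 0:w]
    xs = np.rint(a * x + b * y + e).astype(int)
    ys = np.rint(c * x + d * y + f).astype(int)
    valid = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    out[y[valid], x[valid]] = img[ys[valid], xs[valid]]
    return out
```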
S43: and synthesizing a video or dynamic image comprising a plurality of frames of images according to the first image and the at least one third image.
After the plurality of second images are transformed, the mobile terminal obtains the first image and at least one third image. The mobile terminal then combines the first image and the at least one third image into a video or dynamic image comprising a plurality of frames of images.
In this embodiment, according to the transformation parameters, the mobile terminal transforms the second images into third images and synthesizes the first image and at least one third image into a video or dynamic image comprising multiple frames, thereby lowering the barrier for users to shoot image sequences; the image alignment method of this embodiment solves the problem of shaking in the sequence images and improves the quality of the synthesized video or dynamic image.
To implement the image alignment method, the present application further provides a mobile terminal, and please refer to fig. 6 specifically, where fig. 6 is a schematic structural diagram of an embodiment of the mobile terminal of the present application.
The mobile terminal 100 is a mobile terminal capable of capturing images according to the foregoing embodiments, and as shown in fig. 6, the mobile terminal 100 includes a processor 11, a memory 12 coupled to the processor 11, and a camera module 13.
The camera module 13 is used for acquiring a first image and a second image;
the memory 12 is used for storing program data and the processor 11 is used for executing the program data to implement the image alignment method described above.
In the present embodiment, the processor 11 may also be referred to as a CPU (Central Processing Unit). The processor 11 may be an integrated circuit chip having signal processing capabilities. The processor 11 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor 11 may be any conventional processor or the like.
The present application also provides a computer storage medium, as shown in fig. 7, the computer storage medium 200 stores program data that can be executed to implement the method as described in the embodiments of the image alignment method of the present application.
The method involved in the embodiment of the image alignment method of the present application, when implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a device, for example, a computer readable storage medium. With such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. An image alignment method, characterized in that the image alignment method comprises:
acquiring a first image and at least one second image;
extracting a plurality of first feature points of the first image and a plurality of second feature points of the second image;
obtaining confidence degrees of the second feature points with respect to the first feature points, and screening out the second feature points whose confidence degrees fall within a preset confidence range;
calculating transformation parameters according to the first feature points and the screened second feature points;
and processing the plurality of second feature points according to the transformation parameters to obtain a third image.
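The final step of claim 1 applies the estimated transformation parameters to the second image's feature points. As a minimal NumPy sketch (not part of the patent text, and assuming the parameters take the form of a 3 × 3 homogeneous matrix, as in claim 7), the points can be mapped as follows:

```python
import numpy as np

def transform_points(H, pts):
    """Apply a 3x3 homogeneous transformation H to an (N, 2) array of points."""
    pts = np.asarray(pts, dtype=float)
    hom = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous coordinates
    out = hom @ H.T
    return out[:, :2] / out[:, 2:3]                 # divide out the homogeneous scale

# A pure translation by (2, 3) as an example transformation
H = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
moved = transform_points(H, [[0.0, 0.0], [1.0, 1.0]])
```

In practice the mapped points (or an equivalent image warp) yield the aligned third image.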
2. The image alignment method according to claim 1, wherein after the step of acquiring the first image and the at least one second image, the image alignment method further comprises:
preprocessing the first image and the second image, wherein the preprocessing comprises grayscale processing;
the step of extracting a plurality of first feature points of the first image and a plurality of second feature points of the second image further includes:
extracting, according to the ORB (Oriented FAST and Rotated BRIEF) algorithm, a plurality of first feature points of the first image and a plurality of second feature points of the second image.
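The grayscale preprocessing of claim 2 is commonly implemented as a weighted sum of the RGB channels. A minimal sketch (not part of the patent text; the BT.601 luma weights below are a common convention, not something the claim specifies):

```python
import numpy as np

def to_gray(rgb):
    """Convert an (H, W, 3) RGB image to grayscale using BT.601 luma weights."""
    rgb = np.asarray(rgb, dtype=float)
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

# A 1x2 image: one white pixel, one pure-red pixel
img = np.array([[[255, 255, 255], [255, 0, 0]]], dtype=float)
gray = to_gray(img)
```

The ORB detector and descriptor are then run on the grayscale result.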
3. The image alignment method according to claim 2, wherein the first feature point includes coordinate data; the step of extracting a plurality of first feature points of the first image further includes:
acquiring moment data, centroid data and main direction data of the first feature point according to the coordinate data of the first feature point.
4. The image alignment method according to claim 3, further comprising:
acquiring the moment data of the first feature point according to the coordinate data of the first feature point, wherein the moment data is calculated as:

M_ij = Σ_(x,y) x^i · y^j · I(x, y)

wherein M_ij is the moment of the first feature point, (x, y) are the coordinates of the first feature point, and I(x, y) is the gray value of the image at the coordinates (x, y);
the step of acquiring the moment data, the centroid data and the main direction data of the first feature point according to the coordinate data of the first feature point further comprises:
acquiring the centroid data of the first feature point according to the moment data of the first feature point, wherein the centroid data is calculated as:

c_x = M_10 / M_00, c_y = M_01 / M_00

wherein c_x is the abscissa of the centroid and c_y is the ordinate of the centroid;
acquiring the main direction data of the first feature point according to the centroid data of the first feature point, wherein the main direction data is calculated as:

c_ori = arctan(M_01 / M_10)

wherein c_ori is the main direction data of the first feature point.
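The moment, centroid and main-direction formulas of claim 4 can be sketched directly in NumPy for a single image patch around a feature point (an illustrative sketch, not part of the patent text; patch extraction is omitted, I(x, y) here is the patch's gray values, and arctan2 is used as the quadrant-safe form of the arctangent):

```python
import numpy as np

def moment(patch, i, j):
    """M_ij = sum over (x, y) of x^i * y^j * I(x, y)."""
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    return float(np.sum((xs ** i) * (ys ** j) * patch))

def centroid_and_direction(patch):
    m00 = moment(patch, 0, 0)
    m10 = moment(patch, 1, 0)
    m01 = moment(patch, 0, 1)
    cx, cy = m10 / m00, m01 / m00   # centroid (second formula of claim 4)
    c_ori = np.arctan2(m01, m10)    # main direction (third formula of claim 4)
    return cx, cy, c_ori

# On a uniform 5x5 patch the centroid is the geometric center
cx, cy, c_ori = centroid_and_direction(np.ones((5, 5)))
```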
5. The image alignment method according to claim 3, wherein the step of obtaining the confidence degrees of the second feature points with respect to the first feature points and screening out the second feature points whose confidence degrees fall within the preset confidence range further comprises:
acquiring preset conditions related to the moment data, the centroid data and the main direction data of the first feature point, and acquiring a plurality of second feature points satisfying the preset conditions.
6. The image alignment method according to claim 5, wherein after the step of acquiring the preset conditions related to the moment data, the centroid data and the main direction data of the first feature point and acquiring the plurality of second feature points satisfying the preset conditions, the image alignment method further comprises:
traversing the plurality of second feature points, obtaining the distances between the second feature points and the first feature points, and screening out the second feature points whose distances fall within a preset distance range.
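The screening of claim 6 can be sketched as a nearest-neighbour search with a distance threshold. ORB descriptors are binary, so Hamming distance is the natural metric (an illustrative sketch, not part of the patent text; the 0/1 descriptor encoding below is an assumption for readability):

```python
import numpy as np

def screen_matches(desc_first, desc_second, max_dist):
    """For each second feature point, find its nearest first feature point by
    Hamming distance and keep the pair only if the distance is within [0, max_dist]."""
    desc_first = np.asarray(desc_first)
    kept = []
    for j, d in enumerate(np.asarray(desc_second)):
        dists = np.sum(desc_first != d, axis=1)  # Hamming distance to every first descriptor
        i = int(np.argmin(dists))
        if dists[i] <= max_dist:
            kept.append((i, j))
    return kept

first = [[0, 0, 1, 1], [1, 1, 1, 1]]
second = [[0, 0, 1, 0],   # 1 bit away from first[0] -> kept
          [0, 1, 0, 1]]   # 2 bits from both first descriptors -> screened out
pairs = screen_matches(first, second, max_dist=1)
```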
7. The image alignment method according to claim 1, wherein the step of calculating the transformation parameters according to the first feature points and the screened second feature points further comprises:
obtaining the transformation parameters according to at least three groups of the first feature points and the screened second feature points, wherein the transformation is calculated as:

(x', y', 1)^T = H · (x, y, 1)^T

wherein H is the 3 × 3 matrix of transformation parameters, x is the abscissa of the first feature point, y is the ordinate of the first feature point, x' is the abscissa of the second feature point, and y' is the ordinate of the second feature point.
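With at least three point pairs, the six unknown entries of the affine transformation matrix in claim 7 can be recovered as two linear least-squares problems. A minimal sketch (not part of the patent text, assuming the affine form H = [[a, b, tx], [c, d, ty], [0, 0, 1]]):

```python
import numpy as np

def estimate_transform(first_pts, second_pts):
    """Fit H mapping first points (x, y) onto second points (x', y')
    from N >= 3 correspondences, in least squares."""
    src = np.asarray(first_pts, dtype=float)
    dst = np.asarray(second_pts, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])           # rows: [x, y, 1]
    row_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)  # solves for a, b, tx
    row_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)  # solves for c, d, ty
    return np.vstack([row_x, row_y, [0.0, 0.0, 1.0]])

# Three pairs related by a pure translation of (2, 3)
H = estimate_transform([[0, 0], [1, 0], [0, 1]],
                       [[2, 3], [3, 3], [2, 4]])
```

With more than three pairs the least-squares fit averages out noise in the screened correspondences.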
8. The image alignment method according to claim 1, wherein after the step of processing the plurality of second feature points according to the transformation parameters to obtain a third image, the image alignment method further comprises:
synthesizing a video or a dynamic image comprising a plurality of frames according to the first image and the at least one third image.
9. A mobile terminal, characterized in that the mobile terminal comprises a processor, and a memory and a camera module both coupled to the processor;
the camera module is used for acquiring the first image and the second image;
the memory is for storing program data, the processor is for executing the program data to implement the image alignment method of any one of claims 1-8.
10. A computer storage medium for storing program data which, when executed by a processor, is adapted to implement the image alignment method of any one of claims 1 to 8.
CN201811565425.1A 2018-12-20 2018-12-20 Image alignment method, mobile terminal and computer storage medium Active CN109685839B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811565425.1A CN109685839B (en) 2018-12-20 2018-12-20 Image alignment method, mobile terminal and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811565425.1A CN109685839B (en) 2018-12-20 2018-12-20 Image alignment method, mobile terminal and computer storage medium

Publications (2)

Publication Number Publication Date
CN109685839A true CN109685839A (en) 2019-04-26
CN109685839B CN109685839B (en) 2023-04-18

Family

ID=66188148

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811565425.1A Active CN109685839B (en) 2018-12-20 2018-12-20 Image alignment method, mobile terminal and computer storage medium

Country Status (1)

Country Link
CN (1) CN109685839B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110856014A (en) * 2019-11-05 2020-02-28 北京奇艺世纪科技有限公司 Moving image generation method, moving image generation device, electronic device, and storage medium
CN119495119A (en) * 2025-01-15 2025-02-21 人民卫生电子音像出版社有限公司 Intelligent face recognition identity authentication system and method for examination environment

Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5859921A (en) * 1995-05-10 1999-01-12 Mitsubishi Denki Kabushiki Kaisha Apparatus for processing an image of a face
US20050089213A1 (en) * 2003-10-23 2005-04-28 Geng Z. J. Method and apparatus for three-dimensional modeling via an image mosaic system
US20080063298A1 (en) * 2006-09-13 2008-03-13 Liming Zhou Automatic alignment of video frames for image processing
JP2009195306A (en) * 2008-02-19 2009-09-03 Toshiba Corp Medical image display device and medical image display method
US20090238464A1 (en) * 2008-03-21 2009-09-24 Masakazu Ohira Image processing method, image processing apparatus, image forming apparatus and storage medium
US20110305405A1 (en) * 2010-06-11 2011-12-15 Fujifilm Corporation Method, apparatus, and program for aligning images
US20120051665A1 (en) * 2010-08-26 2012-03-01 Sony Corporation Image processing system with image alignment mechanism and method of operation thereof
US20140072231A1 (en) * 2012-09-10 2014-03-13 Nokia Corporation Method, apparatus and computer program product for processing of images
JP2014126892A (en) * 2012-12-25 2014-07-07 Fujitsu Ltd Image processing method, image processing apparatus, and image processing program
CN103973958A (en) * 2013-01-30 2014-08-06 阿里巴巴集团控股有限公司 Image processing method and image processing equipment
US20150199585A1 (en) * 2014-01-14 2015-07-16 Samsung Techwin Co., Ltd. Method of sampling feature points, image matching method using the same, and image matching apparatus
US20150294490A1 (en) * 2014-04-13 2015-10-15 International Business Machines Corporation System and method for relating corresponding points in images with different viewing angles
US20170280055A1 (en) * 2016-03-23 2017-09-28 Canon Kabushiki Kaisha Image processing apparatus, imaging apparatus, and control method of image processing apparatus
CN107305682A (en) * 2016-04-22 2017-10-31 富士通株式会社 Method and apparatus for being spliced to image
CN107465882A (en) * 2017-09-22 2017-12-12 维沃移动通信有限公司 A kind of image capturing method and mobile terminal
CN107527360A (en) * 2017-08-23 2017-12-29 维沃移动通信有限公司 A kind of image alignment method and mobile terminal
US20180005394A1 (en) * 2016-06-30 2018-01-04 Synaptics Incorporated Systems and methods for point-based image alignment
CN107909600A (en) * 2017-11-04 2018-04-13 南京奇蛙智能科技有限公司 The unmanned plane real time kinematics target classification and detection method of a kind of view-based access control model
WO2018104609A1 (en) * 2016-12-06 2018-06-14 B<>Com Method for tracking a target in a sequence of medical images, associated device, terminal apparatus and computer programs
US20180211445A1 (en) * 2015-07-17 2018-07-26 Sharp Kabushiki Kaisha Information processing device, terminal, and remote communication system
CN108537845A (en) * 2018-04-27 2018-09-14 腾讯科技(深圳)有限公司 Pose determination method, device and storage medium
WO2018180550A1 (en) * 2017-03-30 2018-10-04 富士フイルム株式会社 Image processing device and image processing method
CN108764024A (en) * 2018-04-09 2018-11-06 平安科技(深圳)有限公司 Generating means, method and the computer readable storage medium of human face recognition model


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xu Guoqing, "Face feature point alignment with online templates", Computer Engineering and Design *
Ge Yongxin et al., "Image registration method based on alignment degree of edge feature point pairs", Journal of Image and Graphics *


Also Published As

Publication number Publication date
CN109685839B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
US11037278B2 (en) Systems and methods for transforming raw sensor data captured in low-light conditions to well-exposed images using neural network architectures
US10708525B2 (en) Systems and methods for processing low light images
US9202263B2 (en) System and method for spatio video image enhancement
US11004179B2 (en) Image blurring methods and apparatuses, storage media, and electronic devices
CN109064504B (en) Image processing method, apparatus and computer storage medium
TW201142718A (en) Scale space normalization technique for improved feature detection in uniform and non-uniform illumination changes
CN112330618B (en) Image offset detection method, device and storage medium
CN108769523A (en) Image processing method and device, electronic equipment and computer readable storage medium
CN113283319A (en) Method and device for evaluating face ambiguity, medium and electronic equipment
CN113744307A (en) Image feature point tracking method and system based on threshold dynamic adjustment
CN113822927B (en) Face detection method, device, medium and equipment suitable for weak quality image
CN109685839B (en) Image alignment method, mobile terminal and computer storage medium
CN110598712B (en) Object position recognition method, device, computer equipment and storage medium
CN113627324A (en) Face image matching method, device, storage medium and electronic device
CN107403412A (en) Image processing method and apparatus for carrying out the method
CN117156297A (en) Multi-image fusion processing method and device, electronic equipment and storage medium
CN108805033B (en) Method and device for image selection based on local gradient distribution
Xue Blind image deblurring: a review
Hua et al. Low-light image enhancement based on joint generative adversarial network and image quality assessment
CN110365897A (en) Image correction method and device, electronic equipment and computer readable storage medium
CN111583124A (en) Method, device, system and storage medium for deblurring images
CN112446837B (en) Image filtering method, electronic device and storage medium
CN113379611B (en) Image processing model generation method, processing method, storage medium and terminal
CN113901917A (en) A face recognition method, device, computer equipment and storage medium
CN116245745B (en) Image processing method and image processing device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190426

Assignee: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Assignor: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

Contract record no.: X2021440000031

Denomination of invention: Image alignment method, mobile terminal and computer storage medium

License type: Common License

Record date: 20210125

EE01 Entry into force of recordation of patent licensing contract
GR01 Patent grant
GR01 Patent grant