
CN116862771A - Data center cabinet image stitching method, system, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116862771A
CN116862771A (Application CN202310850642.XA)
Authority
CN
China
Prior art keywords
image
edge
matching
feature
cabinet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310850642.XA
Other languages
Chinese (zh)
Inventor
南国
郝虹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong New Generation Information Industry Technology Research Institute Co Ltd
Original Assignee
Shandong New Generation Information Industry Technology Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong New Generation Information Industry Technology Research Institute Co Ltd
Priority to CN202310850642.XA
Publication of CN116862771A
Legal status: Pending

Classifications

    • G — Physics; G06 — Computing or calculating; counting; G06T — Image data processing or generation, in general
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T5/10 Image enhancement or restoration using non-spatial domain filtering
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/13 Edge detection
    • G06T7/33 Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T7/35 Determination of transform parameters for the alignment of images (image registration) using statistical methods
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20056 Discrete and fast Fourier transform [DFT, FFT]
    • G06T2207/20064 Wavelet transform [DWT]
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a data center cabinet image stitching method, system, electronic equipment and storage medium, belonging to the technical field of image processing. The technical problem to be solved is how to realize real-time online stitching of multiple images while improving stitching accuracy. The technical scheme adopted is as follows: acquiring cabinet sequence image data in a data center by using an inspection robot; preprocessing the cabinet sequence image data to obtain edge feature regions in the images; setting local feature regions in the reference image and the image to be stitched according to the edge feature regions, and extracting key point features; and performing image matching using the key point features, calculating a geometric relation matrix, and performing image registration, thereby realizing seamless stitching and fusion of the images. The system comprises a sequence image acquisition unit, an image preprocessing and edge feature acquisition unit, a key point feature extraction and matching unit, and an image fusion and stitching unit.

Description

Data center cabinet image stitching method, system, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of image processing, and in particular to a data center cabinet image stitching method, system, electronic equipment and storage medium.
Background
Image stitching is a technique for combining multiple images acquired by sensors into one larger image. During shooting, neighboring images usually share an overlapping area. The goal of image stitching is to gather the complete scene information into a single image so that downstream tasks can operate on it. In some applications, further tasks such as image understanding, processing, detection and classification must then be performed on the stitched image.
When an inspection robot inspects data center equipment, it often needs to examine the components mounted on the front of a complete cabinet, and the robot's vision system requires a complete cabinet front image as one visual processing unit. In practice, however, a single vision sensor cannot capture a complete cabinet image, because of the mounting position of the robot's vision sensors, their effective field of view, and the limited space in which the robot can travel. Several vision sensors must therefore be mounted on the robot and capture several groups of images simultaneously; these images contain large overlapping areas and must be stitched together. Cabinet images are characterized by repetitive sheet-like fine mesh regions: the image information is rich but repetitive and monotonous, which makes accurate stitching very difficult.
Therefore, how to realize real-time online stitching of multiple images, improve stitching accuracy, and overcome the repetitive, monotonous image content and limited image features found in real scenes is the technical problem to be solved.
Disclosure of Invention
The invention aims to provide a data center cabinet image stitching method, system, electronic equipment and storage medium, which solve the problem of realizing real-time online stitching of multiple images while improving stitching accuracy.
The technical task of the invention is realized as follows. A data center cabinet image stitching method comprises the following steps:
acquiring cabinet sequence image data in a data center by using an inspection robot;
preprocessing the cabinet sequence image data to obtain edge feature regions in the images;
setting local feature regions in the reference image and the image to be stitched according to the edge feature regions, and extracting key point features;
performing image matching using the key point features, calculating a geometric relation matrix, and performing image registration, thereby realizing seamless stitching and fusion of the images.
Preferably, the cabinet sequence image data is acquired in the data center by the inspection robot as follows:
4 CMOS cameras are mounted at fixed positions on the side of the inspection robot body; the 4 cameras form a multi-camera sensor system, arranged in order from top to bottom with a gap between each pair of adjacent cameras;
when the inspection robot moves in a straight line along the data center aisle to the front of a cabinet, the shutters of the four side CMOS cameras are triggered simultaneously, yielding 4 RGB images of different upper and lower parts of the cabinet; the 4 images overlap from top to bottom and together cover the front part area of the cabinet;
the inspection robot then travels a set distance and acquires another 4 images from top to bottom covering the rear part area of the cabinet, so that the 8 images contain one complete cabinet, with an overlapping area between every two adjacent images;
following its travel path, the inspection robot sequentially acquires multiple image groups for multiple cabinets.
Preferably, the cabinet sequence image data is preprocessed to obtain the edge feature regions in the image as follows:
image preprocessing and image enhancement: the image is equalized and enhanced by homomorphic filtering;
edge detection: an edge detection image is obtained with the Canny edge detection algorithm; the edge detection image clearly shows the mesh sheet regions, the region formed by the cabinet door edges, and the region formed by the edges between cabinets;
vertical projection: the pixel values of the edge detection image are projected vertically, yielding a pixel-value-sum projection graph with an abscissa of 800;
signal output via wavelet transform and high-pass filtering: a discrete wavelet transform (DWT) is applied to the projection graph;
the discrete wavelet transform is a wavelet transform with discretely sampled wavelets, and it captures both frequency information and time (position) information; it analyzes signals of different frequencies with filters of different frequencies: the wavelet basis function and the scaling function act as a high-pass filter and a low-pass filter, respectively; the low-pass filter removes the high-frequency part of the input signal and outputs the low-frequency part, corresponding to an approximation of the original signal; the high-pass filter removes the low-frequency part and outputs the high-frequency part, corresponding to the detail information; the high-pass output of the discrete wavelet transform is used as the working signal, and can be obtained with the pywt.dwt function of the Python PyWavelets library:
cA, cD = pywt.dwt(x, 'db1')
where x is the projection curve signal and 'db1' is the specified wavelet type; pywt.dwt returns two arrays, cA holding the approximation coefficients and cD holding the detail coefficients; the abscissa of the transformed output signal graph is then expanded back to 800;
obtaining the linear edge regions: candidate edge regions are obtained from the signal graph produced by the discrete wavelet transform, and the positions of the region formed by the cabinet door edges and the region formed by the edges between cabinets are located among the candidates according to a pixel-value width threshold and the known physical dimensions.
More preferably, the image preprocessing and image enhancement are as follows:
homomorphic filtering removes multiplicative noise while increasing contrast and normalizing brightness, thereby enhancing the image; the homomorphic filtering pipeline is:
f(x,y) → ln → FFT → H(u,v) → IFFT → exp → f′(x,y);
where f(x,y) is the original image, f′(x,y) is the processed image, ln is the logarithm, FFT is the fast Fourier transform, IFFT is the inverse fast Fourier transform, and exp is the exponential;
the Canny edge detection algorithm is a multi-stage algorithm comprising the following stages:
(1) noise reduction: noise is removed with a Gaussian filter;
(2) computing the image intensity gradient;
(3) non-maximum suppression;
(4) hysteresis thresholding: two thresholds are set; any edge with an intensity gradient above the high threshold is accepted as an edge, any edge below the low threshold is discarded as a non-edge, and edges in between are kept only if connected to an accepted edge.
Preferably, local feature regions are set in the reference image and in the image to be stitched according to the edge feature regions, and point features are extracted as follows:
local feature regions: according to the obtained edge regions, two local feature regions of different sizes are defined at the corresponding positions; each local feature region contains part of the edge features and part of the mesh features;
key point feature extraction: feature points are detected in the two local feature regions of different sizes with the ORB feature operator, yielding the key point features of the reference image and of the image to be stitched;
key point matching: the feature point pairs detected in the reference image and the image to be stitched are matched with the Brute-Force matching method, using the Hamming distance as the similarity measure; abnormal matches in the initial result are then removed with the random sample consensus (RANSAC) algorithm to obtain the optimal set of matching point pairs.
Preferably, image matching using the key point features, calculation of the geometric relation matrix, and image registration for seamless stitching and fusion of the images are performed as follows:
the set of matching point pairs is obtained from the local feature regions, and the geometric transformation between the reference image and the image to be stitched is then computed with OpenCV's findHomography() function, which calculates the homography matrix;
the reference image and the image to be stitched are aligned in the same coordinate system;
image fusion is performed in the overlapping area of the stitch, with adjustments guided by the edge feature regions to reduce discontinuity at the seam; smooth transitions can be achieved with gradual-change (feathered) blending or Poisson fusion;
the aligned and fused images are stitched to produce the final stitched image.
A data center cabinet image stitching system comprises a sequence image acquisition unit, an image preprocessing and edge feature acquisition unit, a key point feature extraction and matching unit, and an image fusion and stitching unit;
the sequence image acquisition unit is used to acquire cabinet sequence image data in the data center using the inspection robot;
the image preprocessing and edge feature acquisition unit is used to preprocess the cabinet sequence image data to obtain the edge feature regions in the images;
the key point feature extraction and matching unit is used to set local feature regions in the reference image and the image to be stitched according to the edge feature regions, and to extract point features;
the image fusion and stitching unit is used to perform image matching using the key point features, calculate the geometric relation matrix, and perform image registration, thereby realizing seamless stitching and fusion of the images.
Preferably, the image preprocessing and edge feature acquisition unit comprises an image preprocessing and enhancement module, an edge detection module, a vertical projection module, a signal output module, and a linear edge region acquisition module;
the image preprocessing and enhancement module is used to equalize and enhance the image with homomorphic filtering;
the edge detection module is used to obtain an edge detection image with the Canny edge detection algorithm; the edge detection image clearly shows the mesh sheet regions, the region formed by the cabinet door edges, and the region formed by the edges between cabinets;
the vertical projection module is used to project the pixel values of the edge detection image in the vertical direction, yielding a pixel-value-sum projection graph with an abscissa of 800;
the signal output module is used to apply a discrete wavelet transform to the projection graph and output the high-pass-filtered result of the transform as the working signal; the discrete wavelet transform is a wavelet transform with discretely sampled wavelets, and it captures both frequency information and time (position) information; it analyzes signals of different frequencies with filters of different frequencies: the wavelet basis function and the scaling function act as a high-pass filter and a low-pass filter, respectively; the low-pass filter removes the high-frequency part of the input signal and outputs the low-frequency part, corresponding to an approximation of the original signal; the high-pass filter removes the low-frequency part and outputs the high-frequency part, corresponding to the detail information; the high-pass output can be obtained with the pywt.dwt function of the Python PyWavelets library:
cA, cD = pywt.dwt(x, 'db1')
where x is the projection curve signal and 'db1' is the specified wavelet type; pywt.dwt returns two arrays, cA holding the approximation coefficients and cD holding the detail coefficients; the abscissa of the transformed output signal graph is then expanded back to 800;
the linear edge region acquisition module is used to obtain candidate edge regions from the signal graph produced by the discrete wavelet transform and to locate, among the candidates, the positions of the region formed by the cabinet door edges and the region formed by the edges between cabinets according to a pixel-value width threshold and the known physical dimensions;
the key point feature extraction and matching unit comprises a local feature region acquisition module, a key point feature extraction module, and a key point matching module;
the local feature region acquisition module is used to define two local feature regions of different sizes at the corresponding positions according to the obtained edge regions; each local feature region contains part of the edge features and part of the mesh features;
the key point feature extraction module is used to detect feature points in the two local feature regions of different sizes with the ORB feature operator, yielding the key point features of the reference image and of the image to be stitched;
the key point matching module is used to match the feature point pairs detected in the reference image and the image to be stitched with the Brute-Force matching method, using the Hamming distance as the similarity measure; abnormal matches in the initial result are removed with the random sample consensus (RANSAC) algorithm to obtain the optimal set of matching point pairs.
An electronic device comprises a memory and at least one processor;
the memory stores a computer program;
the at least one processor executes the computer program stored in the memory, causing the at least one processor to perform the data center cabinet image stitching method described above.
A computer-readable storage medium stores a computer program which, when executed by a processor, implements the data center cabinet image stitching method described above.
The data center cabinet image stitching method, system, electronic equipment and storage medium of the invention have the following advantages:
(1) the robot acquires cabinet sequence images in the data center; two groups of edge feature regions and local feature regions are obtained according to the visual characteristics of the cabinet images, key point features are extracted from the defined local feature regions and matched, completing the stitching of multiple images in real time online and facilitating subsequent image-related algorithm work;
(2) it should be noted that, depending on differences in cabinet appearance in a specific scene, different processing flows may be needed during stitching; for example, if the original images are geometrically distorted, they must first be rectified before the operations of the method are applied;
(3) targeting the visual characteristics of the image, the edge feature regions are obtained first, the local feature regions are defined from them, and feature values are extracted within the local feature regions; the defined local feature regions retain a limited number of key points, enabling efficient image matching;
(4) the invention overcomes the repetitive, monotonous image content and limited image features of real scenes, realizes real-time online stitching of multiple images, and improves stitching accuracy;
(5) the invention is also applicable to stitching other sets of images whose appearance characteristics are similar to those of cabinet images.
Drawings
The invention is further described below with reference to the accompanying drawings.
FIG. 1 is a flow diagram of the data center cabinet image stitching method;
FIG. 2 is a schematic diagram of the region feature extraction method for the left and right reference images and the image to be stitched;
FIG. 3 is a schematic diagram of the region feature extraction method for the upper and lower reference images and the image to be stitched.
Detailed Description
The data center cabinet image stitching method, system, electronic equipment and storage medium of the present invention are described in detail below with reference to the accompanying drawings and specific embodiments.
Example 1:
As shown in FIG. 1, this embodiment provides a data center cabinet image stitching method, specifically comprising:
S1, acquiring cabinet sequence image data in the data center using the inspection robot;
S2, preprocessing the cabinet sequence image data to obtain the edge feature regions in the images;
S3, setting local feature regions in the reference image and the image to be stitched according to the edge feature regions, and extracting point features;
S4, performing image matching using the key point features, calculating the geometric relation matrix, and performing image registration, thereby realizing seamless stitching and fusion of the images.
In step S1 of this embodiment, the cabinet sequence image data is acquired in the data center using the inspection robot as follows:
S101, 4 CMOS cameras are mounted at fixed positions on the side of the inspection robot body; the 4 cameras form a multi-camera sensor system arranged in order from top to bottom, with a gap between each pair of adjacent cameras;
S102, when the inspection robot moves in a straight line along the data center aisle to the front of a cabinet, the shutters of the four side CMOS cameras are triggered simultaneously, yielding 4 RGB images of different upper and lower parts of the cabinet; the 4 images overlap from top to bottom and cover the front part area of the cabinet;
S103, the inspection robot travels a further set distance and acquires another 4 images from top to bottom covering the rear part area of the cabinet, so that the 8 images contain one complete cabinet, with an overlapping area between every two adjacent images;
S104, following its travel path, the inspection robot sequentially acquires multiple image groups for multiple cabinets.
While the cameras are shooting, a series of image algorithms are executed simultaneously according to the requirements of the engineering application.
The images acquired by the above procedure also form an offline image dataset used for algorithm testing.
In step S2 of this embodiment, the cabinet sequence image data is preprocessed to obtain the edge feature regions in the image as follows:
besides a large repetitive mesh area, the original image has two significant edge regions: the region formed by the cabinet door edges and the region formed by the edges between cabinets. These are the most important feature regions used in this embodiment.
To extract these two regions effectively, the original image first requires a series of preprocessing steps, as follows:
s201, image preprocessing and image enhancement: equalizing and enhancing the image by homomorphic filtering;
s202, edge detection: obtaining an edge detection image by using a Canny edge detection algorithm, wherein the edge detection image obviously shows a mesh sheet area, an area formed by edges of cabinet doors and an area formed by edges between cabinets;
s203, longitudinal projection: the size of the acquired image in the specific scene is different from the vision sensor, and the size of the image in the data set is 800 multiplied by 603 pixels;
performing longitudinal pixel value projection on the edge detection image to obtain a pixel value summation projection graph with an abscissa of 800;
s204, outputting a signal through wavelet transformation and high-pass filtering: performing a discrete wavelet transform (DWT: discrete Wavelet Transform) on the acquired projection graph;
the discrete wavelet transformation is wavelet transformation in which wavelet is subjected to discrete sampling, and the discrete wavelet transformation can capture frequency information and time position information; the discrete wavelet transforms filters with different frequencies to analyze signals with different frequencies, and the wavelet basis function and the scale function are used for analyzing high-frequency signals and low-frequency signals, namely a high-pass filter and a low-pass filter; the low-pass filter filters out the high-frequency part of the input signal and outputs the low-frequency part corresponding to the approximation value of the original signal; the high-pass filter filters out the low-frequency part and outputs the high-frequency part, and the high-frequency part corresponds to the detail information; the high pass filtered output generated by the discrete wavelet transform is used as a signal, which can be implemented with the pywt.dwt function of python:
(cA,cD)=pywt.dwt(x,'db1');
wherein x represents the projection curve signal; db1 is a specified wavelet function type; the pywt.dwt function returns two arrays, where array cA represents the approximation and array cD represents the detail coefficient; and expanding the abscissa of the transformed output signal map to 800;
s205, acquiring a linear edge region: and acquiring candidate edge regions according to the signal diagram acquired by the discrete wavelet transform, and acquiring the positions of the regions formed by the edges of the cabinets and the regions formed by the edges between the cabinets in the candidate edge regions according to the width threshold value of the pixel value and the prior physical size, as shown by L10 and L11 in the figure 2.
As shown in FIG. 2, the cabinet door edge region and the edge region between two cabinets are designated as the edge feature regions according to prior knowledge. They are chosen because, relative to the repetitive mesh texture, they are the salient features of the whole image; this visual contrast makes it easy to locate specific areas of the image relative to the edge feature regions.
It should be noted that the two edge regions may differ with the actual appearance of the cabinet; the method is illustrated here with the images in the dataset.
The image preprocessing and image enhancement in step S201 of this embodiment are specifically as follows:
homomorphic filtering is utilized to remove multiplicative noise, and meanwhile, contrast and standardized brightness are increased, so that the aim of enhancing an image is fulfilled;
an image may be represented as the product of its illumination component and reflection component, which, although inseparable in the time domain, may be linearly separated in the frequency domain via fourier transformation. Since the illuminance can be regarded as illumination in the environment, the relative change is small, and can be regarded as a low-frequency component of the image; the relatively large change in reflectance can be regarded as a high frequency component. The illumination of the image is more uniform by processing the influence of illumination and reflectivity on the gray value of the pixel respectively, and a high-pass filter is generally used for achieving the purpose of enhancing the detail characteristics of the shadow region.
The flow of homomorphism filtering image processing is expressed as follows:
f(x,y)→ln→FFT→H(u,v)→IFFT→exp→f′(x,y);
wherein f (x, y) represents an original image; f' (x, y) represents the processed image; ln represents a logarithmic operation; the FFT represents a fast fourier transform; the IFFT represents an inverse fast fourier transform; exp represents an exponential operation;
and the gray level image is obtained through homomorphic filtering. The following steps are performed on top of the homomorphically filtered image.
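The ln → FFT → H(u,v) → IFFT → exp pipeline above can be sketched in Python as follows; the patent does not specify the transfer function H(u,v), so the Gaussian high-emphasis filter and its parameters (gamma_l, gamma_h, c, d0) below are illustrative assumptions:

```python
import numpy as np

def homomorphic_filter(img, gamma_l=0.5, gamma_h=2.0, c=1.0, d0=30.0):
    """Homomorphic filtering: ln -> FFT -> H(u,v) -> IFFT -> exp.

    img: 2-D grayscale array with values in [0, 255].
    gamma_l, gamma_h, c, d0 are illustrative filter parameters, not
    values taken from the patent.
    """
    rows, cols = img.shape
    log_img = np.log1p(np.float64(img))          # ln (log1p avoids log(0))
    F = np.fft.fftshift(np.fft.fft2(log_img))    # FFT, centred spectrum

    # Gaussian high-emphasis filter H(u,v): attenuates low frequencies
    # (illumination) and amplifies high frequencies (reflectance).
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    H = (gamma_h - gamma_l) * (1 - np.exp(-c * D2 / d0 ** 2)) + gamma_l

    filtered = np.fft.ifft2(np.fft.ifftshift(F * H)).real  # IFFT
    out = np.expm1(filtered)                               # exp
    out = np.clip(out, 0, None)
    return np.uint8(255 * out / out.max())                 # normalise to [0, 255]
```

The returned gray-level image then feeds the edge detection step below.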
The Canny edge detection algorithm in step S202 of this embodiment is a multi-stage algorithm, and specifically includes the following stages:
(1) Noise reduction: removing noise by using a Gaussian filter;
(2) Calculating an image intensity gradient;
(3) Non-maximum suppression;
(4) Hysteresis thresholding: deciding which candidate edges are true edges by setting two thresholds: any edge with an intensity gradient above the high threshold is accepted as an edge, any edge below the low threshold is discarded as a non-edge, and edges lying between the two thresholds are retained only if they are connected to an edge above the high threshold.
In step S3 of this embodiment, local feature areas are set in the reference image and the image to be stitched according to the edge feature areas and are used for extracting point features, specifically as follows:
s301, local feature area: according to the obtained edge regions, two local feature regions with different sizes are respectively defined at corresponding positions; the local feature region includes a partial edge feature and a partial mesh feature;
According to the obtained edge regions, two local feature regions of different sizes are defined at the corresponding positions. Each local feature region includes part of the edge features and part of the mesh features. As shown in fig. 2, the areas a10, a11, a20 and a21 surrounded by dashed borders are the defined local feature regions; "local" here is relative to the entire image.
The size of the local feature region is adjusted according to the number of the key point feature values extracted in the subsequent process.
The corresponding local feature areas in the two images have similar structural features.
The corresponding local feature areas in the two images lie within the subsequent candidate overlap region; the left image serves as the reference image and the right image as the image to be matched;
s302, extracting key point features: as shown in fig. 2, taking a left image as a reference image, taking a right image as an image to be matched, and adopting ORB feature operators to detect feature points in two local feature areas with different sizes respectively so as to obtain key point features of the reference image and the image to be spliced;
s303, key point matching: the feature point pairs detected in the reference image and the image to be stitched are matched by the Brute-Force matching method, with the Hamming distance as the similarity measure; at the same time, abnormal matches in the initial matching result are removed with the random sample consensus (RANSAC) algorithm to obtain the optimal set of matching point pairs.
In step S4 of this embodiment, image matching is performed using the key point features, the geometric relation matrix is calculated, and image registration is performed so as to realize seamless stitching and fusion of the images, specifically as follows:
s401, acquiring the set of matching point pairs according to the local feature regions and, after the matching pairs of feature points are obtained, calculating the geometric transformation relation between the reference image and the image to be stitched: the homography matrix is computed with OpenCV's findHomography() function;
s402, aligning the reference image and the image to be spliced in the same coordinate system;
s403, performing image fusion in the overlap region of the stitch, adjusting according to the edge feature region to reduce discontinuity at the seam, and achieving a smooth transition by gradient (feathered) blending or Poisson fusion;
s404, stitching the aligned and fused images to generate a final stitched image.
It should be noted that, as shown in fig. 3, because the four cameras are mounted along the vertical direction, the left-right misalignment between two sequentially adjacent images is limited to a small range. The stitching process here refers to stitching the left image with the right image.
And completing the integral splicing of 8 images in sequence according to the flow.
Example 2:
the embodiment provides a data center cabinet image splicing system, which comprises a sequence image acquisition unit, an image preprocessing and edge feature acquisition unit, a key point feature extraction and matching unit and an image fusion and splicing unit;
the sequence image acquisition unit is used for acquiring cabinet sequence image data in the data center by using the inspection robot;
the image preprocessing and edge feature acquiring unit is used for preprocessing the cabinet sequence image data to acquire an edge feature area in the image;
the key point feature extraction and matching unit is used for setting local feature areas in the reference image and the image to be spliced according to the edge feature areas, and extracting point features;
the image fusion and splicing unit is used for carrying out image matching by utilizing the key point features, calculating a geometric relation matrix and carrying out image registration so as to realize seamless splicing and fusion of images.
The image preprocessing and edge characteristic acquiring unit in the embodiment comprises an image preprocessing and enhancing module, an edge detecting module, a longitudinal projection module, a signal output module and a linear edge region acquiring module;
the image preprocessing and enhancing module is used for carrying out equalization and enhancing processing on the image by adopting homomorphic filtering;
the edge detection module is used for obtaining an edge detection image by using a Canny edge detection algorithm, wherein the edge detection image obviously presents a mesh sheet area, an area formed by edges of cabinet doors and an area formed by edges between cabinets;
the vertical projection module is used for projecting the pixel values of the edge detection image in the vertical direction to obtain a pixel value summation projection graph with the abscissa of 800;
the signal output module is used for performing a discrete wavelet transform on the projection graph and outputting the high-pass (detail) component produced by the transform as the working signal; the discrete wavelet transform is a wavelet transform in which the wavelets are discretely sampled, and it can capture both frequency information and time (position) information; it analyzes the signal with filters of different frequency bands, the wavelet function and the scaling function acting as a high-pass filter and a low-pass filter, respectively; the low-pass filter removes the high-frequency part of the input signal and outputs the low-frequency part, which corresponds to the approximation of the original signal; the high-pass filter removes the low-frequency part and outputs the high-frequency part, which corresponds to the detail information; the high-pass output of the discrete wavelet transform is used as the signal and can be computed with the pywt.dwt function in Python:
(cA,cD)=pywt.dwt(x,'db1');
wherein x represents the projection curve signal; 'db1' specifies the wavelet type; the pywt.dwt function returns two arrays, cA holding the approximation coefficients and cD the detail coefficients; the abscissa of the transformed output signal map is then expanded back to 800;
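A small sketch of the projection-plus-DWT signal chain follows; the nearest-neighbour upsampling used to stretch cD back to the original width is an illustrative choice for restoring the full-width abscissa (800 pixels in the patent), not a step the patent prescribes:

```python
import numpy as np
import pywt

def edge_signal(edge_img):
    """Vertical projection of a Canny edge map followed by a one-level
    discrete wavelet transform; the detail (high-pass) coefficients cD
    highlight abrupt changes in the projection curve, i.e. candidate
    vertical-edge positions.
    """
    # Sum edge pixels down each column -> 1-D projection curve.
    projection = edge_img.astype(np.float64).sum(axis=0)
    cA, cD = pywt.dwt(projection, 'db1')   # approximation, detail coefficients
    # db1 halves the length; repeat each coefficient so the abscissa again
    # matches the original image width.
    detail = np.repeat(cD, 2)[: projection.size]
    return projection, detail
```

Peaks of |detail| then serve as candidate positions for the linear edge regions.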
the linear edge region acquisition module is used for acquiring candidate edge regions according to the signal diagram acquired by discrete wavelet transformation, and acquiring the regions formed by the edges of the cabinets and the positions of the regions formed by the edges between the cabinets in the candidate edge regions according to the width threshold value of the pixel value and the prior physical size.
The key point feature extraction and matching unit in the embodiment comprises a local feature region acquisition module, a key point feature extraction module and a key point matching module;
the local feature region acquisition module is used for respectively defining two local feature regions with different sizes at corresponding positions according to the acquired edge regions; the local feature region includes a partial edge feature and a partial mesh feature;
the key point feature extraction module is used for detecting feature points in two local feature areas with different sizes by adopting an ORB feature operator, so as to obtain key point features of a reference image and an image to be spliced;
the key point matching module is used for matching the feature point pairs detected in the reference image and the image to be stitched by the Brute-Force matching method, with the Hamming distance as the similarity measure; at the same time, abnormal matches in the initial matching result are removed with the random sample consensus (RANSAC) algorithm to obtain the optimal set of matching point pairs.
Example 3:
the embodiment of the invention also provides electronic equipment, which comprises: a memory and a processor;
wherein the memory stores computer-executable instructions;
and the processor executes the computer-executable instructions stored in the memory, so that the processor executes the data center cabinet image stitching method in any embodiment of the invention.
The processor may be a Central Processing Unit (CPU), but may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory may be used to store computer programs and/or modules, and the processor implements the various functions of the electronic device by running or executing the computer programs and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function, and the like; the data storage area may store data created according to the use of the terminal, etc. The memory may also include high-speed random access memory, and may further include non-volatile memory such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
Example 4:
the embodiment of the invention also provides a computer readable storage medium, wherein a plurality of instructions are stored, and the instructions are loaded by a processor, so that the processor executes the data center cabinet image stitching method in any embodiment of the invention. Specifically, a system or apparatus provided with a storage medium on which a software program code realizing the functions of any of the above embodiments is stored, and a computer (or CPU or MPU) of the system or apparatus may be caused to read out and execute the program code stored in the storage medium.
In this case, the program code itself read from the storage medium may realize the functions of any of the above-described embodiments, and thus the program code and the storage medium storing the program code form part of the present invention.
Examples of storage media for providing program code include floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), magnetic tapes, non-volatile memory cards, and ROM. Alternatively, the program code may be downloaded from a server computer over a communication network.
Further, it should be apparent that the functions of any of the above-described embodiments may be implemented not only by executing the program code read out by the computer, but also by causing an operating system or the like operating on the computer to perform part or all of the actual operations based on the instructions of the program code.
Further, it is understood that the program code read out from the storage medium may be written into a memory provided in an expansion board inserted into the computer or into a memory provided in an expansion unit connected to the computer, and a CPU or the like mounted on the expansion board or expansion unit then performs part or all of the actual operations based on the instructions of the program code, thereby realizing the functions of any of the above embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (10)

1. The data center cabinet image splicing method is characterized by comprising the following steps of:
acquiring cabinet sequence image data in a data center by using a patrol robot;
preprocessing cabinet sequence image data to obtain an edge characteristic region in an image;
setting local feature areas in the reference image and the image to be spliced according to the edge feature areas, and extracting point features;
and (3) performing image matching by utilizing the key point characteristics, calculating a geometric relation matrix, and performing image registration so as to realize seamless splicing and fusion of images.
2. The method for stitching images of cabinets in a data center according to claim 1, wherein the step of acquiring the cabinet sequence image data in the data center by using the inspection robot is specifically as follows:
the inspection robot is provided with 4 CMOS cameras at fixed positions on the side surface of the robot body, the 4 CMOS cameras form a multi-camera sensor system, the multi-camera sensor system is sequentially arranged from top to bottom, and the adjacent two CMOS cameras are provided with intervals;
when the inspection robot linearly moves to the front of the cabinet along the data center channel, four side CMOS camera shutters are simultaneously controlled to be opened, 4 RGB images containing different parts of the cabinet are obtained, an overlapping area is formed between the 4 images from top to bottom, and the front part area of the cabinet is covered;
the inspection robot continues to travel for a set distance, 4 images from top to bottom are obtained, the rear part area of the cabinet is covered, a complete cabinet is further contained in 8 images, and overlapping areas exist in every two adjacent areas of the 8 images respectively;
and according to the driving path, the inspection robot sequentially acquires a plurality of image groups of a plurality of cabinets.
3. The method for stitching images of racks in a data center according to claim 1, wherein preprocessing is performed on the image data of the rack sequence to obtain edge feature areas in the images, specifically as follows:
image preprocessing and image enhancement: equalizing and enhancing the image by homomorphic filtering;
edge detection: obtaining an edge detection image by using a Canny edge detection algorithm, wherein the edge detection image obviously shows a mesh sheet area, an area formed by edges of cabinet doors and an area formed by edges between cabinets;
and (3) longitudinal projection: performing longitudinal pixel value projection on the edge detection image to obtain a pixel value summation projection graph with an abscissa of 800;
output signal by wavelet transform and high pass filtering: performing discrete wavelet transform on the obtained projection graph; the high pass filtered output generated by the discrete wavelet transform is used as a signal, which can be implemented with the pywt.dwt function of python:
(cA,cD)=pywt.dwt(x,'db1');
wherein x represents the projection curve signal; db1 is a specified wavelet function type; the pywt.dwt function returns two arrays, where array cA represents the approximation and array cD represents the detail coefficient; and expanding the abscissa of the transformed output signal map to 800;
obtaining a linear edge area: and acquiring candidate edge regions according to the signal diagram acquired by the discrete wavelet transform, and acquiring the regions formed by the edges of the cabinets and the positions of the regions formed by the edges between the cabinets in the candidate edge regions according to the width threshold value of the pixel values and the prior physical size.
4. The method for stitching images of a rack in a data center according to claim 3, wherein the image preprocessing and the image enhancement are as follows:
homomorphic filtering is utilized to remove multiplicative noise, and meanwhile, contrast and standardized brightness are increased, so that the aim of enhancing an image is fulfilled; the flow of homomorphism filtering image processing is expressed as follows:
f(x,y)→ln→FFT→H(u,v)→IFFT→exp→f′(x,y);
wherein f (x, y) represents an original image; f' (x, y) represents the processed image; ln represents a logarithmic operation; the FFT represents a fast fourier transform; the IFFT represents an inverse fast fourier transform; exp represents an exponential operation;
the Canny edge detection algorithm is a multi-stage algorithm, and specifically comprises the following stages:
(1) Noise reduction: removing noise by using a Gaussian filter;
(2) Calculating an image intensity gradient;
(3) Non-maximum suppression;
(4) Hysteresis thresholding: deciding which candidate edges are true edges by setting two thresholds: any edge with an intensity gradient above the high threshold is accepted as an edge, any edge below the low threshold is discarded as a non-edge, and edges lying between the two thresholds are retained only if they are connected to an edge above the high threshold.
5. The method for stitching images of a rack in a data center according to claim 1, wherein local feature areas are respectively set in a reference image and an image to be stitched according to edge feature areas, and the method is used for extracting point features specifically as follows:
local feature region: according to the obtained edge regions, two local feature regions with different sizes are respectively defined at corresponding positions; the local feature region includes a partial edge feature and a partial mesh feature;
and (3) extracting key point characteristics: respectively detecting feature points in two local feature areas with different sizes by adopting an ORB feature operator, so as to obtain key point features of a reference image and an image to be spliced;
key point matching: matching the feature point pairs detected by the reference image and the image to be spliced by adopting a Brute-Force matching method, and taking a hamming distance as a similarity measure; and meanwhile, removing abnormal matching values in the initial matching result by using a random sampling consistent RANSAC algorithm to obtain an optimal matching point pair set.
6. The method for stitching images of a rack in a data center according to claim 1, wherein the method for stitching images by utilizing key point features to perform image matching, calculating a geometric relation matrix, and performing image registration, further realizing seamless stitching and fusion of images comprises the following specific steps:
after the set of matching point pairs of the feature points is acquired according to the local feature regions, the geometric transformation relation between the reference image and the image to be stitched is calculated: the homography matrix is computed with OpenCV's findHomography() function;
aligning the reference image and the image to be spliced in the same coordinate system;
image fusion is performed in the overlap region of the stitch, with adjustment according to the edge feature region to reduce discontinuity at the seam, and a smooth transition is achieved by gradient (feathered) blending or Poisson fusion;
and splicing the aligned and fused images to generate a final spliced image.
7. A data center cabinet image stitching system, characterized by comprising a sequence image acquisition unit, an image preprocessing and edge feature acquisition unit, a key point feature extraction and matching unit, and an image fusion and stitching unit;
the sequence image acquisition unit is used for acquiring cabinet sequence image data in the data center by using the inspection robot;
the image preprocessing and edge feature acquiring unit is used for preprocessing the cabinet sequence image data to acquire an edge feature area in the image;
the key point feature extraction and matching unit is used for setting local feature areas in the reference image and the image to be spliced according to the edge feature areas, and extracting point features;
the image fusion and splicing unit is used for carrying out image matching by utilizing the key point features, calculating a geometric relation matrix and carrying out image registration so as to realize seamless splicing and fusion of images.
8. The data center cabinet image stitching system according to claim 7, wherein the image preprocessing and edge feature acquisition unit includes an image preprocessing and enhancement module, an edge detection module, a longitudinal projection module, a signal output module, and a linear edge region acquisition module;
the image preprocessing and enhancing module is used for carrying out equalization and enhancing processing on the image by adopting homomorphic filtering;
the edge detection module is used for obtaining an edge detection image by using a Canny edge detection algorithm, wherein the edge detection image obviously presents a mesh sheet area, an area formed by edges of cabinet doors and an area formed by edges between cabinets;
the vertical projection module is used for projecting the pixel values of the edge detection image in the vertical direction to obtain a pixel value summation projection graph with the abscissa of 800;
the signal output module is used for performing a discrete wavelet transform on the projection graph and using the high-pass (detail) output of the transform as the working signal, which can be computed with the pywt.dwt function in Python:
(cA,cD)=pywt.dwt(x,'db1');
wherein x represents the projection curve signal; db1 is a specified wavelet function type; the pywt.dwt function returns two arrays, where array cA represents the approximation and array cD represents the detail coefficient; and expanding the abscissa of the transformed output signal map to 800;
the linear edge region acquisition module is used for acquiring candidate edge regions according to the signal diagram acquired by discrete wavelet transformation, and acquiring the regions formed by the edges of the cabinets and the positions of the regions formed by the edges between the cabinets in the candidate edge regions according to the width threshold value of the pixel value and the prior physical size;
the key point feature extraction and matching unit comprises a local feature region acquisition module, a key point feature extraction module and a key point matching module;
the local feature region acquisition module is used for respectively defining two local feature regions with different sizes at corresponding positions according to the acquired edge regions; the local feature region includes a partial edge feature and a partial mesh feature;
the key point feature extraction module is used for detecting feature points in two local feature areas with different sizes by adopting an ORB feature operator, so as to obtain key point features of a reference image and an image to be spliced;
the key point matching module is used for matching the feature point pairs detected in the reference image and the image to be stitched by the Brute-Force matching method, with the Hamming distance as the similarity measure; at the same time, abnormal matches in the initial matching result are removed with the random sample consensus (RANSAC) algorithm to obtain the optimal set of matching point pairs.
9. An electronic device, comprising: a memory and at least one processor;
wherein the memory has a computer program stored thereon;
the at least one processor executing the computer program stored by the memory causes the at least one processor to perform the data center cabinet image stitching method of any one of claims 1 to 6.
10. A computer readable storage medium having a computer program stored therein, the computer program being executable by a processor to implement the data center cabinet image stitching method of any one of claims 1 to 6.
CN202310850642.XA 2023-07-12 2023-07-12 Data center cabinet image stitching method, system, electronic equipment and storage medium Pending CN116862771A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310850642.XA CN116862771A (en) 2023-07-12 2023-07-12 Data center cabinet image stitching method, system, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310850642.XA CN116862771A (en) 2023-07-12 2023-07-12 Data center cabinet image stitching method, system, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116862771A true CN116862771A (en) 2023-10-10

Family

ID=88230017

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310850642.XA Pending CN116862771A (en) 2023-07-12 2023-07-12 Data center cabinet image stitching method, system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116862771A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118521671A (en) * 2024-07-19 2024-08-20 深圳中安高科电子有限公司 CMOS area array camera array train bottom imaging method and device
CN119397864A (en) * 2024-12-31 2025-02-07 固控电气集团有限公司 Method and system for constructing assembled environmental protection cabinet using aluminum-zinc plates under multiple folded edges



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination