
CN111126183A - Method for detecting damage of building after earthquake based on near-ground image data - Google Patents

Method for detecting damage of building after earthquake based on near-ground image data

Info

Publication number
CN111126183A
CN111126183A
Authority
CN
China
Prior art keywords
building
image
image data
damage
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911249399.6A
Other languages
Chinese (zh)
Inventor
眭海刚
孙向东
黄立洪
刘超贤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201911249399.6A
Publication of CN111126183A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/176Urban or other man-made structures
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/35Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V20/38Outdoor scenes
    • G06V20/39Urban scenes
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a post-earthquake building damage detection method based on near-ground image data, comprising the following steps: Step 1, use photographic equipment to collect near-ground image data of post-earthquake buildings, preprocess the data, and prepare a building damage annotation sample set; Step 2, feed the damaged-building training samples and the corresponding annotation information into a deep neural network model for training; Step 3, feed the near-ground image data of the building to be detected into the deep neural network model trained in Step 2 to obtain a building damage pre-detection result; Step 4, perform superpixel segmentation on the near-ground image data of the building to be detected, and use the superpixel segmentation result to fuse the building damage pre-detection result obtained in Step 3, yielding a fine building damage detection result.

Description

Method for detecting damage of building after earthquake based on near-ground image data
Technical Field
The invention relates to the technical field of remote sensing applications and disaster assessment, in particular to a method for detecting post-earthquake building damage from near-ground image data acquired by devices such as cameras (handheld/vehicle-mounted/unmanned aerial vehicle) and smartphones, and more particularly to a method that applies deep learning and a superpixel segmentation algorithm to such near-ground image data.
Background
After a natural disaster occurs, carrying out emergency rescue, decision-making command and post-earthquake reconstruction at the earliest possible time keeps casualties and property losses in the disaster area to a minimum. Throughout the post-earthquake emergency response, damage information about buildings in the disaster area provides important guidance for decision makers and rescue workers. At present, the most common approach is to extract damaged building objects through change detection between pre-disaster and post-disaster remote sensing images. This approach usually relies on remote sensing images acquired by the same sensor at different time phases, and obtaining such data is quite difficult in the emergency monitoring and evaluation of an actual earthquake disaster. By contrast, post-disaster remote sensing image data are relatively easy to acquire, so building damage detection based on post-disaster remote sensing images has gradually become a research hotspot in recent years. In research on building damage detection from post-disaster single-temporal images, traditional satellite remote sensing imagery, as a main data source, has a long revisit period and can capture only the roof of a building; airborne Lidar point clouds and aerial oblique photogrammetry can compensate for this inherent limitation of satellite imagery in detecting facade damage, but, owing to the complexity of modern buildings, especially in densely built-up areas, they still suffer from ground-object occlusion, shooting dead angles and similar problems. Therefore, there remains a need to collect near-ground image data, which is closer to the building and offers a more direct shooting angle, as auxiliary data for post-earthquake building damage detection.
With the maturation of camera (handheld/vehicle-mounted/unmanned aerial vehicle) and smartphone technology, it has become feasible to acquire near-ground images of damaged buildings in a disaster area and perform fine damage detection with such cameras and the high-resolution cameras built into smartphones. Compared with traditional photogrammetry, data acquisition with these devices offers markedly higher image resolution, lower technical requirements for data collectors and strong timeliness, and can effectively overcome the facade information loss and ground-object occlusion found in satellite and aerial data. Used in conjunction with satellite and aerial data, post-earthquake building damage detection based on near-ground image data has the potential to become an important technical means for all-around, fine-grained detection of building damage after an earthquake.
Disclosure of Invention
To address these problems, the invention provides a method for detecting post-earthquake building damage based on near-ground image data, comprising the following specific steps:
step one, acquiring near-ground image data of buildings after an earthquake using photographic equipment, preprocessing the image data, and preparing a building damage annotation sample set;
step two, substituting the damaged-building training samples and the corresponding annotation information into a deep neural network model for training;
step three, substituting the building near-ground image data to be detected into the deep neural network model trained in step two to obtain a building damage pre-detection result;
step four, performing superpixel segmentation on the building near-ground image data to be detected, and fusing the building damage pre-detection result obtained in step three with the superpixel segmentation result on the basis of a majority voting rule, to obtain a fine building damage detection result; the specific implementation is as follows:
firstly, the image superpixel segmentation result map is divided into different regions according to the segmented superpixel blocks; then, within each region, the number of pixels belonging to each class in the corresponding damage pre-detection result is counted; finally, according to the pixel-count statistics, the class containing the largest total number of pixels is taken as the class label of the superpixel region, computed as:

$$L_r = \mathop{\arg\max}_{c \in \{1,2,\dots,M\}} \sum_{(i,j)\in r}\operatorname{sgn}\big(f(r(i,j)) = c\big)$$

where $L_r$ is the class label assigned to region r, M is the total number of classes in the damage pre-detection result, r(i,j) is the pixel with coordinates (i,j) in region r, f(r(i,j)) is the class label of pixel r(i,j), and sgn(·) is a mathematical sign (indicator) function: sgn returns 1 if f(r(i,j)) = c and 0 otherwise.
Further, in the first step, the photographing device includes a handheld camera, a vehicle-mounted camera, an unmanned aerial vehicle camera, or a smart phone.
Furthermore, in step one, the preprocessing of the images and the preparation of the building damage annotation sample set are implemented as follows:
(1) cropping the post-earthquake near-ground building image data and readjusting the image width and height;
(2) performing blur analysis on the adjusted image:
a) converting the near-ground image of the building into a gray image;
b) calculating the Laplacian variance of the grayscale image: 1) first performing edge detection on the image with a Laplacian of Gaussian (LoG) operator; 2) then computing the variance of the edge detection result of 1) over a sampling window, according to the formula:

$$\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2$$

where $x_i$ is the LoG response of the i-th pixel in the sampling window, $\bar{x}$ is the mean LoG response over all pixels in the window, and n is the total number of pixels in the window;
c) averaging the variance values of the windows returned in step b), taking the average as the image blur metric V, and filtering out image data whose blur metric V is below a threshold;
(3) and manually plotting the objects of wall falling and wall crack damage on the facade of the building by using a marking tool to prepare a building damage marking sample set.
Further, in the third step, the deep neural network model is the instance segmentation model Mask R-CNN, which comprises the object detection network Faster R-CNN and the semantic segmentation network FCN; the Mask R-CNN model first uses the Faster R-CNN network to detect building damage objects in the input image and mark them with detection-box ROIs, and then uses the FCN network to perform semantic-level segmentation and pixel-by-pixel labeling of the detected ROI regions;
wherein Faster R-CNN first uses a feature extraction network to extract important features of different targets from the image and generate a feature map; second, in the RPN stage, multiple candidate ROIs are taken at each anchor point on the feature map, foreground is distinguished from background, and the ROI positions are preliminarily adjusted; further, in the detection network, different classes of targets are distinguished and the ROI positions are precisely adjusted; the FCN converts the fully-connected layers of a traditional CNN into convolutional layers and upsamples the feature map of the last convolutional layer with a deconvolution layer, restoring it to the same size as the input image, so that every pixel is predicted, pixel-level classification of the image is achieved, and the semantic-level image segmentation problem is solved.
The invention uses near-ground image data acquired from cameras (handheld/vehicle-mounted/unmanned aerial vehicle) and smartphones and adopts a method combining deep learning with a superpixel segmentation algorithm to achieve fine post-earthquake building damage detection; its features are as follows:
(1) The data sources employed are cameras (handheld/vehicle-mounted/unmanned aerial vehicle) and smartphones. On the one hand, thanks to the maturity of the related technology, using these devices to acquire near-ground images of earthquake-damaged buildings offers timeliness and abundant sources, which strongly supports post-earthquake emergency work; on the other hand, the near-ground images they acquire have high resolution, enabling fine damage detection of buildings and overcoming inherent defects of satellite and aerial data such as loss of facade information and ground-object occlusion.
(2) The data acquisition approach adopted is multi-view/surround shooting of buildings from low-altitude or ground platforms. Taking unmanned aerial vehicles as an example, existing methods for post-disaster building damage assessment with UAVs mainly obtain oblique, downward-looking facade images by flying large-scale cruise routes over the disaster area. Like airborne remote sensing imagery, this is limited by the high shooting angle, the complex terrain of disaster areas and other factors, and suffers from loss of building facade information, ground-object occlusion and the like. The method of the invention is applicable to oblique downward-looking images acquired along large-scale UAV cruise routes, but favors using cameras (handheld/vehicle-mounted/unmanned aerial vehicle) and smartphones to shoot a single building or a small group of buildings at close range from low altitude or the ground, from multiple viewing angles or in a surround pattern, to acquire near-ground image data.
(3) The building damage detection method adopted is a two-step extraction method. First, the input near-ground image data of the building to be detected is processed with a deep learning method, and a building damage pre-detection result is output; second, the pre-detection result extracted in the first step is fused with the superpixel segmentation result of the input image, yielding a fine building damage detection result with noise removed and boundaries optimized. Compared with a traditional one-step extraction method, the two-step extraction method can significantly improve building damage detection accuracy.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a Mask R-CNN model used in the embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a Faster R-CNN model according to an embodiment of the present invention;
fig. 4 is a schematic structural composition diagram of an FCN model according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a backbone architecture (FPN network) of a Mask R-CNN model according to an embodiment of the present invention;
fig. 6 is a schematic diagram illustrating an effect of performing super-pixel fusion processing on a building damage pre-detection result according to an embodiment of the present invention. The graph (a) is a building damage pre-detection result output by a Mask R-CNN model, and the graph (b) is a building damage fine detection result after super-pixel fusion processing.
Detailed Description
The technical solution of the present invention will be described in detail below with reference to the accompanying drawings and examples, so that the technical contents thereof will be more clear and easy to understand. It should be noted that the scope of the present invention is not limited to the embodiments mentioned herein.
As in fig. 1, the embodiment comprises the following steps:
Step one, preprocessing the building near-ground image data acquired by a camera (handheld/vehicle-mounted/unmanned aerial vehicle) or smartphone after the earthquake, and preparing a building damage annotation sample set.
(1) Resampling the camera (handheld/vehicle-mounted/unmanned aerial vehicle) or smartphone image data acquired after the earthquake to readjust the image width and height so as to meet the input requirements of subsequent model training; in this embodiment, the image width and height are set to 3000 px;
(2) performing blur analysis on the camera (handheld/vehicle-mounted/unmanned aerial vehicle) or smartphone image data acquired after the earthquake:
a) converting the near-ground image of the building into a gray image;
b) calculating the Laplacian variance of the grayscale image: 1) first performing edge detection on the image with a Laplacian of Gaussian (LoG) operator; 2) then computing the variance of the edge detection result of 1) over a (2 × 3) sampling window, according to the formula:

$$\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2$$

where $x_i$ is the LoG response of the i-th pixel in the sampling window, $\bar{x}$ is the mean LoG response over all pixels in the window, and n is the total number of pixels in the window.
c) Averaging the variance values of the windows returned in step b) and taking the mean as the measure of the image blur level: a normally sharp image has a large variance value, while a blurred image has a small one.
(3) Manually annotating damaged objects in the images that passed the blur analysis; in this embodiment, damaged objects such as wall collapse and wall cracks in the building facade images are manually plotted with the labelme annotation tool.
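The blur screening of step (2) above can be sketched in pure NumPy. This is a minimal illustration, not the exact procedure of the embodiment: it uses a plain 3×3 Laplacian kernel rather than the full Laplacian of Gaussian (no Gaussian pre-smoothing), and the window size `win` is a placeholder choice rather than the (2 × 3) window mentioned in the text.

```python
import numpy as np

def laplacian_variance(gray, win=64):
    """Blur metric: convolve the grayscale image with a 3x3 Laplacian
    kernel, compute the variance of the response in each non-overlapping
    sampling window, and return the mean of those variances (the V of
    step (2)c). Sharp images score high; blurred images score low."""
    k = np.array([[0.0, 1.0, 0.0],
                  [1.0, -4.0, 1.0],
                  [0.0, 1.0, 0.0]])
    h, w = gray.shape
    # "valid" convolution via shifted, weighted sums
    lap = np.zeros((h - 2, w - 2))
    for di in range(3):
        for dj in range(3):
            lap += k[di, dj] * gray[di:h - 2 + di, dj:w - 2 + dj]
    # per-window variance, then the mean as the blur metric V
    variances = []
    for i in range(0, lap.shape[0] - win + 1, win):
        for j in range(0, lap.shape[1] - win + 1, win):
            variances.append(lap[i:i + win, j:j + win].var())
    return float(np.mean(variances)) if variances else float(lap.var())

# a sharp checkerboard scores higher than a featureless flat image
sharp = (np.indices((128, 128)).sum(axis=0) % 2) * 255.0
flat = np.full((128, 128), 128.0)
assert laplacian_variance(sharp) > laplacian_variance(flat)
```

Images would then be kept only when `laplacian_variance(img)` is at or above a threshold tuned on sample data, matching the filtering described in step c).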
Step two, substituting the damaged-building training samples and the corresponding annotation information into the deep neural network model for training.
(1) In this embodiment, the Mask R-CNN model, an instance segmentation algorithm in deep learning, is adopted to perform damage detection on post-earthquake near-ground building image data.
As shown in fig. 2, the instance segmentation model Mask R-CNN adopted in this embodiment mainly comprises the object detection model Faster R-CNN and the semantic segmentation model FCN (Fully Convolutional Network). As shown in fig. 3, structurally, Faster R-CNN first uses a feature extraction network to extract important features of different targets from the image and generate a feature map; second, in the RPN stage, multiple candidate ROIs are taken at each anchor point on the feature map, foreground is distinguished from background, and the ROI positions are preliminarily adjusted; further, in the detection network, different classes of targets are distinguished and the ROI positions are precisely adjusted. As shown in fig. 4, structurally, the FCN converts the fully-connected layers of a traditional CNN into convolutional layers and uses a deconvolution layer to upsample the feature map of the last convolutional layer, restoring it to the same size as the input image, so that every pixel is predicted, pixel-level classification of the image is achieved, and the semantic-level image segmentation problem is solved. In this step, the Mask R-CNN model first uses the Faster R-CNN network to detect building damage objects in the input image and mark them with detection boxes (ROIs), and then uses the FCN network to perform semantic-level segmentation and pixel-by-pixel labeling of the detected ROI regions.
As shown in fig. 5, the instance segmentation model Mask R-CNN adopted in this embodiment uses an FPN (Feature Pyramid Network) as its backbone architecture. By applying convolution kernels of different sizes to the input image, the FPN can extract image feature information at multiple scales simultaneously, and thus detect building damage objects of different scales in the image. This avoids, as far as possible, problems such as the omission of small-scale damaged objects and the fragmentation of large-scale damaged objects during building damage detection, giving the proposed damage detection method a degree of robustness across scales.
(2) Substituting the manually annotated sample images and the corresponding annotation information into the Mask R-CNN model for training; the trained Mask R-CNN model is then capable of detecting and segmenting damaged objects in building near-ground image data.
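The deconvolution (transposed convolution) upsampling that the FCN branch uses to restore a feature map to the input size can be illustrated with a small NumPy sketch. This is a toy single-channel example with a hand-set kernel; a real FCN learns the kernel weights and upsamples multi-channel feature maps.

```python
import numpy as np

def transposed_conv2d(x, kernel, stride=2):
    """Transposed convolution: every input value scatters a scaled copy of
    the kernel into the output, enlarging the spatial size -- the reverse
    of the size reduction performed by a strided convolution."""
    h, w = x.shape
    kh, kw = kernel.shape
    out = np.zeros((stride * (h - 1) + kh, stride * (w - 1) + kw))
    for i in range(h):
        for j in range(w):
            out[i * stride:i * stride + kh,
                j * stride:j * stride + kw] += x[i, j] * kernel
    return out

# a 2x2 feature map becomes 4x4 with a 2x2 kernel and stride 2
feat = np.array([[1.0, 2.0],
                 [3.0, 4.0]])
up = transposed_conv2d(feat, np.ones((2, 2)))
assert up.shape == (4, 4)
```

Stacking such layers (with learned kernels) lets the FCN produce a per-pixel class map at the original image resolution.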
Step three, substituting the building near-ground image data to be detected into the deep neural network model trained in step two to obtain a building damage pre-detection result.
(1) Substituting the building near-ground image data to be detected into the instance segmentation model trained in step two to obtain the segmented damage pre-detection result;
(2) As shown in fig. 6(a), the damage pre-detection result generated in (1) realizes pre-detection and localization of the damaged objects in the post-earthquake building image.
Step four, performing superpixel segmentation on the building near-ground image data to be detected, and fusing the building damage pre-detection result obtained in step three with the superpixel segmentation result to obtain a fine building damage detection result.
(1) Using the multi-scale segmentation algorithm provided by eCognition software, performing object-oriented superpixel segmentation of the near-ground image of the building to be detected by tuning an appropriate Scale Parameter; in this embodiment, the scale parameter is set to 200 (image size 3000 px × 3000 px);
(2) Based on a Majority Voting rule, fusing the image superpixel segmentation result obtained in (1) with the building damage pre-detection result generated in step three, thereby post-processing and optimizing the building damage pre-detection regions with the help of the homogeneous clustering property of the superpixel segmentation algorithm.
According to the Majority Voting rule adopted in this embodiment, the image superpixel segmentation result map is first divided into different regions based on the segmented superpixel blocks; then, within each region, the number of pixels belonging to each class in the corresponding damage pre-detection result is counted; finally, according to the pixel-count statistics, the class containing the largest total number of pixels is taken as the class label of the superpixel block region. The formula is as follows:
$$L_r = \mathop{\arg\max}_{c \in \{1,2,\dots,M\}} \sum_{(i,j)\in r}\operatorname{sgn}\big(f(r(i,j)) = c\big)$$

where $L_r$ is the class label assigned to region r, M is the total number of classes in the damage pre-detection result, r(i,j) is the pixel with coordinates (i,j) in region r, f(r(i,j)) is the class label of pixel r(i,j), and sgn(·) is a mathematical sign (indicator) function that returns 1 if f(r(i,j)) = c and 0 otherwise.
(3) As shown in fig. 6, the building damage detection results before and after the superpixel fusion processing are compared: (a) is the building damage pre-detection result before superpixel fusion, and (b) is the fine building damage detection result after superpixel fusion. It can be seen that before the fusion, the deep learning instance segmentation method identifies and locates damaged objects in the image fairly accurately, but the stacking of convolutional layers loses some detail along the damaged-object boundaries. Because the superpixel segmentation algorithm clusters homogeneous pixels and preserves relatively complete boundary information for each superpixel block, the fine detection result after superpixel fusion both accurately detects and locates the damaged objects and retains their rich boundary information, significantly improving damage detection accuracy.
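The two stages of step four can be sketched together in NumPy. Note the substitutions: the embodiment segments with eCognition's multi-scale segmentation algorithm, which is proprietary and not reproduced here, so a simple grid-seeded k-means (SLIC-style) clustering stands in for it, and the parameters `n_seg`, `compactness` and `iters` are illustrative choices, not values from the patent. The majority-voting fusion, by contrast, follows the voting rule described above directly.

```python
import numpy as np

def superpixels(gray, n_seg=16, compactness=0.5, iters=5):
    """Grid-seeded k-means in joint intensity+position space: a SLIC-style
    stand-in for the multi-scale segmentation used in the embodiment."""
    h, w = gray.shape
    step = max(1, int((h * w / n_seg) ** 0.5))
    yy, xx = np.mgrid[0:h, 0:w]
    feat = np.stack([gray.ravel().astype(float),
                     yy.ravel() * compactness,
                     xx.ravel() * compactness], axis=1)
    cy, cx = np.mgrid[step // 2:h:step, step // 2:w:step]
    centers = feat[cy.ravel() * w + cx.ravel()]
    for _ in range(iters):
        dist = ((feat[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dist.argmin(axis=1)
        for k in range(len(centers)):
            members = feat[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return labels.reshape(h, w)

def majority_vote_fuse(segments, pre_detection, n_classes):
    """Relabel each superpixel region with the class that covers the most
    pixels of the pre-detection map inside it (the majority voting rule)."""
    fused = np.empty_like(pre_detection)
    for r in np.unique(segments):
        mask = segments == r
        fused[mask] = np.bincount(pre_detection[mask],
                                  minlength=n_classes).argmax()
    return fused

# toy demo: four hand-made 4x4 superpixel blocks; a single noisy pixel in
# the pre-detection map is voted away by its (otherwise clean) region
segments = np.repeat(np.repeat(np.arange(4).reshape(2, 2), 4, axis=0), 4, axis=1)
pre = (np.arange(8)[None, :] >= 4).astype(int).repeat(8, axis=0)
pre[0, 0] = 1                      # inject noise on the "undamaged" side
fused = majority_vote_fuse(segments, pre, n_classes=2)
assert fused[0, 0] == 0            # the noisy pixel is smoothed out
```

In production, `superpixels` would be replaced by the actual segmentation output (eCognition, or e.g. SLIC from scikit-image) and `pre_detection` by the class map produced by the trained Mask R-CNN.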

Claims (4)

1.一种基于近地面影像数据的震后建筑物损毁检测方法,包括如下具体步骤:1. A post-earthquake building damage detection method based on near-ground image data, comprising the following specific steps: 步骤一,使用摄影设备采集震后建筑物近地面影像数据,并进行预处理,同时制备建筑物损毁标注样本集;Step 1: Use photographic equipment to collect post-earthquake building near-ground image data, perform preprocessing, and prepare a sample set of building damage annotations at the same time; 步骤二,将损毁建筑物训练样本和对应的标注信息代入深度神经网络模型中进行训练;Step 2: Substitute the damaged building training samples and the corresponding annotation information into the deep neural network model for training; 步骤三,将待检测的建筑物近地面影像数据代入到步骤二中训练好的深度神经网络模型中,得到建筑物损毁预检测结果;Step 3: Substitute the near-ground image data of the building to be detected into the deep neural network model trained in Step 2 to obtain a building damage pre-detection result; 步骤四,对待检测的建筑物近地面影像数据进行超像素分割,利用超像素分割结果,基于多数投票规则,对步骤三得到的建筑物损毁预检测结果进行融合处理,得到建筑物损毁精检测结果;具体实现方式如下,Step 4: Perform superpixel segmentation on the near-ground image data of the building to be detected, use the superpixel segmentation result, and based on the majority voting rule, perform fusion processing on the building damage pre-detection results obtained in step 3, and obtain the building damage accurate detection result. ; The specific implementation is as follows, 首先基于分割后的超像素块将影像超像素分割结果图划分为不同区域;进而计算各个区域内,与之相对应的损毁预检测结果中,各个类别所包含的像元数;最后根据像元数统计结果,确定所包含像元总数最多的类别为该超像素区域所属的类别标签,计算公式表示如下:First, the image superpixel segmentation result map is divided into different regions based on the segmented superpixel blocks; then, the number of pixels included in each category in the corresponding damage pre-detection results in each region is calculated; According to the statistics results, it is determined that the category with the most total number of pixels is the category label of the superpixel area. The calculation formula is as follows:
Figure FDA0002308594500000011
Figure FDA0002308594500000011
其中,Lr为区域r所属的类别标签,M为损毁预检测结果的类别总数,r(i,j)为区域r内坐标为(i,j)的像元,f(r(i,j))为像元r(i,j)所属的类别标签,sgn(x)函数为数学上的符号函数,此处,如果f(r(i,j))=c,则sgn返回1,否则,返回0。Among them, L r is the category label to which the region r belongs, M is the total number of categories of damage pre-detection results, r(i, j) is the pixel with coordinates (i, j) in the region r, f(r(i, j) )) is the category label to which the pixel r(i,j) belongs, and the sgn(x) function is a mathematical symbolic function. Here, if f(r(i,j))=c, then sgn returns 1, otherwise , returns 0.
2.根据权利要求1所述基于近地面影像数据的震后建筑物损毁检测方法,其特征在于:步骤一中,所述摄影设备包括手持相机、车载相机、无人机相机或智能手机。2 . The post-earthquake building damage detection method based on near-ground image data according to claim 1 , wherein in step 1, the photographing equipment comprises a handheld camera, a vehicle-mounted camera, a drone camera or a smart phone. 3 . 3.根据权利要求1所述基于近地面影像数据的震后建筑物损毁检测方法,其特征在于:步骤一中,对影像进行预处理并制备建筑物损毁标注样本集的具体实现方式如下,3. The post-earthquake building damage detection method based on near-ground image data according to claim 1, characterized in that: in step 1, the specific implementation method of preprocessing the image and preparing the building damage labeling sample set is as follows, (1)对震后建筑物近地面影像数据进行裁剪及影像宽高的重新调整;(1) Crop the near-ground image data of the building after the earthquake and readjust the width and height of the image; (2)对调整后的影像进行模糊度分析:(2) Perform blur analysis on the adjusted image: a)将建筑物的近地面影像转换为灰度图像;a) Convert the near-ground image of the building to a grayscale image; b)计算灰度图像的拉普拉斯方差:1)首先使用拉普拉斯算子(LoG,Laplacian ofGaussian)对图像进行边缘检测;2)然后,对1)的边缘检测结果,以一定的采样窗口计算方差,方差计算公式如下:b) Calculate the Laplacian variance of the grayscale image: 1) First use the Laplacian operator (LoG, Laplacian of Gaussian) to detect the edge of the image; 2) Then, use the edge detection result of 1) to a certain The sampling window calculates the variance, and the variance calculation formula is as follows:
\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2

where x_i is the LoG value of the i-th pixel in the sampling window, \bar{x} is the mean of the LoG values of all pixels in the sampling window, and n is the total number of pixels in the sampling window;
c) Take the mean of the window variance values returned by b) as the image blur metric V, and filter out any image whose blur metric V is below a threshold;

(3) Use an annotation tool to manually delineate the wall-spalling and wall-crack damage objects on the building facades, producing the building damage annotation sample set.
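The variance-of-Laplacian blur check in claim 3 can be illustrated with a minimal NumPy sketch. Assumptions to note: a plain 3×3 Laplacian kernel stands in for the LoG operator, the window size and the names `blur_metric`/`win` are invented for illustration, and no Gaussian pre-smoothing is done.

```python
import numpy as np

def blur_metric(gray, win=8):
    """Variance-of-Laplacian blur measure: edge-detect with a Laplacian
    kernel, compute the variance per sampling window, return the mean V."""
    # 3x3 Laplacian kernel (a discrete stand-in for the claim's LoG operator)
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    h, w = gray.shape
    lap = np.zeros((h - 2, w - 2))
    for dy in range(3):                      # "valid"-mode 2-D convolution
        for dx in range(3):
            lap += k[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    # per-window variance sigma^2 = (1/n) * sum (x_i - mean)^2, then mean -> V
    vs = []
    for y in range(0, lap.shape[0] - win + 1, win):
        for x in range(0, lap.shape[1] - win + 1, win):
            vs.append(lap[y:y + win, x:x + win].var())
    return float(np.mean(vs))

sharp = np.zeros((32, 32)); sharp[:, 16:] = 255.0   # hard edge -> strong response
blurry = np.full((32, 32), 128.0)                   # flat image -> zero response
print(blur_metric(sharp) > blur_metric(blurry))     # sharp image scores higher
```

Images whose V falls below a chosen threshold would then be discarded, as in step c); in practice the same metric is available as `cv2.Laplacian(gray, cv2.CV_64F).var()` in OpenCV.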
4. The post-earthquake building damage detection method based on near-ground image data according to claim 1, wherein in step 3 the deep neural network model is the instance segmentation model Mask R-CNN, which comprises the object detection network Faster R-CNN and the semantic segmentation network FCN. The Mask R-CNN model first uses the Faster R-CNN network to detect building damage objects in the input image and mark each with a detection box (ROI), and then applies the FCN network to the detected ROI regions to segment the damage objects at the semantic level with pixel-by-pixel labeling.

Specifically, Faster R-CNN first uses a feature extraction network to extract the salient features of the different targets in the image and generate feature maps. Next, in the RPN stage, multiple candidate ROIs are taken at each anchor point of the feature maps; these ROIs are classified as foreground or background and their positions are coarsely adjusted. Then, in the detection network, the ROIs are classified into the different target categories and their positions are precisely refined. The FCN network converts the fully connected layers of a conventional CNN into convolutional layers and uses a deconvolution layer to upsample the feature map of the last convolutional layer back to the size of the input image, so that a prediction is made for every pixel, achieving pixel-level classification of the image and solving the semantic-level image segmentation problem.
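The two-stage structure of claim 4 (detect ROIs with a Faster R-CNN-style detector, then label each ROI pixel-by-pixel with an FCN-style head) can be sketched with toy stand-ins. Everything here is an assumption for illustration: `detect_rois` and `segment_roi` are stubs, not trained networks, and a simple threshold replaces the learned deconvolution head. A real implementation would use a pretrained model such as torchvision's `maskrcnn_resnet50_fpn`.

```python
import numpy as np

def detect_rois(image):
    """Stand-in for the Faster R-CNN stage: propose damage bounding boxes.
    A stub that returns one fixed box; a real detector predicts many."""
    return [(2, 2, 6, 6)]  # (y0, x0, y1, x1)

def segment_roi(image, box):
    """Stand-in for the per-ROI FCN stage: a per-pixel mask inside the box.
    A toy threshold replaces the learned upsampling/deconvolution head."""
    y0, x0, y1, x1 = box
    crop = image[y0:y1, x0:x1]
    return crop > crop.mean()

def mask_rcnn_style(image):
    """Two-stage instance segmentation as in claim 4: detect ROIs first,
    then label every pixel inside each ROI."""
    return [(box, segment_roi(image, box)) for box in detect_rois(image)]

img = np.zeros((8, 8)); img[3:5, 3:5] = 1.0   # a 2x2 "damage" patch
for box, mask in mask_rcnn_style(img):
    print(box, int(mask.sum()))               # 4 damage pixels inside the ROI
```

The design point the sketch preserves is the division of labor: the detector narrows the search to candidate regions, so the per-pixel segmentation only has to run inside small ROIs rather than over the whole image.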
CN201911249399.6A 2019-12-09 2019-12-09 Method for detecting damage of building after earthquake based on near-ground image data Pending CN111126183A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911249399.6A CN111126183A (en) 2019-12-09 2019-12-09 Method for detecting damage of building after earthquake based on near-ground image data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911249399.6A CN111126183A (en) 2019-12-09 2019-12-09 Method for detecting damage of building after earthquake based on near-ground image data

Publications (1)

Publication Number Publication Date
CN111126183A true CN111126183A (en) 2020-05-08

Family

ID=70497906

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911249399.6A Pending CN111126183A (en) 2019-12-09 2019-12-09 Method for detecting damage of building after earthquake based on near-ground image data

Country Status (1)

Country Link
CN (1) CN111126183A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105139388A (en) * 2015-08-12 2015-12-09 武汉大学 Method and apparatus for building facade damage detection in oblique aerial image
CN105631892A (en) * 2016-02-23 2016-06-01 武汉大学 Aviation image building damage detection method based on shadow and texture characteristics
US20180293864A1 (en) * 2017-04-03 2018-10-11 Oneevent Technologies, Inc. System and method for monitoring a building
CN109544579A (en) * 2018-11-01 2019-03-29 上海理工大学 A method of damage building is assessed after carrying out calamity using unmanned plane
CN110136170A (en) * 2019-05-13 2019-08-16 武汉大学 A Method of Building Change Detection in Remote Sensing Imagery Based on Convolutional Neural Network


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DONGMEI SONG et al.: "Integration of super-pixel segmentation and deep-learning methods for evaluating earthquake-damaged buildings using single-phase remote sensing imagery", Taylor & Francis Online *
KAIMING HE et al.: "Mask R-CNN", 2017 IEEE International Conference on Computer Vision (ICCV) *
TU Jihui: "Research on building damage detection based on post-earthquake multi-view aerial images", China Doctoral Dissertations Full-text Database, Engineering Science & Technology II *
TU Jihui et al.: "Building facade damage detection in oblique aerial images based on the Gini coefficient", Geomatics and Information Science of Wuhan University *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111666856A (en) * 2020-05-29 2020-09-15 武汉大学 High-resolution single-polarization SAR image building target detection method based on structural characteristics
CN111915128A (en) * 2020-06-17 2020-11-10 西安交通大学 Post-disaster evaluation and rescue auxiliary system for secondary landslide induced by earthquake
CN111915128B (en) * 2020-06-17 2023-12-19 西安交通大学 A post-disaster assessment and rescue assistance system for earthquake-induced secondary landslides
CN112381060B (en) * 2020-12-04 2022-05-20 哈尔滨工业大学 Building earthquake damage level classification method based on deep learning
CN112381060A (en) * 2020-12-04 2021-02-19 哈尔滨工业大学 Building earthquake damage level classification method based on deep learning
CN112435253A (en) * 2020-12-08 2021-03-02 深圳创维数字技术有限公司 Wall falling detection method and device and readable storage medium
CN112508030A (en) * 2020-12-18 2021-03-16 山西省信息产业技术研究院有限公司 Tunnel crack detection and measurement method based on double-depth learning model
CN112733711B (en) * 2021-01-08 2021-08-31 西南交通大学 Remote sensing image damaged building extraction method based on multi-scale scene change detection
CN112733711A (en) * 2021-01-08 2021-04-30 西南交通大学 Remote sensing image damaged building extraction method based on multi-scale scene change detection
CN114359756A (en) * 2022-01-06 2022-04-15 中国科学院空天信息创新研究院 Rapid and intelligent detection method for house damaged by remote sensing image of post-earthquake unmanned aerial vehicle
CN114359756B (en) * 2022-01-06 2025-02-11 中国科学院空天信息创新研究院 A rapid and intelligent detection method for damaged houses using remote sensing images from UAVs after an earthquake
CN114707553A (en) * 2022-04-11 2022-07-05 浙江大学 Step-by-step fault arc detection method and device, electronic equipment and storage medium
CN116109899A (en) * 2022-12-14 2023-05-12 内蒙古建筑职业技术学院(内蒙古自治区建筑职工培训中心) Ancient architecture repairing method, system, computer equipment and storage medium
CN116434009A (en) * 2023-04-19 2023-07-14 应急管理部国家减灾中心(应急管理部卫星减灾应用中心) Construction method and system for deep learning sample set of damaged building
CN116434009B (en) * 2023-04-19 2023-10-24 应急管理部国家减灾中心(应急管理部卫星减灾应用中心) Construction method and system for deep learning sample set of damaged building
CN117011350A (en) * 2023-08-08 2023-11-07 中国国家铁路集团有限公司 A feature matching method between oblique aerial images and airborne LiDAR point clouds
CN117011350B (en) * 2023-08-08 2025-08-12 中国国家铁路集团有限公司 Method for matching inclined aerial image with airborne LiDAR point cloud characteristics
CN119810084A (en) * 2025-01-06 2025-04-11 浙江大学 A bridge disease detection and positioning method and device based on 2D-3D data fusion

Similar Documents

Publication Publication Date Title
CN111126183A (en) Method for detecting damage of building after earthquake based on near-ground image data
Peng et al. A UAV-based machine vision method for bridge crack recognition and width quantification through hybrid feature learning
CN115439424B (en) Intelligent detection method for aerial video images of unmanned aerial vehicle
CN111126184B (en) Post-earthquake building damage detection method based on unmanned aerial vehicle video
Yan et al. Towards automated detection and quantification of concrete cracks using integrated images and lidar data from unmanned aerial vehicles
US11551341B2 (en) Method and device for automatically drawing structural cracks and precisely measuring widths thereof
Koch et al. Achievements and challenges in machine vision-based inspection of large concrete structures
Feng et al. Crack assessment using multi-sensor fusion simultaneous localization and mapping (SLAM) and image super-resolution for bridge inspection
Deng et al. Binocular video-based 3D reconstruction and length quantification of cracks in concrete structures
CN106127204B (en) A multi-directional water meter reading area detection algorithm based on fully convolutional neural network
CN114705689B (en) A method and system for detecting cracks on building facades based on drones
CN106570863A (en) Method and device for detecting a power transmission line
CN114331986A (en) A method of dam crack identification and measurement based on unmanned aerial vehicle vision
CN112581301B (en) Detection and early warning method and system for residual quantity of farmland residual film based on deep learning
CN119246518B (en) Bridge surface defect detection method and system based on laser point cloud and visual image fusion
CN112991425B (en) Water level extraction method, system and storage medium
CN113095324A (en) Classification and distance measurement method and system for cone barrel
CN115639248A (en) System and method for detecting quality of building outer wall
CN118298338B (en) Road crack rapid identification and calculation method based on unmanned aerial vehicle low-altitude photography
CN115994901A (en) A method and system for automatic detection of road defects
CN108520277A (en) Reinforced concrete structure seismic Damage automatic identification based on computer vision and intelligent locating method
CN118470641B (en) Ship overload determination method and device based on image recognition
CN112669301A (en) High-speed rail bottom plate paint removal fault detection method
CN120411120B (en) Power transmission image defect detection and defect deduplication method and system based on deep learning image segmentation algorithm
CN117456388A (en) Geological measurement and remote sensing data fusion management system based on unmanned aerial vehicle

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200508