
WO2025041337A1 - Deformation information generating device, deformation information generating method, and storage medium - Google Patents

Deformation information generating device, deformation information generating method, and storage medium

Info

Publication number
WO2025041337A1
Authority
WO
WIPO (PCT)
Prior art keywords
deformation
image
information
information generating
generating device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/JP2023/030534
Other languages
English (en)
Japanese (ja)
Inventor
恭太 比嘉
一峰 小倉
和也 稲葉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
NEC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corp filed Critical NEC Corp
Priority to PCT/JP2023/030534 priority Critical patent/WO2025041337A1/fr
Publication of WO2025041337A1 publication Critical patent/WO2025041337A1/fr
Pending legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination

Definitions

  • This disclosure relates to the technical fields of a deformation information generating device, a deformation information generating method, and a storage medium that generate deformation information for structures.
  • Patent Document 1 discloses an image processing device that analyzes images of cracked areas and presents a composite image of the cracked areas and the results of the image analysis.
  • Patent Document 1 makes it possible to obtain composite images and image analysis results of cracked areas without the need for expensive equipment and without complicating user operations.
  • Patent Document 1 has the problem that, when recording deformations, the user must manually enter image position information into a field notebook at the inspection site, resulting in a large workload in linking deformations to structural parts.
  • In view of the above, one of the objectives of the present disclosure is to provide a deformation information generating device, a deformation information generating method, and a storage medium that generate deformation information capable of identifying the part of a structure in which a deformation has been detected.
  • One aspect of the deformation information generating device is a deformation information generating device having: a part estimation means for estimating parts of a structure included in a first image, based on three-dimensional data representing the structure and the first image obtained by capturing a portion of the structure; a deformation detection means for detecting a deformation of the structure based on the first image or a second image obtained by photographing a part of the photographing range of the first image; and a deformation information generating means for generating deformation information indicating at least identification information of the part in which the deformation has occurred, based on the estimation result of the part and the detection result of the deformation.
  • One aspect of the deformation information generating method is a deformation information generating method in which a computer: estimates parts of a structure included in a first image based on three-dimensional data representing the structure and the first image capturing a portion of the structure; detects a deformation of the structure based on the first image or a second image obtained by capturing a part of a capturing range of the first image; and generates deformation information indicating at least identification information of the part in which the deformation has occurred, based on the part estimation result and the deformation detection result.
  • the term "computer" includes any electronic device (which may be a processor included in an electronic device), and may be configured by a plurality of electronic devices.
  • One aspect of the storage medium is a storage medium storing a program that causes a computer to execute processes of: estimating parts of a structure included in a first image based on three-dimensional data representing the structure and the first image capturing a portion of the structure; detecting a deformation of the structure based on the first image or a second image obtained by capturing a part of a capturing range of the first image; and generating deformation information indicating at least identification information of the part in which the deformation has occurred, based on the estimation result of the part and the detection result of the deformation.
  • One example of the effect of this disclosure is that it becomes possible to generate deformation information that can identify the parts of a structure in which a deformation has been detected.
  • Fig. 1 shows the schematic configuration of the structure inspection system. Fig. 2 shows an example of the hardware configuration of the deformation information generating device. Fig. 3 is an example of the functional blocks of the processor of the deformation information generating device.
  • Fig. 4(A) shows an inspection input image taken when the road surface and road shoulder are the structures to be inspected, and Fig. 4(B) is the corresponding image segmented based on the part estimation results.
  • Fig. 5(A) is a mask image of the deformation area based on the deformation detection results, and Fig. 5(B) is an image in which that mask image is superimposed on the image of Fig. 4(B) divided into areas based on the part estimation results.
  • Fig. 6 is an example of a flowchart of the processing executed by the deformation information generating device.
  • Fig. 7 shows a functional block diagram of the processor in the first modification; further functional block diagrams of the processor in other modifications are also shown, together with a specific example of the processing of the deformation detection unit and an example of a design drawing of the structure to be inspected that includes a part in which a deformation has been detected.
  • Fig. 12 shows a functional block diagram of the processor in the second embodiment. Fig. 13(A) shows an example of a distant inspection image having a road surface area including a deformation area corresponding to a "crack", Fig. 13(B) shows an example of the corresponding close-up inspection image, and Fig. 13(C) shows an example of the distant inspection image clearly showing the result of identifying the correspondence relationship.
  • Fig. 14 is a block diagram of the deformation information generating device in the third embodiment, and Fig. 15 is an example of a flowchart executed by that device.
  • System Configuration Fig. 1 shows a schematic configuration of a structure inspection system 100 according to the first embodiment.
  • the structure inspection system 100 is a system that detects deformations occurring in a structure to be inspected based on images of the structure, and manages information relating to the detected deformations.
  • the structure inspection system 100 mainly includes a deformation information generating device 1, an input device 2, an output device 3, a storage device 4, and a camera 5 that captures images of a structure 6 to be inspected.
  • the inspection target structure 6 is any structure that is subject to inspection, and examples of the inspection target structure 6 include bridges, tunnels, buildings, roads, etc.
  • a "structure” refers to any feature or collection of features that has multiple elements (parts).
  • deformation includes any type of damage or deterioration that has occurred to the inspection target structure 6, such as cracks, corrosion, dents, peeling, exposed rebar, water leakage, etc.
  • the deformation information generating device 1 refers to various information stored in the storage device 4, and based on the photographed image of the inspection target structure 6 generated by the camera 5, detects the presence or absence of deformation of the inspection target structure 6 in the photographed image, and generates deformation information related to the detected deformation if a deformation is detected.
  • the photographed image of the inspection target structure 6 generated by the camera 5 is an image input to the deformation information generating device 1 as an image for inspection of the inspection target structure 6 by the deformation information generating device 1, and is hereinafter also referred to as the "inspection input image Ii".
  • the deformation information generating device 1 may receive the inspection input image Ii from the camera 5, or may acquire the inspection input image Ii from a device (e.g., a device capable of data communication with the storage device 4 or the deformation information generating device 1) or a storage medium that stores the inspection input image Ii generated by the camera 5.
  • the deformation information generating device 1 may accept a user input to select an image to be used as the inspection input image Ii through the input device 2, and acquire the image designated based on the input signal generated by the input device 2 as the inspection input image Ii.
  • the deformation information generating device 1 communicates data with the input device 2, output device 3, and storage device 4 via a communication network or by direct wireless or wired communication.
  • the input device 2 is an interface that accepts input (user input) from a user who manages the inspection work of the inspection target structure 6.
  • the input device 2 may be, for example, a touch panel, a button, a keyboard, a mouse, a voice input device, or any other type of user input interface.
  • the input device 2 supplies an input signal generated based on the user input to the deformation information generating device 1.
  • the output device 3 outputs information (which may include deformation information) relating to the inspection of the inspection target structure 6 based on the output signal supplied from the deformation information generating device 1.
  • the output signal includes at least one of a display signal and an audio signal.
  • the output device 3 displays the information based on the display signal supplied from the deformation information generating device 1, and outputs the information as audio based on the audio signal supplied from the deformation information generating device 1. Examples of the output device 3 include display devices such as a display or projector, and audio output devices such as speakers.
  • the storage device 4 is a memory that stores information used by the deformation information generating device 1.
  • the storage device 4 may be an external storage device such as a hard disk connected to or built into the deformation information generating device 1, or may be a storage medium such as a flash memory.
  • the storage device 4 may also be a server device that performs data communication with the deformation information generating device 1.
  • the storage device 4 may also be composed of multiple devices.
  • the storage device 4 functionally comprises a structure measurement data storage unit 41, a deformation detection model information storage unit 42, and a deformation information storage unit 43.
  • the structure measurement data storage unit 41 stores structure measurement data, which is data obtained by previously measuring the inspection target structure 6 in three dimensions.
  • the structure measurement data is point cloud data obtained by previously measuring the inspection target structure 6 using a distance measurement sensor such as a lidar.
  • this point cloud data may be point cloud data generated from multiple images using SfM (Structure From Motion).
  • the structure measurement data is, as an example, point cloud data, but is not limited to point cloud data and may be data representing the inspection target structure 6 using any three-dimensional model (wireframe, surface, or solid).
  • label information (also called a "part label") that serves as an identifier for the part (element) of the inspection target structure 6 to which the point belongs is linked to each point (i.e., the smallest unit of data representing a position) that represents the inspection target structure 6.
  • a part label that identifies the part to which the point belongs is linked to the data for each point in the structure measurement data.
  • the part label may be information identifying the type of part, may be unique identification information assigned to each part that constitutes the inspection target structure 6, or may indicate both of these.
  • the part label may be information identifying the type of bridge part (e.g., girder, joint, or deck), or may indicate a component element number that can identify the location of the bridge in more detail.
  • the component element number is a number assigned to each component when the component is divided into its smallest constituent unit.
  • the part label may contain or be linked to information indicating the name of the part that the part label indicates.
  • the deformation detection model information storage unit 42 stores information, such as parameters, necessary to configure the deformation detection model.
  • the deformation detection model is a machine learning model (engine) that has learned, by machine learning, the relationship between an image and the detection result of the deformation contained in the image. For example, the deformation detection model is trained so that, when an image showing a structure is input, it outputs a detection result of an area (also called a "deformation area") showing a deformed part of the structure in the image.
  • the detection result of the deformation area output by the deformation detection model is, for example, information indicating the presence or absence of a deformation for each pixel (or in subpixel units; the same applies below), and may further include, for pixels with a deformation, information indicating the type of deformation.
  • the deformation detection model information stored in the deformation detection model information storage unit 42 includes various parameters (including hyperparameters), such as the layer structure, the neuron structure of each layer, the number of filters and filter size in each layer, and the weight of each element of each filter.
  • the deformation detection model may be, for example, any machine learning model used in segmentation, such as instance segmentation, or in anomaly detection.
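  • As an illustration only, the per-pixel detection described above can be sketched as follows. This is a minimal sketch assuming a PyTorch-style segmentation network; the model, the class layout (NUM_CLASSES), and the (1, NUM_CLASSES, H, W) output shape are assumptions for illustration, not details fixed by this disclosure.

```python
# Minimal inference sketch for a per-pixel deformation detection model.
# Model, class layout, and output shape are illustrative assumptions.
import torch
import torchvision.transforms.functional as TF
from PIL import Image

NUM_CLASSES = 3  # assumed: 0 = no deformation, 1 = crack, 2 = corrosion

def detect_deformation(model: torch.nn.Module, image_path: str) -> torch.Tensor:
    """Return an (H, W) tensor holding the deformation class of each pixel."""
    image = Image.open(image_path).convert("RGB")
    x = TF.to_tensor(image).unsqueeze(0)      # (1, 3, H, W), values in [0, 1]
    model.eval()
    with torch.no_grad():
        logits = model(x)                     # assumed shape (1, NUM_CLASSES, H, W)
    return logits.argmax(dim=1).squeeze(0)    # per-pixel class mask (H, W)
```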
  • Machine learning is performed in advance on the deformation detection model using training data including multiple records, each a pair of an input sample to the model (here, an image from the camera 5) and the correct answer to be output when that sample is input (for example, data indicating the presence or absence and type of deformation for each pixel).
  • the parameters of the deformation detection model are determined so that the error (loss) between the detection result output by the model when a sample is input and the correct answer is minimized.
  • the algorithm for determining the above-mentioned parameters so as to minimize the loss may be any learning algorithm used in machine learning, such as gradient descent or backpropagation. Similarly, the other machine learning models described below are trained using learning data recording pairs of input samples and correct answers as described above, and learned parameters are obtained.
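  • The training procedure described above (minimizing the loss between the model output and the correct answer by gradient descent with backpropagation) can be sketched as follows; the dataset, optimizer choice, and hyperparameters are illustrative assumptions.

```python
# Sketch of the supervised training described above: update parameters so the
# loss between the per-pixel output and the correct answer is minimized.
import torch
from torch.utils.data import DataLoader, Dataset

def train(model: torch.nn.Module, dataset: Dataset, epochs: int = 10) -> torch.nn.Module:
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = torch.nn.CrossEntropyLoss()   # pixel-wise classification loss
    model.train()
    for _ in range(epochs):
        for images, masks in loader:          # masks: (B, H, W) correct classes
            optimizer.zero_grad()
            logits = model(images)            # (B, NUM_CLASSES, H, W)
            loss = criterion(logits, masks)   # error between output and answer
            loss.backward()                   # backpropagation
            optimizer.step()                  # gradient-descent update
    return model
```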
  • the deformation information storage unit 43 stores a database in which the deformation information generated by the deformation information generating device 1 is stored as records.
  • the deformation information includes deformation type information indicating the type of deformation and deformation position information indicating the position where the deformation occurred (deformation position), and may further include any information related to the deformation, such as the size of the deformation or the shape of the deformation.
  • the deformation position information includes at least a part label that identifies the part of the inspection target structure 6 where the deformation occurred.
  • the deformation position information may further include information indicating the position within the part indicated by the part label.
  • the deformation position information may include coordinate information of the deformation position expressed in the coordinate system used in the structure measurement data or a coordinate system that can be converted from that coordinate system (these are also called "reference coordinate systems").
  • the camera 5 is a camera that photographs the inspection target structure 6 and generates an inspection input image Ii.
  • the inspection input image Ii is, for example, an RGB image.
  • the inspection input image Ii generated by the camera 5 may be directly supplied to the deformation information generating device 1, or may be stored in the camera 5 or in an external device or storage medium connected to the camera 5 and then supplied to the deformation information generating device 1.
  • the configuration of the structure inspection system 100 shown in FIG. 1 is one example, and various modifications may be made to the configuration.
  • the deformation information generating device 1 may be configured integrally with at least one of the input device 2, the output device 3, the storage device 4, and the camera 5.
  • the structure inspection system 100 may be realized by a single device. In other examples, the structure inspection system 100 may not include at least one of the input device 2 or output device 3.
  • the deformation information generation device 1 includes, as hardware, a processor 11, a memory 12, and an interface 13.
  • the processor 11, the memory 12, and the interface 13 are connected via a data bus 90.
  • the processor 11 functions as a controller (computing device) that controls the entire deformation information generating device 1 by executing the programs stored in the memory 12.
  • the processor 11 is, for example, a processor such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or a TPU (Tensor Processing Unit).
  • the processor 11 may be composed of multiple processors.
  • the processor 11 is an example of a computer.
  • the memory 12 is composed of various types of volatile and non-volatile memory, such as RAM (Random Access Memory), ROM (Read Only Memory), and flash memory.
  • the memory 12 also stores programs for executing the processes executed by the deformation information generating device 1. Some of the information stored in the memory 12 may be stored in one or more external storage devices capable of communicating with the deformation information generating device 1, or in a storage medium that is detachable from the deformation information generating device 1.
  • the memory 12 may also function as at least a part of the storage device 4.
  • the interface 13 is an interface for electrically connecting the deformation information generating device 1 to other devices.
  • the interface 13 may be a wireless interface, such as a network adapter, for wirelessly transmitting and receiving data to and from other devices, or may be a hardware interface for connecting to other devices via a cable or the like.
  • the hardware configuration of the deformation information generating device 1 is not limited to the configuration shown in FIG. 2.
  • the deformation information generating device 1 may include at least one of an input device 2, an output device 3, a storage device 4, and a camera 5.
  • the deformation information generating device 1 generates deformation information indicating at least the part label where a deformation has occurred based on the result of detection of the deformation area on the inspection input image Ii using the deformation detection model and the result of estimation of the part label on the inspection input image Ii estimated using the structure measurement data.
  • the deformation information generating device 1 automates the linking of the deformation and the parts of the structure without requiring manual part identification work, and efficiently obtains deformation information that can identify the part where a deformation has occurred.
  • Such deformation information is information that favorably supports the user's decision-making regarding repairs, etc. of the inspection target structure 6.
  • FIG. 3 is an example of functional blocks of the processor 11 of the deformation information generation device 1.
  • the processor 11 of the deformation information generation device 1 functionally has a camera position and orientation estimation unit 14, a parts estimation unit 15, a deformation detection unit 16, and a deformation information generation unit 17.
  • in the functional block diagrams, blocks between which data is exchanged are connected by solid lines; however, the combinations of blocks that exchange data are not limited to those shown in the figure. The same applies to the other functional block diagrams described later.
  • the camera position and orientation estimation unit 14 estimates the position and orientation in the reference coordinate system of the camera 5 that acquired the inspection input image Ii based on the structure measurement data stored in the structure measurement data storage unit 41 and the inspection input image Ii. In this way, the camera position and orientation estimation unit 14 specifies the position and orientation of the camera 5 in the coordinate system used in the structure measurement data. The camera position and orientation estimation unit 14 then supplies the estimated position and orientation of the camera 5 to the part estimation unit 15.
  • the camera position and orientation estimation unit 14 identifies correspondences between points of the structure measurement data and pixels of the inspection input image Ii by matching two-dimensional (i.e., pixel-by-pixel) feature amounts extracted from an image obtained by converting the structure measurement data into a two-dimensional image with two-dimensional feature amounts extracted from the inspection input image Ii, and estimates the position and orientation of the camera 5 based on the identified correspondences.
  • Such camera position and orientation estimation methods are disclosed in, for example, the following documents, but are not limited to these. "C. Jaramillo, I. Dryanovski, R. G. Valenti and J. Xiao, "6-DoF pose localization in 3D point-cloud dense maps using a monocular camera," 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1747-1752, 2013.”
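  • One plausible realization of this pose-estimation step, assuming OpenCV and 2D-3D correspondences already obtained by the feature matching described above, is sketched below; the RANSAC-based PnP solver is an assumed choice, not the method fixed by the cited document.

```python
# Hedged sketch: 6-DoF camera pose from given 2D-3D correspondences.
import cv2
import numpy as np

def estimate_camera_pose(points_3d: np.ndarray,   # (N, 3) reference-frame points
                         points_2d: np.ndarray,   # (N, 2) matched pixels in Ii
                         camera_matrix: np.ndarray):
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float32),
        points_2d.astype(np.float32),
        camera_matrix.astype(np.float32),
        None)                                     # no distortion coefficients
    if not ok:
        raise RuntimeError("camera pose estimation failed")
    rotation, _ = cv2.Rodrigues(rvec)             # 3x3 rotation of the camera
    return rotation, tvec                         # pose in the reference frame
```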
  • the part estimation unit 15 estimates part labels corresponding to the inspection input image Ii (i.e., a two-dimensional image) based on the structure measurement data to which the part labels are linked and the estimation result of the position and orientation of the camera 5 generated by the camera position and orientation estimation unit 14. In this way, the part estimation unit 15 generates a part label estimation result (also called a "part estimation result") indicating the correspondence between the pixels of the inspection input image Ii and the part labels.
  • the part estimation result may be, for example, data indicating the part label for each pixel of the inspection input image Ii.
  • the part estimation unit 15 associates the inspection input image Ii with the part labels of the structure measurement data by projecting part labels onto the image plane of the inspection input image Ii based on the estimation result of the position and orientation of the camera 5.
  • the part estimation unit 15 associates the inspection input image Ii with the part labels of the structure measurement data by rendering the structure measurement data (point cloud data) within the angle of view of the camera 5 as a two-dimensional image based on the estimation result of the position and orientation of the camera 5.
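  • A simplified sketch of this projection, assuming a pinhole camera model and nearest-depth selection when multiple points fall on one pixel (all variable names are illustrative):

```python
# Sketch: project part-labeled points onto the image plane of Ii, keeping the
# nearest point per pixel (z-buffer), to obtain a per-pixel part-label map.
import numpy as np

def project_part_labels(points: np.ndarray,       # (N, 3) reference coords
                        labels: np.ndarray,       # (N,) part label per point
                        rotation: np.ndarray,     # 3x3 camera rotation
                        translation: np.ndarray,  # (3,) camera translation
                        K: np.ndarray,            # 3x3 intrinsic matrix
                        height: int, width: int) -> np.ndarray:
    cam = (rotation @ points.T + translation.reshape(3, 1)).T  # to camera coords
    front = cam[:, 2] > 0                      # keep points in front of camera
    cam, labels = cam[front], labels[front]
    uv = (K @ cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)  # perspective division to pixels
    label_map = np.full((height, width), -1)   # -1 means "no part label"
    depth = np.full((height, width), np.inf)
    for (u, v), z, lab in zip(uv, cam[:, 2], labels):
        if 0 <= u < width and 0 <= v < height and z < depth[v, u]:
            depth[v, u] = z                    # nearest point wins
            label_map[v, u] = lab
    return label_map
```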
  • the part estimation unit 15 supplies the part label estimation result to the deformation information generation unit 17.
  • the deformation detection unit 16 detects deformation areas in the inspection input image Ii based on the inspection input image Ii and the machine-learned deformation detection model configured from the deformation detection model information stored in the deformation detection model information storage unit 42, and supplies the detection result regarding the deformation area (also called the "deformation detection result") to the deformation information generation unit 17. In this case, the deformation detection unit 16 obtains the deformation detection result based on the information output by the deformation detection model when the inspection input image Ii is input to it.
  • the deformation detection result is information indicating the deformation class for each pixel, and corresponds to the segmentation result obtained by segmenting the inspection input image Ii based on the deformation class (i.e., a mask image indicating the deformation class for each pixel).
  • the "deformation class" may be a class indicating the presence or absence of a deformation, or may be a class that, in addition to the presence or absence, can identify the type of deformation that has occurred.
  • the deformation detection result may be the information output by the deformation detection model itself, or may be information that can be derived from the information output by the deformation detection model.
  • the deformation information generating unit 17 generates deformation information based on the part estimation results generated by the part estimation unit 15 and the deformation detection results generated by the deformation detection unit 16. For example, when the deformation information generating unit 17 determines that a deformation area has been detected based on the deformation detection results, it identifies a part label corresponding to the deformation area indicated by the deformation detection results based on the part estimation results. Note that when there are multiple part labels corresponding to the deformation area, the deformation information generating unit 17 may, for example, count the part labels for each pixel of the deformation area, and when there is a part label corresponding to a predetermined percentage or more of the pixels of the deformation area, identify that part label as the part label corresponding to the deformation area. In addition, the deformation information generating unit 17 may generate coordinate information in a reference coordinate system for the deformation area, estimate the size of the deformation area, estimate the shape of the deformation area, and so on, based on various image recognition technologies.
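  • The counting rule described above can be sketched as follows; the 0.5 threshold stands in for the "predetermined percentage" and is an assumed value.

```python
# Sketch: tally part labels over the pixels of a detected deformation area and
# adopt a label only when it covers at least a given ratio of the area.
import numpy as np

def part_label_for_deformation(label_map: np.ndarray,         # (H, W) part labels
                               deformation_mask: np.ndarray,  # (H, W) bool mask
                               min_ratio: float = 0.5):
    labels = label_map[deformation_mask]
    labels = labels[labels >= 0]               # drop pixels with no part label
    if labels.size == 0:
        return None
    values, counts = np.unique(labels, return_counts=True)
    best = counts.argmax()
    if counts[best] / deformation_mask.sum() >= min_ratio:
        return int(values[best])               # dominant part label of the area
    return None                                # no label reaches the threshold
```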
  • the deformation information generating unit 17 generates deformation information including deformation type information indicating the type of deformation in the deformation area, deformation position information indicating part labels (and coordinate information), etc., and other information such as the size of the deformation area, and stores it in the deformation information storage unit 43.
  • the deformation information may be stored in the deformation information storage unit 43 in association with any related information such as the inspection input image Ii and shooting date and time information.
  • the deformation information generating unit 17 may store deformation information indicating that no deformation has been detected in the deformation information storage unit 43, or may not generate deformation information. In the former example, the deformation information generating unit 17 may generate deformation information indicating that no deformation has occurred in the part label indicated by the part estimation result.
  • the deformation information generating unit 17 may determine whether the deformation detection result generated by the deformation detection unit 16 is correct based on the part estimation result estimated by the part estimation unit 15. In this case, for example, table information indicating, for each part label, the types of deformation that may occur is stored in advance in the storage device 4 or the memory 12. The deformation information generating unit 17 then determines whether the type of deformation in the detected deformation area and the part label corresponding to the deformation area are associated in the above-mentioned table information.
  • if the deformation information generating unit 17 determines that the type of deformation in the detected deformation area and the part label corresponding to the deformation area are not associated in the above-mentioned table information, it determines that the deformation detection result for that deformation area was generated by a false detection, and discards the corresponding deformation information without storing it in the deformation information storage unit 43. On the other hand, if it determines that they are associated in the table information, it determines that the deformation detection result for the deformation area is correct and stores the deformation information related to the deformation detection result in the deformation information storage unit 43.
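  • A minimal sketch of this table-based plausibility check; the concrete part labels and deformation types in the table are illustrative assumptions, not values from this disclosure.

```python
# Sketch of the table check: map each part label to the deformation types that
# may occur there; pairs not in the table are treated as false detections.
PLAUSIBLE_DEFORMATIONS = {
    "road_surface": {"crack", "pothole"},
    "girder": {"corrosion", "crack"},
    "deck": {"peeling", "exposed_rebar", "crack"},
}

def is_detection_plausible(part_label: str, deformation_type: str) -> bool:
    """True if this deformation type may occur on this part; otherwise the
    detection result is discarded as a likely false detection."""
    return deformation_type in PLAUSIBLE_DEFORMATIONS.get(part_label, set())
```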
  • the components of the camera position and orientation estimation unit 14, the parts estimation unit 15, the deformation detection unit 16, and the deformation information generation unit 17 described in FIG. 3 can be realized, for example, by the processor 11 executing a program.
  • the necessary programs may be recorded in any non-volatile storage medium and installed as necessary to realize each component.
  • At least some of these components may be realized not only by software programs but also by any combination of hardware, firmware, and software.
  • At least some of these components may be realized using a user-programmable integrated circuit, such as an FPGA (Field-Programmable Gate Array) or a microcontroller. In this case, the integrated circuit may be used to realize the programs constituting the above components.
  • each component may be configured by an ASSP (Application Specific Standard Product), an ASIC (Application Specific Integrated Circuit), or a quantum processor (quantum computer control chip).
  • FIG. 4(A) shows an input image for inspection Ii captured when the road surface and road shoulder (including curbstones) are the structures 6 to be inspected.
  • the input image for inspection Ii shown in FIG. 4(A) includes a road surface area 71 corresponding to the road surface, which is the structure 6 to be inspected, and a road shoulder area 72 corresponding to the road shoulder, as well as a vegetation area 73 indicating vegetation.
  • the road surface area 71 includes a deformation area 74 where a deformation corresponding to a "crack" has occurred.
  • FIG. 4(B) shows an image segmented based on the part estimation results.
  • the part estimation unit 15 generates segments corresponding to part labels indicating the road surface, part labels indicating road shoulders, and part labels indicating vegetation, based on the camera position and orientation estimation results generated by the camera position and orientation estimation unit 14 and the structure measurement data.
  • FIG. 5(A) is a mask image of the deformation area based on the deformation detection result generated from the inspection input image Ii shown in FIG. 4(A).
  • the deformation detection unit 16 acquires the deformation detection result output by the deformation detection model when the inspection input image Ii shown in FIG. 4(A) is input to the deformation detection model.
  • here, the deformation detection unit 16 acquires, from the deformation detection model, the mask image shown in FIG. 5(A) indicating the deformation area in the inspection input image Ii.
  • the pixels indicating the deformation area in the above-mentioned mask image are associated with a classification result (class) indicating that the type of the deformation is "crack".
  • Figure 5 (B) shows an image in which the mask image of the deformed area shown in Figure 5 (A) is superimposed on the image shown in Figure 4 (B) which has been divided into areas based on the part estimation results.
  • Based on the image shown in Figure 4(B), which has been divided into areas based on the part estimation results, and the mask image shown in Figure 5(A), the deformation information generation unit 17 recognizes that the deformation area 74 is included in the segment corresponding to the part label indicating the road surface. Therefore, the deformation information generation unit 17 generates deformation information including at least deformation position information indicating the part label for the road surface and deformation type information indicating "crack". In this way, the deformation information generation unit 17 can suitably generate deformation information that identifies the part in which a deformation has occurred.
  • Processing Flow Fig. 6 is an example of a flowchart regarding the processing executed by the deformation information generation device 1. For example, when a user input specifying an inspection input image Ii is detected, or when another processing start condition is satisfied, the deformation information generation device 1 executes the flowchart shown in Fig. 6.
  • the deformation information generating device 1 acquires the inspection input image Ii generated by the camera 5 (step S11). Instead of acquiring the inspection input image Ii directly from the camera 5, the deformation information generating device 1 may acquire the inspection input image Ii stored in the storage device 4 or the like.
  • the deformation information generating device 1 estimates the position and orientation of the camera 5 based on the inspection input image Ii and the structure measurement data stored in the structure measurement data storage unit 41 (step S12).
  • the process of step S12 corresponds to the process executed by the camera position and orientation estimation unit 14.
  • the deformation information generating device 1 estimates the area of each part of the inspection target structure 6 included in the inspection input image Ii based on the structure measurement data with part labels and the camera position and orientation estimation result (step S13).
  • the process of step S13 corresponds to the process executed by the part estimation unit 15.
  • the deformation information generating device 1 refers to the deformation detection model information storage unit 42 and detects deformation areas from the inspection input image Ii based on the deformation detection model, which is a machine-learned model (step S14).
  • the process of step S14 corresponds to the process executed by the deformation detection unit 16. Note that step S14 may be executed before steps S12 and S13, or may be executed in parallel with them.
  • the deformation information generating device 1 generates and outputs deformation information based on the processing results of steps S13 and S14 (step S15).
  • the deformation information generating device 1 stores the generated deformation information in the deformation information storage unit 43.
  • the deformation information generating device 1 may also generate a display signal and/or an audio signal based on the generated deformation information, and supply the generated signal to the output device 3, thereby displaying and/or outputting information related to the deformation information as audio by the output device 3.
  • the processing of step S15 corresponds to the processing executed by the deformation information generating unit 17.
  • FIG. 7 shows a functional block diagram of the processor 11.
  • the processor 11 in the first modification includes an interpolation unit 18.
  • the interpolation unit 18 interpolates the part label of a pixel (target pixel) that is not associated with a part label in the part estimation result generated by the part estimation unit 15, based on the part labels of a predetermined number of neighboring pixels that are close to the target pixel. In this case, for example, the interpolation unit 18 determines the most common part label among the predetermined number of neighboring pixels as the part label of the target pixel. Note that the interpolation unit 18 may determine the part label of the target pixel based on the part labels corresponding to the points of the structure measurement data that correspond to the neighboring pixels.
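  • A sketch of this interpolation rule follows; k = 8 stands in for the "predetermined number" of neighboring pixels and is an assumed value.

```python
# Sketch: an unlabeled target pixel takes the most common part label among its
# k nearest labeled pixels.
import numpy as np
from collections import Counter

def interpolate_part_label(label_map: np.ndarray, y: int, x: int, k: int = 8) -> int:
    labeled = np.argwhere(label_map >= 0)          # coordinates of labeled pixels
    if len(labeled) == 0:
        return -1                                  # nothing to interpolate from
    dists = np.linalg.norm(labeled - np.array([y, x]), axis=1)
    nearest = labeled[np.argsort(dists)[:k]]       # k closest labeled pixels
    votes = Counter(int(label_map[ny, nx]) for ny, nx in nearest)
    return votes.most_common(1)[0][0]              # majority part label
```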
  • the deformation information generating device 1 may select a deformation detection model to be used for detecting a deformation area based on the part estimation results generated by the part estimation unit 15.
  • each deformation detection model is trained in advance by machine learning using training data including a plurality of records in which an image of a structure in which the target deformation has occurred is used as the input sample, and data indicating, for each pixel, the area in which the type of deformation to be detected has occurred is used as the correct answer.
  • the deformation detection unit 16A determines the deformation detection model to be used based on the part estimation results obtained from the part estimation unit 15. For example, table information indicating the identification information of the deformation detection model to be applied for each type of part (or each type of expected deformation) is stored in advance in the storage device 4 or the memory 12, and the deformation detection unit 16A refers to this table information to determine the deformation detection model to be used from the type of part indicated by the part estimation results.
  • in another modification, the structure measurement data stored in the structure measurement data storage unit 41 does not need to include information regarding part labels; in this case, the part estimation unit 15A uses a part estimation model.
  • the part estimation model is a model that has been machine-learned in advance so that, when three-dimensional data (point cloud data) within the camera's angle of view corresponding to an image is input, it outputs a part estimation result indicating the part class for each pixel of the image.
  • the "part class" may be, for example, a label indicating the type of part, a component element number, or any other identifier for the part.
  • the part estimation unit 15A may also generate a part estimation result by further using the inspection input image Ii.
  • the part estimation model is a machine learning model that generates a part estimation result by inputting an image and three-dimensional data corresponding to the image.
  • a known example of the architecture of such a machine learning model is BPNet (Bidirectional Projection Network).
  • the part estimation unit 15A inputs the inspection input image Ii and structure measurement data present within the angle of view of the camera 5 to such a part estimation model, and obtains a part estimation result from the part estimation model.
  • the deformation detection unit 16 of the deformation information generating device 1 may perform deformation detection using the structure measurement data stored in the structure measurement data storage unit 41 in addition to the inspection input image Ii.
  • the deformation information generating device 1 may update drawing data indicating the design drawings of the inspection target structure 6 based on the deformation information (i.e., the deformation detection results and part estimation results), and store the updated drawing data in the deformation information storage unit 43 in association with the above-mentioned deformation information.
  • when converting from the structure measurement data to a design drawing (e.g., an aerial view), the deformation information generating device 1 generates corresponding location information that associates the coordinate position in the reference coordinate system or the part label before conversion with the position on the design drawing after conversion.
  • the deformation information generating device 1 may determine the size and shape of the mark 81 to be superimposed on the design drawing based on the size information and shape information contained in the deformation information, or may extract a deformation area from the inspection input image Ii and superimpose an image of the extracted deformation area as the mark 81 on the image of the design drawing.
  • the deformation information generating device 1 can add information about deformation to drawing data that represents design drawings such as development drawings. This allows a user viewing the drawing data to easily grasp the location where the deformation has occurred on the design drawing.
  • Second Embodiment the deformation information generating device 1 in the second embodiment uses, as inspection input images Ii, two images whose shooting ranges are in an inclusion relation: an inspection close-up image IiC, which is an image of the inspection target structure 6 photographed at a close distance (i.e., corresponding to the included shooting range), and an inspection distant image IiF, which is an image of the inspection target structure 6 photographed at a long distance so as to include the shooting range of the inspection close-up image IiC. This allows the deformation information generating device 1 to perform deformation detection and part estimation more accurately.
  • FIG. 12 shows the functional block configuration of the processor 11 of the deformation information generating device 1.
  • the processor 11 in the second embodiment has a camera position and orientation estimation unit 14B, a part estimation unit 15B, a deformation detection unit 16B, a deformation information generating unit 17B, and a correspondence identification unit 19B.
  • the camera position and orientation estimation unit 14B estimates the position and orientation of the camera 5 that captured the inspection distant image IiF based on the structure measurement data stored in the structure measurement data storage unit 41 and the inspection distant image IiF.
  • the part estimation unit 15B estimates the part label that corresponds to the inspection distant image IiF based on the structure measurement data linked to the part label and the estimation result of the position and orientation of the camera 5 generated by the camera position and orientation estimation unit 14B.
  • the processing that the camera position and orientation estimation unit 14B and the part estimation unit 15B execute based on the inspection distant image IiF is the same as the processing that the camera position and orientation estimation unit 14 and the part estimation unit 15 execute based on the inspection input image Ii in the first embodiment.
  • the correspondence identification unit 19B identifies the correspondence between the inspection distant image IiF and the inspection close-up image IiC. In this case, the correspondence identification unit 19B identifies an area in the inspection distant image IiF that corresponds to the inspection close-up image IiC based on any image matching technology. The correspondence identification unit 19B then supplies the identification result of the above-mentioned correspondence (also called the "correspondence identification result") to the deformation information generation unit 17B. In this case, for example, the correspondence identification unit 19B supplies information indicating the pixels of the inspection distant image IiF that correspond to each pixel of the inspection close-up image IiC to the deformation information generation unit 17B as the correspondence identification result.
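  • One plausible realization of this correspondence identification, assuming OpenCV feature matching (ORB) and a RANSAC-fitted homography, is sketched below; the disclosure only requires "any image matching technology", so this specific pipeline is an assumption.

```python
# Sketch: locate the close-up image IiC inside the distant image IiF by
# feature matching and a homography, then map the close-up frame into
# distant-image coordinates (cf. frame 85 in Fig. 13(C)).
import cv2
import numpy as np

def locate_closeup_in_distant(closeup: np.ndarray, distant: np.ndarray) -> np.ndarray:
    gray_c = cv2.cvtColor(closeup, cv2.COLOR_BGR2GRAY)
    gray_d = cv2.cvtColor(distant, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    kp_c, des_c = orb.detectAndCompute(gray_c, None)
    kp_d, des_d = orb.detectAndCompute(gray_d, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_c, des_d), key=lambda m: m.distance)[:200]
    src = np.float32([kp_c[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_d[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust to outliers
    h, w = gray_c.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(corners, H)  # outline of IiC inside IiF
```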
  • the deformation information generating unit 17B generates deformation information based on the part estimation result generated by the part estimation unit 15B, the deformation detection result generated by the deformation detection unit 16B, and the correspondence identification result generated by the correspondence identification unit 19B. For example, when the deformation information generating unit 17B determines that a deformation area has been detected based on the deformation detection result, it identifies the part label corresponding to the deformation area on the inspection close-up image IiC indicated by the deformation detection result, based on the part estimation result indicating the part labels on the inspection distant image IiF and the correspondence identification result indicating the correspondence between the inspection distant image IiF and the inspection close-up image IiC.
  • the deformation information generating unit 17B may also generate coordinate information in the reference coordinate system of the deformation area, estimate the size of the deformation area, estimate the shape of the deformation area, and so on, based on various image recognition technologies.
  • the deformation information generating unit 17B generates deformation information including deformation type information indicating the type of deformation in the deformation area, deformation position information indicating the part label (and coordinate information), and other information such as the size of the deformation area, and stores the deformation information in the deformation information storage unit 43.
  • the deformation information generating unit 17B may store deformation information indicating that no deformation was detected in the deformation information storage unit 43, or may not generate deformation information. In addition, like the deformation information generating unit 17 of the first embodiment, the deformation information generating unit 17B may correct the deformation detection result by referring to the part label.
  • FIG. 13(A) shows an example of an inspection distant image IiF having a road surface area 71, a road shoulder area 72, and a vegetation area 73, including a deformation area 74 corresponding to a "crack," and FIG. 13(B) shows an example of an inspection close-up image IiC corresponding to the inspection distant image IiF shown in FIG. 13(A).
  • FIG. 13(C) shows the inspection distant image IiF in which the correspondence identification result is clearly indicated.
  • a frame 85 that forms the outer edge of the image area corresponding to the inspection close-up image IiC is clearly shown.
  • since the inspection close-up image IiC corresponds to a part of the image area in the inspection distant image IiF, the correspondence identification unit 19B identifies the image area in the inspection distant image IiF that corresponds to the inspection close-up image IiC.
  • the inspection close-up image IiC is an image of a part where a deformation has occurred (here, the road surface) photographed from a close distance, and is an image that makes it easy for the deformation detection unit 16B to detect deformation.
  • the inspection close-up image IiC is an enlarged image of a limited part of the inspection target structure 6, and it is difficult to identify the correspondence with the structure measurement data that represents the entire inspection target structure 6, so the inspection distant image IiF is used for part estimation. Then, the deformation information generating device 1 identifies the part label that corresponds to the deformation area 74 based on the correspondence identification result and the part estimation result, and generates deformation information based on the identified part label, etc.
  • the deformation information generating device 1 can detect deformation areas with high accuracy using the inspection close-up image IiC, while also estimating parts with high accuracy by using the inspection distant image IiF.
  • each of the modified examples of the first embodiment can also be applied to the second embodiment in any combination.
  • Third Embodiment Fig. 14 is a block diagram of the deformation information generating device 1X according to the third embodiment.
  • the deformation information generating device 1X mainly includes a part estimation means 15X, a deformation detection means 16X, and a deformation information generating means 17X.
  • the deformation information generation device 1X may be composed of multiple devices.
  • the part estimation means 15X estimates parts of the structure contained in the first image based on the three-dimensional data representing the structure and the first image which is a partial photograph of the structure.
  • the part estimation means 15X may be the camera position and orientation estimation unit 14 and the part estimation unit 15 (or the camera position and orientation estimation unit 14 and the part estimation unit 15A) in the first embodiment, or the camera position and orientation estimation unit 14B and the part estimation unit 15B in the second embodiment.
  • the first image may be, for example, the inspection input image Ii in the first embodiment or the inspection distant image IiF in the second embodiment.
  • the deformation detection means 16X detects deformation of the structure based on the first image or a second image that captures a part of the capture range of the first image.
  • the deformation detection means 16X may be the deformation detection unit 16 or the deformation detection unit 16A in the first embodiment, or the deformation detection unit 16B in the second embodiment.
  • the second image may be, for example, the inspection close-up image IiC in the second embodiment.
  • the deformation information generating means 17X generates deformation information indicating at least the identification information of the part in which deformation has occurred based on the part estimation result and the deformation detection result.
  • the deformation information generating means 17X may be the deformation information generating unit 17 in the first embodiment, or may be the deformation information generating unit 17B in the second embodiment.
  • FIG. 15 is an example of a flowchart executed by the deformation information generating device 1X in the third embodiment.
  • the part estimation means 15X estimates parts of the structure included in the first image based on the three-dimensional data representing the structure and the first image capturing a portion of the structure (step S21).
  • the deformation detection means 16X detects deformation of the structure based on the first image or a second image capturing a portion of the capturing range of the first image (step S22).
  • the deformation information generating means 17X generates deformation information indicating at least the identification information of the deformed part based on the part estimation results and the deformation detection results (step S23).
  • the deformation information generating device 1X can effectively generate deformation information in which the deformation detection results are linked to the part identification information.
  • A non-transitory computer readable medium includes various types of tangible storage media.
  • Examples of non-transitory computer readable media include magnetic storage media (e.g., flexible disks, magnetic tapes, hard disk drives), magneto-optical storage media (e.g., magneto-optical disks), CD-ROM (Read Only Memory), CD-R, CD-R/W, and semiconductor memories (e.g., mask ROM, PROM (Programmable ROM), EPROM (Erasable PROM), flash ROM, RAM (Random Access Memory)).
  • the program may also be supplied to a computer by various types of transitory computer readable media.
  • Examples of transitory computer readable media include electrical signals, optical signals, and electromagnetic waves.
  • A transitory computer readable medium can supply the program to the computer via a wired communication path, such as an electric wire or optical fiber, or via a wireless communication path.
  • [Appendix 1] a part estimation means for estimating parts of the structure included in a first image based on three-dimensional data representing the structure and the first image obtained by capturing a portion of the structure;
  • a deformation detection means for detecting a deformation of the structure based on the first image or a second image obtained by photographing a part of the photographing range of the first image;
  • a deformation information generating means for generating deformation information indicating at least identification information of the part in which the deformation has occurred based on the estimation result of the part and the detection result of the deformation;
  • a deformation information generating device having the above structure.
  • the deformation detection means detects the deformation based on the first image or the second image and a machine learning model,
  • the machine learning model is a model that has machine-learned the relationship between an image and a detection result of the deformation contained in the image.
  • The deformation information generating device described in Appendix 3, wherein the deformation detection means selects the machine learning model based on the estimation result of the part.
  • the part estimation means generates an estimation result of the part based on the first image
  • the deformation detection means generates a detection result of the deformation based on the second image
  • the deformation information generating means generates the deformation information based on information indicating the correspondence between the first image and the second image, the estimation result of the part, and the detection result of the deformation.
  • The deformation information generating device described in Appendix 1, wherein the deformation detection means generates a detection result of the deformation including information regarding the type of the deformation, and the deformation information generating means generates the deformation information indicating at least identification information of the part in which the deformation has occurred and the type of the deformation.
  • the deformation information generating means determines whether the detection result of the deformation is correct based on the identification information of the part in which the deformation has occurred and the type of the deformation.
  • The part estimation means estimates identification information of the part corresponding to each pixel of the first image based on the position and orientation estimation result and the identification information of the parts associated with the three-dimensional data (a hedged sketch of this per-pixel lookup follows after this list).
  • The deformation information generating device described in Appendix 1, wherein the deformation information generating means adds information indicating the detected position of the deformation in the drawing to drawing data showing a drawing of the structure.
  • [Appendix 11] A storage medium storing a program that causes a computer to execute a process of: estimating parts of a structure included in a first image based on three-dimensional data representing the structure and the first image capturing a portion of the structure; detecting a deformation of the structure based on the first image or a second image obtained by capturing a part of a capturing range of the first image; and generating deformation information indicating at least the identification information of the part in which the deformation has occurred, based on the part estimation result and the deformation detection result.
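To make the machine-learning appendices and the per-pixel part estimation above concrete, here is a rough, hypothetical Python sketch: render_part_ids() merely stands in for rasterizing the three-dimensional data (with part identification numbers attached to its faces) from the estimated camera position and orientation, and MODELS is an assumed mapping from part IDs to machine learning models; neither is the patent's actual implementation.

    def render_part_ids(pose, mesh_with_part_ids, width, height):
        # Placeholder: a real renderer would project each face of the 3-D
        # data from `pose` and write that face's part identification number
        # into the pixels it covers. Here every pixel gets part 0.
        return [[0] * width for _ in range(height)]

    # Hypothetical mapping: part ID -> detector trained for that part type.
    MODELS = {
        0: lambda image: [],   # e.g. a crack detector for deck slabs
        1: lambda image: [],   # e.g. a corrosion detector for steel piers
    }

    def detect_with_selected_models(image, part_id_map):
        """Run only the models matching the parts visible in the image."""
        parts_in_view = {pid for row in part_id_map for pid in row}
        detections = []
        for pid in sorted(parts_in_view):
            model = MODELS.get(pid)
            if model is not None:      # select the model for this part
                detections.extend(model(image))
        return detections

Selecting a detector per estimated part lets each model specialize in the deformations typical of that part (for example, cracks in concrete members versus corrosion in steel members), which is a plausible motivation for tying detection to part estimation in this way.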

Landscapes

  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a deformation information generating device 1X that mainly includes a part estimation means 15X, a deformation detection means 16X, and a deformation information generating means 17X. The part estimation means 15X estimates, based on three-dimensional data representing a structure and a first image obtained by capturing a portion of the structure, parts of the structure included in the first image. The deformation detection means 16X detects a deformation of the structure based on the first image or a second image obtained by capturing a part of the capturing range of the first image. The deformation information generating means 17X generates, based on the estimation result of the parts and the detection result of the deformation, deformation information indicating at least identification information of a part in which the deformation has occurred.
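To illustrate the first-image/second-image relationship in the abstract above (a hedged Python sketch: the simple pixel-offset correspondence and all names below are assumptions, not the patent's method), a detection found in a close-up second image can be mapped back into the first image's coordinates before the part lookup:

    def to_first_image_coords(det_xy, offset):
        """Map a pixel from the second image into the first image, assuming
        the correspondence information is a plain top-left pixel offset."""
        (x, y), (ox, oy) = det_xy, offset
        return (x + ox, y + oy)

    # A crack at (40, 25) in a close-up whose origin sits at (300, 120) of
    # the wide shot maps to (340, 145) for the part identification lookup.
    print(to_first_image_coords((40, 25), (300, 120)))  # -> (340, 145)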
PCT/JP2023/030534 2023-08-24 2023-08-24 Deformation information generating device, deformation information generating method, and storage medium Pending WO2025041337A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2023/030534 WO2025041337A1 (fr) 2023-08-24 2023-08-24 Deformation information generating device, deformation information generating method, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2023/030534 WO2025041337A1 (fr) 2023-08-24 2023-08-24 Deformation information generating device, deformation information generating method, and storage medium

Publications (1)

Publication Number Publication Date
WO2025041337A1 (fr) 2025-02-27

Family

ID=94731976

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/030534 Pending WO2025041337A1 (fr) Deformation information generating device, deformation information generating method, and storage medium

Country Status (1)

Country Link
WO (1) WO2025041337A1 (fr)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009188280A * 2008-02-08 2009-08-20 Hitachi High-Technologies Corp Review device, inspection area setting support system, and defect image acquisition method
WO2017119202A1 * 2016-01-06 2017-07-13 FUJIFILM Corporation Device and method for specifying a structural member
WO2018037689A1 * 2016-08-22 2018-03-01 FUJIFILM Corporation Image processing device and image processing method
WO2018155590A1 * 2017-02-24 2018-08-30 RIKEN Identification device, identification method, and identification program for identifying the position of a wall surface inside a tunnel appearing in a photographic image
WO2020003818A1 * 2018-06-28 2020-01-02 Panasonic IP Management Co., Ltd. Inspection instrument and inspection method
JP2020159969A * 2019-03-27 2020-10-01 Mitsubishi Electric Corporation Incidental equipment state evaluation device, incidental equipment state evaluation method, and incidental equipment state evaluation program
JP2020197460A * 2019-06-03 2020-12-10 Informatix Inc. Structure inspection system, structure inspection method, and program
JP2023025476A * 2021-08-10 2023-02-22 Canon Inc. Information processing apparatus, information processing method, and program

Similar Documents

Publication Publication Date Title
US11893724B2 (en) Methods of artificial intelligence-assisted infrastructure assessment using mixed reality systems
Pantoja-Rosero et al. Damage-augmented digital twins towards the automated inspection of buildings
KR102256181B1 Method for inspecting and evaluating the coating state of a steel structure and system therefor
EP3690800B1 (fr) Appareil de traitement d'informations, procédé de traitement d'informations et programme
CN108648194B (zh) 基于cad模型三维目标识别分割和位姿测量方法及装置
US11587299B2 (en) Systems and methods for detection of anomalies in civil infrastructure using context aware semantic computer vision techniques
CN114841923B High-precision real-time crack detection method based on an unmanned aerial vehicle
CN110009614A Method and apparatus for outputting information
CN116703835B Intelligent reinforcement detection method and system based on a convolutional neural network and binocular vision
CN116385421A Photovoltaic panel detection method, unmanned aerial vehicle, and computer-readable storage medium
US12125238B2 (en) Information processing device, information processing method, and computer program product
US11922659B2 (en) Coordinate calculation apparatus, coordinate calculation method, and computer-readable recording medium
JP2018036226A Image processing program, image processing method, and image processing device
Jayaram Computer vision applications in construction material and structural health monitoring: A scoping review
Lin et al. Visual and virtual progress monitoring in Construction 4.0
Shah et al. Condition assessment of ship structure using robot assisted 3D-reconstruction
CN119544928A Unmanned inspection method, apparatus, device, and storage medium for a converter station valve hall
CN118968347A Adaptive structural damage identification method and system
KR20220095831A Structure crack measurement system and method, and recording medium storing a computer-readable program for executing the method
US12223639B2 (en) Photographing guide device
Pan et al. Bolt loosening assessment using ensemble vision models for automatic localization and feature extraction with target‐free perspective adaptation
WO2025041337A1 (fr) Deformation information generating device, deformation information generating method, and storage medium
KR102588141B1 Deep learning-based structure damage inspection method using drone image information
KR102621971B1 System and method for constructing a deep learning-based three-dimensional exterior damage model of vertical structures
Lin et al. Bridge inspection with aerial robots and computer vision: A japanese national initiative

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23949793

Country of ref document: EP

Kind code of ref document: A1