WO2020071559A1 - Vehicle state evaluation device, evaluation program therefor, and evaluation method therefor - Google Patents
- Publication number
- WO2020071559A1 (PCT/JP2019/039413)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vehicle
- component
- determination
- state
- captured image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01M—TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
- G01M17/00—Testing of vehicles
- G01M17/007—Wheeled or endless-tracked vehicles
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
Definitions
- The present invention relates to a vehicle state determination device that determines the damage state of a vehicle from a captured image, a determination program therefor, and a determination method therefor.
- Patent Literature 1 discloses an accident-vehicle repair cost estimation system capable of accurately and promptly evaluating damage to an accident vehicle and estimating repair costs.
- This estimation system includes capture means, storage means, input means, display means, linking means, and estimation means.
- The capture means captures image data of the accident vehicle.
- The storage means stores the vehicle attribute data necessary for estimating the accident-vehicle repair cost.
- The input means inputs the estimation data necessary for estimating the accident-vehicle repair cost.
- The display means displays various data including the accident-vehicle image data.
- The linking means determines which portion of the vehicle the image data clearly shows damage to.
- The estimation means simultaneously displays, on the display means, the image data and the vehicle attribute data of the part corresponding to the image data.
- The estimation means also estimates the cost required for repairing the accident vehicle based on the estimation data and the vehicle attribute data.
- However, because the system of Patent Literature 1 judges the degree of damage from the similarity of the accident vehicle's image data to image data of vehicles repaired in the past, its determination accuracy drops when suitable past image data does not exist.
- An object of the present invention is therefore to determine the damage state of a vehicle flexibly, quickly, and accurately.
- To solve this problem, a first invention provides a vehicle state determination device that includes a component determination unit and a state determination unit and that determines the damage state of a vehicle from a captured image.
- The component determination unit determines the vehicle components in the captured image with reference to a component learning model.
- The component learning model is constructed by supervised learning that uses data on vehicle components as teacher data, using an object detection algorithm based on deep learning.
- The object detection algorithm feeds the captured image into a single neural network system and, through a regression-style approach, collectively performs region extraction of the vehicle components in the captured image together with attribute classification.
- The state determination unit determines, with reference to a state learning model, the damage state of each vehicle component determined by the component determination unit.
- The state learning model is constructed by supervised learning using data on the damage states of vehicle components as teacher data.
- In the first invention, the object detection algorithm is preferably YOLO or SSD.
- A surface determination unit may further be provided.
- The surface determination unit determines the constituent surfaces of the vehicle in the captured image with reference to a surface learning model.
- The surface learning model is constructed by supervised learning using data on the constituent surfaces of the vehicle as teacher data.
- In this case, the component determination unit preferably filters the vehicle components based on the constituent surfaces determined by the surface determination unit and outputs the filtered vehicle components as the determination result.
- When a vehicle component unique to a specific constituent surface exists in the captured image, the surface determination unit may include the constituent surface corresponding to that unique vehicle component in the determination result.
- An estimate calculation unit may further be provided.
- The estimate calculation unit estimates the repair cost of the vehicle from the damage state of each vehicle component determined by the state determination unit, by referring to an estimation table.
- The estimation table holds, for each vehicle component, the repair cost associated with each damage state of that component.
- At least one of the transitions of processing from the surface determination unit to the component determination unit, from the component determination unit to the state determination unit, and from the state determination unit to the estimate calculation unit is preferably performed on condition that the user's approval has been obtained, while allowing the user to correct the determination result.
- A second invention provides a vehicle state determination program that causes a computer to execute processing including a component determination step and a state determination step and that determines the damage state of a vehicle from a captured image.
- In the component determination step, the vehicle components in the captured image are determined with reference to a component learning model.
- The component learning model is constructed by supervised learning that uses data on vehicle components as teacher data, using an object detection algorithm based on deep learning.
- The object detection algorithm feeds the captured image into a single neural network system and, through a regression-style approach, collectively performs region extraction of the vehicle components in the captured image together with attribute classification.
- In the state determination step, the damage state of each vehicle component determined in the component determination step is determined with reference to a state learning model.
- The state learning model is constructed by supervised learning using data on the damage states of vehicle components as teacher data.
- A third invention provides a vehicle state determination method that includes a component determination step and a state determination step and that determines the damage state of a vehicle from a captured image.
- In the component determination step, the vehicle components in the captured image are determined with reference to a component learning model.
- The component learning model is constructed by supervised learning that uses data on vehicle components as teacher data, using an object detection algorithm based on deep learning.
- The object detection algorithm feeds the captured image into a single neural network system and, through a regression-style approach, collectively performs region extraction of the vehicle components in the captured image together with attribute classification.
- In the state determination step, the damage state of each vehicle component determined in the component determination step is determined with reference to a state learning model.
- The state learning model is constructed by supervised learning using data on the damage states of vehicle components as teacher data.
- In the second and third inventions, the object detection algorithm is preferably YOLO or SSD.
- A surface determination step may further be provided.
- In the surface determination step, the constituent surfaces of the vehicle in the captured image are determined with reference to a surface learning model.
- The surface learning model is constructed by supervised learning using data on the constituent surfaces of the vehicle as teacher data.
- In this case, the component determination step preferably filters the vehicle components based on the constituent surfaces determined in the surface determination step and outputs the filtered vehicle components as the determination result.
- When a vehicle component unique to a specific constituent surface exists in the captured image, the constituent surface corresponding to that unique vehicle component may be included in the determination result.
- An estimate calculation step may further be provided.
- In the estimate calculation step, the repair cost of the vehicle is estimated from the damage state of each vehicle component determined in the state determination step, by referring to an estimation table.
- The estimation table holds, for each vehicle component, the repair cost associated with each damage state of that component.
- At least one of the transitions of processing from the surface determination step to the component determination step, from the component determination step to the state determination step, and from the state determination step to the estimate calculation step is preferably performed on condition that the user's approval has been obtained, while allowing the user to correct the determination result.
- According to the present invention, determining the damage state of a vehicle based on machine learning makes it possible to respond flexibly to a variety of vehicles, including unknown ones. The processing is separated into component determination and damage determination, and the damage of each component is determined after the component determination has been performed, which improves the determination accuracy as a whole.
- For the component determination, in which many vehicle components may be detected as objects in the captured image, an object detection algorithm is adopted that, as represented by YOLO and SSD, feeds the captured image into a single neural network system and collectively extracts vehicle component regions with attribute classification through a regression-style approach, making it possible to speed up the processing.
- FIG. 1: Block diagram of the vehicle state determination device.
- FIG. 2: Diagram showing a screen display example of image reception.
- FIG. 3: Diagram showing a screen display example of the surface determination result.
- FIG. 4: Illustration of the object detection algorithm.
- FIG. 5: Network configuration diagram of YOLO.
- FIG. 6: Diagram showing a screen display example of the vehicle component determination result.
- FIG. 7: Diagram showing a screen display example of the damage state determination result.
- FIG. 8: Schematic configuration diagram of the estimation table.
- FIG. 9: Diagram showing a screen display example of the estimation result.
- FIG. 1 is a block diagram of the vehicle state determination device 1 according to the present embodiment.
- The vehicle state determination device 1 determines the damage state of a vehicle from a captured image specified by a user and presents the approximate cost required for the vehicle repair to the user.
- In the present embodiment, the vehicle to be determined is assumed to be a privately owned automobile, but this is only an example; any vehicle, including trucks, buses, and motorcycles, may be targeted depending on the design specifications.
- The vehicle state determination device 1 can also be realized equivalently by installing a computer program (the vehicle state determination program) on a computer.
- The vehicle state determination device 1 mainly includes an image receiving unit 2, a surface determination unit 3, a component determination unit 4, a state determination unit 5, an estimate calculation unit 6, an input/output interface 7, learning models 8 to 10, and an estimation table 11.
- Each of the processing units 2 to 6 is connected to a display device 12 via the input/output interface 7, which controls input and output between these units and the display device 12.
- Basically, the processes in the processing units 3 to 6 are performed sequentially. However, each transition of processing between adjacent processing units, specifically the transition from the surface determination unit 3 to the component determination unit 4, from the component determination unit 4 to the state determination unit 5, and from the state determination unit 5 to the estimate calculation unit 6, is performed on condition that the user's approval has been obtained, after allowing the user to correct the determination result. This is to improve the overall determination accuracy by appropriately reflecting the user's intention in the process. However, some of the process transitions may be performed automatically without requiring the user's approval.
- When the display device 12 is connected to a network such as the Internet, the input/output interface 7 provides the communication functions required for network communication.
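The approval-gated, sequential flow described above can be pictured with a short sketch. This is a minimal illustration in Python under assumed names: the stage functions and the confirm() callback are hypothetical, not part of the patent text.

```python
from typing import Any, Callable

def run_pipeline(image: Any,
                 stages: list[Callable[[Any], Any]],
                 confirm: Callable[[str, Any], Any]) -> Any:
    """Run the determination stages sequentially; after each stage the user
    may correct the result, and processing moves on only once approved."""
    result = image
    for stage in stages:
        result = stage(result)
        # confirm() shows the result on the display device, accepts
        # corrections, and blocks until the user presses "continue".
        result = confirm(stage.__name__, result)
    return result

# e.g. run_pipeline(img, [determine_surface, determine_components,
#                         determine_damage, calculate_estimate], confirm=review_ui)
```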
- The image receiving unit 2 receives from the display device 12 a captured image to be determined, specifically an image of the external appearance of a damaged vehicle captured with a camera.
- The vehicle may be imaged from the front, rear, or side, or diagonally from the front or rear.
- FIG. 2 is a diagram illustrating an example of a screen display of image reception on the display device 12.
- The display screen 30 for image reception has an image receiving area 31.
- In the image receiving area 31, thumbnails of the captured images to be determined are displayed.
- The user designates a determination target by specifying an image file through the file reference button or by dropping an image file into the image receiving area 31.
- As the blank frames in the image receiving area 31 suggest, a plurality of determination targets can be specified, and an already specified target can be canceled through the cancel button.
- When the specification of the determination targets is complete, the user presses the determination start button; by this action, the determination targets are output to the image receiving unit 2.
- The captured image received by the image receiving unit 2 is output to the surface determination unit 3.
- The captured image to be determined may be a color image, but a grayscale image may be accepted in order to reduce memory usage.
- The size of the captured image is set appropriately in consideration of the system's memory usage and the like.
- The surface determination unit 3 determines the constituent surfaces of the vehicle shown in the captured image to be determined.
- Here, a "constituent surface" refers to an individual surface making up the three-dimensional vehicle, such as the front, back, or side surface.
- For example, for an image captured from the front of the vehicle, the determination result is the front surface; for an image captured from the rear, it is the back surface.
- However, the number of constituent surfaces obtained as a determination result is not necessarily one; there may be several. For an image captured diagonally from the front, for example, the determination results are the front and side surfaces. The reason for performing this constituent-surface determination is to improve determination accuracy in the subsequent processing.
- In the constituent-surface determination, the surface learning model 8 is referred to. The surface learning model 8 is composed mainly of a neural network system modeled on the neurons of the human brain.
- The neural network system is formed in a working area of the computer and has an input layer, hidden layers, and an output layer.
- When signals are transmitted from the input layer to the hidden layers, weighting by an activation function is applied.
- Transmission with such weighting is repeated according to the number of hidden layers, and the signal transmitted to the output layer is finally output.
- The number of ports in the input layer, the number of ports in the output layer, the number of hidden layers, and so on are arbitrary.
- The output layer also outputs classification probabilities for the outputs (front, back, side, etc.).
- The surface learning model 8 is constructed by supervised learning using data on the constituent surfaces of the vehicle as teacher data.
- The teacher data comprises captured images of vehicles and the classifications of their constituent surfaces, and diverse, large-scale data is used, covering various vehicle models, various body colors, various shooting directions, and so on.
- In the supervised learning, the classification probabilities output for the teacher-data inputs are verified, and the activation-function weighting is adjusted repeatedly based on this verification until the surface learning model 8 reaches the desired state.
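As a concrete illustration of this verify-and-adjust loop, the following sketch assumes PyTorch, a 128×128 grayscale input, and a three-class front/back/side output; all of these choices are illustrative, not specified by the patent.

```python
import torch
import torch.nn as nn

# input layer -> hidden layer -> output layer, as described above
model = nn.Sequential(
    nn.Flatten(),                    # 1x128x128 grayscale image -> 16384 values
    nn.Linear(128 * 128, 256), nn.ReLU(),
    nn.Linear(256, 3),               # logits for front / back / side
)
loss_fn = nn.CrossEntropyLoss()      # compares outputs with teacher labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    logits = model(images)           # forward pass through the layers
    loss = loss_fn(logits, labels)   # verify the output against the teacher data
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                 # adjust the weights
    return loss.item()

# classification probabilities for one image, as output by the output layer
probs = torch.softmax(model(torch.rand(1, 1, 128, 128)), dim=1)
```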
- As the surface learning model 8, machine learning methods other than neural networks may also be used, such as support vector machines, decision trees, Bayesian networks, linear regression, multivariate analysis, logistic regression analysis, and discriminant analysis.
- A convolutional neural network, or R-CNN (Regions with CNN features) built on one, may also be used. The same applies to the state learning model 10 described later.
- In the constituent-surface determination, in addition to referring to the surface learning model 8, a constituent surface may be inferred from the presence or absence of a vehicle component unique to that surface. For example, headlights and front grilles are unique to the front of the vehicle and have characteristic shapes distinguishable from other vehicle components, so their presence implies that the captured image includes at least the front surface. Accordingly, when a vehicle component unique to a specific constituent surface exists in the captured image, the constituent surface corresponding to that component is included in the determination result.
- Furthermore, the surface learning model 8 may be made lighter to reduce the memory usage of the GPU (Graphics Processing Unit) or to improve learning efficiency.
- Examples include reducing the number of VGG blocks (one block being convolution → convolution → pooling) in a general-purpose model such as VGG-16, and reducing the size of the captured image.
- VGG-16 is one approach to reusing a trained model as a CNN (convolutional neural network), that is, one type of transfer learning model; it comprises 16 layers in total of convolutional and fully connected layers, all its convolution filters are 3×3, and its fully connected layers consist of two layers of 4096 units plus one layer of 1000 units for class classification.
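The weight-reduction idea can be sketched as follows, assuming PyTorch/torchvision: only the first two VGG blocks of VGG-16 are kept and a small classifier head is attached. The cut-off point and the three-class head are assumptions for illustration.

```python
import torch.nn as nn
from torchvision import models

vgg = models.vgg16(weights="IMAGENET1K_V1")   # pre-trained general-purpose model

light_model = nn.Sequential(
    # layers 0-9 of vgg.features = the first two VGG blocks
    # (conv -> conv -> pool, conv -> conv -> pool), 128 output channels
    *list(vgg.features.children())[:10],
    nn.AdaptiveAvgPool2d((4, 4)),
    nn.Flatten(),
    nn.Linear(128 * 4 * 4, 64), nn.ReLU(),
    nn.Linear(64, 3),                         # front / back / side
)
```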
- FIG. 3 is a diagram illustrating a screen display example of a surface determination result displayed on the display device 12.
- The determination result display screen 40 has an image display area 41, a plurality of determination result display areas 42, and a plurality of check boxes 43.
- In the image display area 41, the captured image of the vehicle to be determined is displayed.
- In each determination result display area 42, a candidate constituent surface of the vehicle (front, side, or back) is displayed with its classification probability.
- The check boxes 43 are provided corresponding to the respective determination result display areas 42, and check marks are added to those satisfying a predetermined condition.
- The predetermined condition may be, for example, that the classification probability of a constituent-surface candidate is equal to or higher than a predetermined threshold (in which case several check marks may be added), or that the candidate has the highest classification probability.
- If the user judges the determination result to be valid, the user presses the determination continue button; otherwise, the user corrects the determination result, including changing the check marks in the check boxes 43, and then presses the button. By this action, the determination result of the surface determination unit 3 (including any corrections by the user) is output to the component determination unit 4 together with the captured image.
- The determination result corrected by the user may also be fed back into the surface learning model 8, which deepens the learning for constituent-surface determination.
- The component determination unit 4 extracts the vehicle components shown in the captured image; specifically, it individually extracts the component region in which each vehicle component appears and the attribute of that component.
- In this component determination, the component learning model 9 is referred to.
- The component learning model 9 is constructed based on a deep-learning object detection algorithm such as YOLO or SSD, chosen in consideration of multi-scale capability and operation speed.
- FIG. 4 is an explanatory diagram of the object detection algorithm.
- As shown in FIG. 4(a), the conventional detection methods used for face detection and the like divide the processing of an input into three stages: region search, feature extraction, and machine learning. That is, a region search is performed first, feature extraction is then performed according to the object to be detected, and finally an appropriate machine learning method is selected.
- In this approach, object detection is thus realized by splitting it across three separate algorithms.
- The feature descriptors, too, are essentially purpose-built for the detection target, so only specific targets can be detected. To remove this restriction, object detection algorithms based on deep learning, as shown in FIGS. 4(b) and 4(c), have been proposed.
- As shown in FIG. 4(b), in R-CNN and the like, feature extraction is realized automatically through deep learning. This enables flexible classification of various objects through network design alone; however, the region search still remains as a separate process.
- The method shown in FIG. 4(c), represented by YOLO (You Only Look Once) and SSD (Single Shot MultiBox Detector), incorporates the region search into the deep learning as well.
- In this method, by feeding the input (the captured image) into a single neural network, the extraction of item regions and the classification of their attributes are performed collectively.
- The first feature of this method is that it is a regression-style approach. Regression is an approach that predicts numerical values directly from trends in the data: instead of determining a region and then classifying what it is, the coordinates and size of an object are predicted directly. The second feature is that the processing is completed within a single network; since, after data input, deep learning alone carries the processing through to the final output, it can also be described as "end-to-end" processing.
- The YOLO process is roughly as follows. First, the input image is divided into an S×S grid. Next, the classification probabilities of the objects in each grid region are derived. Then, the parameters (x, y, height, width) and the confidence of B bounding boxes (B being a hyperparameter) are calculated. A bounding box is the circumscribed rectangle of an object region, and the confidence is the degree of agreement between the predicted bounding box and the correct one. For object detection, the product of an object's classification probability and each bounding box's confidence is used.
- FIG. 5 is a network configuration diagram of YOLO.
- In YOLO, the input image is fed into CNN (Convolutional Neural Network) layers, and the result is output through several fully connected layers.
- The output comprises the image divided into S×S regions, the five parameters of each bounding box (BB) including its confidence (classification accuracy), and the number of classes (item attributes).
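As an illustration of this score computation, the following sketch uses NumPy with illustrative hyperparameters (S=7, B=2, C=20) and a random stand-in for the network output; it forms the product of class probability and box confidence described above.

```python
import numpy as np

S, B, C = 7, 2, 20                       # grid size, boxes per cell, class count
out = np.random.rand(S, S, B * 5 + C)    # stand-in for the S x S x (B*5 + C) output

boxes = out[..., :B * 5].reshape(S, S, B, 5)  # (x, y, w, h, confidence) per box
confidence = boxes[..., 4]                    # agreement with the correct box
class_prob = out[..., B * 5:]                 # per-cell classification probabilities

# detection score for every (cell, box, class) combination
scores = confidence[..., None] * class_prob[:, :, None, :]   # shape (S, S, B, C)
keep = np.argwhere(scores > 0.2)         # boxes above a score threshold
```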
- The component learning model 9 is constructed by supervised learning using, as teacher data, data on the vehicle components of each constituent surface of the vehicle.
- This teacher data comprises partial images of vehicle components and the attributes of those components, and diverse, large-scale data is used, covering various vehicle models, various vehicle components, various shooting directions, and so on.
- To secure a large amount of data, source images subjected to image processing are also used. However, horizontal flipping, one such image-processing operation, is not applied, so that the right headlamp, left headlamp, and so on can be distinguished.
- By using such a component learning model 9, the vehicle components included in a constituent surface can be determined and extracted regardless of whether, or how severely, the vehicle is damaged.
- The component determination unit 4 filters the vehicle components identified with the component learning model 9 based on the constituent surfaces determined by the surface determination unit 3 and outputs the filtered vehicle components as the determination result.
- For example, if the constituent-surface determination result is the front and side surfaces and a tail lamp is nevertheless extracted in the component determination, this is clearly an erroneous determination: tail lamps exist on the back surface and cannot exist on the front or side surfaces.
- The same correlation holds for side doors unique to the side surface, and for headlights, front grilles, front windows, and the like unique to the front surface. Therefore, among the vehicle components obtained in the component determination, only those related to the constituent surfaces obtained in the surface determination are included in the determination result, and the others are excluded, which improves the component determination accuracy.
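The surface-based filtering can be sketched as follows; the surface-to-component map is a small illustrative assumption, not a table from the patent.

```python
# which component labels can plausibly appear on each constituent surface
PLAUSIBLE = {
    "front": {"right headlight", "left headlight", "front grille", "front window"},
    "side":  {"side door", "front window"},
    "back":  {"tail lamp", "rear window"},
}

def filter_components(detections: list[dict], surfaces: list[str]) -> list[dict]:
    """Keep only detections whose label can exist on one of the surfaces."""
    allowed = set().union(*(PLAUSIBLE[s] for s in surfaces))
    return [d for d in detections if d["label"] in allowed]

# a tail lamp detected on a front/side image is excluded as a misdetection
kept = filter_components(
    [{"label": "tail lamp", "prob": 0.7}, {"label": "side door", "prob": 0.9}],
    surfaces=["front", "side"],
)  # -> only the side door remains
```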
- FIG. 6 is a diagram illustrating a screen display example of the component determination result.
- The determination result display screen 50 has an image display area 51, a plurality of determination result display areas 52, and a plurality of check boxes 53.
- In the image display area 51, the captured image of the vehicle to be determined is displayed, together with rectangular frames indicating the individual vehicle components extracted by the component determination unit 4.
- In each determination result display area 52, a vehicle component candidate (right headlight, left headlight, front window, etc.) is displayed with its classification probability.
- Check boxes 53 are provided corresponding to the respective determination result display areas 52, and check marks are added to those satisfying a predetermined condition.
- The predetermined condition is typically that the classification probability of the candidate is equal to or higher than a predetermined threshold.
- If the user judges the determination result to be valid, the user presses the determination continue button; otherwise, the user corrects the determination result, including changing the check marks in the check boxes 53, and then presses the button. By this action, the determination result of the component determination unit 4 (including any corrections by the user) is output to the state determination unit 5 together with the captured image.
- The determination result corrected by the user may also be fed back into the component learning model 9, which deepens the learning for component determination.
- The state determination unit 5 determines the damage state of each vehicle component extracted by the component determination unit 4. This damage state determination is performed by referring to the state learning model 10, which is constructed by supervised learning using teacher data on the damage states of vehicle components.
- The configuration of the state learning model 10 basically follows that of the surface learning model 8.
- The state learning model 10 is constructed by supervised learning using data on the damage states of vehicle components as teacher data.
- This teacher data comprises partial images of damaged vehicle components and attributes classifying the damage state (for example, "replacement", "removal", and damage degrees "large", "medium", and "small"), and diverse, large-scale data is used, covering various vehicle models, various vehicle components, various shooting directions, and so on.
- "Replacement" means damage so severe that the vehicle component itself must be replaced rather than repaired.
- "Removal" means that the vehicle component itself is undamaged but must be temporarily removed from the vehicle body for the replacement or repair of another, damaged component.
- To secure a large amount of data, source images subjected to image processing may also be used.
- By using the state learning model 10 to determine the damage state for each vehicle component, the accuracy of the damage state determination can be improved.
- For the state determination unit 5, a network in which only the fully connected layers of a general-purpose model such as the above-described VGG-16 are replaced may be used.
- Fine tuning using weights pre-trained on ImageNet may be performed so that learning is possible with a small amount of data.
- Fine tuning is a technique that reuses part of an existing model to construct a new model.
- Cleanup of the training images, such as selecting close-up images and trimming the target portion, may also be performed (some trimmed images are used).
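A minimal sketch of this fine tuning, assuming PyTorch/torchvision: the convolutional part pre-trained on ImageNet is reused and only the final fully connected layer is replaced. The five-class head follows the damage attributes above; freezing all convolutional layers is one common choice, not a requirement of the text.

```python
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1")   # weights learned in advance on ImageNet
for p in model.features.parameters():
    p.requires_grad = False                     # reuse the convolutional part as-is

# replace only the last fully connected layer for the damage-state classes
model.classifier[6] = nn.Linear(4096, 5)        # replacement / removal / large / medium / small
```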
- FIG. 7 is a diagram illustrating an example of a screen display of the result of the determination of the damage state displayed on the display device 12.
- The determination result display screen 60 includes an image display area 61, a plurality of determination result display areas 62, and a plurality of check boxes 63.
- In the image display area 61, the captured image of the vehicle to be determined is displayed, together with round frames indicating the damaged portions determined by the state determination unit 5.
- In each determination result display area 62, candidates for the damage state (replacement, removal, large, medium, small) are displayed with their classification probabilities.
- The check boxes 63 are provided corresponding to the respective determination result display areas 62, and check marks are added to those satisfying a predetermined condition.
- The predetermined condition is typically that the classification probability of a damage-degree candidate is equal to or higher than a predetermined threshold.
- The determination result corrected by the user may be fed back into the state learning model 10, which deepens the learning for damage state determination.
- The estimate calculation unit 6 estimates the cost required for repairing the vehicle to be determined by referring to the estimation table 11.
- FIG. 8 is a schematic configuration diagram of the estimation table 11.
- The estimation table 11 holds vehicle component names, damage degrees, and costs including labor, in association with one another.
- A cost is identified for each vehicle component by searching the estimation table 11 using, as keys, the vehicle component identified by the component determination unit 4 and the damage state of that component determined by the state determination unit 5.
- The cost required for the vehicle repair is the sum of the costs of the individual vehicle components.
- For example, if the state determination unit 5 determines the damage degree of one vehicle component to be "medium" and that of another to be "small", and the corresponding costs are ¥200,000 and ¥30,000, the total amount is ¥230,000.
- The estimation table 11 may also include the vehicle model (for example, "ABC"). By setting the costs finely for each vehicle model in this way, an estimate appropriate to the model can be calculated accurately.
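The table lookup and summation can be sketched as follows. The entries are placeholders, not values from the patent (the amounts merely echo the ¥200,000 + ¥30,000 = ¥230,000 example above).

```python
# (component name, damage state) -> repair cost in yen, including labor
ESTIMATE_TABLE = {
    ("front bumper", "medium"): 200_000,
    ("right headlight", "small"): 30_000,
    ("right headlight", "replacement"): 55_000,
}

def estimate_repair_cost(damages: list[tuple[str, str]]) -> int:
    """Sum the per-component costs looked up by (component, damage state)."""
    return sum(ESTIMATE_TABLE[(part, state)] for part, state in damages)

total = estimate_repair_cost([("front bumper", "medium"), ("right headlight", "small")])
# total == 230_000, matching the worked example above
```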
- FIG. 9 is a diagram illustrating a screen display example of the estimation result displayed on the display device 12.
- The estimation result display screen 70 has an image display area 71 and an estimation result display area 72.
- In the image display area 71, the captured image of the vehicle to be determined is displayed.
- In the estimation result display area 72, the damaged vehicle components (parts), their damage degrees, and the costs are displayed. If there are multiple damaged vehicle components, the total cost is displayed along with the individual costs.
- When the user presses the end button, the series of processes in the vehicle state determination device 1 ends.
- As described above, in the present embodiment, the processing is divided between the component determination unit 4 and the state determination unit 5, and the damage state is determined for each vehicle component after the component determination has been performed; compared with determining everything at once, this improves the determination accuracy as a whole.
- For the component determination, the captured image is fed into a single neural network system, as represented by YOLO and SSD, and an object detection algorithm that collectively extracts vehicle components with attribute classification through a regression-style approach is adopted. This makes it possible to speed up the processing in the component determination unit 4.
- In addition, prior to the processing of the component determination unit 4, the surface determination unit 3 identifies the constituent surfaces of the vehicle in the captured image, and the component determination is then performed on the captured image. As a result, components that cannot exist on the constituent surfaces identified by the surface determination unit 3 can be excluded as erroneous determinations, improving the accuracy of the component determination.
- Furthermore, by providing the estimate calculation unit 6, which estimates the vehicle repair cost from the damage state of each vehicle component determined by the state determination unit 5, the cost required for the vehicle repair is presented to the user. This further improves the user's convenience.
Abstract
Description
本発明は、撮像画像から車両の損傷状態を判定する車両状態判定装置、その判定プログラムおよび判定方法に関する。 The present invention relates to a vehicle state determination device that determines a damage state of a vehicle from a captured image, a determination program thereof, and a determination method.
例えば、特許文献1には、事故車の損傷の評価や修理見積もりを的確かつ迅速に行うことが可能な事故車修理費見積システムが開示されている。この見積システムは、キャプチャー手段と、記憶手段と、入力手段と、表示手段と、リンク手段と、見積手段とを有する。キャプチャー手段は、事故車両の画像データを取り込む。記憶手段は、事故車修理費見積に必要な車両属性データを記憶する。入力手段は、事故車修理費見積に必要な見積データを入力する。表示手段は、事故車両画像データを含む各種データを表示する。リンク手段は、画像データが車両のどの部位の損傷を明瞭に示しているかを決定する。見積手段は、画像データおよび画像データに対応する部位の車両属性データを表示手段に同時に表示する。また、見積手段は、見積データおよび車両属性データに基いて、事故車の修理に要する費用の見積処理を行う。
For example,
しかしながら、特許文献1のシステムでは、事故車両の画像データについて、過去に修理をした車両の画像データとの近似性から損傷の程度を判定するため、適切な過去の画像データが存在しない場合には判定精度が低下する。
However, in the system of
そこで、本発明の目的は、車両の損傷状態を柔軟性をもって高速かつ精度よく判定することである。 Therefore, an object of the present invention is to determine a damaged state of a vehicle with high flexibility and at high speed and with high accuracy.
かかる課題を解決すべく、第1の発明は、部品判定部と、状態判定部とを有し、撮像画像から車両の損傷状態を判定する車両状態判定装置を提供する。部品判定部は、部品学習モデルを参照して、撮像画像における車両部品を判定する。部品学習モデルは、深層学習による物体検出アルゴリズムを用いて、車両部品に関するデータを教師データとした教師あり学習によって構築されている。物体検出アルゴリズムは、撮像画像を単一のニューラルネットワーク系に入力することで、回帰問題的なアプローチによって、撮像画像における車両部品の領域抽出を属性の分類付きでまとめて行う。状態判定部は、状態学習モデルを参照して、部品判定部によって判定された車両部品毎の損傷状態を判定する。状態学習モデルは、車両部品の損傷状態に関するデータを教師データとした教師あり学習によって構築されている。 べ く In order to solve such a problem, a first invention provides a vehicle state determination device that includes a component determination unit and a state determination unit and determines a damage state of a vehicle from a captured image. The component determination unit determines a vehicle component in the captured image with reference to the component learning model. The component learning model is constructed by supervised learning using data on vehicle components as teacher data using an object detection algorithm based on deep learning. The object detection algorithm inputs a captured image to a single neural network system, and collectively performs region extraction of vehicle components in the captured image with attribute classification by a regression problem approach. The state determination unit determines the damage state of each vehicle component determined by the component determination unit with reference to the state learning model. The state learning model is constructed by supervised learning using data on the damage state of the vehicle parts as teacher data.
ここで、第1の発明において、物体検出アルゴリズムは、YOLOまたはSSDであることが好ましい。また、第1の発明において、面判定部をさらに設けてもよい。面判定部は、面学習モデルを参照して、撮像画像における車両の構成面を判定する。面学習モデルは、車両の構成面に関するデータを教師データとした教師あり学習によって構築されている。この場合、部品判定部は、面判定部によって判定された構成面に基づいて、車両部品をフィルタリングし、このフィルタリングされた車両部品を判定結果として出力することが好ましい。また、面判定部は、撮像画像中に特定の構成面に固有の車両部品が存在する場合、この固有の車両部品に対応する構成面を判定結果に含めてもよい。 Here, in the first invention, the object detection algorithm is preferably YOLO or SSD. In the first aspect, a surface determination unit may be further provided. The surface determination unit determines the constituent surface of the vehicle in the captured image with reference to the surface learning model. The surface learning model is constructed by supervised learning using data on the constituent surfaces of the vehicle as teacher data. In this case, it is preferable that the component determination unit filters the vehicle component based on the constituent surface determined by the surface determination unit, and outputs the filtered vehicle component as a determination result. Further, when there is a vehicle component unique to a specific component surface in the captured image, the surface determination unit may include a component surface corresponding to the specific component component in the determination result.
第1の発明において、見積算出部をさらに設けてもよい。見積算出部は、見積テーブルを参照することによって、状態判定部によって判定された車両部品毎の損傷状態から、車両の修理費用を見積もる。見積テーブルは、車両部品の損傷状態に対応付けられた修理費用を車両部品毎に保持する。 に お い て In the first invention, an estimate calculation unit may be further provided. The estimation calculation unit estimates the repair cost of the vehicle from the damage state of each vehicle component determined by the state determination unit by referring to the estimation table. The estimation table holds a repair cost associated with a damaged state of a vehicle component for each vehicle component.
第1の発明において、面判定部から部品判定部への処理の移行、部品判定部から状態判定部への処理の移行、および、状態判定部から見積算出部への処理の移行のうちの少なくとも一つは、ユーザによる判定結果の修正を許容しつつ、ユーザの承認が得られたことを条件に行われることが好ましい。 In the first aspect, at least one of a shift of processing from the surface determination unit to the component determination unit, a shift of processing from the component determination unit to the state determination unit, and a shift of processing from the state determination unit to the estimation calculation unit. One is preferably performed on condition that the user's approval is obtained, while allowing the user to correct the determination result.
第2の発明は、部品判定ステップと、状態判定ステップとを有する処理をコンピュータに実行させ、撮像画像から車両の損傷状態を判定する車両状態判定プログラムを提供する。部品判定ステップでは、部品学習モデルを参照して、撮像画像における車両部品を判定する。部品学習モデルは、深層学習による物体検出アルゴリズムを用いて、車両部品に関するデータを教師データとした教師あり学習によって構築されている。物体検出アルゴリズムは、撮像画像を単一のニューラルネットワーク系に入力することで、回帰問題的なアプローチによって、撮像画像における車両部品の領域抽出を属性の分類付きでまとめて行う。状態判定ステップでは、状態学習モデルを参照して、部品判定部によって判定された車両部品毎の損傷状態を判定する。状態学習モデルは、車両部品の損傷状態に関するデータを教師データとした教師あり学習によって構築されている。 The second invention provides a vehicle state determination program that causes a computer to execute a process having a component determination step and a state determination step, and determines a damage state of the vehicle from a captured image. In the component determination step, a vehicle component in the captured image is determined with reference to the component learning model. The component learning model is constructed by supervised learning using data on vehicle components as teacher data using an object detection algorithm based on deep learning. The object detection algorithm inputs a captured image to a single neural network system, and collectively performs region extraction of vehicle components in the captured image with attribute classification by a regression problem approach. In the state determination step, the damage state of each vehicle component determined by the component determination unit is determined with reference to the state learning model. The state learning model is constructed by supervised learning using data on the damage state of the vehicle parts as teacher data.
第3の発明は、部品判定ステップと、状態判定ステップとを有し、撮像画像から車両の損傷状態を判定する車両状態判定方法を提供する。部品判定ステップでは、部品学習モデルを参照して、撮像画像における車両部品を判定する。部品学習モデルは、深層学習による物体検出アルゴリズムを用いて、車両部品に関するデータを教師データとした教師あり学習によって構築されている。物体検出アルゴリズムは、撮像画像を単一のニューラルネットワーク系に入力することで、回帰問題的なアプローチによって、撮像画像における車両部品の領域抽出を属性の分類付きでまとめて行う。状態判定ステップでは、状態学習モデルを参照して、部品判定部によって判定された車両部品毎の損傷状態を判定する。状態学習モデルは、車両部品の損傷状態に関するデータを教師データとした教師あり学習によって構築されている。 A third invention provides a vehicle state determination method that includes a component determination step and a state determination step, and determines a damage state of a vehicle from a captured image. In the component determination step, a vehicle component in the captured image is determined with reference to the component learning model. The component learning model is constructed by supervised learning using data on vehicle components as teacher data using an object detection algorithm based on deep learning. The object detection algorithm inputs a captured image to a single neural network system, and collectively performs region extraction of vehicle components in the captured image with attribute classification by a regression problem approach. In the state determination step, the damage state of each vehicle component determined by the component determination unit is determined with reference to the state learning model. The state learning model is constructed by supervised learning using data on the damage state of the vehicle parts as teacher data.
ここで、第2および第3の発明において、物体検出アルゴリズムは、YOLOまたはSSDであることが好ましい。また、第2および第3の発明において、面判定ステップをさらに設けてもよい。面判定ステップでは、面学習モデルを参照して、撮像画像における車両の構成面を判定する。面学習モデルは、車両の構成面に関するデータを教師データとした教師あり学習によって構築されている。この場合、部品判定ステップは、面判定ステップによって判定された構成面に基づいて、車両部品をフィルタリングし、このフィルタリングされた車両部品を判定結果として出力することが好ましい。また、面判定ステップは、撮像画像中に特定の構成面に固有の車両部品が存在する場合、この固有の車両部品に対応する構成面を判定結果に含めてもよい。 Here, in the second and third inventions, the object detection algorithm is preferably YOLO or SSD. In the second and third inventions, a surface determination step may be further provided. In the surface determination step, the vehicle constituent surface in the captured image is determined with reference to the surface learning model. The surface learning model is constructed by supervised learning using data on the constituent surfaces of the vehicle as teacher data. In this case, it is preferable that the component determining step filters the vehicle component based on the constituent surface determined in the surface determining step, and outputs the filtered vehicle component as a determination result. Further, in the surface determination step, when there is a vehicle component unique to the specific component surface in the captured image, the component surface corresponding to the specific vehicle component may be included in the determination result.
第2および第3の発明において、見積算出ステップをさらに設けてもよい。見積算出ステップでは、見積テーブルを参照することによって、状態判定ステップによって判定された車両部品毎の損傷状態から車両の修理費用を見積もる。見積テーブルは、車両部品の損傷状態に対応付けられた修理費用を車両部品毎に保持する。 In the second and third inventions, an estimate calculation step may be further provided. In the estimation calculation step, the repair cost of the vehicle is estimated from the damage state of each vehicle component determined in the state determination step by referring to the estimation table. The estimation table holds a repair cost associated with a damaged state of a vehicle component for each vehicle component.
第2および第3の発明において、面判定ステップから部品判定ステップへの処理の移行、部品判定ステップから状態判定ステップへの処理の移行、および、状態判定ステップから見積算出ステップへの処理の移行のうちの少なくとも一つは、ユーザによる判定結果の修正を許容しつつ、ユーザの承認が得られたことを条件に行われることが好ましい。 In the second and third inventions, a transition of processing from the surface determination step to the component determination step, a transition of processing from the component determination step to the state determination step, and a transition of processing from the state determination step to the estimation calculation step At least one of them is preferably performed on condition that the user's approval is obtained while allowing the user to correct the determination result.
本発明によれば、機械学習に基づいて車両の損傷状態を判定することで、未知の車両を含む様々な車両に対して、柔軟な対応が可能になる。その際、部品判定と損傷判定とに処理を分離し、部品判定を行った後に個々の部品の損傷判定を行うことで、全体としての判定精度の向上を図ることができる。また、撮像画像中に多くの車両部品が物体として検出され得る部品判定については、YOLOやSSDに代表されるように、撮像画像を単一のニューラルネットワーク系に入力し、回帰問題的なアプローチによって車両部品の領域抽出を属性の分類付きでまとめて行う物体検出アルゴリズムを採用することで、処理の高速化を図ることが可能になる。 According to the present invention, it is possible to flexibly deal with various vehicles including an unknown vehicle by determining the damage state of the vehicle based on machine learning. At this time, the processing is separated into the component determination and the damage determination, and the damage determination of each component is performed after the component determination is performed, so that the determination accuracy as a whole can be improved. In addition, as for component determination in which many vehicle components can be detected as an object in a captured image, the captured image is input to a single neural network system as represented by YOLO or SSD, and a regression problem approach is used. By adopting the object detection algorithm that collectively extracts the area of the vehicle component with the attribute classification, the processing speed can be increased.
図1は、本実施形態に係る車両状態判定装置1のブロック図である。この車両状態判定装置1は、ユーザによって指定された撮像画像から車両の損傷状態を判定し、車両の修理に要する概算費用をユーザに提示する。判定対象となる車両は、本実施形態では自家用の自動車を想定しているが、これは一例であって、トラック、バス、二輪車等を含めて、設計仕様次第で任意の車両を対象とすることができる。なお、車両状態判定装置1は、コンピュータプログラム(帳票レイアウト解析プログラム)をコンピュータにインストールすることによって等価的に実現することも可能である。
FIG. 1 is a block diagram of the vehicle
車両状態判定装置1は、画像受付部2と、面判定部3と、部品判定部4と、状態判定部5と、見積算出部6と、入出力インターフェース7と、学習モデル8~10と、見積テーブル11とを主体に構成されている。各処理ユニット2~6は、入出力インターフェース7を介して表示装置12に接続されており、入出力インターフェース7は、これらと表示装置12との間の入出力を司る。基本的に、処理ユニット3~6における各処理は逐次的に行われるが、隣接した処理ユニット間における処理の移行、具体的には、面判定部3から部品判定部4への移行、部品判定部4から状態判定部5への移行、および、状態判定部5から見積算出部6への移行は、ユーザによる判定結果の修正を許容した上で、ユーザの承認が得られたことを条件に行われる。これは、処理過程でユーザの意図を適宜反映することで、全体としての判定精度の向上を図るためである。ただし、処理の移行の一部については、ユーザの承認を条件とすることなく自動的に行ってもよい。なお、表示装置12がインターネットなどにネットワーク接続されている場合、入出力インターフェース7は、ネットワーク通信を行うために必要な通信機能を備える。
The vehicle
画像受付部2は、表示装置12から判定対象となる撮像画像、具体的には、損傷した車両の外観をカメラで撮像した画像を受け付ける。車両を撮像する向きは、前方、後方、側方のいずれであってもよく、斜め前方や斜め後方などであってもよい。
The
判定対象となる撮像画像は、ユーザが画面を見ながら表示装置12を操作することによって指定され、車両状態判定装置1に出力/アップロードされる。図2は、表示装置12における画像受付の画面表示例を示す図である。画像受付用の表示画面30は、画像受付領域31を有する。画像受付領域31には、判定対象となる撮像画像のサムネイルが表示される。ユーザは、ファイル参照ボタンを通じて特定の画像ファイルを指定することで、あるいは、画像受付領域31内に特定の画像ファイルをドロップすることで、判定対象を指定する。画像受付領域31に複数のブランク枠が存在することからも理解できるように、判定対象は複数指定することもでき、既に指定された判定対象であっても、取り消しボタンを通じて取り消すことができる。ユーザは、判定対象の指定が完了した場合、判定開始ボタンを押す。このアクションによって、判定対象は画像受付部2に出力される。画像受付部2によって受け付けられた撮像画像は、面判定部3に出力される。
The captured image to be determined is specified by the user operating the display device 12 while viewing the screen, and is output / uploaded to the vehicle
判定対象となる撮像画像は、カラー画像であってもよいが、メモリ使用量の低減を図るべく、グレースケール画像を受け付けるようにしてもよい。撮像画像のサイズは、システムのメモリ使用量などを考慮して適宜設定される。 The captured image to be determined may be a color image, but a grayscale image may be received in order to reduce the amount of memory used. The size of the captured image is appropriately set in consideration of the memory usage of the system and the like.
面判定部3は、判定対象となる撮像画像に写し出されている車両の構成面を判定する。ここで、「構成面」とは、立体的な車両を構成する個別の面をいい、車両の前面、背面、側面などが挙げられる。例えば、前方から車両を撮影した撮像画像の場合、判定結果は前面となり、後方から車両を撮影した撮像画像の場合、判定結果は背面となる。ただし、判定結果として得られる構成面の数は一つであるとは限らず、複数の場合もある。例えば、斜め前方から車両を撮影した撮像画像の場合、判定結果が前面および側面となるといった如くである。このような構成面の判定を行う理由は、後処理における判定精度の向上を図るためである。
The
構成面の判定では、面学習モデル8が参照される。この面学習モデル8は、人の脳神経を模したニューラルネットワーク系を主体に構成されている。ニューラルネットワーク系は、コンピュータの作業領域上に形成され、入力層と、隠れ層と、出力層とを有する。入力層は、隠れ層に入力信号を伝達する際、活性化関数による重み付けが行われる。そして、隠れ層の層数に応じた重み付けを伴う伝達が繰り返され、出力層に伝達された信号が最終的に出力される。入力層のポート数、出力層のポート数、隠れ層の層数などは任意である。出力層は、出力(前面、背面、側面など)の分類確率も出力する。 面 In the determination of the constituent surface, the surface learning model 8 is referred to. The surface learning model 8 is mainly composed of a neural network system imitating a human brain nerve. The neural network system is formed on a work area of a computer, and has an input layer, a hidden layer, and an output layer. When transmitting the input signal to the hidden layer, the input layer is weighted by the activation function. Then, transmission with weighting according to the number of hidden layers is repeated, and the signal transmitted to the output layer is finally output. The number of ports in the input layer, the number of ports in the output layer, the number of layers in the hidden layer, and the like are arbitrary. The output layer also outputs the classification probabilities of the outputs (front, back, side, etc.).
面学習モデル8の構築は、車両の構成面に関するデータを教師データとした教師あり学習によって行われる。教師データは、車両を撮影した撮像画像と、車両の構成面の分類とを有し、様々な車種、様々な車体色、様々な撮影方向などを含めて、多様かつ大量のデータが用いられる。教師あり学習では、教師データの入力に対する出力の分類確率が検証され、これに基づいて活性化関数(重み付け)の調整を繰り返すことによって、面学習モデル8が所望の状態に設定される。 The construction of the plane learning model 8 is performed by supervised learning using data on the constituent planes of the vehicle as teacher data. The teacher data includes a captured image of the vehicle and classification of the constituent surfaces of the vehicle, and various and large amounts of data are used, including various types of vehicles, various body colors, various shooting directions, and the like. In the supervised learning, the classification probability of the output with respect to the input of the teacher data is verified, and the adjustment of the activation function (weighting) is repeatedly performed based on the verification to set the surface learning model 8 to a desired state.
なお、面学習モデル8としては、ニューラルネットワークの他、サポートベクターマシン、決定木、ベイジアンネットワーク、線形回帰、多変量解析、ロジスティック回帰分析、判定分析等の機械学習手法を用いてもよい。また、畳み込みニューラルネットワークおよびそれを用いたR-CCN(Regions with CNN features)などを用いてもよい。この点は、後述する状態学習モデル10についても同様である。 As the surface learning model 8, a machine learning method such as a support vector machine, a decision tree, a Bayesian network, a linear regression, a multivariate analysis, a logistic regression analysis, and a judgment analysis may be used in addition to the neural network. Also, a convolutional neural network and R-CCN (Regions @ with @ CNN @ features) using the same may be used. This is the same for the state learning model 10 described later.
また、構成面の判定では、面学習モデル8の参照に加えて、特定の構成面に存在する固有の車両部品の有無に着目して、構成面を推定してもよい。例えば、フロントライトやフロントグリルなどは、車両の前面に固有であり、他の車両部品とは区別できる特徴的な形状を有することから、フロントライトなどの存在を以て、撮像画像中には少なくとも前面が含まれているとみなせる。このことから、撮像画像中に特定の構成面に固有の車両部品が存在する場合、この固有の車両部品に対応する構成面が判定結果に含められる。 In the determination of the component surface, in addition to the reference to the surface learning model 8, the component surface may be estimated by focusing on the presence / absence of a unique vehicle component existing on the specific component surface. For example, front lights and front grilles are unique to the front of the vehicle and have a characteristic shape that can be distinguished from other vehicle parts. Can be considered included. Therefore, when a vehicle component unique to a specific component surface exists in a captured image, a component surface corresponding to the specific component component is included in the determination result.
さらに、GPU(Graphics Processing Unit)のメモリ使用量を抑制するために、または、学習効率の向上を図るために、面学習モデル8の軽量化を図ってもよい。その一例として、VGG-16などの汎用モデルにおけるVGGブロック(畳み込み→畳み込み→プーリングで1ブロック)の個数を減らすことや、撮像画像のサイズを小さくすることが挙げられる。VGG-16は、既に学習済のモデルをCNN(畳み込みニューラルネットワーク)に用いる手法、すなわち、転移学習モデルの一つであり、畳み込み層と全結合層との合計16層を含み、畳み込みフィルタの大きさは全て3×3、全結合層は4096ユニット2層、クラス分類用の1000ユニット1層からなる。 {Furthermore, the surface learning model 8 may be reduced in weight in order to reduce the memory usage of the GPU (Graphics Processing Unit) or to improve the learning efficiency. As an example, reduction of the number of VGG blocks (one block by convolution → convolution → pooling) in a general-purpose model such as VGG-16 and reduction of the size of a captured image can be cited. VGG-16 is a method of using a trained model for a CNN (convolutional neural network), that is, one of transfer learning models, including a total of 16 layers of a convolutional layer and a fully connected layer, and a size of a convolution filter. All layers are 3 × 3, the total bonding layer is composed of two layers of 4096 units, and one unit of 1000 units for class classification.
面判定部3の判定結果は、表示装置12に出力される。図3は、表示装置12に表示される面判定結果の画面表示例を示す図である。この判定結果の表示画面40は、画像表示領域41と、複数の判定結果表示領域42と、複数のチェックボックス43とを有する。画像受付領域41には、判定対象に係る車両の撮像画像が表示される。それぞれの判定結果表示領域42には、車両の構成面の候補(前面、側面、背面)が分類確率付きで表示される。チェックボックス43は、それぞれの判定結果表示領域42に対応して設けられており、所定の条件を満たすものにはチェックマークが付されている。所定の条件としては、構成面の候補の分類確率が所定のしきい値以上であること(この場合は複数にチェックマークが付されることもある。)、構成面の分類確率が最も高いものなどが挙げられる。ユーザは、判定結果が妥当であると判断した場合、判定継続ボタンを押す。また、ユーザは、これが妥当でないと判断した場合、チェックボックス43のチックマークの変更を含む判定結果の修正を行った上で、判定継続ボタンを押す。このアクションによって、面判定部3の判定結果(ユーザによって修正された判定結果を含む。)は、撮像画像と共に部品判定部4に出力される。
The determination result of the
なお、ユーザによって修正された判定結果は、面学習モデル8に反映させてもよい。これにより、構成面の判定における学習深度を深めることができる。 The determination result corrected by the user may be reflected on the surface learning model 8. Thereby, the learning depth in the determination of the constituent surface can be increased.
部品判定部4は、撮像画像に写し出された車両部品の抽出処理、具体的には、車両部品が写し出されている部品領域と、車両部品の属性とを個別に抽出する。この部品判定では、部品学習モデル9が参照される。この学習モデル9は、マルチスケール性や動作速度などを考慮して、YOLOやSSDなどの深層学習による物体検出アルゴリズムに基づき構築されている。
(4) The
図4は、物体検出アルゴリズムの説明図である。同図(a)に示すように、顔検出などで用いられる従来の検出手法では、入力に対する処理として、領域探索、特徴量抽出、機械学習という3つの段階に別れている。すなわち、まず領域探索が行われ、つぎに検出する物体に合わせて特徴抽出が行われ、最後に適切な機械学習手法が選択される。この検出手法では、物体検出を3つのアルゴリズムに別けて実現される。特徴量についても、基本的に、検出対象に応じた専用設計になるため特定の対象しか検出できない。そこで、かかる制約を解消すべく、同図(b)および(c)に示すような深層学習による物体検出アルゴリズムが提案された。同図(b)に示すように、R-CNNなどでは、深層学習を用いることで特徴量抽出が自動で実現される。これによって、ネットワークの設計だけで、色々な物体に対する柔軟な分類が可能になる。しかしながら、領域探索については別処理として依然として残る。そこで、領域探索についても深層学習に含めたものが、YOLO(You Only Look Once)やSSD(Single Shot MultiBox Detector)に代表される同図(c)の手法である。本手法では、入力(撮像画像)を単一のニューラルネットワークに入力することで、項目領域の抽出と、その属性の分類とがまとめて行われる。本手法の特徴として、第1に、回帰問題的なアプローチであることが挙げられる。回帰(Regression)とは、データの傾向から数値を直接予測するアプローチをいい、領域を決めてからそれが何かを分類するのではなく、物体の座標と大きさが直接予測される。第2に、単一のネットワークで処理が完結することである。データ入力した後は深層学習だけで最後(出力結果)までいってしまうという意味で、「End-to-End」の処理ということもできる。本実施形態の特徴の一つは、撮像画像中に多くの車両部品が物体として検出され得る部品判定について、YOLOやSSDに代表される同図(c)の手法を採用していることである。 FIG. 4 is an explanatory diagram of the object detection algorithm. As shown in FIG. 1A, in a conventional detection method used for face detection or the like, processing for an input is divided into three stages: area search, feature extraction, and machine learning. That is, first, an area search is performed, then feature extraction is performed according to an object to be detected, and finally, an appropriate machine learning method is selected. In this detection method, the object detection is realized by being divided into three algorithms. As for the feature amount, basically, only a specific target can be detected because it is designed exclusively for the detection target. Therefore, in order to eliminate such a restriction, an object detection algorithm based on deep learning as shown in FIGS. As shown in FIG. 2B, in R-CNN and the like, feature amount extraction is automatically realized by using deep learning. This enables flexible classification of various objects by simply designing the network. However, the area search still remains as a separate process. Therefore, the method shown in FIG. 3C, which is represented by YOLO (You Only Look Once) and SSD (Single Shot MultiBox Detector), includes the area search in the deep learning. In this method, by inputting an input (captured image) to a single neural network, extraction of an item region and classification of its attribute are collectively performed. The first feature of this method is that it is a regression problem approach. Regression is an approach that predicts numerical values directly from trends in data. Instead of determining an area and then classifying it, the coordinates and size of an object are directly predicted. Second, the processing is completed on a single network. After data is input, it can be said to be “End-to-End” processing, meaning that it goes to the end (output result) only by deep learning. One of the features of the present embodiment is that a method shown in FIG. 3C typified by YOLO or SSD is used for component determination in which many vehicle components can be detected as objects in a captured image. .
例えば、YOLOの処理は、概ね以下のようになる。まず、入力画像がS×Sの領域に分割される。つぎに、それぞれの領域内における物体の分類確率が導出される。そして、B個(ハイパーパラメータ)のバウンディングボックスのパラメータ(x,y,height,width)と信頼度(confidence)とが算出される。バウンディングボックスとは、物体領域の外接四角形であり、信頼度とは、予測と正解のバウンディングボックスの一致度である。物体検出には、物体の分類確率と、各バウンディングボックスの信頼度との積が用いられる。図5は、YOLOのネットワーク構成図である。YOLOにおいて、帳票画像はCNN(Convolutional Neural Network)層に入力されると共に、複数段の全結合層を経て結果が出力される。出力は、S*S個に分割した画像領域と、信頼度(分類確度)を含むバウンディングボックス(BB)の5パラメータと、クラス数(項目の属性)とを含む。 For example, the YOLO process is generally as follows. First, the input image is divided into S × S areas. Next, the classification probability of the object in each area is derived. Then, the parameters (x, y, height, width) and the reliability (confidence) of the B (hyperparameter) bounding boxes are calculated. The bounding box is a circumscribed rectangle of the object area, and the reliability is the degree of coincidence between the prediction and the correct bounding box. For the object detection, a product of the classification probability of the object and the reliability of each bounding box is used. FIG. 5 is a network configuration diagram of YOLO. In YOLO, a form image is input to a CNN (Convolutional Neural Network) layer, and the result is output through a plurality of fully connected layers. The output includes the image area divided into S * S pieces, five parameters of a bounding box (BB) including the reliability (classification accuracy), and the number of classes (attributes of items).
The component learning model 9 is constructed by supervised learning using, as teacher data, data on the vehicle components of each constituent surface of the vehicle. This teacher data comprises partial images of vehicle components and the attributes of those components, and a large and diverse body of data is used, covering various vehicle models, various vehicle components, various shooting directions, and so on. To secure a large amount of data, images derived from source images by image processing are also used. However, horizontal flipping, one common image-processing operation, is not performed, so that the right headlamp, left headlamp, and the like remain distinguishable. By using such a component learning model 9, the vehicle components contained in a constituent surface of the vehicle can be determined and extracted regardless of whether, or to what extent, the vehicle is damaged.
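A minimal sketch of such an augmentation policy, assuming torchvision is used (the patent does not name a library): the training set is enlarged by photometric and geometric transforms, while horizontal flipping is deliberately left out.

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
    transforms.RandomRotation(degrees=5),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
    # transforms.RandomHorizontalFlip() is deliberately omitted: a mirrored
    # image would turn a right headlamp into a left one and corrupt labels.
    transforms.ToTensor(),
])
```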
The component determination unit 4 filters the vehicle components identified by means of the component learning model 9 on the basis of the constituent surfaces of the vehicle determined by the surface determination unit 3, and outputs the filtered vehicle components as its determination result. For example, if the constituent surfaces are determined to be the front and side, then even if a tail lamp is extracted as a component determination result, that result is plainly erroneous: a tail lamp exists on the rear surface and cannot exist on the front or side. Such correlations also hold for side doors (side surface only), and for headlights, front grilles, and front windows (front surface only), among others. Therefore, of the vehicle components obtained as component determination results, only those associated with the determined constituent surfaces are included in the final result and the rest are excluded, which improves the accuracy of component determination.
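A minimal sketch of this surface-consistency filter; the surface and part names in the table are illustrative stand-ins for the embodiment's actual vocabulary.

```python
SURFACE_PARTS = {
    "front": {"front_bumper", "front_grille", "headlight_l", "headlight_r", "front_window"},
    "side":  {"side_door", "side_mirror", "fender"},
    "rear":  {"rear_bumper", "tail_lamp", "trunk_lid"},
}

def filter_parts(detected_parts, surfaces):
    """Keep only parts that can exist on at least one determined surface."""
    allowed = set().union(*(SURFACE_PARTS[s] for s in surfaces))
    return [p for p in detected_parts if p["name"] in allowed]

# E.g. a "tail_lamp" detection is discarded when surfaces == ["front", "side"].
```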
The determination result of the component determination unit 4 is output to the display device 12. FIG. 6 shows an example screen display of a component determination result. This determination-result screen 50 has an image display area 51, a plurality of determination-result display areas 52, and a plurality of checkboxes 53. The image display area 51 shows the captured image of the vehicle under evaluation, together with rectangular frames indicating the individual vehicle components extracted by the component determination unit 4. Each determination-result display area 52 lists the candidates for the corresponding vehicle component (right headlight, left headlight, front window, and so on) with their classification probabilities. A checkbox 53 is provided for each determination-result display area 52, and a check mark is placed on those that satisfy a predetermined condition, typically that the candidate's classification probability is at or above a predetermined threshold. If the user judges the determination result to be valid, the user presses the continue button. If the user judges it invalid, the user corrects the determination result, including changing the check marks in the checkboxes 53, and then presses the continue button. Through this action, the determination result of the component determination unit 4 (including any result corrected by the user) is output to the state determination unit 5 together with the captured image.
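The pre-marking rule for the checkboxes can be sketched as follows; the threshold value and data layout are assumptions chosen for illustration.

```python
THRESHOLD = 0.5  # assumed value; the patent only says "a predetermined threshold"

def premark(candidates, threshold=THRESHOLD):
    """candidates: list of (part_name, probability) pairs from the detector."""
    return [{"part": name, "prob": p, "checked": p >= threshold}
            for name, p in candidates]

# premark([("headlight_r", 0.92), ("front_window", 0.31)])
# -> headlight_r pre-checked; front_window left for the user to confirm.
```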
The determination results corrected by the user may be fed back into the component learning model 9. This allows the component determination model to be progressively refined.
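One plausible way to realize this feedback, sketched under the assumption that corrections are pooled for periodic re-training (the patent only states that corrections may be reflected in the model; the storage format below is hypothetical):

```python
import json

def record_correction(path, image_id, corrected_labels):
    """Append a user-corrected sample to the re-training pool (JSONL)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps({"image": image_id, "labels": corrected_labels}) + "\n")

# A later training job can read this file and fine-tune the component
# learning model on the corrected examples.
```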
The state determination unit 5 determines the damage state of each vehicle component extracted by the component determination unit 4. This damage-state determination is performed by referring to the state learning model 10, which is constructed by supervised learning using teacher data on the damage states of vehicle components. The configuration of the state learning model 10 essentially follows that of the surface learning model 8.
The state learning model 10 is constructed by supervised learning using data on the damage states of vehicle components as teacher data. This teacher data comprises partial images of damaged vehicle components and attributes classifying their damage states (for example, "replace", "remove", and damage degrees "large", "medium", and "small"), and a large and diverse body of data is used, covering various vehicle models, various vehicle components, various shooting directions, and so on. Here, "replace" means damage severe enough that repair will not suffice and the vehicle component itself must be exchanged. "Remove" means that the component itself is undamaged but must be temporarily detached from the vehicle body so that another, damaged component can be replaced or repaired. To secure a large amount of data, images derived from source images by image processing may also be used. Determining the damage state of each vehicle component with such a state learning model 10 improves the accuracy of damage-state determination.
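The five damage-state classes can be captured as a label set for the state learning model; the English identifiers are glosses chosen here for illustration.

```python
from enum import Enum

class DamageState(Enum):
    REPLACE = "replace"   # damage beyond repair; the part itself must be exchanged
    REMOVE = "remove"     # part intact, but must be detached to service a neighboring part
    LARGE = "large"       # heavy but repairable damage
    MEDIUM = "medium"
    SMALL = "small"
```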
In the present embodiment, the state determination unit 5 uses a network based on a general-purpose model such as the above-described VGG-16, in which only the fully connected layers are replaced. In addition, because the amount of data varies greatly from component to component, fine-tuning with weights pre-trained on ImageNet may be performed so that learning succeeds even with little data. Fine-tuning is a technique for building a new model by reusing part of an existing model. Furthermore, to handle datasets in which wide shots, close-ups, and other kinds of image are mixed, cleansing may be performed by selecting close-up images and trimming the target regions (some of the trimmed images are also used).
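A minimal sketch of the described network surgery, assuming PyTorch/torchvision (the patent does not name a framework, and the head's layer sizes are assumptions): a VGG-16 backbone keeps its ImageNet-pre-trained convolutional layers while only the fully connected head is replaced for the five damage-state classes.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # replace / remove / large / medium / small

# Load VGG-16 with ImageNet weights (fine-tuning starting point).
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Keep the pre-trained convolutional features; swap only the classifier head.
model.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
    nn.Linear(4096, 1024), nn.ReLU(inplace=True), nn.Dropout(0.5),
    nn.Linear(1024, NUM_CLASSES),
)

# During fine-tuning, the new head is typically trained with a higher
# learning rate than the pre-trained features, so small per-component
# datasets still converge.
```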
The determination result of the state determination unit 5 is output to the display device 12. FIG. 7 shows an example screen display of a damage-state determination result on the display device 12. This determination-result screen 60 has an image display area 61, a plurality of determination-result display areas 62, and a plurality of checkboxes 63. The image display area 61 shows the captured image of the vehicle under evaluation, together with circular frames indicating the damaged portions determined by the state determination unit 5. Each determination-result display area 62 lists the damage-state candidates (replace, remove, large, medium, small) with their classification probabilities. A checkbox 63 is provided for each determination-result display area 62, and a check mark is placed on those that satisfy a predetermined condition, typically that the classification probability of the damage-degree candidate is at or above a predetermined threshold. If the user judges the determination result to be valid, the user presses the start-estimate button. If the user judges it invalid, the user corrects the determination result, including changing the check marks in the checkboxes 63, and then presses the start-estimate button. Through this action, the determination result of the state determination unit 5 (including any result corrected by the user) is output to the estimate calculation unit 6 together with the captured image.
The determination results corrected by the user may be fed back into the state learning model 10. This allows the damage-state determination model to be progressively refined.
The estimate calculation unit 6 estimates the cost of repairing the vehicle under evaluation by referring to the estimate table 11. FIG. 8 is a schematic configuration diagram of the estimate table 11. The estimate table 11 holds vehicle component names, degrees of damage, and costs including labor, in association with one another. Searching the estimate table 11 with the vehicle component identified by the component determination unit 4 and the damage state of that component determined by the state determination unit 5 as keys specifies the cost for each vehicle component. The cost of repairing the vehicle is the total of the costs of the individual components. For example, if the component determination unit 4 extracts a "front bumper" and a "headlight", and the state determination unit 5 judges the damage degree of the former to be "medium" and that of the latter to be "small", then the labor cost of the former is \200,000, that of the latter is \30,000, and the total is \230,000.
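The lookup and summation can be sketched as follows; apart from the two figures taken from the worked example above, the table contents are illustrative.

```python
ESTIMATE_TABLE = {
    ("front_bumper", "medium"): 200_000,  # figure from the worked example
    ("headlight", "small"): 30_000,       # figure from the worked example
}

def estimate(damages):
    """damages: list of (part_name, damage_degree) pairs."""
    costs = {d: ESTIMATE_TABLE[d] for d in damages}
    return costs, sum(costs.values())

costs, total = estimate([("front_bumper", "medium"), ("headlight", "small")])
assert total == 230_000  # matches the worked example: \200,000 + \30,000
```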
If the design assumes that the vehicle model is known when computing the cost, for instance because the model has been identified by analysis of the captured image or specified by the user, the estimate table 11 may also include the vehicle model (for example, "ABC") as an item, as shown in the figure. Setting costs finely for each vehicle model in this way allows an estimate tailored to the model to be calculated accurately.
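When the vehicle model is part of the table, the same lookup simply gains a model field in its key; the figures below are illustrative, not from the patent.

```python
ESTIMATE_TABLE_BY_MODEL = {
    ("ABC", "front_bumper", "medium"): 210_000,  # illustrative figure
    ("ABC", "headlight", "small"): 35_000,       # illustrative figure
}

def estimate_for_model(model, damages):
    """Sum costs keyed by (vehicle model, part, damage degree)."""
    return sum(ESTIMATE_TABLE_BY_MODEL[(model, part, degree)]
               for part, degree in damages)
```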
The estimation result of the estimate calculation unit 6 is output to the display device 12. FIG. 9 shows an example screen display of an estimation result on the display device 12. This estimation-result screen 70 has an image display area 71 and an estimation-result display area 72. The image display area 71 shows the captured image of the vehicle under evaluation. The estimation-result display area 72 shows the damaged vehicle components (parts), the degree of damage, and the cost. If there are multiple damaged components, the total is displayed along with the individual costs. When the user presses the end button, the series of processes in the vehicle state determination device 1 ends.
As described above, according to the present embodiment, determining the damage state of a vehicle based on machine learning makes it possible to respond flexibly to a wide variety of vehicles, including vehicles never seen before. In doing so, the processing is split between the component determination unit 4 and the state determination unit 5: component determination is performed first, and the damage state is then determined for each vehicle component. Compared with handling both in a single machine-learning stage, this improves the overall determination accuracy.
Also according to the present embodiment, for component determination, in which many vehicle components may be detected as objects in the captured image, an object detection algorithm typified by YOLO or SSD is adopted: the captured image is fed into a single neural network, and a regression-based approach extracts the vehicle component regions together with the classification of their attributes in one pass. This makes it possible to speed up the processing in the component determination unit 4.
Also according to the present embodiment, prior to the processing of the component determination unit 4, the surface determination unit 3 identifies the constituent surfaces of the vehicle in the captured image, and component determination is then performed on the captured image. Of the vehicle components obtained as determination results of the component determination unit 4, those that cannot exist on the constituent surfaces identified by the surface determination unit 3 can thus be excluded as misdetections, improving the accuracy of component determination.
Furthermore, according to the present embodiment, providing the estimate calculation unit 6, which estimates the cost of repairing the vehicle from the damage state of each vehicle component determined by the state determination unit 5, allows the cost of the repairs to be presented to the user. This further enhances the user's convenience.
Description of Reference Numerals
1 vehicle state determination device
2 image reception unit
3 surface determination unit
4 component determination unit
5 state determination unit
6 estimate calculation unit
7 input/output interface
8 surface learning model
9 component learning model
10 state learning model
11 estimate table
12 display device
Claims (18)
A vehicle state determination device that determines a damage state of a vehicle from a captured image, comprising:
a component determination unit that determines vehicle components in the captured image using an object detection algorithm based on deep learning, with reference to a component learning model constructed by supervised learning using data on vehicle components as teacher data; and
a state determination unit that determines a damage state of each vehicle component determined by the component determination unit, with reference to a state learning model constructed by supervised learning using data on damage states of vehicle components as teacher data,
wherein the object detection algorithm, by inputting the captured image into a single neural network, collectively performs, by a regression-based approach, region extraction of the vehicle components in the captured image together with classification of their attributes.
The vehicle state determination device according to claim 1, further comprising a surface determination unit that determines constituent surfaces of the vehicle in the captured image with reference to a surface learning model constructed by supervised learning using data on constituent surfaces of the vehicle as teacher data,
wherein the component determination unit filters the vehicle components based on the constituent surfaces determined by the surface determination unit and outputs the filtered vehicle components as a determination result.
A vehicle state determination program for determining a damage state of a vehicle from a captured image, the program causing a computer to execute a process comprising:
a component determination step of determining vehicle components in the captured image using an object detection algorithm based on deep learning, with reference to a component learning model constructed by supervised learning using data on vehicle components as teacher data; and
a state determination step of determining a damage state of each vehicle component determined in the component determination step, with reference to a state learning model constructed by supervised learning using data on damage states of vehicle components as teacher data,
wherein the object detection algorithm, by inputting the captured image into a single neural network, collectively performs, by a regression-based approach, region extraction of the vehicle components in the captured image together with classification of their attributes.
The vehicle state determination program according to claim 7, further comprising a surface determination step of determining constituent surfaces of the vehicle in the captured image with reference to a surface learning model constructed by supervised learning using data on constituent surfaces of the vehicle as teacher data,
wherein the component determination step filters the vehicle components based on the constituent surfaces determined in the surface determination step and outputs the filtered vehicle components as a determination result.
A vehicle state determination method for determining a damage state of a vehicle from a captured image, comprising:
a component determination step of determining vehicle components in the captured image using an object detection algorithm based on deep learning, with reference to a component learning model constructed by supervised learning using data on vehicle components as teacher data; and
a state determination step of determining a damage state of each vehicle component determined in the component determination step, with reference to a state learning model constructed by supervised learning using data on damage states of vehicle components as teacher data,
wherein the object detection algorithm, by inputting the captured image into a single neural network, collectively performs, by a regression-based approach, region extraction of the vehicle components in the captured image together with classification of their attributes.
The vehicle state determination method according to claim 13, further comprising a surface determination step of determining constituent surfaces of the vehicle in the captured image with reference to a surface learning model constructed by supervised learning using data on constituent surfaces of the vehicle as teacher data,
wherein the component determination step filters the vehicle components based on the constituent surfaces determined in the surface determination step and outputs the filtered vehicle components as a determination result.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2020551134A JPWO2020071559A1 (en) | 2018-10-05 | 2019-10-04 | Vehicle condition judgment device, its judgment program and its judgment method |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2018-190113 | 2018-10-05 | ||
| JP2018190113 | 2018-10-05 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2020071559A1 true WO2020071559A1 (en) | 2020-04-09 |
Family
ID=70055452
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2019/039413 Ceased WO2020071559A1 (en) | 2018-10-05 | 2019-10-04 | Vehicle state evaluation device, evaluation program therefor, and evaluation method therefor |
Country Status (2)
| Country | Link |
|---|---|
| JP (1) | JPWO2020071559A1 (en) |
| WO (1) | WO2020071559A1 (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2012083855A (en) * | 2010-10-07 | 2012-04-26 | Toyota Motor Corp | Object recognition device and object recognition method |
| WO2017055878A1 (en) * | 2015-10-02 | 2017-04-06 | Tractable Ltd. | Semi-automatic labelling of datasets |
| CN107392218A (en) * | 2017-04-11 | 2017-11-24 | 阿里巴巴集团控股有限公司 | Image-based vehicle damage assessment method, device and electronic equipment |
| WO2018055340A1 (en) * | 2016-09-21 | 2018-03-29 | Emergent Network Intelligence Ltd | Automatic image based object damage assessment |
Cited By (16)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP7015503B2 (en) | 2019-12-18 | 2022-02-03 | Arithmer株式会社 | Lending object management system, lending object management program and lending object management method. |
| WO2021125294A1 (en) * | 2019-12-18 | 2021-06-24 | Arithmer株式会社 | Rental target management system, rental target management program, and rental target management method |
| JPWO2021125294A1 (en) * | 2019-12-18 | 2021-12-16 | Arithmer株式会社 | Lending object management system, lending object management program and lending object management method. |
| US11361426B2 (en) | 2020-01-03 | 2022-06-14 | Tractable Ltd | Paint blending determination |
| US11244438B2 (en) | 2020-01-03 | 2022-02-08 | Tractable Ltd | Auxiliary parts damage determination |
| US11250554B2 (en) | 2020-01-03 | 2022-02-15 | Tractable Ltd | Repair/replace and labour hours determination |
| US11257204B2 (en) | 2020-01-03 | 2022-02-22 | Tractable Ltd | Detailed damage determination with image segmentation |
| US11257203B2 (en) | 2020-01-03 | 2022-02-22 | Tractable Ltd | Inconsistent damage determination |
| US11386543B2 (en) | 2020-01-03 | 2022-07-12 | Tractable Ltd | Universal car damage determination with make/model invariance |
| US11587221B2 (en) | 2020-01-03 | 2023-02-21 | Tractable Limited | Detailed damage determination with image cropping |
| US11636581B2 (en) | 2020-01-03 | 2023-04-25 | Tractable Limited | Undamaged/damaged determination |
| US12136068B2 (en) | 2020-01-03 | 2024-11-05 | Tractable Ltd | Paint refinish determination |
| CN111553268A (en) * | 2020-04-27 | 2020-08-18 | 深圳壹账通智能科技有限公司 | Vehicle component identification method, device, computer equipment and storage medium |
| JP2023124327A (en) * | 2022-02-25 | 2023-09-06 | 三菱重工機械システム株式会社 | Vehicle imaging device, vehicle imaging method, vehicle appearance inspection system, and mechanical parking device |
| JP2024095654A (en) * | 2022-12-28 | 2024-07-10 | 株式会社ブロードリーフ | Information presentation device, information presentation method, and information presentation program |
| JP7681144B2 (en) | 2022-12-28 | 2025-05-21 | 株式会社ブロードリーフ | Information presentation device, information presentation method, and information presentation program |
Also Published As
| Publication number | Publication date |
|---|---|
| JPWO2020071559A1 (en) | 2021-10-07 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP6991519B2 (en) | Vehicle damage estimation device, its estimation program and its estimation method | |
| WO2020071559A1 (en) | Vehicle state evaluation device, evaluation program therefor, and evaluation method therefor | |
| WO2020071560A1 (en) | Vehicle damage estimation device, estimation program therefor, and estimation method therefor | |
| US20210081698A1 (en) | Systems and methods for physical object analysis | |
| CN110210474B (en) | Target detection method and device, equipment and storage medium | |
| CN104424634A (en) | Object tracking method and device | |
| CN106250863A (en) | object tracking method and device | |
| CN102708572B (en) | Upgrade the method and system of model of place, use the camera arrangement of the method | |
| US11587180B2 (en) | Image processing system | |
| US11120308B2 (en) | Vehicle damage detection method based on image analysis, electronic device and storage medium | |
| KR102197724B1 (en) | Apparatus for crashworthiness prediction and method thereof | |
| CN114581456A (en) | Multi-image segmentation model construction method, image detection method and device | |
| CN110427912A (en) | A kind of method for detecting human face and its relevant apparatus based on deep learning | |
| US11604940B2 (en) | Systems and methods for part identification and assessment using multiple images | |
| CN117115783B (en) | Assembly line work behavior recognition method based on machine vision | |
| CN113887314B (en) | Vehicle driving direction recognition method, device, computer equipment and storage medium | |
| US11727551B2 (en) | Image processing system using recurrent neural networks | |
| US20250126353A1 (en) | Systems and methods for guiding the capture of vehicle images and videos | |
| US20230012796A1 (en) | Identification of a vehicle having various disassembly states | |
| Lee et al. | Performance comparison of soiling detection using anomaly detection methodology | |
| KR20170082412A (en) | Apparatus and method for generating customized object vision system | |
| US11514530B2 (en) | Image processing system using convolutional neural networks | |
| CN114445728B (en) | Processing method, device and storage medium for multi-target tracking of video | |
| Landabaso et al. | Reconstruction of 3D shapes considering inconsistent 2D silhouettes | |
| CN119130679B (en) | Vehicle damage assessment method, device, electronic device and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19868696 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2020551134 Country of ref document: JP Kind code of ref document: A |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 19868696 Country of ref document: EP Kind code of ref document: A1 |