WO2018166116A1 - Vehicle damage recognition method, electronic apparatus and computer-readable storage medium - Google Patents
Vehicle damage recognition method, electronic apparatus and computer-readable storage medium
- Publication number
- WO2018166116A1 (PCT/CN2017/091373)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- preset
- vehicle damage
- terminal
- damage
- car
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
- G06Q40/08—Insurance
Definitions
- the present invention relates to the field of computer technologies, and in particular, to a vehicle damage recognition method, an electronic device, and a computer readable storage medium.
- the main object of the present invention is to provide a vehicle damage recognition method, an electronic device and a computer readable storage medium, which aim to improve the accuracy and recall rate of vehicle damage recognition.
- a first aspect of the present application provides a vehicle damage recognition method, and the method includes the following steps:
- the server receives a fixed-loss (damage assessment) request, containing a fixed-loss photo, sent by the user through a first terminal, analyzes the fixed-loss photo with a preset first preset-type model to obtain first vehicle damage part classification information corresponding to the fixed-loss photo, and returns the first vehicle damage part classification information to the first terminal for display;
- if a rejection instruction for the first vehicle damage part classification information is received from the first terminal, the first preset-type model is used to analyze the fixed-loss photo again to obtain second vehicle damage part classification information, which is returned to the first terminal for display; if a rejection instruction for the second vehicle damage part classification information is received, an instruction to manually identify the vehicle damage part in the fixed-loss photo is sent to a predetermined second terminal.
- a second aspect of the present application provides a server, including a processing device and a storage device connected to the processing device, the storage device storing a vehicle damage recognition system, the vehicle damage recognition system including at least one computer readable instruction executable by the processing device to perform the steps of the method described above.
- a third aspect of the present application provides a computer readable storage medium having stored thereon at least one computer readable instruction executable by a processing device to perform the steps of the method described above.
- in the present invention, the fixed-loss photo is analyzed with the preset first preset-type model to obtain the first vehicle damage part classification information; if the user rejects it, the preset first preset-type model analyzes the fixed-loss photo again to obtain the second vehicle damage part classification information; and if the user also rejects the second vehicle damage part classification information, the fixed-loss photo is sent to the predetermined second terminal with an instruction to manually identify the vehicle damage part. Because the automatic identification interacts with the user and the first preset-type model automatically identifies the fixed-loss photo up to twice, recognition accuracy and the pass rate are improved while manpower and material resources are saved.
- when the vehicle damage part cannot be confirmed by the two automatic recognitions, the vehicle damage part in the fixed-loss photo is identified manually, which avoids missed or wrongly identified damage parts caused by the inability to identify them automatically and improves the accuracy and recall rate of vehicle damage identification.
- FIG. 1 is a schematic diagram of an application environment of an embodiment of a vehicle damage recognition method according to the present invention
- FIG. 2 is a schematic flow chart of a first embodiment of a vehicle damage recognition method according to the present invention
- FIG. 3 is a schematic flow chart of a second embodiment of a vehicle damage recognition method according to the present invention.
- FIG. 4 is a schematic diagram of functional modules of a first embodiment of a vehicle damage recognition system according to the present invention.
- FIG. 5 is a schematic diagram of functional modules of a second embodiment of the vehicle damage recognition system of the present invention.
- the invention provides a vehicle damage recognition method.
- FIG. 1 it is a schematic diagram of an application environment of an embodiment of a vehicle damage recognition method according to the present invention.
- the application environment diagram includes a server 1 and a terminal device 2.
- the server 1 can perform data interaction with the terminal device 2 through a suitable technology such as a network or a near field communication technology.
- the terminal device 2 includes, but is not limited to, any electronic product that can interact with a user through a keyboard, a mouse, a remote controller, a touch pad, or a voice control device, for example, a personal computer, a tablet computer, a smart phone, a personal digital assistant (PDA), a game console, an Internet Protocol Television (IPTV), a smart wearable device, etc.
- the server 1 is a device capable of automatically performing numerical calculation and/or information processing in accordance with instructions that are set or stored in advance.
- the server 1 may be a computer, a single network server, a server group composed of multiple network servers, or a cloud-computing-based cloud composed of a large number of hosts or network servers, where cloud computing is a kind of distributed computing: a super virtual computer consisting of a group of loosely coupled computers.
- the server 1 includes, but is not limited to, a storage device 11, a processing device 12, and a network interface 13 that are communicably connected to each other through a system bus. It is pointed out that Figure 1 only shows the server 1 with the components 11-13, but it should be understood that not all illustrated components are required to be implemented, and more or fewer components may be implemented instead.
- the storage device 11 includes a memory and at least one type of readable storage medium.
- the memory provides a cache for the operation of the server 1;
- the readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card type memory, or the like.
- the readable storage medium may be an internal storage unit of the server 1, such as a hard disk of the server 1; in other embodiments, the non-volatile storage medium may also be an external storage device of the server 1, for example, a plug-in hard disk attached to the server 1, a smart memory card (SMC), a Secure Digital (SD) card, a flash card, and the like.
- the readable storage medium of the storage device 11 is generally used to store an operating system installed on the server 1 and various types of application software, such as program codes of the vehicle damage recognition system 10 in an embodiment of the present application. Further, the storage device 11 can also be used to temporarily store various types of data that have been output or are to be output.
- Processing device 12 may, in some embodiments, include one or more microprocessors, microcontrollers, digital processors, and the like.
- the processing device 12 is typically used to control the operation of the server 1, such as performing control and processing related to data interaction or communication with the terminal device 2.
- the processing device 12 is configured to run program code or process data stored in the storage device 11, such as running the vehicle damage recognition system 10 and the like.
- the network interface 13 may comprise a wireless network interface or a wired network interface, which is typically used to establish a communication connection between the server 1 and other electronic devices.
- the network interface 13 is mainly used to connect the server 1 with one or more terminal devices 2, and establish a data transmission channel and a communication connection between the server 1 and one or more terminal devices 2.
- the vehicle damage recognition system 10 is stored in the storage device 11 and includes at least one computer readable instruction executable by the processing device 12 to implement the vehicle damage recognition method of the various embodiments of the present application. As described later, the at least one computer readable instruction can be classified into different logic modules depending on the functions implemented by its various parts.
- when the vehicle damage recognition system 10 is executed by the processing device 12, the following operations are performed: receiving a fixed-loss request, containing a fixed-loss photo, sent by the user through the first terminal; analyzing the fixed-loss photo with a preset first preset-type model to obtain first vehicle damage part classification information corresponding to the fixed-loss photo, and returning the first vehicle damage part classification information to the first terminal for display; and, if a rejection instruction for the first vehicle damage part classification information sent by the user through the first terminal is received, analyzing the fixed-loss photo again with the preset first preset-type model to obtain second vehicle damage part classification information corresponding to the fixed-loss photo.
- FIG. 2 is a schematic flow chart of a first embodiment of a vehicle damage recognition method according to the present invention.
- the vehicle damage recognition method includes:
- Step S10: the vehicle damage recognition system receives a fixed-loss request, containing a fixed-loss photo, sent by the user through the first terminal, analyzes the fixed-loss photo with the preset first preset-type model to obtain the first vehicle damage part classification information corresponding to the fixed-loss photo, and returns the first vehicle damage part classification information to the first terminal for display.
- the server receives, from the first terminal (for example, a mobile phone, a tablet computer, a handheld device, etc.), a fixed-loss request sent by the user that contains a fixed-loss photo of the vehicle damage part to be assessed (for example, a close-up photo of the damaged part).
- an auto insurance claim application (APP) may be pre-installed in the first terminal; the user opens the auto insurance claim APP and sends a fixed-loss request to the server through it. In another embodiment, a browser system is pre-installed in the first terminal, and the user can access the server through the browser system and send a fixed-loss request to the server through the browser system.
- after receiving the fixed-loss request containing the fixed-loss photo sent by the user, the server analyzes the acquired fixed-loss photo with the pre-generated first preset-type model to obtain the first vehicle damage part classification information corresponding to the fixed-loss photo (for example, front, side, rear, overall, etc.), and returns the first vehicle damage part classification information to the first terminal, where it is displayed on a predetermined operation interface of the first terminal (for example, the analyzed first vehicle damage part classification information is returned to the auto insurance claim APP of the first terminal and displayed on the operation interface generated by that APP).
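- as a concrete illustration of step S10, the sketch below shows how a server endpoint might receive the fixed-loss request and return the first vehicle damage part classification information. It is only a minimal sketch: the use of Flask, the endpoint path, the `classify_part` helper, and the label set are illustrative assumptions and are not specified by the patent.

```python
# Minimal sketch of the step S10 server flow (Flask assumed; all names are illustrative).
import io

from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)

# Hypothetical label set; in practice the labels come from the trained first preset-type model.
PART_LABELS = ["front", "side", "rear", "overall"]


def classify_part(image: Image.Image) -> str:
    """Placeholder for running the first preset-type model on the fixed-loss photo."""
    # A real implementation would preprocess the image and run the trained CNN.
    return PART_LABELS[0]


@app.route("/fixed-loss-request", methods=["POST"])
def handle_fixed_loss_request():
    # The first terminal (APP or browser) uploads the fixed-loss photo.
    photo_file = request.files["photo"]
    image = Image.open(io.BytesIO(photo_file.read())).convert("RGB")

    # Analyze the photo with the first preset-type model (step S10).
    first_part_classification = classify_part(image)

    # Return the first vehicle damage part classification information for display.
    return jsonify({"vehicle_damage_part": first_part_classification})
```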
- Step S20: if a rejection instruction for the first vehicle damage part classification information sent through the first terminal is received, the fixed-loss photo is analyzed again with the preset first preset-type model to obtain the second vehicle damage part classification information corresponding to the fixed-loss photo, and the second vehicle damage part classification information is returned to the first terminal for display.
- the server may receive instructions sent by the user by means of a button, a touch, a press, shaking the mobile phone, a fingerprint, or the like. For example, after the first vehicle damage part classification information is displayed on a predetermined operation interface of the first terminal, the user may send feedback on the first vehicle damage part classification information to the server, such as a confirmation instruction or a rejection instruction, by long-pressing or short-pressing the first terminal screen, or by clicking a button on the first terminal.
- the predetermined operation interface of the first terminal includes a vehicle damage part classification information display area, a vehicle damage part classification information confirmation button, and a vehicle damage part classification information rejection button. If the user confirms the first vehicle damage part classification information through the confirmation button, the server ends the vehicle damage part identification process; if the user rejects the first vehicle damage part classification information through the rejection button, the server analyzes the acquired fixed-loss photo again with the generated first preset-type model to obtain the second vehicle damage part classification information corresponding to the fixed-loss photo. Because the second vehicle damage part classification information results from re-analyzing the fixed-loss photo with the same first preset-type model, it may be the same as or different from the first vehicle damage part classification information. The analyzed second vehicle damage part classification information is returned to the first terminal and displayed on the predetermined operation interface of the first terminal.
- Step S30: if a rejection instruction for the second vehicle damage part classification information sent through the first terminal is received, an instruction to manually identify the vehicle damage part in the fixed-loss photo is sent to the predetermined second terminal so that the vehicle damage part can be identified manually.
- if the user confirms the second vehicle damage part classification information, the server ends the vehicle damage part identification process; if the user rejects the second vehicle damage part classification information through the rejection button, the server sends an instruction to manually identify the vehicle damage part in the fixed-loss photo to the predetermined second terminal (for example, the terminal of the insurance loss-assessment personnel) so that the vehicle damage part can be identified manually.
- in this embodiment, the fixed-loss photo is analyzed with the preset first preset-type model to obtain the first vehicle damage part classification information; if the user rejects it, the preset first preset-type model analyzes the fixed-loss photo again to obtain the second vehicle damage part classification information; and if the user also rejects the second vehicle damage part classification information, an instruction to manually identify the vehicle damage part is sent to the predetermined second terminal. Because the automatic identification interacts with the user and the first preset-type model automatically identifies the fixed-loss photo up to twice, recognition accuracy and the pass rate are improved while manpower and material resources are saved.
- when the vehicle damage part cannot be confirmed by the two automatic recognitions, the vehicle damage part in the fixed-loss photo is identified manually, which avoids missed or wrongly identified damage parts caused by the inability to identify them automatically and improves the accuracy and recall rate of vehicle damage identification.
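- the overall decision flow of steps S10 to S30 can be summarized in the short sketch below. All helper functions are illustrative stubs standing in for the operations described above; they are not APIs defined by the patent.

```python
# Sketch of the two-pass automatic recognition with manual fallback (steps S10 -> S20 -> S30).
# Every helper is a stub standing in for the interactions described in the text.

def analyze_with_first_model(fixed_loss_photo: bytes) -> str:
    """Stub for the first preset-type model (returns a damage part classification)."""
    return "front"


def user_confirms(terminal_id: str, classification: str) -> bool:
    """Stub for waiting on the user's confirmation/rejection instruction from the first terminal."""
    return False


def send_manual_identification_request(terminal_id: str, fixed_loss_photo: bytes) -> None:
    """Stub for sending the manual-identification instruction to the second terminal."""
    print(f"Manual identification requested on terminal {terminal_id}")


def recognize_damage_part(fixed_loss_photo: bytes, first_terminal: str, second_terminal: str):
    # Step S10: first automatic analysis; the result is shown to the user.
    first_info = analyze_with_first_model(fixed_loss_photo)
    if user_confirms(first_terminal, first_info):
        return first_info

    # Step S20: the user rejected the first result, so the same model analyzes the photo again.
    second_info = analyze_with_first_model(fixed_loss_photo)
    if user_confirms(first_terminal, second_info):
        return second_info

    # Step S30: both automatic results were rejected; hand off to manual identification.
    send_manual_identification_request(second_terminal, fixed_loss_photo)
    return None
```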
- a second embodiment of the present invention provides a vehicle damage recognition method.
- the above step S20 is replaced by:
- Step S201: if an instruction to manually frame the vehicle damage part issued through the first terminal is received, the first terminal generates an area selection frame of a preset size and shape at a preset position of the display area of the fixed-loss photo; the area selection frame allows the user to adjust it in preset directions to select the fixed-loss photo feature area, and the selected fixed-loss photo feature area is sent to the server;
- Step S202: the server receives the fixed-loss photo feature area and analyzes it to obtain the corresponding second vehicle damage part classification information.
- the predetermined operation interface of the first terminal further includes a fixed-loss photo display area and a manual framing button for the vehicle damage part. If the user confirms the first vehicle damage part classification information through the vehicle damage part classification information confirmation button, the server ends the vehicle damage part identification process. If the user issues a manual framing instruction through the manual framing button, or rejects the first vehicle damage part classification information through the vehicle damage part classification information rejection button, the first terminal (for example, the first terminal's auto insurance claim APP) responds to the instruction by generating an area selection frame of a preset size and shape (for example, a rectangle of X*Y pixels) at a preset position (for example, the geometric center) of the fixed-loss photo display area. The area selection frame allows the user to manually adjust its boundary lines within the fixed-loss photo in preset directions (for example, up, down, left, and right) to select the fixed-loss photo feature area. When the first terminal receives a secondary identification instruction issued by the user that contains the fixed-loss photo feature area selected with the area selection frame, the first terminal (for example, its auto insurance claim APP) responds to the secondary identification instruction and sends the fixed-loss photo feature area to the server.
- after receiving the fixed-loss photo feature area, the server analyzes it to obtain the second vehicle damage part classification information corresponding to the fixed-loss photo.
- in this embodiment, when the first vehicle damage part classification information obtained by analyzing the fixed-loss photo is rejected by the user as a misclassification, the user first manually selects the feature area of the fixed-loss photo to be identified before the photo is re-analyzed, and the selected feature area is then analyzed a second time to obtain the corresponding second vehicle damage part classification information. Because the secondary analysis operates on a feature area of the fixed-loss photo confirmed by the user, it effectively improves the accuracy of the secondary recognition.
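- one possible reading of steps S201/S202 in code: the first terminal reports the user-adjusted selection frame as pixel coordinates, and the server crops that feature area before the second analysis. The coordinate format and the injected `classify_part` callable are assumptions for illustration only.

```python
# Sketch of cropping the user-selected feature area before the secondary analysis
# (assumes the selection frame arrives as pixel coordinates; names are illustrative).
from typing import Callable, Tuple

from PIL import Image


def crop_feature_area(photo_path: str, frame: Tuple[int, int, int, int]) -> Image.Image:
    """Cut out the fixed-loss photo feature area framed by the user (left, top, right, bottom)."""
    photo = Image.open(photo_path).convert("RGB")
    return photo.crop(frame)


def secondary_analysis(photo_path: str,
                       frame: Tuple[int, int, int, int],
                       classify_part: Callable[[Image.Image], str]) -> str:
    """Re-run the part classifier, but only on the user-confirmed feature area."""
    feature_area = crop_feature_area(photo_path, frame)
    return classify_part(feature_area)

# Example: a frame the user centred around the damaged door.
# second_info = secondary_analysis("fixed_loss.jpg", (120, 80, 520, 380), classify_part=my_cnn_classifier)
```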
- the generating step of the first preset-type model includes: obtaining, according to a preset vehicle damage part classification, the claim photos corresponding to each preset vehicle damage part classification from a preset auto insurance claim database; preprocessing the claim photos corresponding to each preset vehicle damage part classification to convert their format into a preset format; and using the converted preset-format pictures corresponding to each preset vehicle damage part classification to train a convolutional neural network model of a preset model structure, thereby generating a convolutional neural network model corresponding to each preset vehicle damage part classification.
- in this embodiment, the first preset-type model is a convolutional neural network (CNN) model. The first preset-type model generating rule is: acquire, according to the preset vehicle damage part classification, the claim photos corresponding to each preset vehicle damage part from a preset auto insurance claim database; preprocess the acquired claim photos of each preset vehicle damage part classification to convert them into a preset format (for example, the leveldb format); and use the converted preset-format pictures of each preset vehicle damage part classification to train the CNN model of the preset model structure, generating a CNN model corresponding to each preset vehicle damage part classification.
- the purpose of training is to optimize the values of the weights in the CNN model, so that the CNN model as a whole can be well applied to the classification and identification of vehicle damage parts in practical applications.
- the specific training process is as follows: Before the training starts, the system randomly and uniformly generates the initial values of the weights in the CNN model (for example, -0.05 to 0.05).
- the CNN model is trained using stochastic gradient descent. The entire training process can be divided into two stages: forward propagation and backward propagation. In the forward propagation stage, the system randomly samples examples from the training data set and inputs them into the CNN network to obtain the actual outputs; in the backward propagation stage, the error between the actual outputs and the expected labels is propagated back through the network to update the weights.
- the training process is iterated a number of times (for example, 100 times), and training ends when the overall effective error of the CNN model falls below a predetermined threshold (for example, plus or minus 0.01).
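- the training procedure described above (uniform weight initialization in a small range such as [-0.05, 0.05], stochastic gradient descent with forward and backward passes, stopping after a fixed number of iterations or when the error falls below a threshold) could be sketched as follows. PyTorch, the batch size, and the learning rate are assumptions; the patent does not prescribe a specific framework or hyperparameters.

```python
# Sketch of the CNN training loop described above (PyTorch assumed; hyperparameters illustrative).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader


def init_weights(module: nn.Module) -> None:
    """Uniformly initialize weights in a small range, e.g. [-0.05, 0.05], before training starts."""
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        nn.init.uniform_(module.weight, -0.05, 0.05)
        if module.bias is not None:
            nn.init.zeros_(module.bias)


def train(model: nn.Module, dataset, max_epochs: int = 100, error_threshold: float = 0.01) -> nn.Module:
    model.apply(init_weights)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)  # random sampling of training examples
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # stochastic gradient descent
    criterion = nn.CrossEntropyLoss()

    for epoch in range(max_epochs):                 # iterate, e.g., up to 100 times
        total_error = 0.0
        for images, labels in loader:
            optimizer.zero_grad()
            outputs = model(images)                 # forward propagation
            loss = criterion(outputs, labels)
            loss.backward()                         # backward propagation
            optimizer.step()                        # weight update
            total_error += loss.item()
        mean_error = total_error / len(loader)
        if mean_error < error_threshold:            # stop once the overall error is small enough
            break
    return model
```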
- after receiving a confirmation instruction for the first vehicle damage part classification information or the second vehicle damage part classification information sent through the first terminal, the method further includes:
- the server analyzes the fixed-loss photo with a preset second preset-type model to determine the vehicle damage level corresponding to the fixed-loss photo, looks up the repair method corresponding to the determined vehicle damage part and vehicle damage level according to a pre-stored mapping relationship among vehicle damage parts, vehicle damage levels, and repair methods, and returns the determined vehicle damage part, vehicle damage level, and corresponding repair method to the first terminal for display;
- if a rejection instruction for the vehicle damage level or the repair method issued through the first terminal is received, the server sends to the predetermined second terminal an instruction to manually identify the vehicle damage level or the repair method for the fixed-loss photo, so that the vehicle damage level or the repair method can be identified manually.
- the predetermined operation interface of the first terminal further includes a vehicle damage level information display area and a repair method information display area; the vehicle damage part classification information display area, the vehicle damage level information display area, and the repair method information display area each correspond to one selection item. The server analyzes the fixed-loss photo with the pre-generated second preset-type model to determine the vehicle damage level corresponding to the fixed-loss photo, and, according to the pre-stored mapping relationship among vehicle damage parts, vehicle damage levels, and repair methods, finds the repair method corresponding to the determined vehicle damage part and vehicle damage level (for example, for sheet metal parts, the repair methods include full spray only, light sheet metal, light sheet metal + full spray, heavy sheet metal + full spray, replacement, etc.). The determined first vehicle damage part classification information with its corresponding vehicle damage level and repair method, or the determined second vehicle damage part classification information with its corresponding vehicle damage level and repair method, is then returned to the first terminal and displayed on the predetermined operation interface of the first terminal.
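- the pre-stored mapping among vehicle damage part, vehicle damage level, and repair method is essentially a lookup table. A minimal sketch is shown below; the table contents and key names are invented examples, not data from the patent, and the actual mapping would come from the server's stored data.

```python
# Sketch of looking up the repair method from the (damage part, damage level) mapping.
# The table contents below are invented examples, not data from the patent.
from typing import Optional

REPAIR_METHOD_MAP = {
    ("left_front_door", "level_1"): "light sheet metal",
    ("left_front_door", "level_2"): "light sheet metal + full spray",
    ("left_front_door", "level_3"): "heavy sheet metal + full spray",
    ("left_front_door", "level_4"): "replacement",
}


def lookup_repair_method(damage_part: str, damage_level: str) -> Optional[str]:
    """Return the pre-stored repair method for a damage part and level, or None if unmapped."""
    return REPAIR_METHOD_MAP.get((damage_part, damage_level))

# Example: lookup_repair_method("left_front_door", "level_2") -> "light sheet metal + full spray"
```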
- if the user rejects the vehicle damage level, the server sends an instruction to manually identify the vehicle damage level for the fixed-loss photo to the predetermined second terminal (for example, the terminal of the insurance loss-assessment personnel).
- if the user rejects the repair method, the server sends an instruction to manually identify the repair method to the predetermined second terminal (for example, the terminal of the insurance loss-assessment personnel).
- in this embodiment, the vehicle damage level and the repair method corresponding to the determined vehicle damage part are also recognized automatically, and when the recognized vehicle damage level or repair method is wrong, manual identification can be carried out; this identifies the vehicle damage more comprehensively, so that subsequent vehicle damage handling can be carried out more conveniently and quickly.
- the generating step of the second preset type model includes:
- a predetermined number of fixed-loss photos corresponding to each preset vehicle damage level of each vehicle damage part are obtained from the preset auto insurance claim database; the acquired fixed-loss photos of each vehicle damage part and each preset vehicle damage level classification are preprocessed to convert them into a preset size and a preset format; and the converted preset-format pictures of each vehicle damage part and each preset vehicle damage level are used to train a convolutional neural network model of the preset model structure, generating a convolutional neural network model corresponding to each preset vehicle damage level.
- in this embodiment, the second preset-type model is a convolutional neural network (CNN) model. The generating step of the second preset-type model includes: the server classifies according to preset vehicle damage levels; for example, the preset vehicle damage level classification includes first-level damage (for example, damage with no deformation and no rupture), second-level damage (for example, two or fewer slight recoverable deformations, without rupture), third-level damage (for example, one or more serious recoverable deformations, or more than three slight recoverable deformations, without rupture), four-level damage (for example, damage that cannot be repaired manually), and so on. From a preset auto insurance claim database (for example, a database that stores the mapping relationships or tag data among the preset vehicle damage level classifications, the vehicle damage parts, and the fixed-loss photos, where a fixed-loss photo is a photo taken by the repair shop at the time of loss assessment), the server obtains a preset number (for example, 100,000) of fixed-loss photos for each vehicle damage part corresponding to each preset vehicle damage level classification; for example, it obtains 100,000 fixed-loss photos of the left front door with first-level damage. The server then, according to the preset model generation rule, uses the obtained fixed-loss photos of each vehicle damage part corresponding to each preset vehicle damage level classification to generate a second preset-type model for analyzing the preset vehicle damage level classification corresponding to a fixed-loss photo (for example, based on the preset number of fixed-loss photos of each vehicle damage part corresponding to first-level damage, a second preset-type model for analyzing the vehicle damage level corresponding to a fixed-loss photo is generated).
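- the preprocessing step (converting each fixed-loss photo to a preset size and preset format before training) might look like the sketch below. The 224*224 size, JPEG output, and per-label directory layout are illustrative choices; the patent only requires some preset size and format (it mentions leveldb as one example format).

```python
# Sketch of preprocessing fixed-loss photos into a preset size and format before training.
# The 224x224 size, JPEG output, and directory layout are illustrative assumptions.
from pathlib import Path

from PIL import Image

PRESET_SIZE = (224, 224)


def preprocess_photos(source_dir: str, target_dir: str) -> None:
    """Resize every fixed-loss photo and store it under its damage-level label directory."""
    for photo_path in Path(source_dir).rglob("*.jpg"):
        label = photo_path.parent.name            # e.g. "level_1", "level_2", ...
        image = Image.open(photo_path).convert("RGB").resize(PRESET_SIZE)
        out_dir = Path(target_dir) / label
        out_dir.mkdir(parents=True, exist_ok=True)
        image.save(out_dir / photo_path.name, format="JPEG")
```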
- the preset model generation rule is: train the CNN model of the preset model structure using the preset-format pictures of each vehicle damage part corresponding to each preset vehicle damage level, to generate the CNN model for classifying each vehicle damage part into each preset vehicle damage level.
- the purpose of the training is to optimize the values of the weights in the CNN model so that the CNN model as a whole can be applied well, in practical applications, to classifying each vehicle damage part into the corresponding preset vehicle damage level.
- the CNN model may have seven layers: five convolutional layers, one downsampling layer, and one fully connected layer. Each convolutional layer is formed by feature maps constructed from a number of feature vectors; the function of a feature map is to extract key features with a convolution filter. The function of the downsampling layer is to remove redundantly expressed feature points and reduce the number of extracted features through sampling, thereby improving the efficiency of data transfer between network layers; available sampling methods include max sampling, mean sampling, and random sampling. The role of the fully connected layer is to connect the preceding convolutional and downsampling layers and compute the weight matrix used for the subsequent actual classification. After entering the CNN model, each picture goes through forward and backward iterations; each iteration produces a probability distribution, the probability distributions of multiple iterations are superimposed, and the category with the maximum value in the superimposed distribution is taken as the final classification result.
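- a seven-layer network of the kind described (five convolutional layers, one downsampling layer, one fully connected layer) could be sketched in PyTorch as below. The channel counts, kernel sizes, 224*224 input, and number of output classes are assumptions for illustration; the patent does not fix them.

```python
# Sketch of a seven-layer CNN: five convolutional layers, one downsampling (max-pooling)
# layer, and one fully connected layer. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class DamageCNN(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),    # convolutional layer 1
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),   # convolutional layer 2
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),   # convolutional layer 3
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),   # convolutional layer 4
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),   # convolutional layer 5
            nn.MaxPool2d(kernel_size=4),                              # downsampling layer (max sampling)
        )
        # Fully connected layer mapping the pooled features to the damage classes.
        self.classifier = nn.Linear(64 * 56 * 56, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)              # feature extraction by the convolution filters
        x = torch.flatten(x, 1)
        return self.classifier(x)         # weight matrix applied for the actual classification

# Example with a 224x224 RGB input: DamageCNN()(torch.randn(1, 3, 224, 224)) has shape (1, 4).
```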
- the present invention further provides a vehicle damage recognition system that operates in the server 1 described above.
- FIG. 4 is a schematic diagram of functional modules of the first embodiment of the vehicle damage recognition system 10 of the present invention.
- the vehicle damage recognition system 10 includes:
- the first analysis module 01 is configured to receive a fixed-loss request, containing a fixed-loss photo, sent by the user through the first terminal, analyze the fixed-loss photo with the preset first preset-type model to obtain the first vehicle damage part classification information corresponding to the fixed-loss photo, and return the first vehicle damage part classification information to the first terminal for display;
- the first analysis module 01 receives, from the first terminal (for example, a mobile phone, a tablet computer, a handheld device, etc.), a fixed-loss request sent by the user that contains a fixed-loss photo of the vehicle damage part to be assessed (for example, a close-up photo of the damaged part).
- an auto insurance claim APP may be pre-installed in the first terminal, and the user opens the auto insurance claim APP and sends a fixed-loss request to the first analysis module 01 through it; alternatively, a browser system is pre-installed in the first terminal, and the user can access the first analysis module 01 of the vehicle damage recognition system 10 through the browser system and send a fixed-loss request to the first analysis module 01 through the browser system.
- after receiving the fixed-loss request containing the fixed-loss photo sent by the user, the first analysis module 01 analyzes the acquired fixed-loss photo with the pre-generated first preset-type model to obtain the first vehicle damage part classification information corresponding to the fixed-loss photo (for example, front, side, rear, overall, etc.), and returns the analyzed first vehicle damage part classification information to the first terminal, where it is displayed on a predetermined operation interface of the first terminal (for example, the analyzed first vehicle damage part classification information is returned to the auto insurance claim APP of the first terminal and displayed on the operation interface generated by that APP).
- the second analysis module 02 is configured to: if a rejection instruction for the first vehicle damage part classification information sent by the user through the first terminal is received, analyze the fixed-loss photo again with the preset first preset-type model to obtain the second vehicle damage part classification information corresponding to the fixed-loss photo, and return the second vehicle damage part classification information to the first terminal for display;
- the second analysis module 02 receives the user's feedback on the first vehicle damage part classification information sent through the first terminal, such as an instruction confirming that the first vehicle damage part classification information is correct or an instruction rejecting it. It should be noted that, in this embodiment, the second analysis module 02 can receive instructions sent by the user by means of a button, a touch, a press, shaking the mobile phone, a fingerprint, and the like; for example, after the first vehicle damage part classification information is displayed on the predetermined operation interface of the first terminal, the user may send feedback on it to the second analysis module 02, such as a confirmation instruction or a rejection instruction, by long-pressing or short-pressing the first terminal screen. The manner of feedback is not limited herein.
- the predetermined operation interface of the first terminal includes a vehicle damage part classification information display area, a vehicle damage part classification information confirmation button, and a vehicle damage part classification information rejection button. If the user confirms the first vehicle damage part classification information through the confirmation button, the vehicle damage recognition system 10 ends the vehicle damage part identification process; if the user rejects the first vehicle damage part classification information through the rejection button, the second analysis module 02 analyzes the acquired fixed-loss photo again with the generated first preset-type model to obtain the second vehicle damage part classification information corresponding to the fixed-loss photo. Because the second vehicle damage part classification information results from re-analyzing the fixed-loss photo with the first preset-type model, it may be the same as or different from the first vehicle damage part classification information. The analyzed second vehicle damage part classification information is returned to the first terminal and displayed on the predetermined operation interface of the first terminal.
- the manual identification module 03 is configured to: if a rejection instruction for the second vehicle damage part classification information sent by the user through the first terminal is received, send to the predetermined second terminal an instruction to manually identify the vehicle damage part in the fixed-loss photo, so that the vehicle damage part can be identified manually.
- if the user confirms the second vehicle damage part classification information, the vehicle damage recognition system 10 ends the vehicle damage part identification process; if the user rejects the second vehicle damage part classification information through the vehicle damage part classification information rejection button, the manual identification module 03 sends an instruction to manually identify the vehicle damage part in the fixed-loss photo to the predetermined second terminal (for example, the terminal of the insurance loss-assessment personnel), so that the vehicle damage part can be identified manually.
- in this embodiment, the fixed-loss photo is analyzed with the preset first preset-type model to obtain the first vehicle damage part classification information; if the user rejects it, the preset first preset-type model analyzes the fixed-loss photo again to obtain the second vehicle damage part classification information; and if the user also rejects the second vehicle damage part classification information, an instruction to manually identify the vehicle damage part in the fixed-loss photo is sent to the predetermined second terminal. Because the automatic identification interacts with the user and the first preset-type model automatically identifies the fixed-loss photo up to twice, recognition accuracy and the pass rate are improved while manpower and material resources are saved.
- when the vehicle damage part cannot be confirmed by the two automatic recognitions, the vehicle damage part in the fixed-loss photo is identified manually, which avoids missed or wrongly identified damage parts caused by the inability to identify them automatically and improves the accuracy and recall rate of vehicle damage identification.
- further, the foregoing second analysis module 02 is further configured to receive the fixed-loss photo feature area, which is obtained as follows: if an instruction to manually frame the vehicle damage part issued through the first terminal is received, the first terminal generates an area selection frame of a preset size and shape at a preset position of the display area of the fixed-loss photo, and the area selection frame allows the user to adjust it in preset directions to select the fixed-loss photo feature area.
- the predetermined operation interface of the first terminal further includes a fixed-loss photo display area and a manual framing button for the vehicle damage part. If the user confirms the first vehicle damage part classification information through the vehicle damage part classification information confirmation button, the vehicle damage recognition system 10 ends the vehicle damage part identification process. If the user issues a manual framing instruction through the manual framing button, or rejects the first vehicle damage part classification information through the vehicle damage part classification information rejection button, the first terminal (for example, the first terminal's auto insurance claim APP) responds to the instruction by generating an area selection frame of a preset size and shape (for example, a rectangle of X*Y pixels) at a preset position (for example, the geometric center) of the fixed-loss photo display area; the area selection frame allows the user to manually adjust its boundary lines within the fixed-loss photo in preset directions (for example, up, down, left, and right) to select the fixed-loss photo feature area. When the first terminal receives a secondary identification instruction issued by the user that contains the fixed-loss photo feature area selected with the area selection frame, the first terminal (for example, its auto insurance claim APP) responds to the secondary identification instruction and sends the fixed-loss photo feature area to the second analysis module 02.
- the second analysis module 02 analyzes the fixed-loss photo feature area to obtain the second vehicle damage part classification information corresponding to the fixed-loss photo.
- in this embodiment, when the first vehicle damage part classification information obtained by analyzing the fixed-loss photo is rejected by the user as a misclassification, the user first manually selects the feature area of the fixed-loss photo to be identified before the photo is re-analyzed, and the selected feature area is then analyzed a second time to obtain the corresponding second vehicle damage part classification information. Because the secondary analysis operates on a feature area of the fixed-loss photo confirmed by the user, it effectively improves the accuracy of the secondary recognition.
- the generating step of the first preset-type model includes: obtaining, according to a preset vehicle damage part classification, the claim photos corresponding to each preset vehicle damage part classification from a preset auto insurance claim database; preprocessing the claim photos corresponding to each preset vehicle damage part classification to convert their format into a preset format; and using the converted preset-format pictures corresponding to each preset vehicle damage part classification to train a convolutional neural network model of a preset model structure, generating a convolutional neural network model corresponding to each preset vehicle damage part classification.
- in this embodiment, the first preset-type model is a CNN model. The first preset-type model generating rule is: acquire, according to the preset vehicle damage part classification, the claim photos corresponding to each preset vehicle damage part from a preset auto insurance claim database; preprocess the acquired claim photos of each preset vehicle damage part classification to convert them into a preset format (for example, the leveldb format); and use the converted preset-format pictures of each preset vehicle damage part classification to train the CNN model of the preset model structure, generating a CNN model corresponding to each preset vehicle damage part classification.
- the purpose of training is to optimize the values of the weights in the CNN model, so that the CNN model as a whole can be well applied to the classification and identification of vehicle damage parts in practical applications.
- the specific training process is as follows: Before the training starts, the system randomly and uniformly generates the initial values of the weights in the CNN model (for example, -0.05 to 0.05).
- the CNN model is trained using stochastic gradient descent. The entire training process can be divided into two stages: forward propagation and backward propagation. In the forward propagation stage, the system randomly samples examples from the training data set and inputs them into the CNN network to obtain the actual outputs; in the backward propagation stage, the error between the actual outputs and the expected labels is propagated back through the network to update the weights.
- the training process is iterated a number of times (for example, 100 times), and training ends when the overall effective error of the CNN model falls below a predetermined threshold (for example, plus or minus 0.01).
- a second embodiment of the present invention provides a vehicle damage recognition system 10. Based on the foregoing embodiment, the system further includes:
- the third analysis module 04 is configured to: after a confirmation instruction for the first vehicle damage part classification information or the second vehicle damage part classification information sent through the first terminal is received, analyze the fixed-loss photo with the preset second preset-type model to determine the vehicle damage level corresponding to the fixed-loss photo, look up the repair method corresponding to the determined vehicle damage part and vehicle damage level according to the pre-stored mapping relationship among vehicle damage parts, vehicle damage levels, and repair methods, and return the determined vehicle damage part, vehicle damage level, and corresponding repair method to the first terminal for display;
- the manual identification module 03 is further configured to: if a rejection instruction for the vehicle damage level or the repair method issued through the first terminal is received, send to the predetermined second terminal an instruction to manually identify the vehicle damage level or the repair method for the fixed-loss photo, so that the vehicle damage level or the repair method can be identified manually.
- the predetermined operation interface of the first terminal further includes a vehicle damage level information display area and a repair method information display area; the vehicle damage part classification information display area, the vehicle damage level information display area, and the repair method information display area each correspond to one selection item. The third analysis module 04 analyzes the fixed-loss photo with the pre-generated second preset-type model to determine the vehicle damage level corresponding to the fixed-loss photo, and, according to the pre-stored mapping relationship among vehicle damage parts, vehicle damage levels, and repair methods, finds the repair method corresponding to the determined vehicle damage part and vehicle damage level (for example, for sheet metal parts, the repair methods include full spray only, light sheet metal, light sheet metal + full spray, heavy sheet metal + full spray, replacement, etc.). The determined first vehicle damage part classification information with its corresponding vehicle damage level and repair method, or the determined second vehicle damage part classification information with its corresponding vehicle damage level and repair method, is then returned to the first terminal and displayed on the predetermined operation interface of the first terminal.
- if the user rejects the vehicle damage level, the manual identification module 03 sends an instruction to manually identify the vehicle damage level for the fixed-loss photo to the predetermined second terminal (for example, the terminal of the insurance loss-assessment personnel), so that the vehicle damage level can be identified manually.
- if the user rejects the repair method, the manual identification module 03 sends an instruction to manually identify the repair method for the fixed-loss photo to the predetermined second terminal (for example, the terminal of the insurance loss-assessment personnel), so that the repair method can be identified manually.
- in this embodiment, the vehicle damage level and the repair method corresponding to the determined vehicle damage part are also recognized automatically, and when the recognized vehicle damage level or repair method is wrong, manual identification can be carried out; this identifies the vehicle damage more comprehensively, so that subsequent vehicle damage handling can be carried out more conveniently and quickly.
- the generating step of the second preset type model includes:
- a predetermined number of fixed-loss photos corresponding to each preset vehicle damage level of each vehicle damage part are obtained from the preset auto insurance claim database; the acquired fixed-loss photos of each vehicle damage part and each preset vehicle damage level classification are preprocessed to convert them into a preset size and a preset format; and the converted preset-format pictures of each vehicle damage part and each preset vehicle damage level are used to train a convolutional neural network model of the preset model structure, generating a convolutional neural network model corresponding to each preset vehicle damage level.
- in this embodiment, the second preset-type model is a convolutional neural network (CNN) model. The generating step of the second preset-type model includes: classifying according to preset vehicle damage levels; for example, the preset vehicle damage level classification includes first-level damage (for example, damage with no deformation and no rupture), second-level damage (for example, two or fewer slight recoverable deformations, without rupture), third-level damage (for example, one or more serious recoverable deformations, or more than three slight recoverable deformations, without rupture), four-level damage (for example, damage that cannot be repaired manually), and so on. From a preset auto insurance claim database (for example, a database that stores the mapping relationships or label data among the preset vehicle damage level classifications, the vehicle damage parts, and the fixed-loss photos, where a fixed-loss photo is a photo taken by the repair shop at the time of loss assessment), a preset number (for example, 100,000) of fixed-loss photos is obtained for each vehicle damage part corresponding to each preset vehicle damage level classification. Based on the obtained fixed-loss photos corresponding to the respective preset vehicle damage levels, a second preset-type model for analyzing the preset vehicle damage level classification corresponding to a fixed-loss photo is generated (for example, based on the preset number of fixed-loss photos of each vehicle damage part corresponding to first-level damage, a second preset-type model for analyzing the vehicle damage level corresponding to a fixed-loss photo is generated).
- the preset model generation rule is: train the CNN model of the preset model structure using the preset-format pictures of each vehicle damage part corresponding to each preset vehicle damage level, to generate the CNN model for classifying each vehicle damage part into each preset vehicle damage level.
- the purpose of the training is to optimize the values of the weights in the CNN model so that the CNN model as a whole can be applied well, in practical applications, to classifying each vehicle damage part into the corresponding preset vehicle damage level.
- the CNN model may have seven layers: five convolutional layers, one downsampling layer, and one fully connected layer. Each convolutional layer is formed by feature maps constructed from a number of feature vectors; the function of a feature map is to extract key features with a convolution filter. The function of the downsampling layer is to remove redundantly expressed feature points and reduce the number of extracted features through sampling, thereby improving the efficiency of data transfer between network layers; available sampling methods include max sampling, mean sampling, and random sampling. The role of the fully connected layer is to connect the preceding convolutional and downsampling layers and compute the weight matrix used for the subsequent actual classification. After entering the CNN model, each picture goes through forward and backward iterations; each iteration produces a probability distribution, the probability distributions of multiple iterations are superimposed, and the category with the maximum value in the superimposed distribution is taken as the final classification result.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Business, Economics & Management (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Accounting & Taxation (AREA)
- Finance (AREA)
- Development Economics (AREA)
- Economics (AREA)
- Marketing (AREA)
- Strategic Management (AREA)
- Technology Law (AREA)
- General Business, Economics & Management (AREA)
- Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)
Abstract
The present invention relates to a vehicle damage recognition method, a server, and a storage medium. The method comprises the following steps: receiving a fixed-loss (damage assessment) request sent by a user through a first terminal, using a preset first preset-type model to analyze the fixed-loss photo to obtain corresponding first vehicle damage part classification information, and returning the first vehicle damage part classification information to the first terminal for display (S10); if a rejection instruction for the first vehicle damage part classification information sent by the user through the first terminal is received, using the preset first preset-type model again to analyze the fixed-loss photo to obtain corresponding second vehicle damage part classification information, and returning the second vehicle damage part classification information to the first terminal for display (S20); and, if a rejection instruction for the second vehicle damage part classification information sent by the user through the first terminal is received, sending to a predetermined second terminal an instruction to manually identify the vehicle damage part in the fixed-loss photo (S30). The solution improves the accuracy and recall rate of vehicle damage recognition.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710147701.1 | 2017-03-13 | ||
| CN201710147701.1A CN107092922B (zh) | 2017-03-13 | 2017-03-13 | 车损识别方法及服务器 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018166116A1 true WO2018166116A1 (fr) | 2018-09-20 |
Family
ID=59648872
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2017/091373 Ceased WO2018166116A1 (fr) | 2017-03-13 | 2017-06-30 | Procédé de reconnaissance de dégâts causés à une voiture, appareil électronique et support d'informations lisible par ordinateur |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN107092922B (fr) |
| WO (1) | WO2018166116A1 (fr) |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109767346A (zh) * | 2018-12-17 | 2019-05-17 | 中国平安财产保险股份有限公司 | 车辆业务处理方法、装置、计算机设备和存储介质 |
| CN110660000A (zh) * | 2019-09-09 | 2020-01-07 | 平安科技(深圳)有限公司 | 数据预测方法、装置、设备及计算机可读存储介质 |
| CN111191400A (zh) * | 2019-12-31 | 2020-05-22 | 上海钧正网络科技有限公司 | 基于用户报障数据的车辆零部件寿命预测方法及系统 |
| CN111291779A (zh) * | 2018-12-07 | 2020-06-16 | 深圳光启空间技术有限公司 | 一种车辆信息识别方法、系统、存储器及处理器 |
| CN114493903A (zh) * | 2022-02-17 | 2022-05-13 | 平安科技(深圳)有限公司 | 人伤风险评估中估损模型优化方法及相关设备 |
| EP3859592A4 (fr) * | 2018-09-26 | 2022-07-06 | Advanced New Technologies Co., Ltd. | Procédé et dispositif d'optimisation de résultat d'identification de dommages |
| EP4044098A4 (fr) * | 2019-10-30 | 2023-10-04 | PATEO CONNECT+ Technology (Shanghai) Corporation | Procédé de génération d'informations d'assurance, dispositif mobile et support de stockage lisible par ordinateur |
Families Citing this family (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108596047A (zh) * | 2018-03-30 | 2018-09-28 | 上海与德通讯技术有限公司 | 车损识别方法、智能终端以及计算机可读存储介质 |
| CN108734702A (zh) * | 2018-04-26 | 2018-11-02 | 平安科技(深圳)有限公司 | 车损判定方法、服务器及存储介质 |
| CN108647712A (zh) * | 2018-05-08 | 2018-10-12 | 阿里巴巴集团控股有限公司 | 车辆损伤识别的处理方法、处理设备、客户端及服务器 |
| CN108875648A (zh) * | 2018-06-22 | 2018-11-23 | 深源恒际科技有限公司 | 一种基于手机视频流的实时车辆损伤和部件检测的方法 |
| CN110570316A (zh) * | 2018-08-31 | 2019-12-13 | 阿里巴巴集团控股有限公司 | 训练损伤识别模型的方法及装置 |
| CN110569837B (zh) * | 2018-08-31 | 2021-06-04 | 创新先进技术有限公司 | 优化损伤检测结果的方法及装置 |
| CN109359542B (zh) * | 2018-09-18 | 2024-08-02 | 平安科技(深圳)有限公司 | 基于神经网络的车辆损伤级别的确定方法及终端设备 |
| CN109523405A (zh) * | 2018-10-30 | 2019-03-26 | 平安医疗健康管理股份有限公司 | 一种理赔提示方法、装置、服务器及计算机可读介质 |
| CN109670545B (zh) * | 2018-12-13 | 2023-08-11 | 北京深智恒际科技有限公司 | 由粗到细的车辆图像定损方法 |
| CN110110732B (zh) * | 2019-05-08 | 2020-04-28 | 杭州视在科技有限公司 | 一种用于餐饮后厨的智能巡查方法 |
| CN110363238A (zh) * | 2019-07-03 | 2019-10-22 | 中科软科技股份有限公司 | 智能车辆定损方法、系统、电子设备及存储介质 |
| CN111193868B (zh) * | 2020-01-09 | 2021-03-16 | 中保车服科技服务股份有限公司 | 车险查勘的拍照方法、装置、计算机设备和可读存储介质 |
| CN112329596B (zh) * | 2020-11-02 | 2021-08-24 | 中国平安财产保险股份有限公司 | 目标物定损方法、装置、电子设备及计算机可读存储介质 |
| CN113361457A (zh) * | 2021-06-29 | 2021-09-07 | 北京百度网讯科技有限公司 | 基于图像的车辆定损方法、装置及系统 |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8996240B2 (en) * | 2006-03-16 | 2015-03-31 | Smartdrive Systems, Inc. | Vehicle event recorders with integrated web server |
| US8086474B1 (en) * | 2007-07-30 | 2011-12-27 | Intuit Inc. | Managing insurance claim data |
| US20130297353A1 (en) * | 2008-01-18 | 2013-11-07 | Mitek Systems | Systems and methods for filing insurance claims using mobile imaging |
| CN103310223A (zh) * | 2013-03-13 | 2013-09-18 | 四川天翼网络服务有限公司 | 一种基于图像识别的车辆定损系统及方法 |
| CN104268783B (zh) * | 2014-05-30 | 2018-10-26 | 翱特信息系统(中国)有限公司 | 车辆定损估价的方法、装置和终端设备 |
| CN105933289A (zh) * | 2016-04-08 | 2016-09-07 | 苏州花坞信息科技有限公司 | 一种在线广播平台 |
| CN106056142A (zh) * | 2016-05-27 | 2016-10-26 | 大连楼兰科技股份有限公司 | 基于人工智能能量模型方法建立不同车型分区域远程定损系统及方法 |
2017
- 2017-03-13 CN CN201710147701.1A patent/CN107092922B/zh active Active
- 2017-06-30 WO PCT/CN2017/091373 patent/WO2018166116A1/fr not_active Ceased
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20070052973A1 (en) * | 2003-06-19 | 2007-03-08 | Tsubasa System Co., Ltd. | Damage analysis-supporting system |
| CN101242379A (zh) * | 2008-03-18 | 2008-08-13 | 北京中车检信息技术有限公司 | 基于移动通讯终端或网络终端的车辆定损方法 |
| CN105678622A (zh) * | 2016-01-07 | 2016-06-15 | 平安科技(深圳)有限公司 | 车险理赔照片的分析方法及系统 |
| CN105956667A (zh) * | 2016-04-14 | 2016-09-21 | 平安科技(深圳)有限公司 | 车险定损理赔审核方法及系统 |
| CN106097103A (zh) * | 2016-06-01 | 2016-11-09 | 深圳市永兴元科技有限公司 | 机动车辆车险赔偿策略确定方法和装置 |
| CN106231263A (zh) * | 2016-08-03 | 2016-12-14 | 深圳市永兴元科技有限公司 | 基于移动网络通信的车辆定损理赔方法及装置 |
| CN106296118A (zh) * | 2016-08-03 | 2017-01-04 | 深圳市永兴元科技有限公司 | 基于图像识别的车辆定损方法及装置 |
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP3859592A4 (fr) * | 2018-09-26 | 2022-07-06 | Advanced New Technologies Co., Ltd. | Procédé et dispositif d'optimisation de résultat d'identification de dommages |
| CN111291779A (zh) * | 2018-12-07 | 2020-06-16 | 深圳光启空间技术有限公司 | 一种车辆信息识别方法、系统、存储器及处理器 |
| CN109767346A (zh) * | 2018-12-17 | 2019-05-17 | 中国平安财产保险股份有限公司 | 车辆业务处理方法、装置、计算机设备和存储介质 |
| CN110660000A (zh) * | 2019-09-09 | 2020-01-07 | 平安科技(深圳)有限公司 | 数据预测方法、装置、设备及计算机可读存储介质 |
| EP4044098A4 (fr) * | 2019-10-30 | 2023-10-04 | PATEO CONNECT+ Technology (Shanghai) Corporation | Procédé de génération d'informations d'assurance, dispositif mobile et support de stockage lisible par ordinateur |
| CN111191400A (zh) * | 2019-12-31 | 2020-05-22 | 上海钧正网络科技有限公司 | 基于用户报障数据的车辆零部件寿命预测方法及系统 |
| CN111191400B (zh) * | 2019-12-31 | 2023-12-29 | 上海钧正网络科技有限公司 | 基于用户报障数据的车辆零部件寿命预测方法及系统 |
| CN114493903A (zh) * | 2022-02-17 | 2022-05-13 | 平安科技(深圳)有限公司 | 人伤风险评估中估损模型优化方法及相关设备 |
| CN114493903B (zh) * | 2022-02-17 | 2024-04-09 | 平安科技(深圳)有限公司 | 人伤风险评估中估损模型优化方法及相关设备 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN107092922B (zh) | 2018-08-31 |
| CN107092922A (zh) | 2017-08-25 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2018166116A1 (fr) | Procédé de reconnaissance de dégâts causés à une voiture, appareil électronique et support d'informations lisible par ordinateur | |
| US10885397B2 (en) | Computer-executed method and apparatus for assessing vehicle damage | |
| WO2020047420A1 (fr) | Procédé et système pour faciliter la reconnaissance de parties de véhicule sur la base d'un réseau neuronal | |
| WO2019174130A1 (fr) | Procédé de reconnaissance de facture, serveur et support de stockage lisible par ordinateur | |
| KR20180104609A (ko) | 다수의 이미지 일치성을 바탕으로 보험클레임 사기 방지를 실현하는 방법, 시스템, 기기 및 판독 가능 저장매체 | |
| JP2020504358A (ja) | 画像ベースの車両損害評価方法、装置、およびシステム、ならびに電子デバイス | |
| WO2019169688A1 (fr) | Procédé et appareil d'évaluation de perte de véhicule, dispositif électronique et support de stockage | |
| CN112487848B (zh) | 文字识别方法和终端设备 | |
| WO2020164278A1 (fr) | Dispositif et procédé de traitement des images, appareil électronique, et support d'enregistrement lisible | |
| CN117456389B (zh) | 一种基于YOLOv5s的改进型无人机航拍图像密集和小目标识别方法、系统、设备及介质 | |
| CN110291527B (zh) | 信息处理方法、系统、云处理设备以及计算机程序产品 | |
| WO2020155790A1 (fr) | Procédé et appareil d'extraction d'informations de règlement de sinistre, et dispositif électronique | |
| CN114387451B (zh) | 异常图像检测模型的训练方法、装置及介质 | |
| CN109598298B (zh) | 图像物体识别方法和系统 | |
| CN112232336A (zh) | 一种证件识别方法、装置、设备及存储介质 | |
| CN117831056A (zh) | 票据信息提取方法、装置及票据信息提取系统 | |
| CN111353514A (zh) | 模型训练方法、图像识别方法、装置及终端设备 | |
| CN110766007A (zh) | 证件遮挡检测方法、装置、设备及可读存储介质 | |
| CN114663871A (zh) | 图像识别方法、训练方法、装置、系统及存储介质 | |
| CN111414889B (zh) | 基于文字识别的财务报表识别方法及装置 | |
| CN112396059A (zh) | 一种证件识别方法、装置、计算机设备及存储介质 | |
| CN115546813A (zh) | 一种文档分析方法、装置、存储介质及设备 | |
| CN117689935A (zh) | 证件信息识别方法、装置、系统、电子设备及存储介质 | |
| EP4224417A1 (fr) | Évaluation de dommages sur des véhicules | |
| CN114386013B (zh) | 学籍自动认证方法、装置、计算机设备及存储介质 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17900321; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09/12/2019) |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 17900321; Country of ref document: EP; Kind code of ref document: A1 |