Disclosure of Invention
In view of this, the embodiments of the present disclosure provide three target object processing methods. One or more embodiments of the present specification also relate to three target object processing apparatuses, a computing device, a computer-readable storage medium, and a computer program, so as to solve the technical deficiencies in the prior art.
According to a first aspect of embodiments of the present specification, there is provided a target object processing method including:
determining a target identification area of a target object according to a first picture shot by first shooting equipment;
shooting a target object in the target identification area according to second shooting equipment to obtain a plurality of second pictures;
processing the plurality of second pictures to obtain the ratio of the objects to be processed in the target objects;
and determining the target deduction weight of the target object according to the proportion of the objects to be processed.
Optionally, the determining a target identification area of a target object according to a first picture taken by a first shooting device includes:
obtaining a plurality of first pictures shot by the first shooting device, performing difference processing on the plurality of first pictures, determining a loading area of a loading object containing the target object, and determining the target identification area of the target object according to the loading area.
Optionally, before determining the target identification area of the target object according to the loading area, the method further includes:
and acquiring a plurality of third pictures shot by a third shooting device, and adjusting the loading area according to the plurality of third pictures.
Optionally, before acquiring the plurality of first pictures taken by the first photographing device, the method further includes:
and under the condition that the loading object containing the target object is detected to enter a detection area, triggering the first shooting device and the third shooting device to shoot the loading object.
Optionally, after the shooting the target object in the target recognition area according to the second shooting device and obtaining a plurality of second pictures, the method further includes:
and shooting the target object in the target recognition area according to third shooting equipment to obtain a plurality of third pictures.
Optionally, the processing the plurality of second pictures to obtain a ratio of the object to be processed in the target object includes:
and processing the plurality of second pictures and the plurality of third pictures to obtain the ratio of the objects to be processed in the target object.
Optionally, the processing the plurality of second pictures and the plurality of third pictures to obtain a ratio of the objects to be processed in the target object includes:
inputting each second picture in the plurality of second pictures into a detection model to obtain target object areas with different levels and a first object area to be processed in each second picture;
inputting each third picture in the plurality of third pictures into the detection model, and obtaining target object areas with different levels and a first object area to be processed in each third picture;
determining the target level of the target object according to the target object areas with different levels in each second picture and the target object areas with different levels in each third picture;
and determining the proportion of second objects to be processed in the target object according to the target level, and determining the proportion of first objects to be processed in the target object according to the area of the first object region to be processed in each second picture and the area of the first object region to be processed in each third picture.
Optionally, the determining the target level of the target object according to the target object region of different level in each second picture and the target object region of different level in each third picture includes:
determining the initial level of the target object in each second picture according to the areas of the target object regions with different levels in each second picture;
determining the initial level of the target object in each third picture according to the areas of the target object regions with different levels in each third picture;
and performing fusion calculation on the initial level of the target object in each second picture and the initial level of the target object in each third picture to obtain the target level of the target object.
Optionally, the determining an initial level of the target object in each second picture according to the areas of the target object regions of different levels in each second picture includes:
determining target object regions of different levels in each second picture, calculating the area of the target object region of each level in each second picture, and determining the level corresponding to the target object region with the largest area as the initial level of the target object in each second picture;
correspondingly, the determining the initial level of the target object in each third picture according to the areas of the target object regions of different levels in each third picture includes:
determining target object regions of different levels in each third picture, calculating the area of the target object region of each level in each third picture, and determining the level corresponding to the target object region with the largest area as the initial level of the target object in each third picture.
Optionally, the determining the proportion of the second object to be processed in the target object according to the target level includes:
determining a target object region with a level smaller than the target level in each second picture as a second object region to be processed of the target object in each second picture, and determining a first ratio of the area of the second object region to be processed of the target object in each second picture to the area of the target object in each second picture;
determining a target object region with a level smaller than the target level in each third picture as a second object region to be processed of the target object in each third picture, and determining a second ratio of the area of the second object region to be processed of the target object in each third picture to the area of the target object in each third picture;
and determining the occupation ratio of a second object to be processed of the target object according to the first occupation ratio and the second occupation ratio.
Optionally, the determining a proportion of the first object to be processed in the target object according to the area of the first object region to be processed in each second picture and the area of the first object region to be processed in each third picture includes:
determining a first ratio of the area of the first object region to be processed in each second picture to the area of the target object in each second picture according to the area of the first object region to be processed in each second picture;
determining a second ratio of the area of the first object region to be processed in each third picture to the area of the target object in each third picture according to the area of the first object region to be processed in each third picture;
and determining the proportion of a first object to be processed of the target object according to the first proportion and the second proportion.
Optionally, after inputting each of the plurality of second pictures into a detection model and inputting each of the plurality of third pictures into the detection model, the method further includes:
obtaining a third object area to be processed in the target object of each second picture and a third object area to be processed in the target object of each third picture;
processing a third object area to be processed in the target object of each second picture and a third object area to be processed in the target object of each third picture to obtain a target area of a third object to be processed in the target object;
and determining the proportion of a third object to be processed in the target object according to the area of a target region of the third object to be processed in the target object.
Optionally, after obtaining the proportion of the object to be processed in the target object, the method further includes:
and determining the degree of mixing of the target objects according to the distribution of the initial levels of the target objects in each second picture and the initial levels of the target objects in each third picture.
Optionally, before determining the target deduction weight of the target object according to the proportion of the object to be processed, the method further includes:
determining the weight of a loading object containing a target object and the weight of a loading object not containing the target object, and determining the difference value between the weight of the loading object containing the target object and the weight of the loading object not containing the target object as the weight of the target object.
Optionally, the determining the target deduction weight of the target object according to the proportion of the object to be processed includes:
inputting at least two of the proportion of the object to be processed, the degree of mixing of the target object and the weight of the target object into a regression model to obtain the target deduction weight of the target object.
Optionally, the first photographing apparatus, the second photographing apparatus, and the third photographing apparatus are installed at different positions.
According to a second aspect of embodiments herein, there is provided a target object processing apparatus including:
the area determination module is configured to determine a target identification area of a target object according to a first picture shot by a first shooting device;
the picture obtaining module is configured to obtain a plurality of second pictures by shooting the target object in the target recognition area according to second shooting equipment;
the picture processing module is configured to process the plurality of second pictures to obtain the proportion of objects to be processed in the target objects;
a weight calculation module configured to determine a target deduction weight of the target object according to the occupation ratio of the object to be processed.
According to a third aspect of embodiments herein, there is provided a target object processing method including:
displaying an image input interface for a user based on a call request of the user;
receiving a target object input by the user based on the image input interface, and determining a target identification area of the target object according to a first picture shot by first shooting equipment;
shooting a target object in the target identification area according to second shooting equipment to obtain a plurality of second pictures;
processing the plurality of second pictures to obtain the ratio of the objects to be processed in the target objects;
and determining the target deduction weight of the target object according to the proportion of the objects to be processed.
According to a fourth aspect of embodiments of the present specification, there is provided a target object processing method including:
receiving a calling request sent by a user, wherein the calling request carries a target object;
determining a target identification area of a target object according to a first picture shot by first shooting equipment;
shooting a target object in the target identification area according to second shooting equipment to obtain a plurality of second pictures;
processing the plurality of second pictures to obtain the ratio of the objects to be processed in the target objects;
and determining the target deduction weight of the target object according to the proportion of the objects to be processed.
According to a fifth aspect of embodiments herein, there is provided a target object processing apparatus including:
the interface display module is configured to display an image input interface for a user based on a call request of the user;
the area determination module is configured to receive a target object input by the user based on the image input interface, and determine a target identification area of the target object according to a first picture shot by first shooting equipment;
the picture obtaining module is configured to obtain a plurality of second pictures by shooting the target object in the target recognition area according to second shooting equipment;
the picture processing module is configured to process the plurality of second pictures to obtain the proportion of objects to be processed in the target objects;
a weight calculation module configured to determine a target deduction weight of the target object according to the occupation ratio of the object to be processed.
According to a sixth aspect of embodiments herein, there is provided a target object processing apparatus comprising:
the request receiving module is configured to receive a calling request sent by a user, wherein the calling request carries a target object;
the area determination module is configured to determine a target identification area of the target object according to a first picture shot by a first shooting device;
the picture obtaining module is configured to obtain a plurality of second pictures by shooting the target object in the target recognition area according to second shooting equipment;
the picture processing module is configured to process the plurality of second pictures to obtain the proportion of objects to be processed in the target objects;
a weight calculation module configured to determine a target deduction weight of the target object according to the occupation ratio of the object to be processed.
According to a seventh aspect of embodiments herein, there is provided a computing device comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions and the processor is configured to execute the computer-executable instructions, which when executed by the processor, implement the steps of the above-described target object processing method.
According to an eighth aspect of embodiments herein, there is provided a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the steps of the above-described target object processing method.
According to a ninth aspect of embodiments herein, there is provided a computer program, wherein when the computer program is executed in a computer, the computer is caused to execute the steps of the above-described target object processing method.
One embodiment of the present specification implements a target object processing method and apparatus, wherein the method includes determining a target identification area of a target object according to a first picture taken by a first photographing device; shooting a target object in the target identification area according to second shooting equipment to obtain a plurality of second pictures; processing the plurality of second pictures to obtain the ratio of the objects to be processed in the target objects; and determining the target deduction weight of the target object according to the proportion of the objects to be processed.
With the method, pictures are captured by the first shooting device and the second shooting device during the self-unloading of the target object (such as scrap steel), so that rich multi-angle pictures are obtained; by processing these rich multi-angle pictures, various objects to be processed (such as various sundries) in the target object are detected and identified, and the target deduction weight is finally calculated. Manual operation is thus avoided, labor cost is reduced, and the calculation accuracy of the target deduction weight is greatly improved.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present description. This description may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; those skilled in the art may make similar modifications without departing from the spirit and scope of the present disclosure.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used herein in one or more embodiments to describe various information, these information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first can also be referred to as a second and, similarly, a second can also be referred to as a first without departing from the scope of one or more embodiments of the present description. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining", depending on the context.
First, the noun terms to which one or more embodiments of the present specification relate are explained.
Scrap steel: steel scrap refers to scrap materials that are not products of a steel plant's production process (such as trimmings, crop ends, and the like) and to the steel in scrapped equipment and components after use. Material whose composition is steel is called scrap steel, and material whose composition is pig iron is called scrap iron; both are commonly referred to together as scrap steel.
The scrap steel is a very important steelmaking raw material, has an important effect on reducing energy consumption and cost, and particularly has important significance in the aspects of energy conservation, emission reduction, production regulation and the like.
At present, the grade of scrap is mainly determined by the thickness of the scrap, and the scrap is divided into a plurality of grades, such as 20mm scrap, 15mm scrap, 10mm scrap, 8mm scrap, 6mm scrap, 4mm scrap, 2mm scrap and the like.
The scrap steel grading process mainly comprises two parts, scrap steel grade judgment and scrap steel weight deduction and impurity deduction, which are the core of a scrap steel grading system. Weight deduction and impurity deduction mainly mean identifying and counting the unqualified steel products and non-steel products in a truckload of scrap steel, and obtaining the weight that needs to be additionally deducted.
For a truckload of scrap steel of a certain grade pulled into a steel mill, non-steel sundries on the truck are first identified; if sundries that are not allowed to exist (such as cast steel) are found, operations such as rejection and return of goods are required. For other types of sundries (soil, plastics, and the like), the weight is estimated as the impurity weight; the weight of scrap steel that is below the grade of the whole vehicle and does not reach the standard is likewise estimated and taken as the deduction weight, and finally the weight of the whole-vehicle scrap steel is reduced according to the deduction weight and the impurity weight.
At present, scrap steel grade judgment, weight deduction, impurity deduction and the like are performed manually by visual estimation, and are influenced by factors such as mood, fatigue, cognitive differences and interpersonal relationships. Manual operation is therefore highly unstable and the operation process is opaque, which directly affects the cost of scrap steel grading and causes losses to the steel mill.
In an actual operating environment, to improve operating efficiency, a considerable portion of enterprises have scrap steel trucks self-discharge directly, and quality inspectors grade and deduct weight on the scrap steel scattered on the ground; the accuracy of manual quality inspection in this scene is not high.
In view of the above technical problem, in the present specification, three target object processing methods are provided. One or more embodiments of the present specification relate to three kinds of target object processing apparatuses, a computing device, a computer-readable storage medium, and a computer program, which are described in detail one by one in the following embodiments.
Referring to fig. 1, fig. 1 shows a schematic structural diagram of a target object processing method applied to a scrap deduction and deduction calculation scenario according to an embodiment of the present disclosure.
Fig. 1 includes a first image acquisition terminal 102, a second image acquisition terminal 104, a third image acquisition terminal 106, and a server 108. At least one of the first image acquisition terminal 102, the second image acquisition terminal 104 and the third image acquisition terminal 106 is a high-definition high-speed camera, so that the scrap steel can be captured quickly and densely while being unloaded from the loaded hopper to the ground. The number of first image acquisition terminals 102, second image acquisition terminals 104 and third image acquisition terminals 106 is not limited and can be set according to actual requirements.
For convenience of understanding, in the embodiment of the present specification, the number of the first image capturing terminal 102, the second image capturing terminal 104, and the third image capturing terminal 106 is described as one.
Specifically, a vehicle loaded with scrap steel swipes a card to enter the unloading area, and the second image acquisition terminal 104 and the third image acquisition terminal 106 are triggered to capture the scrap steel unloading area before the vehicle enters, during unloading and after unloading, so that three or more photos are captured by each; the captured images are sent to the server 108. The server 108 first performs difference processing on the images captured by the second image acquisition terminal 104 and the third image acquisition terminal 106 before, during and after unloading, determines the hopper area of the vehicle, determines the scrap steel unloading area based on the hopper area, and eliminates interference outside the scrap steel unloading area, providing a basis for the subsequent sundries detection and segmentation and grade judgment algorithms.
After the scrap steel discharging area is determined, the server 108 controls the first image acquisition terminal 102 to take rapid, dense snapshots at the moments the scrap steel is discharged and lands, obtaining N layers of scrap steel areas, and then calculates the scrap steel grade of each layer. Meanwhile, the grade of the scrap steel in the scrap steel area is determined again from the supplementary image captured by the third image acquisition terminal 106 after discharging ends.
And finally, when the grade of the whole vehicle steel scrap is judged, converging the judgment results of the steel scrap grade of each layer of steel scrap area based on the first image acquisition terminal 102 and the third image acquisition terminal 106 to obtain the target grade of the whole vehicle steel scrap. Meanwhile, the proportion of the area of each layer of unqualified scrap steel is counted based on the target grade of the scrap steel of the whole vehicle.
In addition, the server 108 also performs sundries segmentation on each image uploaded by the first image acquisition terminal 102, the second image acquisition terminal 104 and the third image acquisition terminal 106 to obtain the proportion of impurities in each layer of the scrap steel region, and finally calculates the proportion of unqualified scrap steel and the proportion of impurities in the whole-vehicle scrap steel based on the per-layer proportion of unqualified scrap steel area and the per-layer proportion of impurities.
The mixing degree of the scrap steel types of the whole vehicle is estimated from the distribution of the different grades of scrap steel; this is combined with the net weight obtained from the difference between two weighings of the vehicle and with the proportion of specially penalized items such as overlong pieces and closed containers, and the deduction for the whole-vehicle scrap steel is calculated using a regression model.
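As one illustration of this final step, a minimal sketch follows, assuming Python with scikit-learn. The specification does not name a model family or an exact feature set, so the linear model, the five-element feature vector (substandard ratio, impurity ratio, mixing degree, net weight, special-penalty ratio), and all numbers below are placeholders; real training data would come from historically settled loads.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder feature rows: [substandard_ratio, impurity_ratio,
# mixing_degree, net_weight_tonnes, special_penalty_ratio].
X = np.array([
    [0.10, 0.05, 1.2, 31.5, 0.00],
    [0.25, 0.08, 1.6, 28.0, 0.01],
    [0.05, 0.02, 0.9, 33.2, 0.00],
])
y = np.array([1.8, 3.5, 0.9])  # deduction weight in tonnes (illustrative)

model = LinearRegression().fit(X, y)
deduction = model.predict([[0.12, 0.04, 1.1, 30.0, 0.0]])
```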
Namely, when the server 108 performs the weight-deduction and impurity-deduction calculation on the whole-vehicle scrap steel based on the scrap steel pictures captured by the three cameras, the main steps include hopper ROI extraction, scrap steel grade judgment, unqualified scrap steel detection and segmentation, and non-steel impurity detection and segmentation, finally yielding the weight deduction and impurity deduction of the whole-vehicle scrap steel.
When the target object processing method provided by the embodiment of this specification is applied to the scrap steel weight-deduction and impurity-deduction calculation scene, the first image acquisition terminal 102, the second image acquisition terminal 104 and the third image acquisition terminal 106 arranged at different positions can acquire image data during the scrap steel self-discharging operation, and subsequently, scrap steel grade judgment and automatic estimation of weight deduction and impurity deduction can be realized quickly and accurately through intelligent analysis of the rich images from different angles, greatly improving the working efficiency of scrap steel grading and enhancing the user experience.
Referring to fig. 2, fig. 2 is a flowchart illustrating a target object processing method according to an embodiment of the present disclosure, which specifically includes the following steps.
Step 202: and determining a target identification area of the target object according to the first picture shot by the first shooting device.
The first shooting device, the second shooting device and the third shooting device can be understood as shooting devices arranged at different positions; the number of each is not limited and can be set according to actual needs. The first shooting device, the second shooting device and the third shooting device can be any type of high-definition shooting device; in practical applications, the second shooting device can be understood as a high-definition high-speed camera.
For convenience of understanding, in the following embodiments, the target object processing method provided in the embodiments of the present specification is described in detail by taking the number of each of the first shooting device, the second shooting device, and the third shooting device as one, and taking the second shooting device as a high-definition high-speed shooting device as an example.
In practical application, the target object processing method has different practical application scenes and different target objects, for example, the target object processing method is applied to a scrap steel grading scene, and the target object is scrap steel; the target object processing method is applied to the recycling scene of other articles (such as plastics, copper and the like), and the target object can be other articles. In the examples of the present specification, the target object is explained as scrap steel.
Specifically, before determining the target identification area of the target object according to the first picture shot by the first shooting device, the first shooting device needs to be triggered to shoot the target object to obtain the first picture, and the specific triggering mode of the first shooting device may be understood as triggering the first shooting device to shoot the loading object when it is detected that the loading object including the target object enters the detection area. The specific implementation mode is as follows:
before the obtaining of the plurality of first pictures shot by the first shooting device, the method further includes:
and under the condition that the loading object containing the target object is detected to enter a detection area, triggering the first shooting device and the third shooting device to shoot the loading object.
In the case where the target object is scrap steel, the loading object may be understood as a scrap steel-loaded vehicle, and the detection area may be understood as a scrap steel discharge area.
In a specific application scene, when a vehicle loaded with scrap steel enters the detection area, a card needs to be swiped for entry. When the server receives the card-swiping signal from the card reader, it determines that the vehicle loaded with scrap steel has entered the detection area and triggers the first shooting device to shoot the vehicle; the first shooting device sends the shot first picture to the server, and the server determines the target recognition area of the target object according to the first picture. The first picture may be one, two or more pictures.
In specific implementation, the determining a target identification area of a target object according to a first picture taken by a first shooting device includes:
the method comprises the steps of obtaining a plurality of first pictures shot by first shooting equipment, carrying out difference processing on the first pictures, determining a loading area of a loading object containing a target object, and determining a target identification area of the target object according to the loading area.
In practical application, the first picture at least includes pictures of the loading object of the target object at three moments before, during and after the loading object is unloaded.
Taking the target object as scrap steel and the loading object as a vehicle as an example, pictures of the vehicle shot by the first shooting device at three moments, before, during and after unloading, are obtained; difference processing is then carried out on the pictures to determine the hopper area of the vehicle loaded with scrap steel, and the target identification area of the scrap steel (the target scattering area of the scrap steel on the ground) is determined based on the hopper area.
In the embodiment of the present specification, through difference processing of the plurality of first pictures shot by the first shooting device, the target recognition area of the target object can be determined quickly, and the target object in the target recognition area can subsequently be recognized by the second shooting device and the third shooting device. In this way, the recognition range of the target object is narrowed and the subsequent image processing speed is increased.
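A minimal sketch of such difference processing, assuming Python with OpenCV and two grayscale frames captured before and during unloading; the binarization threshold, the dilation kernel, the minimum contour area, and the file names are illustrative assumptions, not values from this specification.

```python
import cv2
import numpy as np

def locate_loading_area(frame_before, frame_during, min_area=5000):
    """Return candidate bounding boxes of the region that changed
    between the two frames (e.g. the hopper / discharge area)."""
    diff = cv2.absdiff(frame_before, frame_during)        # pixel-wise change
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

before = cv2.imread("before.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical paths
during = cv2.imread("during.jpg", cv2.IMREAD_GRAYSCALE)
boxes = locate_loading_area(before, during)               # (x, y, w, h) boxes
```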
In order to ensure the accuracy of the target recognition area of the target object, after the loading area of the target object is determined according to the first picture, the loading area may be optimized according to a third picture taken by a third photographing apparatus. The specific implementation mode is as follows:
before determining the target identification area of the target object according to the loading area, the method further includes:
and acquiring a plurality of third pictures shot by a third shooting device, and adjusting the loading area according to the plurality of third pictures.
The third shooting device can be understood as a high-definition camera or the like which is installed at a different position from the first shooting device.
In practical applications, in the process of calculating the target identification area of the target object based on the first picture taken by the first shooting device, the calculation process can be supplemented based on the third picture taken by the third shooting device so as to improve the accuracy of the target identification area of the target object.
Step 204: and shooting the target object in the target identification area according to second shooting equipment to obtain a plurality of second pictures.
The second shooting device can be understood as a high-speed high-definition camera with different installation positions from the first shooting device and the third shooting device.
Specifically, when the target identification area of the target object is determined, the second shooting device is triggered to shoot the target object in the target identification area, and a plurality of second pictures are obtained. The shooting of the target object in the target recognition area can be understood as shooting the target object in the process of unloading from the loading object to the target recognition area in the target recognition area.
In specific implementation, the second photographing device photographs the target object while it is being unloaded from the loading object into the target recognition area, and because unloading is fast, the second pictures may not capture the target object completely. To avoid this, a third photographing device may be added to take supplementary shots of the target object in the target recognition area, obtaining a plurality of third pictures, and the second pictures and third pictures may be combined to calculate the proportion of the object to be processed in the target object more accurately. The specific implementation is as follows:
the shooting the target object in the target recognition area according to the second shooting device, and after obtaining a plurality of second pictures, further comprising:
and shooting the target object in the target recognition area according to third shooting equipment to obtain a plurality of third pictures.
The practical application scene is as follows: when the target scattering area of the scrap steel is determined, the second shooting device is triggered to shoot the discharged scrap steel in the target scattering area at high speed, obtaining a plurality of second pictures taken while the scrap steel is unloaded from the vehicle into the target scattering area. The third shooting device is triggered at the same time to take supplementary shots of the discharged scrap steel in the target scattering area, obtaining a plurality of third pictures of the same unloading process. Pictures of the scrap steel during unloading are thus acquired by the second shooting device and the third shooting device.
Step 206: and processing the plurality of second pictures to obtain the ratio of the objects to be processed in the target object.
In combination with the above example, after the plurality of second pictures are obtained, the plurality of second pictures may be processed to obtain the ratio of the object to be processed in the target object.
Specifically, the processing the plurality of second pictures to obtain the ratio of the objects to be processed in the target object includes:
inputting each second picture in the plurality of second pictures into a detection model to obtain target object areas with different levels and a first object area to be processed in each second picture;
determining the target level of the target object according to the target object areas with different levels in each second picture;
and determining the proportion of second objects to be processed in the target objects according to the target level, and determining the proportion of first objects to be processed in the target objects according to the area of the first object area to be processed in each second picture.
Specifically, the determining the target level of the target object according to the target object regions of different levels in each second picture includes:
determining the initial level of the target object in each second picture according to the areas of the target object regions with different levels in each second picture;
and determining the target grade of the target object according to the initial grade of the target object in each second picture.
Specifically, the determining the initial level of the target object in each second picture according to the areas of the target object regions in different levels in each second picture includes:
determining target object regions of different levels in each second picture, calculating the area of the target object region of each level in each second picture, and determining the level corresponding to the target object region with the largest area as the initial level of the target object in each second picture.
Specifically, the determining the proportion of the second object to be processed in the target objects according to the target level includes:
determining a target object region with a level smaller than the target level in each second picture as a second object region to be processed of the target object in each second picture, and determining the ratio of the area of the second object region to be processed of the target object in each second picture to the area of the target object in each second picture;
and determining the occupation ratio of the second object to be processed of the target object according to the occupation ratio of the area of the second object area to be processed of the target object in each second picture relative to the area of the target object in each second picture.
Specifically, the determining the proportion of the first object to be processed in the target object according to the area of the first object to be processed region in each second picture includes:
determining the ratio of the area of the first object region to be processed in each second picture to the area of the target object in each second picture according to the area of the first object region to be processed in each second picture;
and determining the proportion of the first object to be processed of the target object according to the proportion of the area of the first object to be processed region in each second picture relative to the area of the target object in each second picture.
Specifically, after each of the plurality of second pictures is input into the detection model, the method further includes:
obtaining a third object area to be processed in the target object of each second picture;
processing a third object area to be processed in the target object of each second picture to obtain a target area of the third object to be processed in the target object;
and determining the proportion of a third object to be processed in the target object according to the area of a target region of the third object to be processed in the target object.
In practical applications, the specific processing procedure for the plurality of second pictures may refer to the processing procedure for the plurality of second pictures and third pictures described below, which is not repeated here.
In order to improve the calculation accuracy of the ratio of the object to be processed in the target object, the processing on the plurality of third pictures may be increased, and the ratio of the object to be processed in the target object may be obtained based on the processing results of the plurality of second pictures and the plurality of third pictures. The specific implementation mode is as follows:
the processing the plurality of second pictures to obtain the ratio of the objects to be processed in the target object comprises:
and processing the plurality of second pictures and the plurality of third pictures to obtain the ratio of the objects to be processed in the target object.
Specifically, after the plurality of second pictures and the plurality of third pictures are obtained, the plurality of second pictures and the plurality of third pictures are processed to obtain the proportion of the object to be processed in the target object. The specific implementation mode is as follows:
the processing the plurality of second pictures and the plurality of third pictures to obtain the ratio of the objects to be processed in the target object comprises:
inputting each second picture in the plurality of second pictures into a detection model to obtain target object areas with different levels and a first object area to be processed in each second picture;
inputting each third picture in the plurality of third pictures into the detection model, and obtaining target object areas with different levels and a first object area to be processed in each third picture;
determining the target level of the target object according to the target object areas with different levels in each second picture and the target object areas with different levels in each third picture;
and determining the proportion of second objects to be processed in the target object according to the target level, and determining the proportion of first objects to be processed in the target object according to the areas of the first object to be processed in each second picture and the first object to be processed in each third picture.
The detection model may be a CNN-based image instance segmentation model such as Mask R-CNN (Mask Region-based Convolutional Neural Network), SSD (Single Shot MultiBox Detector), or YOLO (You Only Look Once: Unified, Real-Time Object Detection), and may also be a classification deep neural network model such as ResNet50 (residual network) or DenseNet (densely connected convolutional network).
Referring to fig. 3, fig. 3 is a schematic structural diagram illustrating a detection model in a target object processing method according to an embodiment of the present disclosure.
The detection model in Fig. 3 is a Mask R-CNN model; instance segmentation is realized through the Mask R-CNN model, improving the detection, classification and segmentation of the pictures.
As can be seen from Fig. 3, Mask R-CNN can be roughly divided into two major parts, the RPN (Region Proposal Network) and the RCNN head. The RPN is connected to the feature map output by the backbone network and extracts suggested regions that may contain a target object from the input picture (such as a second picture or a third picture); the subsequent RCNN layers then further classify these suggested regions and refine their positions. Meanwhile, a target mask is extracted according to the target position and the feature map, so as to obtain the detection, classification and segmentation results.
Specifically, the second picture is taken as a scrap steel picture as an example.
Inputting each second picture in the plurality of second pictures into the detection model to obtain target object areas with different levels and a first object area to be processed in each second picture; and inputting each of the plurality of third pictures into the detection model, to obtain a target object region and a first object region to be processed at different levels in each of the plurality of third pictures, which may be understood as:
inputting each picture shot by second shooting equipment into a detection model to obtain different grades of scrap steel areas in each picture and impurity areas in each picture; and inputting each picture shot by the third shooting device into the detection model, and obtaining different grades of scrap steel areas in each picture and impurity areas in each picture.
For example, 20 second pictures are obtained through shooting by second shooting equipment, each second picture is input into the detection model, and different grades of steel scrap regions in each second picture are obtained, such as a 6mm steel scrap region, a 4mm steel scrap region and a 2mm steel scrap region; and obtaining impurity regions, such as dust regions and the like, in each second picture. Similarly, the third picture shot by the third shooting device also obtains different grades of scrap steel areas and impurity areas in each picture in the mode.
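As a hedged sketch of this step, an off-the-shelf torchvision Mask R-CNN can be run on each picture and the segmented area summed per class. The pretrained COCO weights, the random placeholder image, and the 0.5 thresholds below are stand-ins; the patent's detection model would be trained on scrap-grade and impurity classes of its own.

```python
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 720, 1280)          # placeholder for one second picture
with torch.no_grad():
    pred = model([image])[0]              # dict: boxes, labels, scores, masks

# Sum the binarized mask pixels per predicted class to get the area of each
# level's region (and of the impurity region) in this picture.
areas = {}
for label, score, mask in zip(pred["labels"], pred["scores"], pred["masks"]):
    if score < 0.5:                       # confidence cut-off (assumption)
        continue
    cls = int(label)
    areas[cls] = areas.get(cls, 0) + int((mask[0] > 0.5).sum())
```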
And then determining the target level of the target object according to the target object areas with different levels in each second picture and the target object areas with different levels in each third picture.
Specifically, the determining the target level of the target object according to the target object regions of different levels in each second picture and the target object regions of different levels in each third picture includes:
determining the initial level of the target object in each second picture according to the areas of the target object regions with different levels in each second picture;
determining the initial level of the target object in each third picture according to the areas of the target object regions with different levels in each third picture;
and performing fusion calculation on the initial level of the target object in each second picture and the initial level of the target object in each third picture to obtain the target level of the target object.
Specifically, according to the areas of the target object regions with different levels in each second picture, determining the initial level of the target object in each second picture; determining the initial level of the target object in each third picture according to the areas of the target object regions with different levels in each third picture; it can be understood that, the area of the target object region of each level in each second picture is determined, and then the level with the largest area of the target object region is taken as the initial level of the target object in each second picture; and determining the area of the target object region of each grade in each third picture, and then taking the grade with the largest area of the target object region as the initial grade of the target object in each third picture. The specific implementation mode is as follows:
determining an initial level of the target object in each second picture according to the areas of the target object regions of different levels in each second picture, including:
determining target object regions of different levels in each second picture, calculating the area of the target object region of each level in each second picture, and determining the level corresponding to the target object region with the largest area as the initial level of the target object in each second picture;
correspondingly, the determining the initial level of the target object in each third picture according to the areas of the target object regions of different levels in each third picture includes:
determining target object regions of different levels in each third picture, calculating the area of the target object region of each level in each third picture, and determining the level corresponding to the target object region with the largest area as the initial level of the target object in each third picture.
Following the above example, if the first second picture comprises a 6mm scrap steel region, a 4mm scrap steel region and a 2mm scrap steel region, where the 6mm region accounts for 30% of the whole picture, the 4mm region accounts for 60% and the 2mm region accounts for 10%, then the 4mm region has the largest area, and the scrap steel grade in this second picture is determined to be 4mm. Similarly, the initial levels of the target object in all the second pictures and all the third pictures are determined in the above manner.
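In code, the per-picture initial level is simply the grade whose segmented region area is largest; a minimal sketch, with the grade labels taken from the example above:

```python
def initial_level(level_areas):
    """level_areas maps a scrap grade to its area share in one picture."""
    return max(level_areas, key=level_areas.get)

# The example above: the 4mm region is largest, so the initial level is 4mm.
print(initial_level({"6mm": 0.30, "4mm": 0.60, "2mm": 0.10}))  # -> 4mm
```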
And after the initial grades of the target objects in all the second pictures and the initial grades of the target objects in all the third pictures are obtained, performing fusion calculation on the initial grades of the target objects in all the second pictures and the initial grades of the target objects in all the third pictures to obtain the integral target grade of the target object, such as the grade of the steel scrap of the whole vehicle.
In practical application, when the target level of the target object is determined, the initial levels of the target object in all the second pictures and in all the third pictures are merged to obtain the initial levels of N+1 single layers, where N is the number of second pictures and 1 is the number of third pictures; N is chosen according to the actual situation, for example 10 to 30.
The initial levels of the target object in all the second pictures and all the third pictures are fused in a top-to-bottom linearly attenuated weighting manner. If each layer (each picture is one layer) contains target object levels of four types A, B, C and D, whose area ratios in that layer are RA, RB, RC and RD respectively, then the weight of the i-th layer, from layer 1 to layer N+1, is calculated as shown in formula 1:

Ri = (N+1-i)/(N+1)    (formula 1)

The overall target object level is then the level with the largest weighted sum, as shown in formula 2:

target level = max(sum(RAi × Ri), sum(RBi × Ri), sum(RCi × Ri), sum(RDi × Ri)), i = 1 to N+1    (formula 2)
The target grade of the whole target object, such as the grade of the whole-vehicle scrap steel, is obtained by calculation with formula 1 and formula 2.
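A minimal sketch of formulas 1 and 2, assuming the per-layer grade ratios are already available as dictionaries; note that, taken literally, formula 1 assigns weight 0 to the bottom layer (i = N+1).

```python
def fuse_target_level(layer_ratios):
    """layer_ratios: one dict per layer (one picture = one layer), top
    layer first, mapping grade label -> area ratio of that grade in the
    layer, e.g. [{"A": 0.3, "B": 0.6, "C": 0.1, "D": 0.0}, ...]."""
    n = len(layer_ratios)                    # n = N + 1 layers in total
    weighted = {}
    for i, ratios in enumerate(layer_ratios, start=1):
        r_i = (n - i) / n                    # formula 1: Ri = (N+1-i)/(N+1)
        for grade, ratio in ratios.items():
            weighted[grade] = weighted.get(grade, 0.0) + ratio * r_i
    return max(weighted, key=weighted.get)   # formula 2: largest sum wins
```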
Furthermore, based on the distribution of the initial level of the target object in each second picture and the initial level of the target object in each third picture, the degree of clutter of the target object may also be determined.
Specifically, after obtaining the proportion of the object to be processed in the target object, the method further includes:
and determining the degree of mixing of the target objects according to the distribution of the initial levels of the target objects in each second picture and the initial levels of the target objects in each third picture.
Following the above example, if the initial level distribution over all the second pictures and third pictures is 2mm, 4mm, 6mm, 8mm and 10mm accounting for 10%, 20%, 25%, 25% and 20% respectively, that distribution is more mixed than a distribution of 6mm, 8mm and 10mm accounting for 20%, 30% and 50%. That is, the more grades the scrap steel is spread across, the higher the degree of mixing.
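The specification does not fix a formula for the mixing degree; one assumption consistent with the text (more grades and a more even spread mean a higher degree of mixing) is the Shannon entropy of the grade distribution:

```python
import math

def mixing_degree(grade_distribution):
    """grade_distribution maps grade -> fraction of the load (sums to 1).
    Shannon entropy is an assumed measure, not one from the specification."""
    return -sum(p * math.log(p) for p in grade_distribution.values() if p > 0)

five = {"2mm": 0.10, "4mm": 0.20, "6mm": 0.25, "8mm": 0.25, "10mm": 0.20}
three = {"6mm": 0.20, "8mm": 0.30, "10mm": 0.50}
print(mixing_degree(five) > mixing_degree(three))  # True: five grades mix more
```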
In this embodiment of the present specification, the overall target level of the target object can be accurately obtained through the above fusion calculation over the areas of the different-level target object regions in all the second pictures and all the third pictures. The mixing degree of the target object can also be determined from the distribution of the initial levels across all the second pictures and third pictures; this mixing-degree factor can then be considered when performing weight deduction and impurity deduction on the target object, making the deduction more accurate.
Furthermore, after the target level of the target object is determined, the proportion of the second object to be processed (such as steel scrap whose level does not meet the standard) in the target object may be determined based on the target level, and the proportion of the first object to be processed in the target object may be determined according to the area of the first object to be processed in each second picture and the area of the first object to be processed in each third picture. Namely, the proportion of the waste steel with the grade not up to the standard in the waste steel of the whole vehicle and the proportion of the impurities in the waste steel of the whole vehicle can be determined according to the target grade of the waste steel of the whole vehicle. The determination method of the ratio of the second object to be processed is as follows:
the determining the proportion of a second object to be processed in the target objects according to the target level comprises:
determining a target object region with a level smaller than the target level in each second picture as a second object region to be processed of the target object in each second picture, and determining a first ratio of the area of the second object region to be processed of the target object in each second picture to the area of the target object in each second picture;
determining a target object region with a level smaller than the target level in each third picture as a second object region to be processed of the target object in each third picture, and determining a second ratio of the area of the second object region to be processed of the target object in each third picture to the area of the target object in each third picture;
and determining the occupation ratio of a second object to be processed of the target object according to the first occupation ratio and the second occupation ratio.
Specifically, the target object region with the level smaller than the target level in all the second pictures is determined as the second object region to be processed of the target object in the second pictures, and a first ratio of the area of the second object region to be processed of the target object in each second picture to the area of the target object in each second picture is determined.
Following the above example, suppose the first second picture comprises a 6mm scrap steel region, a 4mm scrap steel region and a 2mm scrap steel region accounting for 30%, 60% and 10% of the whole picture respectively, and the 4mm grade is the target level. The 2mm scrap steel region, whose level is below the target level, is then the second object region to be processed of the target object in this second picture, and its first ratio is 10%; that is, the scrap steel whose grade does not meet the standard accounts for 10% of the scrap steel in this picture. Similarly, the second ratios for all the third pictures are calculated in the above manner.
Then, after the first ratios of the second object to be processed in all the second pictures and the second ratios in all the third pictures are obtained, all the first ratios and second ratios are weighted and summed; where substandard scrap steel of the same grade is sampled multiple times in different layers, de-duplication is performed using the intersection, union or mean value, finally yielding the proportion of scrap steel that does not reach the whole-vehicle grade, i.e. the proportion of the second object to be processed of the target object.
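A minimal sketch of this combination step, using a plain weighted mean for both the weighting and the de-duplication (the specification also allows intersection- or union-based de-duplication, which would require the region masks themselves); the same combination applies to the impurity ratio described below.

```python
def combine_ratios(second_pic_ratios, third_pic_ratios, weights=None):
    """Combine per-picture ratios (substandard scrap or impurities)
    into one whole-load ratio."""
    ratios = list(second_pic_ratios) + list(third_pic_ratios)
    if weights is None:
        weights = [1.0 / len(ratios)] * len(ratios)  # equal weights (assumption)
    return sum(r * w for r, w in zip(ratios, weights))

# e.g. 10% substandard in each of three second pictures, 7% in one third picture
print(combine_ratios([0.10, 0.10, 0.10], [0.07]))    # -> 0.0925
```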
In this embodiment of the present specification, an accurate proportion of the second object to be processed of the target object can be obtained after operations such as weighted summation of the first ratios from all the second pictures and the second ratios from all the third pictures, followed by de-duplication.
The proportion of the first object to be processed in the target object is calculated as follows:
determining the proportion of the first object to be processed in the target object according to the area of the first object to be processed in each second picture and the area of the first object to be processed in each third picture, including:
determining a first ratio of the area of the first object region to be processed in each second picture to the area of the target object in each second picture according to the area of the first object region to be processed in each second picture;
determining a second ratio of the area of the first object region to be processed in each third picture to the area of the target object in each third picture according to the area of the first object region to be processed in each third picture;
and determining the proportion of a first object to be processed of the target object according to the first proportion and the second proportion.
Specifically, after the first ratios of the first object to be processed in all the second pictures and the second ratios of the first object to be processed in all the third pictures are obtained by calculation, all the first ratios and second ratios are weighted and summed. Where the same impurity is sampled multiple times in different layers, a deduplication operation is performed by taking the intersection, union or mean, finally obtaining the proportion of impurities in the whole vehicle of scrap steel, that is, the proportion of the first object to be processed of the target object.
In this embodiment of the present specification, an accurate proportion of the first object to be processed in the target object is obtained by weighting and summing the first ratios of the first object to be processed in all the second pictures and the second ratios in all the third pictures, and then deduplicating the result.
In specific implementations, when calculating the target deduction weight of the target object, a third object to be processed in the target object, which does not meet the processing requirements, should also be considered. For example, when calculating the weight deduction and impurity deduction of a whole vehicle of scrap steel, the proportion of items subject to special penalties in the project needs to be considered, such as overlong pieces and closed containers: an overlong piece may damage the steel furnace, and a closed container may explode during steel melting, so such special materials must be removed from the scrap steel. When performing the whole-vehicle weight deduction and impurity deduction calculation, the proportion of these special penalty items is therefore also taken into account. The specific implementation is as follows:
after the inputting each of the plurality of second pictures into a detection model and each of the plurality of third pictures into the detection model, the method further includes:
obtaining a third object area to be processed in the target object of each second picture and a third object area to be processed in the target object of each third picture;
processing a third object area to be processed in the target object of each second picture and a third object area to be processed in the target object of each third picture to obtain a target area of a third object to be processed in the target object;
and determining the proportion of a third object to be processed in the target object according to the area of a target region of the third object to be processed in the target object.
In the scenario where the target object processing method is applied to weight deduction and impurity deduction of scrap steel, the third object to be processed may be understood as the above-mentioned special penalty items; in other application scenarios, the third object to be processed may be understood as other items that need to be deducted and cannot be processed.
In practical application, the detection model detects the third object to be processed in the input second pictures and third pictures; a deduplication operation is then performed across these pictures to obtain the final target region of the third object to be processed in the target object, and the proportion of the third object to be processed in the target object is determined according to the area of that target region.
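A minimal sketch of this cross-picture deduplication, assuming axis-aligned bounding boxes and a simple IoU criterion (neither the box format nor the threshold is specified by this embodiment):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if inter else 0.0

def dedup_regions(boxes, iou_threshold=0.5):
    """Keep one representative of each group of overlapping detections."""
    kept = []
    for box in boxes:
        if all(iou(box, k) < iou_threshold for k in kept):
            kept.append(box)
    return kept

# Third-object boxes detected across all second and third pictures (illustrative).
detections = [(10, 10, 50, 50), (12, 11, 52, 49), (100, 80, 160, 140)]
regions = dedup_regions(detections)  # the duplicate of the first box is dropped
target_area = sum((x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in regions)
total_target_object_area = 10_000    # assumed total target-object area, in pixels
print(target_area / total_target_object_area)  # proportion of the third object
```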
Step 208: Determining the target deduction weight of the target object according to the proportion of the objects to be processed.
Specifically, the determining the target deduction weight of the target object according to the proportion of the object to be processed includes:
and determining the target deduction weight of the target object according to at least two of the proportion of the first object to be processed, the proportion of the second object to be processed and the proportion of the third object to be processed.
Specifically, the target deduction weight of the target object may be calculated based on the degree of mixing of the target object and the weight of the target object, where the weight of the target object may be calculated from the difference between the weight of the loading object loaded with the target object and the weight of the loading object not containing the target object. The specific implementation is as follows:
before determining the target deduction weight of the target object according to the proportion of the object to be processed, the method further comprises the following steps:
determining the weight of a loading object containing a target object and the weight of a loading object not containing the target object, and determining the difference value between the weight of the loading object containing the target object and the weight of the loading object not containing the target object as the weight of the target object.
Specifically, the weight of the target object can be obtained by subtracting the weight of the loading object not including the target object from the weight of the loading object including the target object.
Following the above example, the net weight of the scrap steel may be calculated as the difference between the two weighbridge readings: the weight of the vehicle loaded with the scrap minus the weight of the vehicle after the scrap is unloaded.
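For illustration, with assumed weighbridge readings:

```python
# Net weight of the target object (scrap steel) from the two weighings; figures assumed.
loaded_weight = 38_500.0    # kg, vehicle weighed with the scrap on board
unloaded_weight = 15_200.0  # kg, vehicle weighed after the scrap is unloaded
net_weight = loaded_weight - unloaded_weight  # 23_300.0 kg of scrap steel
```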
In the embodiment of the present specification, when obtaining the target deduction weight of the target object, the target deduction weight of the target object may be accurately calculated according to factors such as the proportion of the first object to be processed, the proportion of the second object to be processed, the proportion of the third object to be processed, the degree of mixing of the target object, and the net weight of the target object. The specific implementation mode is as follows:
determining the target deduction weight of the target object according to the proportion of the object to be processed, wherein the determining comprises the following steps:
inputting at least two of the proportion of the object to be processed, the degree of mixing of the target object and the weight of the target object into a regression model to obtain the target deduction weight of the target object.
In specific implementations, at least two of the proportion of the first object to be processed, the proportion of the second object to be processed, the proportion of the third object to be processed, the degree of mixing of the target object, and the weight of the target object may be used as input variables of the regression model, and the target deduction weight of the target object is obtained by regression. Any machine learning regression algorithm may be selected for the regression model, including but not limited to logistic regression, random forest, GBDT (Gradient Boosting Decision Tree), XGBoost (eXtreme Gradient Boosting) and LightGBM (Light Gradient Boosting Machine, a framework implementing the GBDT algorithm); a deep neural network regression model may also be used. The regression model may be trained on the target deduction weights of historical target objects.
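A minimal sketch of such a regression, here using scikit-learn's gradient boosting regressor as one GBDT-family option among the models named above (the feature layout, training data and hyperparameters are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Each row: [sub-standard ratio, impurity ratio, mixing degree,
#            special-item ratio, net weight]; values are made up.
X_train = np.array([
    [0.10, 0.03, 0.40, 0.00, 23_300.0],
    [0.05, 0.01, 0.20, 0.02, 18_750.0],
    [0.22, 0.07, 0.65, 0.00, 30_100.0],
])
# Labels: target deduction weights of historical target objects, in kg.
y_train = np.array([2_900.0, 1_100.0, 7_400.0])

model = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05)
model.fit(X_train, y_train)

# Predict the target deduction weight for a newly processed vehicle.
x_new = np.array([[0.12, 0.04, 0.50, 0.01, 24_000.0]])
print(model.predict(x_new))
```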
The target object processing method provided by the embodiment of the specification can realize picture snapshot through the first shooting device and the second shooting device in the self-unloading process of the target object (such as scrap steel) to obtain rich multi-angle pictures, realize detection and identification of various objects to be processed (such as various sundries) in the target object through processing the rich multi-angle pictures, finally calculate the target deduction weight, avoid manual operation, reduce labor cost and greatly improve the calculation accuracy of the target deduction weight.
The following description will further describe the target object processing method with reference to fig. 4 by taking an application of the target object processing method provided in this specification in a scrap steel deduction and impurity deduction scene as an example. Fig. 4 shows a processing flow chart of a target object processing method provided in an embodiment of the present specification, and specifically includes the following steps.
Step 402: The truck is parked and begins to unload.
Specifically, "the truck is parked and begins to unload" is understood to mean that the truck loaded with scrap steel is parked in the area where it is to be unloaded.
Step 404: Unloading and shooting.
Specifically, unloading and shooting can be understood as follows: the truck checks in and enters the unloading (i.e., scrap steel) area, triggering camera 2 and camera 3 to shoot. The pictures of the truck taken by camera 2 and camera 3 at three moments, before, during and after unloading, are subjected to difference processing to determine the target scrap steel area where the scrap is scattered on the ground. Meanwhile, camera 1 is triggered to densely capture pictures of the unloading process in the target scrap steel area, obtaining N layers of scrap steel pictures.
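A minimal, non-authoritative sketch of the difference processing described above, using OpenCV (file paths, threshold and kernel size are assumptions; the embodiment does not prescribe a specific implementation):

```python
import cv2

# Frames captured by camera 2 or camera 3 before and after unloading (paths assumed).
before = cv2.imread("before_unloading.jpg", cv2.IMREAD_GRAYSCALE)
after = cv2.imread("after_unloading.jpg", cv2.IMREAD_GRAYSCALE)

# Difference the two frames; changed pixels mark where the scrap landed.
diff = cv2.absdiff(before, after)
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # suppress small noise

# The bounding rectangle of the largest changed region approximates
# the target scrap steel area on the ground.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
```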
Specifically, the installation positions of camera 1, camera 2 and camera 3 are shown in fig. 5; fig. 5 is a schematic diagram of the installation positions of the shooting devices in a target object processing method provided in an embodiment of the present specification.
FIG. 5 includes a fill light, a pan-tilt head, a floodlight and a parking line (the dashed-line area frame), where the fill light, pan-tilt head and floodlight are auxiliary devices for the cameras, and the parking line marks the target unloading area for the scrap steel.
Specifically, camera No. 1 is a high-speed camera mainly used to rapidly and densely capture the scrap steel at the moment it falls to the ground. Camera No. 2 mainly takes overall snapshots of the hopper and the scrap on the ground: it shoots the area frame once before unloading to confirm any residual scrap in the frame, and takes another snapshot of that partial area after unloading is finished, to be combined with the pictures taken by camera No. 1. Camera No. 3 supplements the snapshots of cameras No. 1 and No. 2; it determines the position where the scrap in the vehicle is unloaded, and its shooting range is slightly larger than the dashed-line area frame. In the subsequent grading of the whole vehicle of scrap steel, the vehicle is first parked in the dashed-line area frame, and the cameras take a first snapshot of the surface-layer scrap in the vehicle and of the whole area; the scrap is then unloaded into the frame, and the cameras take a second snapshot of the scrap in the frame; finally, the scrap in the frame is spread out by a grab crane or magnetic chuck, and the cameras take a third snapshot to complete the whole-vehicle grading. The number of pictures and the time interval of each snapshot can be adjusted according to the actual unloading situation on site; this is not limited in this specification.
Step 406: Algorithm identification.
Specifically, the algorithm identification is implemented as follows:
First, a deep learning image instance segmentation algorithm is used to segment the N layers of scrap steel pictures densely captured by camera 1 into regions of different scrap grades; the area of each grade is calculated, the grade of each layer is estimated by area-based voting, and the grade of each of the N layers is thus obtained. Similarly, the instance segmentation algorithm is used to segment the supplementary picture taken by camera 3 after unloading into different scrap grades, and the ratio of each grade is calculated.
For the grade judgment of the whole vehicle of scrap steel, the per-layer grade results of the N layers of scrap steel pictures captured by camera 1 are merged with the result from camera 3 to obtain N + 1 single-layer grade results, which are fused with surface-to-bottom linearly attenuating weights. If each layer contains four scrap grades A, B, C and D with proportions RA, RB, RC and RD respectively, the weight of the i-th layer (i = 1 to N + 1, counting from the surface) is w_i = (N + 1 - i)/(N + 1), and the whole-vehicle grade is the grade with the largest fused sum, i.e. Max(sum_i(w_i * RA_i), sum_i(w_i * RB_i), sum_i(w_i * RC_i), sum_i(w_i * RD_i)). N may be selected between 10 and 30 according to actual conditions.
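A minimal sketch of this fusion under the weight formula reconstructed above (the per-layer grade proportions, and the exact normalization of the weights, are illustrative assumptions recovered from the original formula):

```python
# Fuse N + 1 single-layer grade ratios with surface-to-bottom linearly
# attenuating weights w_i = (N + 1 - i) / (N + 1).
layers = [  # layer 1 is the surface; each dict maps grade -> ratio in that layer
    {"A": 0.10, "B": 0.60, "C": 0.20, "D": 0.10},
    {"A": 0.20, "B": 0.50, "C": 0.20, "D": 0.10},
    {"A": 0.10, "B": 0.40, "C": 0.40, "D": 0.10},
]
n = len(layers) - 1  # N

fused = {grade: 0.0 for grade in "ABCD"}
for i, ratios in enumerate(layers, start=1):
    w = (n + 1 - i) / (n + 1)  # weight of the i-th layer
    for grade, r in ratios.items():
        fused[grade] += w * r

whole_vehicle_grade = max(fused, key=fused.get)  # "B" for the values above
```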
Second, a deep learning image instance segmentation algorithm is used to detect the various impurities in the hopper image, segmenting the impurity regions and identifying the type and area of each impurity. For specific implementations, refer to the above embodiments.
Finally, based on the basic grade of the whole vehicle of scrap steel, the sub-standard scrap area ratio (below the basic grade) of each layer is counted, and the impurity ratio of each layer is calculated. The sub-standard scrap ratios and impurity ratios of all sampled layers are then weighted and summed; where the same scrap or impurity is sampled multiple times in different layers, a deduplication operation is performed by taking the intersection, union or mean, finally obtaining the whole-vehicle sub-standard scrap ratio kouzhong_ratio and impurity ratio kouza_ratio. The mixing degree mix_ratio (between 0 and 1) of the scrap types in the whole vehicle is estimated from the distribution of the different grades. Combined with the net weight netweight of the scrap obtained from the difference of the vehicle's two weighings, and the ratio ex_kouzhong_ratio of the project's special penalty items such as overlong pieces and closed containers, the deduction weight of the whole vehicle is finally estimated.
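The embodiment does not fix how mix_ratio is derived from the grade distribution; purely as an assumption, one plausible realization is the normalized entropy of the fused grade proportions:

```python
import math

def mix_degree(grade_ratios):
    """Normalized entropy of the whole-vehicle grade distribution: 0 when one
    grade dominates completely, 1 when all grades are equally present. This is
    only one plausible definition; the embodiment does not specify one."""
    positive = [r for r in grade_ratios.values() if r > 0]
    total = sum(positive)
    probs = [r / total for r in positive]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(grade_ratios)) if len(grade_ratios) > 1 else 0.0

print(mix_degree({"A": 0.25, "B": 0.55, "C": 0.15, "D": 0.05}))  # ~0.80
```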
Step 408: Grading, weight deduction and impurity deduction.
Specifically, a regression model is used for prediction: two or more of the single factors kouzhong_ratio, kouza_ratio, mix_ratio, ex_kouzhong_ratio and netweight, or combinations thereof, may be selected as input variables of the regression model, and the weight deduction and impurity deduction of the whole vehicle are obtained by regression.
In practical application, the deduction estimates may be calculated from the computed proportion distribution of the different grades, according to the penalty rules of the project and in combination with the net weight and purity of the whole vehicle. Alternatively, the deduction weight may be estimated by matching the most similar historical vehicle through picture comparison and referring to the penalty result recorded for that historical vehicle number.
The target object processing method provided by the embodiment of the specification, applied to the calculation scene of weight deduction and impurity deduction of scrap steel, can rapidly capture pictures before, during and after the self-unloading of the scrap steel to obtain rich multi-angle pictures. Meanwhile, based on a deep learning image detection and segmentation algorithm, scrap steel and impurities of different grades can be detected and segmented in each layer during unloading, yielding the proportion of each grade of steel in the whole vehicle of scrap and the type and content of the impurities, from which the deduction weight is calculated. The scrap is thus graded rapidly while the self-unloading operation proceeds, greatly improving user experience.
Corresponding to the above method embodiment, the present specification further provides an embodiment of a target object processing apparatus, and fig. 6 shows a schematic structural diagram of a target object processing apparatus provided in an embodiment of the present specification. As shown in fig. 6, the apparatus includes:
a region determination module 602 configured to determine a target recognition region of a target object from a first picture taken by a first photographing apparatus;
a picture obtaining module 604, configured to obtain a plurality of second pictures by shooting the target object in the target recognition area according to a second shooting device;
a picture processing module 606 configured to process the plurality of second pictures to obtain a ratio of objects to be processed in the target object;
a weight calculation module 608 configured to determine a target deduction weight of the target object according to the occupancy of the object to be processed.
Optionally, the region determining module 602 is further configured to:
the method comprises the steps of obtaining a plurality of first pictures shot by first shooting equipment, carrying out difference processing on the first pictures, determining a loading area of a loading object containing a target object, and determining a target identification area of the target object according to the loading area.
Optionally, the apparatus further comprises:
an adjustment module configured to:
and acquiring a plurality of third pictures shot by a third shooting device, and adjusting the loading area according to the plurality of third pictures.
Optionally, the apparatus further comprises:
a photographing trigger module configured to:
and under the condition that the loading object containing the target object is detected to enter a detection area, triggering the first shooting device and the third shooting device to shoot the loading object.
Optionally, the apparatus further comprises:
a third picture taking module configured to:
and shooting the target object in the target recognition area according to a third shooting device to obtain a plurality of third pictures.
Optionally, the picture processing module 606 is further configured to:
and processing the plurality of second pictures and the plurality of third pictures to obtain the ratio of the objects to be processed in the target object.
Optionally, the picture processing module 606 is further configured to:
inputting each second picture in the plurality of second pictures into a detection model to obtain target object areas with different levels and a first object area to be processed in each second picture;
inputting each third picture in the plurality of third pictures into the detection model, and obtaining target object areas with different levels and a first object area to be processed in each third picture;
determining the target level of the target object according to the target object areas with different levels in each second picture and the target object areas with different levels in each third picture;
and determining the proportion of second objects to be processed in the target object according to the target level, and determining the proportion of first objects to be processed in the target object according to the area of the first object region to be processed in each second picture and the area of the first object region to be processed in each third picture.
Optionally, the picture processing module 606 is further configured to:
determining the initial level of the target object in each second picture according to the areas of the target object regions with different levels in each second picture;
determining the initial level of the target object in each third picture according to the areas of the target object regions with different levels in each third picture;
and performing fusion calculation on the initial level of the target object in each second picture and the initial level of the target object in each third picture to obtain the target level of the target object.
Optionally, the picture processing module 606 is further configured to:
determining target object regions of different levels in each second picture, calculating the area of the target object region of each level in each second picture, and determining the level corresponding to the target object region with the largest area as the initial level of the target object in each second picture;
correspondingly, the determining the initial level of the target object in each third picture according to the areas of the target object regions of different levels in each third picture includes:
determining target object regions of different levels in each third picture, calculating the area of the target object region of each level in each third picture, and determining the level corresponding to the target object region with the largest area as the initial level of the target object in each third picture.
Optionally, the picture processing module 606 is further configured to:
determining a target object region with a level smaller than the target level in each second picture as a second object region to be processed of the target object in each second picture, and determining a first ratio of the area of the second object region to be processed of the target object in each second picture to the area of the target object in each second picture;
determining a target object region with a level smaller than the target level in each third picture as a second object region to be processed of the target object in each third picture, and determining a second ratio of the area of the second object region to be processed of the target object in each third picture to the area of the target object in each third picture;
and determining the occupation ratio of a second object to be processed of the target object according to the first occupation ratio and the second occupation ratio.
Optionally, the picture processing module 606 is further configured to:
determining a first ratio of the area of the first object region to be processed in each second picture to the area of the target object in each second picture according to the area of the first object region to be processed in each second picture;
determining a second ratio of the area of the first object region to be processed in each third picture to the area of the target object in each third picture according to the area of the first object region to be processed in each third picture;
and determining the proportion of a first object to be processed of the target object according to the first proportion and the second proportion.
Optionally, the apparatus further comprises:
a duty ratio determination module configured to:
obtaining a third object area to be processed in the target object of each second picture and a third object area to be processed in the target object of each third picture;
processing a third object area to be processed in the target object of each second picture and a third object area to be processed in the target object of each third picture to obtain a target area of a third object to be processed in the target object;
and determining the proportion of a third object to be processed in the target object according to the area of a target region of the third object to be processed in the target object.
Optionally, the apparatus further comprises:
a confounding degree calculation module configured to:
and determining the degree of mixing of the target objects according to the distribution of the initial levels of the target objects in each second picture and the initial levels of the target objects in each third picture.
Optionally, the apparatus further comprises:
an object weight determination module configured to:
determining the weight of a loading object containing a target object and the weight of a loading object not containing the target object, and determining the difference value between the weight of the loading object containing the target object and the weight of the loading object not containing the target object as the weight of the target object.
Optionally, the weight calculating module 608 is further configured to:
inputting at least two of the proportion of the object to be processed, the degree of mixing of the target object and the weight of the target object into a regression model to obtain the target deduction weight of the target object.
Optionally, the first photographing apparatus, the second photographing apparatus, and the third photographing apparatus are different in position.
The target object processing device provided by the embodiment of the specification can capture pictures through the first shooting device and the second shooting device in the self-unloading process of a target object (such as scrap steel) to obtain rich multi-angle pictures, and can detect and identify various objects to be processed (such as various sundries) in the target object through processing the rich multi-angle pictures, so that the target deduction weight is finally calculated, manual operation is avoided, the labor cost is reduced, and the calculation accuracy of the target deduction weight is greatly improved.
The above is a schematic configuration of a target object processing apparatus of the present embodiment. It should be noted that the technical solution of the target object processing apparatus and the technical solution of the target object processing method belong to the same concept, and details that are not described in detail in the technical solution of the target object processing apparatus can be referred to the description of the technical solution of the target object processing method.
Referring to fig. 7, fig. 7 is a flowchart illustrating another target object processing method according to an embodiment of the present disclosure. Specifically, this other target object processing method includes the following steps.
Step 702: and displaying an image input interface for the user based on the call request of the user.
Step 704: and receiving a target object input by the user based on the image input interface, and determining a target identification area of the target object according to a first picture shot by first shooting equipment.
Step 706: and shooting the target object in the target identification area according to second shooting equipment to obtain a plurality of second pictures.
Step 708: and processing the plurality of second pictures to obtain the ratio of the objects to be processed in the target object.
Step 710: and determining the target deduction weight of the target object according to the proportion of the objects to be processed.
The target object processing method provided by the embodiment of the specification can realize picture snapshot through the first shooting device and the second shooting device in the self-unloading process of the target object (such as scrap steel) to obtain rich multi-angle pictures, realize detection and identification of various objects to be processed (such as various sundries) in the target object through processing the rich multi-angle pictures, finally calculate the target deduction weight, avoid manual operation, reduce labor cost and greatly improve the calculation accuracy of the target deduction weight.
The above is a schematic scheme of another target object processing method of the present embodiment. It should be noted that the technical solution of the other target object processing method belongs to the same concept as the technical solution of the one target object processing method, and details of the technical solution of the other target object processing method, which are not described in detail, can be referred to the description of the technical solution of the one target object processing method.
Referring to fig. 8, fig. 8 is a flowchart illustrating a further target object processing method according to an embodiment of the present disclosure. Specifically, this further target object processing method includes the following steps.
Step 802: receiving a call request sent by a user, wherein the call request carries a target object.
Step 804: and determining a target identification area of the target object according to the first picture shot by the first shooting device.
Step 806: and shooting the target object in the target identification area according to second shooting equipment to obtain a plurality of second pictures.
Step 808: and processing the plurality of second pictures to obtain the ratio of the objects to be processed in the target object.
Step 810: and determining the target deduction weight of the target object according to the proportion of the objects to be processed.
The target object processing method provided by the embodiment of the specification can realize picture snapshot through the first shooting device and the second shooting device in the self-unloading process of the target object (such as scrap steel) to obtain rich multi-angle pictures, realize detection and identification of various objects to be processed (such as various sundries) in the target object through processing the rich multi-angle pictures, finally calculate the target deduction weight, avoid manual operation, reduce labor cost and greatly improve the calculation accuracy of the target deduction weight.
The above is a schematic scheme of still another target object processing method of the present embodiment. It should be noted that the technical solution of the further target object processing method is the same as the technical solution of the above-mentioned target object processing method, and details of the technical solution of the further target object processing method, which are not described in detail, can be referred to the description of the technical solution of the above-mentioned target object processing method.
Corresponding to the above method embodiment, the present specification further provides another target object processing apparatus embodiment, and fig. 9 shows a schematic structural diagram of another target object processing apparatus provided in an embodiment of the present specification. As shown in fig. 9, the apparatus includes:
an interface presentation module 902 configured to present an image input interface for a user based on a call request of the user;
a region determining module 904 configured to receive a target object input by the user based on the image input interface, determine a target recognition region of the target object according to a first picture taken by a first photographing device;
a picture obtaining module 906 configured to obtain a plurality of second pictures by shooting the target object in the target recognition area according to a second shooting device;
a picture processing module 908 configured to process the plurality of second pictures to obtain a ratio of objects to be processed in the target object;
a weight calculation module 910 configured to determine a target deduction weight of the target object according to the occupation ratio of the object to be processed.
The target object processing device provided by the embodiment of the specification can capture pictures through the first shooting device and the second shooting device in the self-unloading process of a target object (such as scrap steel) to obtain rich multi-angle pictures, and can detect and identify various objects to be processed (such as various sundries) in the target object through processing the rich multi-angle pictures, so that the target deduction weight is finally calculated, manual operation is avoided, the labor cost is reduced, and the calculation accuracy of the target deduction weight is greatly improved.
The above is a schematic configuration of another target object processing apparatus of the present embodiment. It should be noted that the technical solution of the other target object processing apparatus belongs to the same concept as the technical solution of the other target object processing method; details that are not described in detail in the technical solution of the apparatus can be found in the description of the technical solution of the method.
Corresponding to the above method embodiments, the present specification further provides an embodiment of a target object processing apparatus, and fig. 10 shows a schematic structural diagram of another target object processing apparatus provided in an embodiment of the present specification. As shown in fig. 10, the apparatus includes:
a request receiving module 1002, configured to receive a call request sent by a user, where the call request carries a target object;
a region determination module 1004 configured to determine a target recognition region of the target object from a first picture taken by a first photographing apparatus;
a picture obtaining module 1006, configured to obtain a plurality of second pictures by shooting the target object in the target recognition area according to a second shooting device;
a picture processing module 1008, configured to process the plurality of second pictures to obtain a ratio of objects to be processed in the target object;
a weight calculation module 1010 configured to determine a target deduction weight of the target object according to the occupation ratio of the object to be processed.
The target object processing device provided by the embodiment of the specification can capture pictures through the first shooting device and the second shooting device in the self-unloading process of a target object (such as scrap steel) to obtain rich multi-angle pictures, and can detect and identify various objects to be processed (such as various sundries) in the target object through processing the rich multi-angle pictures, so that the target deduction weight is finally calculated, manual operation is avoided, the labor cost is reduced, and the calculation accuracy of the target deduction weight is greatly improved.
The above is a schematic configuration of still another target object processing apparatus of the present embodiment. It should be noted that the technical solution of the further target object processing apparatus belongs to the same concept as the technical solution of the above-mentioned target object processing method; details that are not described in detail in the technical solution of the apparatus can be referred to the description of the technical solution of the method.
FIG. 11 illustrates a block diagram of a computing device 1100 provided in accordance with one embodiment of the present description. The components of the computing device 1100 include, but are not limited to, memory 1110 and a processor 1120. The processor 1120 is coupled to the memory 1110 via a bus 1130 and the database 1150 is used to store data.
The computing device 1100 also includes an access device 1140, the access device 1140 enabling the computing device 1100 to communicate via one or more networks 1160. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the internet. The access device 1140 may include one or more of any type of network interface, e.g., a Network Interface Card (NIC), wired or wireless, such as an IEEE 802.11 Wireless Local Area Network (WLAN) wireless interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present description, the above-described components of computing device 1100, as well as other components not shown in FIG. 11, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 11 is for purposes of example only and is not limiting as to the scope of the present description. Those skilled in the art may add or replace other components as desired.
Computing device 1100 can be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smartphone), wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 1100 can also be a mobile or stationary server.
The processor 1120 is configured to execute computer-executable instructions, which when executed by the processor, implement the steps of the target object processing method described above.
The above is an illustrative scheme of a computing device of the present embodiment. It should be noted that the technical solution of the computing device and the technical solution of the target object processing method belong to the same concept, and details that are not described in detail in the technical solution of the computing device can be referred to the description of the technical solution of the target object processing method.
An embodiment of the present specification also provides a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the steps of the above-mentioned target object processing method.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as the technical solution of the target object processing method, and details that are not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the target object processing method.
An embodiment of the present specification further provides a computer program, wherein when the computer program is executed in a computer, the computer is caused to execute the steps of the target object processing method.
The above is an illustrative scheme of a computer program of the present embodiment. It should be noted that the technical solution of the computer program is the same as the technical solution of the target object processing method, and details that are not described in detail in the technical solution of the computer program can be referred to the description of the technical solution of the target object processing method.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code which may be in the form of source code, object code, an executable file or some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
It should be noted that, for the sake of simplicity, the foregoing method embodiments are described as a series of acts, but those skilled in the art should understand that the present embodiment is not limited by the described acts, because some steps may be performed in other sequences or simultaneously according to the present embodiment. Further, those skilled in the art should also appreciate that the embodiments described in this specification are preferred embodiments and that acts and modules referred to are not necessarily required for an embodiment of the specification.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present specification disclosed above are intended only to aid in the description of the specification. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the embodiments and the practical application, to thereby enable others skilled in the art to best understand and utilize the embodiments. The specification is limited only by the claims and their full scope and equivalents.