
WO2023084543A1 - System and method for leveraging a neural-network-based hybrid feature extraction model for grain quality analysis - Google Patents

System and method for leveraging a neural-network-based hybrid feature extraction model for grain quality analysis

Info

Publication number
WO2023084543A1
WO2023084543A1 (Application PCT/IN2022/050993)
Authority
WO
WIPO (PCT)
Prior art keywords
grain
grains
image
model
package
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IN2022/050993
Other languages
English (en)
Inventor
Subramanian Akkulan
Vignesh Kumar MANOGARAN
Elayaraja Padmanabhan
Hemanth KUMAR S
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Waycool Foods And Products Private Ltd
Original Assignee
Waycool Foods And Products Private Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Waycool Foods And Products Private Ltd filed Critical Waycool Foods And Products Private Ltd
Publication of WO2023084543A1
Current legal status: Ceased

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N33/00Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
    • G01N33/02Food
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0278Product appraisal
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/02Agriculture; Fishing; Forestry; Mining
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01GWEIGHING
    • G01G19/00Weighing apparatus or methods adapted for special purposes not provided for in the preceding groups
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0475Generative networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/09Supervised learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/094Adversarial learning

Definitions

  • the present invention relates to the field of artificial neural networks for grain quality analysis, and more specifically, to a system and method for analyzing quality measures related to grains using a Convolutional Neural Network (CNN) based prediction model that implements image classification techniques.
  • CNNs: Convolutional Neural Networks
  • Grain quality parameters are defined by several factors such as physical conditions (moisture content, bulk density, kernel size, kernel hardness, vitreousness, kernel density and damaged kernels), safety related factors (fungal infection, mycotoxins, insects and mites and their fragments, foreign material odor and dust) and compositional factors (milling yield, oil content, protein content, starch content and viability).
  • One conventional (manual) method of grain quality evaluation is visual inspection by a field inspector to ascertain physical dimensions and other quality parameters, which is challenging even for trained personnel and is compromised in terms of efficiency, reliability and accuracy.
  • the decision making capabilities of a grain inspector can be affected by his/her physical condition, such as fatigue and eyesight, mental state caused by work pressure, and working conditions such as lighting, climate, etc.
  • grains that mostly overlap make it difficult to predict parameters like grain count, grain size, and grain quality accurately.
  • this task therefore calls for automation: a developed image identification system with an edge-connect function would help identify the purity of grains through a technology-based solution.
  • EP publication EP3038054B1 discloses a grain quality monitoring method and devices to capture an image of bulk grain and apply a feature extractor to the image to determine a feature of the bulk grain in the image. A determination is made regarding a classification score for the presence of a classification of material, and the quality of the bulk grain in the image is determined based upon an aggregation of the classification scores for the presence of the classification of material.
  • the existing art does not take into consideration several factors such as moisture content, bulk density, kernel size, damaged kernels, fungal infection, insects and mites and their fragments, foreign material, oil content, protein content, starch content, viability etc. collectively to assess the grain quality parameters in real time.
  • the existing art utilizes various image processing techniques and sensors to classify bulk grains. Such a system does not capture multiple views of the food grains from different angles and hence fails to provide accurate quality parameters without ascertaining the above-mentioned features.
  • the embodiments of the present disclosure provide a neural network based food - grain (crop) detection and classification system for providing a better approach for the identification and classification of different types of grains based on color and geometrical features using probabilistic neural network and image processing concepts.
  • useful grain features are predicted directly from the raw representations of input data using Convolutional Neural Networks (CNN), and intuitions of the selected features based on a De-convolutional Network (DN) approach are gained.
  • CNN: Convolutional Neural Network
  • DN: De-convolutional Network
  • the proposed system has the potential to replace manual (visual) methods of inspection and gain wide acceptance in industries as a tool for quality evaluation of numerous agricultural products.
  • the primary objective of the present embodiments is to analyze quality measures related to grains using deep learning based prediction model with high accuracy and speed.
  • Another objective of the invention is to identify purity of grains by identifying and classifying the food grain image samples using probabilistic neural network based prediction model which works on the principle of artificial intelligence and machine learning.
  • Another objective of the invention is to provide a better approach for the identification of different types of grains and rice quality based on color and geometrical features using probabilistic neural networks and intensive image processing concepts.
  • Another objective of the invention is to analyze the grain quality based on parameters classified as milling efficiency, grain shape and appearance, foreign matter, insect infestation, microbial infection, discolored grains, cooking and edibility characteristics, moisture, nutritional quality, not limited thereto.
  • One of the objectives of the deep learning based prediction model is to generate findings such as categories of grains. These findings are then combined with various sensory inputs to generate quality measures like grain count, size and other quality parameters by selecting appropriate feature extraction models, which are a combination of object detection and object classification models. Therefore, based on the combined results, the deep learning based prediction model gains insights into the design of new hybrid feature extraction models for improving the grain quality parameters.
  • Another objective of the present embodiment is to leverage a hybrid feature extraction that is a combination of two approaches: an object detection model and an object classification model. The goal of combining the predictions of a set of models is to form an improved predictor.
  • Another objective of the proposed embodiments is to provide grain quality measures by employing a cloud-based deep learning solution consisting of a multi-layer continuous learning architecture with a feedback mechanism for prediction of grain health in both offline and online states.
  • Yet another objective of the present disclosure is to develop a robust and efficient system based on hybrid feature extraction models for detection and classification of food grains with accuracy and speed, which can act as a Trust Machine for determining the quality parameters of the grains.
  • Yet another objective of the invention is real-time price prediction based on defects evaluated from the grain size, count and other quality parameters of the food grains, combining historical data collected by the deep learning model with previous customer ratings and reviews to deliver quality predictions, with high accuracy and speed, that give satisfactory results to customers.
  • Embodiments of the invention provide techniques for identifying, measuring, and analyzing various quality parameters related to grains in order to predict consumer purchasing behaviors.
  • the disclosed techniques use probabilistic neural network based prediction model which works on the principle of artificial intelligence and machine learning.
  • the disclosed system essentially consists of a prediction model implementing deep learning principles in order to detect and classify grains more reliably, accurately and quickly than conventional methods.
  • a system for analyzing grain quality parameters comprises a plurality of database systems for storing data related to grains, a package holding module configured to hold the grains or grains package, and an image identification unit interconnected to the package holding module and in communication with the plurality of database systems via a cloud based network.
  • the package holder holds the grains or grains package so that one or more views of the grain images (front view, top view, side views) can be captured and converted into a 3D image.
  • An image identification unit comprises an image capture device, a processor, and a user interface.
  • the image capture device is configured to obtain images of the grains package placed in the package holding module, which is rotated to cover one or more views of the grains package, the one or more views being defined by one or more cameras. The processor is configured to process and store the grain image captured based on the one or more views of the grains package.
  • a user interface is used to manage stored images and to provide text input to the system.
  • the image identification unit is connected to a deep learning based prediction model and provides processed grain image as an input.
  • a deep learning based prediction model analyzes the image input with respect to one or more views of the grains package, selects the appropriate grain model based on the image input, and generates a prediction output in the form of a grain type based on the selected grain model by comparing each input to a reference grain image stored in a database.
  • the system comprises a sensing unit equipped with a plurality of sensors placed inside the package holder and positioned to measure the weight and humidity of the grains package with the help of weight and humidity sensors, respectively, in order to generate various sensor inputs. The sensor input and the prediction output, generated by the sensor unit and the prediction model respectively, are combined and fed into the hybrid feature extraction model.
  • the hybrid feature extraction model further analyzes the combined inputs and applies various image detection and classification techniques to generate the quality parameters related to grains.
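The combining step described above can be illustrated in miniature. The sketch below is an assumption-laden stand-in, not the patented implementation: it merges a hypothetical detector's per-grain sizes, a hypothetical classifier's labels, and weight/humidity sensor readings into one quality report. All function and key names are invented for illustration.

```python
# Illustrative sketch only (not the patented model): fuse detection
# output, classification output, and sensor input into quality measures.

def combine_quality_parameters(detections, labels, sensor_input):
    """detections: list of (width_mm, height_mm) per detected grain;
    labels: parallel list of class names from the classifier;
    sensor_input: dict with 'weight_g' and 'humidity_pct' (hypothetical keys)."""
    count = len(detections)
    good = sum(1 for lbl in labels if lbl == "good")
    broken = sum(1 for lbl in labels if lbl == "broken")
    avg_size = (sum(w * h for w, h in detections) / count) if count else 0.0
    return {
        "grain_count": count,                       # from the detector
        "good_ratio": good / count if count else 0.0,
        "broken_ratio": broken / count if count else 0.0,
        "avg_size_mm2": avg_size,
        "moisture_proxy": sensor_input["humidity_pct"],  # from the sensor unit
        "sample_weight_g": sensor_input["weight_g"],
    }

report = combine_quality_parameters(
    [(6.0, 2.0), (5.5, 2.1), (3.0, 1.0)],
    ["good", "good", "broken"],
    {"weight_g": 250.0, "humidity_pct": 12.5},
)
```

A real system would derive these values from model inference rather than hand-built lists; the point is only that count, size and sensor-derived measures come out of one fused record.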
  • the predicted results are further fed into the prediction model that undergoes continuous learning by training with the deep learning based complex network that learns through a feedback mechanism to further improve the accuracy of quality measures.
  • a method for evaluating grain quality parameters is disclosed.
  • the method is implemented by the system for evaluating grain quality parameters.
  • the method includes: placing the grains or grains package in order to capture one or more views of the grain images and convert them into a 3D image; obtaining the images of the grains package placed in a package holding module, which is rotated to cover one or more views of the grains package, the one or more views being defined by one or more cameras; processing and storing the grain image captured by the image capture device in a database; receiving an input grain image by a deep learning based prediction model connected to an image identification unit; analyzing the captured grain image based on one or more views of the grains package; selecting an appropriate grain model based on the processed grain image; generating a prediction output based on the selected grain model in the form of a grain type by comparing each input to a reference grain image stored in a database; and generating a sensor input by means of a sensor unit equipped with one or more sensors placed inside the package holder and positioned to measure the weight and humidity of the grains package.
  • the method further includes the hybrid feature extraction model analyzing the combined input and applying various image detection and classification techniques to generate the quality parameters related to grains. Subsequently, the predicted results are fed into the prediction model to undergo continuous learning by training with the deep learning based complex network, which learns through a feedback mechanism to further improve the accuracy of the predicted quality measures.
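The ordering of the method steps can be condensed into a pipeline of stand-in stage functions. Every function below is a hypothetical placeholder for the corresponding component in the disclosure (view fusion, prediction model, sensing unit), not an implementation of it.

```python
# Pipeline sketch with hypothetical stage functions; placeholders only.

def fuse_views(views):
    """Stand-in for combining front/top/side captures into a 3D image."""
    return {"views": list(views)}

def predict_grain_type(image3d, text_hint=None):
    """Stand-in for the deep learning prediction model."""
    return "rice" if text_hint and "rice" in text_hint else "unknown"

def read_sensors():
    """Stand-in for the sensing unit (weight + humidity)."""
    return {"weight_g": 500.0, "humidity_pct": 12.0}

def analyze_sample(views, text_hint=None):
    image3d = fuse_views(views)                       # capture + 3D fusion
    grain_type = predict_grain_type(image3d, text_hint)  # prediction output
    sensors = read_sensors()                          # sensor input
    quality_index = 1.0 - sensors["humidity_pct"] / 100.0  # toy measure
    return {"grain_type": grain_type, **sensors,
            "quality_index": quality_index}

result = analyze_sample(["front", "top", "side"], text_hint="basmati rice")
```

The value of sketching it this way is the data flow: prediction output and sensor input exist independently and are only merged in the final combining step.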
  • FIG. 1 illustrates a block diagram depicting a system of evaluating grain quality parameters
  • FIG. 2 is a flow chart depicting steps of advanced image processing techniques used for grain quality prediction
  • FIG. 3(a) illustrates a scenario with one input that goes into the prediction model in the form of image capture for predicting grain category
  • FIG. 3(b) illustrates a scenario with two inputs which is a combination of the captured image with optional text input that goes into the prediction model for predicting grain category;
  • FIG. 4 depicts the hybrid feature extraction model as a combination of object detection and object classification model to generate grain quality parameters
  • FIG. 5 is a diagram that illustrates an Edge detection function performed by the hybrid feature extraction model
  • FIG. 6 illustrates the input image sample and predicted outputs in the form of size and quality parameters.
  • FIG. 7 illustrates a block diagram depicting a method for assessing grain quality parameters
  • FIG. 8 is a block diagram depicting offline and online mode of operations performed by neural network based prediction model
  • FIG. 9 is a hierarchical neural network based prediction model with deep learning for predicting the quality parameters
  • Image Identification Unit refers to a computer system supported by hardware devices such as input devices, one or more processors, and memory, configured to carry out the methods disclosed.
  • the Image Identification unit is a combination of “user device” (e.g., desktop computer, laptop computer, smartphone, personal digital assistant, tablet or other computing device) equipped with an “image capture device” (one or more cameras) and is connected to a database via cloud network.
  • user device: e.g., desktop computer, laptop computer, smartphone, personal digital assistant, tablet or other computing device
  • image capture device: one or more cameras
  • grain or “food grains” or “commodity” can be used interchangeably and include at least one of the following: wheat, rice, corn, pulses such as green moong, chana dal, red gram, bengal gram, black gram, etc.
  • sensor data refers to input generated from the multi-sensor device equipped with weight sensor, humidity sensor, etc. for detecting plurality of parameters like protein, moisture, starch, fat, etc.
  • a “neural network model” or “deep learning prediction model”, or simply “prediction model”, “deep learning” hereinafter refers to any model that uses at least one of the machine learning operations to predict the parameters related to grain quality evaluation which includes grain category, count, size, quality index etc. and is trained on information comprising various inputs using one or more deep learning operations.
  • quality parameters refers to any measure of quality of the commodity/food grains, such as good grains, foreign material, broken seeds, immature seeds, shrunken seeds, husk, split grains, damaged grains, quality index, moisture content, quality of batch with the help of sample weight, humidity factor, etc., or other quality parameters as described herein. Quality parameters tend to vary from commodity to commodity. Broadly, the number of quality parameters ranges from 5 to 10.
  • Edge Detection Algorithm or “Edge Detection Function” are used interchangeably in the foregoing paragraphs; the function converts raw images into sketches of the edges of grains, is capable of hallucinating edges in missing regions of the grain image from the pixel intensities of the rest of the image, and finally estimates the RGB pixel intensities of the missing regions.
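The edge-sketching part of such a function can be illustrated with a classic gradient filter. The snippet below is a minimal sketch assuming a Sobel-style operator; the edge-connect/hallucination stage for missing regions described above is deliberately not reproduced.

```python
# Sobel gradient-magnitude edge map on a 2D list of grayscale values.
# Illustrative only; the threshold value is an arbitrary assumption.

def sobel_edges(img, threshold=2.0):
    """img: 2D list of grayscale intensities; returns a binary edge map."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            edges[y][x] = 1 if (gx * gx + gy * gy) ** 0.5 > threshold else 0
    return edges

# A vertical step between dark (0) and bright (9) columns yields edges
# along the boundary in the interior of the image.
img = [[0, 0, 9, 9]] * 4
edge_map = sobel_edges(img)
```

Production systems would typically use a library edge detector rather than hand-rolled loops; the principle of thresholded gradient magnitude is the same.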
  • database may refer to either a body of data, a relational database management system (RDBMS), or to both.
  • RDBMS: relational database management system
  • a database may include any collection of data including hierarchical databases, relational databases, flat file databases, object-relational databases, object-oriented databases, and any other structured collection of records or data that is stored in a computer system.
  • RDBMS include, but are not limited to Oracle® Database, IBM® DB2, Microsoft® SQL Server, Sybase®, and PostgreSQL.
  • any database may be used that enables the systems and methods described herein.
  • the terms “software” and “firmware” are interchangeable, and include any computer program stored in memory for execution by a processor, including RAM memory, ROM memory, EPROM memory, EEPROM memory, and non-volatile RAM (NVRAM) memory.
  • RAM: random access memory
  • ROM: read-only memory
  • EPROM: erasable programmable read-only memory
  • EEPROM: electrically erasable programmable read-only memory
  • NVRAM: non-volatile RAM
  • ANN: Artificial Neural Network
  • ANN based models can take multiple inputs, generate reasoning based on mathematical algorithms, and predict results on the basis of that reasoning. Additionally, such models can be trained instead of being programmed exhaustively. Deep learning is a concept that allows a Neural Network (NN) based intelligent system to learn from past and statistical data in order to generate accurate results and improve the result each time the model performs a prediction based on real-time values and parameters.
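The idea of "training instead of programming" can be shown with the smallest possible example: a single neuron whose weights adjust from labelled examples via feedback. This is a toy sketch with invented data (grain lengths), not a component of the disclosed system.

```python
# Single-neuron (perceptron-style) learning rule: weights adapt from
# examples instead of being hand-coded. Toy data and thresholds.

def train_neuron(samples, epochs=50, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = 1 if w * x + b > 0 else 0
            w += lr * (target - y) * x      # feedback-driven weight update
            b += lr * (target - y)
    return w, b

# Classify grain length (hypothetical): long (> ~6 units) -> 1, short -> 0.
samples = [(4.0, 0), (5.0, 0), (7.0, 1), (8.0, 1)]
w, b = train_neuron(samples)
predict = lambda x: 1 if w * x + b > 0 else 0
```

Deep networks generalize this: many such units, stacked in layers, with gradient-based rather than perceptron updates, but the feedback loop between prediction error and weight change is the same mechanism.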
  • NN: Neural Network
  • Image classification involves image processing, image analysis, and edge detection and classification techniques. Pre-processing of the images is performed to improve their quality, reduce noise or correct lighting problems.
  • image analysis refers to the process of separating regions of interest from other regions to extract information.
  • Image pre-processing includes operations such as grayscale adjustment, focus correction, contrast or sharpness enhancement, and noise reduction.
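Two of the pre-processing operations named above can be sketched directly: linear contrast stretching and 3x3 mean-filter noise reduction. Parameter choices here are illustrative assumptions, not values from the disclosure.

```python
# Contrast stretch (rescale intensities to [lo, hi]) and a 3x3 mean
# filter for noise reduction, on 2D lists of grayscale values.

def contrast_stretch(img, lo=0, hi=255):
    flat = [p for row in img for p in row]
    mn, mx = min(flat), max(flat)
    if mx == mn:                       # flat image: nothing to stretch
        return [row[:] for row in img]
    scale = (hi - lo) / (mx - mn)
    return [[lo + (p - mn) * scale for p in row] for row in img]

def mean_filter(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]      # borders kept as-is
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + j][x + i]
                            for j in (-1, 0, 1) for i in (-1, 0, 1)) / 9
    return out

img = [[10, 10, 10], [10, 100, 10], [10, 10, 10]]   # one noisy pixel
smoothed = mean_filter(contrast_stretch(img))
```

In practice these steps would run on full-resolution camera frames via an image library; the tiny grid just makes the arithmetic checkable.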
  • Edge detection and classification techniques are used to convert raw images to sketch the edges of grains and to segregate them from foreign material.
  • the Image Classification Techniques can be categorized as parametric and nonparametric or supervised and unsupervised, as well as hard and soft classifiers.
  • in supervised classification, the technique delivers results based on the decision boundary created, which mostly relies on the input and output provided while training the model.
  • in unsupervised classification, the technique provides the result based on its own analysis of the input dataset; features are not directly fed to the model.
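The supervised/unsupervised contrast can be made concrete on one-dimensional "grain size" features. Both routines below are toy illustrations with invented data, not the classifiers used by the system: the supervised one needs labelled pairs, the unsupervised one discovers groups on its own.

```python
# Supervised: nearest-centroid classifier built from labelled pairs.
def nearest_centroid_predict(train, x):
    sums, counts = {}, {}
    for value, label in train:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    centroids = {lbl: sums[lbl] / counts[lbl] for lbl in sums}
    return min(centroids, key=lambda lbl: abs(centroids[lbl] - x))

# Unsupervised: 1-D k-means finds groups without any labels.
def kmeans_1d(values, k=2, iters=20):
    centers = sorted(values)[:k]       # naive init: k smallest values
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda i: abs(centers[i] - v))].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

train = [(5.0, "broken"), (5.5, "broken"), (9.0, "whole"), (9.5, "whole")]
label = nearest_centroid_predict(train, 9.2)
centers = sorted(kmeans_1d([5.0, 5.5, 9.0, 9.5]))
```

Note that k-means recovers roughly the same two group centres that the labelled training data implies, without ever seeing the labels.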
  • the main steps involved in image classification techniques are determining a suitable classification system, feature extraction, selecting good training samples, image pre-processing and selection of appropriate classification method, post-classification processing, and finally assessing the overall accuracy.
  • the inputs are usually an image of a specific object
  • the outputs are the predicted classes that define and match the input objects.
  • Convolutional Neural Networks are the most popular neural network model that is used for image classification problems.
  • the convolutional neural network prediction model with deep learning operations makes use of a hierarchical (layered) architecture consisting of many hidden layers forming complex networks that learn with a feedback mechanism.
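One "hidden layer" of that hierarchical architecture can be written out explicitly: a convolution, a ReLU non-linearity, and a max-pool. This is a hand-traced sketch with a fixed toy kernel; real CNNs stack many such stages with learned kernels.

```python
# One convolution -> ReLU -> 2x2 max-pool stage in plain Python.

def conv2d_valid(img, kernel):
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(kernel[j][i] * img[y + j][x + i]
                 for j in range(kh) for i in range(kw))
             for x in range(len(img[0]) - kw + 1)]
            for y in range(len(img) - kh + 1)]

def relu(fmap):
    return [[max(0.0, v) for v in row] for row in fmap]

def max_pool2(fmap):
    return [[max(fmap[y][x], fmap[y][x + 1],
                 fmap[y + 1][x], fmap[y + 1][x + 1])
             for x in range(0, len(fmap[0]) - 1, 2)]
            for y in range(0, len(fmap) - 1, 2)]

img = [[1, 0, 0, 0, 1],
       [0, 1, 0, 1, 0],
       [0, 0, 1, 0, 0],
       [0, 1, 0, 1, 0],
       [1, 0, 0, 0, 1]]
edge_kernel = [[1, 0, -1], [0, 0, 0], [-1, 0, 1]]   # toy diagonal detector
features = max_pool2(relu(conv2d_valid(img, edge_kernel)))
```

Stacking several of these stages and learning the kernels from data is what lets the network move from low-level edges to the higher-level grain features the disclosure describes.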
  • Such a deep learning based Agriculture Intelligence System facilitates iterative analysis of data from past practice and predicts results using a mathematical approach to build intelligent machines.
  • the subject matter described herein relates generally to techniques for identifying, measuring, and analyzing various quality parameters related to grains in order to benchmark the price of the grain and predict the consumer purchasing behaviors based on the collected historical data.
  • the disclosed techniques implement probabilistic neural network based prediction models, which work on the principles of artificial intelligence and machine learning.
  • Machine learning based self-learning systems have the potential to replace manual methods of inspection which is compromised in terms of efficiency, reliability, and accuracy.
  • the decision making capabilities of a grain inspector can be affected by his/her physical condition, such as fatigue and eyesight, mental state caused by work pressure, and working conditions such as lighting, climate, etc., which can influence the prediction of grains.
  • Machine Learning based prediction model with deep learning can accurately classify the grain kernels and improve the result based on the past prediction to provide rapid and accurate information about external quality aspects of food grains.
  • the proposed system analyzes the grain quality based on parameters classified as milling efficiency, grain shape and appearance, foreign matter, insect infestation, microbial infection, discolored grains, cooking and edibility characteristics, moisture, nutritional quality, etc.
  • the prediction of grain quality parameters employs sensing mechanism using one or more sensors, advanced image classification techniques for image identification and deep learning solution with multi-layer continuous learning architecture with feedback mechanism for prediction of grain health in both offline and online state.
  • the main purpose of the disclosure is to offer a robust and efficient system based on a prediction model for evaluating quality parameters in order to reduce the required effort, cost and time.
  • the prediction model uses Hybrid Feature Extraction Models for detection and classification of food grains with accuracy, which can act as a Trust Machine for determining the quality parameters of the grains.
  • the hybrid feature extraction model is a combination of two novel methods for combining predictors, i.e., one for the task of Object Detection, and the other for the task of Object Classification. The goal of combining the predictions of a set of models is to form an improved predictor.
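The simplest way to see why combining predictors can help is probability averaging across models. The sketch below is a stand-in for the combining scheme, with invented class names and scores; the SVD-based filtering described later in the disclosure is not shown here.

```python
# Minimal ensemble: average class-probability vectors from several
# predictors, then take the argmax. All scores below are hypothetical.

def combine_predictions(prob_vectors, classes):
    n = len(prob_vectors)
    avg = [sum(v[i] for v in prob_vectors) / n
           for i in range(len(classes))]
    best = max(range(len(classes)), key=lambda i: avg[i])
    return classes[best], avg

classes = ["good", "broken", "foreign"]
model_outputs = [
    [0.6, 0.3, 0.1],   # detector-derived scores (hypothetical)
    [0.5, 0.4, 0.1],   # classifier scores (hypothetical)
    [0.7, 0.2, 0.1],   # a third model's scores (hypothetical)
]
label, avg = combine_predictions(model_outputs, classes)
```

Averaging dampens the idiosyncratic errors of any single model; more elaborate schemes (weighted voting, SVD-based redundancy filtering) refine which model is trusted where.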
  • the disclosed techniques help to analyze a wide range of grains and oilseeds for moisture, protein, oil and many other parameters with high efficiency, using various image classification techniques and principles of deep learning to produce accurate results that improve through feedback and self-learning.
  • the proposed approach is based on multi-level representation and abstraction of grain data.
  • the system demonstrates the hierarchical transformation of grain-related features from lower-level to higher-level abstraction, corresponding to various species and classes of grains.
  • the disclosed deep learning based prediction model generates findings such as categories of grains. These findings are then combined with various sensory inputs to generate grain count, size and quality parameters by selecting appropriate feature extraction models, which are a combination of object detection and object classification models. Based on the combined results, the deep learning based prediction model gains insights into the design of new hybrid feature extraction models.
  • deep learning based prediction models leverage appropriate hybrid feature extraction models to further improve the discriminative power of grain quality parameters systems.
  • the present embodiment demonstrates how a combining scheme can rely on the stability of consensus opinion and, at the same time, capitalize on the unique contributions of each model.
  • by combining approaches, the hybrid feature extraction model satisfies these criteria, relying upon Singular Value Decomposition as a tool for filtering out the redundancy and noise in the predictions of the learned models, and for characterizing the areas of the sample space where each model is superior.
  • the system of evaluating quality measures by combining different approaches aids in avoiding false predictions without discarding any learned models. Therefore, the unique contributions of each model can still be discovered and exploited.
  • An added advantage of the combining algorithms derived in this disclosure is that they are not limited to models generated by a single algorithm; they may be applied to model sets generated by a diverse collection of computer vision algorithms combined with machine learning.
  • One of the purposes of the disclosed embodiment is to provide real-time pricing of the food grains after ascertaining defects in the grains by evaluating the size and count of the grain sample and comparing those samples with the reference images and features stored in the cloud database, to generate predictions in both online and offline mode.
  • the model will update automatically as soon as it is connected to the internet.
  • the grain quality parameters are evaluated on the basis of grain size, count and other quality measures of the food grains, such as quality index, moisture content, quality of batch with the help of sample weight, humidity factor, etc., by combining historical data collected by the deep learning model for quality prediction.
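A defect-discounted price is one plausible shape for the pricing step. The rule below is entirely hypothetical for illustration: the base price, the weights, and the quality keys are invented, not taken from the disclosure.

```python
# Hypothetical pricing rule: discount a base price by a weighted
# defect score derived from predicted quality parameters.

def benchmark_price(base_price, quality):
    defect_score = (0.5 * quality["broken_ratio"]      # weights are
                    + 0.3 * quality["foreign_ratio"]   # illustrative
                    + 0.2 * quality["moisture_excess"])# assumptions
    return round(base_price * (1.0 - min(defect_score, 1.0)), 2)

quality = {"broken_ratio": 0.10, "foreign_ratio": 0.05, "moisture_excess": 0.0}
price = benchmark_price(40.0, quality)   # per-kg price in arbitrary units
```

In the disclosed system the discount would instead be calibrated against historical price data and customer ratings; the sketch only shows quality parameters flowing into a price.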
  • FIG. 1 presents a system for analyzing the grain quality parameters, which comprises a database system, an image identification unit 101 and a package/grain holding module 102 for holding the grains or grains package.
  • An image identification unit 101 is interconnected to the package holding module 102 and is in communication with a plurality of database systems via a cloud-based network.
  • the package holder/grain holder holds the grains or grains package and can be rotated in order to capture one or more views of the grain images (front view, top view, side views) captured by one or more cameras.
  • An image identification unit 101 essentially consists of: an image capture device 101A, usually a camera, for capturing the image of the grains placed on the package holder; a processor 101B for processing the captured grain images, analyzing the images based on one or more views of the package holder and storing them in the database system via the cloud based network for future reference; and a user interface 103C for handling image-related functions and providing text input if needed.
  • The grain image is sent to the deep learning based prediction model 301, which automatically predicts the type of grain by mapping and comparing the captured image against the reference grain images stored in the cloud database.
  • The deep learning based prediction model 301 selects the appropriate grain model to predict the grain category as an output, and the hybrid feature extraction model 401 accordingly generates the quality measures 410Y.
  • The system also comprises a sensing unit 103 equipped with a plurality of sensors, such as a weight sensor and a humidity sensor, positioned to measure the weight and humidity of the grains placed inside the package in order to generate various sensor inputs.
  • The sensor input and the prediction output, generated by the sensor unit 103 and the prediction model 301 respectively, are combined and fed into the hybrid feature extraction model 401.
  • The hybrid feature extraction model 401 is a combination of detection and classification models that further analyze the combined input and apply various image classification techniques for the quality evaluation of food grains, giving a final output in the form of quality measures related to the grains, including but not limited to the count, sizes, moisture content, quality of the batch with the help of the sample weight, humidity, and quality parameters such as good grains, foreign material, broken seeds, immature seeds, shrunken seeds, husk, split grains, damaged grains, etc. Quality parameters tend to vary from commodity to commodity; broadly, their number ranges from 5 to 10.
  • FIG. 2 depicts the steps of the advanced image classification techniques used throughout the proposed system to generate classified food grains, along with quality parameters, from the grain image.
  • The first step, Image Acquisition 201, is performed by the image identification unit 101, which captures an image of the food grains lying on the package holder 102 with the help of the image capture device 101A; the acquired image then passes through the image Pre-Processing step 202, which enhances the captured image to remove noise and performs resizing.
  • The third step, Image Segmentation 203, identifies the patches and performs image segmentation.
  • The next step, Feature Extraction 204, is performed by the prediction model 301 by selecting an appropriate hybrid feature extraction model 401 on the basis of various nuances of the grain type, in order to generate the grain count, size, and other quality parameters.
  • The prediction model output 301Y, along with the other sensory inputs generated by the Sensor Unit 103, goes into the hybrid feature extraction model 401 for the next step, Classification 205, as depicted in FIG. 2, which classifies the food grain image samples using neural network based detection and classification models; the Quality Analysis step 206, as depicted in FIG. 2, then generates the output in the form of classified food grains along with the count, size, and other quality parameters expressed as percentages.
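The five-step pipeline described above (acquisition, pre-processing, segmentation, feature extraction, classification) can be sketched end to end in a few lines of NumPy. This is a minimal illustration, not the disclosed implementation: the synthetic image, the mean-filter pre-processing, the fixed segmentation threshold, and the rule standing in for the neural classifier are all assumptions made for the example.

```python
import numpy as np

def preprocess(image, size=(8, 8)):
    """Pre-processing 202: denoise with a 3x3 mean filter and crop-resize."""
    padded = np.pad(image, 1, mode="edge")
    smooth = np.zeros_like(image, dtype=float)
    for dy in range(3):
        for dx in range(3):
            smooth += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    smooth /= 9.0
    return smooth[:size[0], :size[1]]

def segment(image, threshold=0.5):
    """Segmentation 203: pixels brighter than the threshold are grain."""
    return image > threshold

def extract_features(mask):
    """Feature extraction 204 (toy): grain pixel count and bounding-box size."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return {"count": 0, "height": 0, "width": 0}
    return {"count": len(ys),
            "height": int(ys.max() - ys.min() + 1),
            "width": int(xs.max() - xs.min() + 1)}

def classify(features, min_pixels=4):
    """Classification 205 (toy rule standing in for the neural classifier)."""
    return "grain" if features["count"] >= min_pixels else "foreign matter"

# Acquisition 201: a synthetic 8x8 "grain image", a bright 3x3 blob on black.
img = np.zeros((8, 8))
img[2:5, 2:5] = 1.0
feats = extract_features(segment(preprocess(img)))
label = classify(feats)
```

The real system would replace the toy feature extractor and classifier with the prediction model 301 and the hybrid feature extraction model 401, but the data flow between the stages is the same.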
  • The image captured by the image capture device 101A of the image identification unit 101 goes as an input to the prediction model 301, as depicted in FIG. 3(a), to generate a predicted output in the form of a grain category, i.e. the type of grain.
  • The prediction model selects quality parameters based on the grain type; their number varies from 5 to 10.
  • The captured image can optionally be combined with a text input, as shown in FIG. 3(b), and fed as an input to the prediction model 301, which then identifies the grain type and generates a grain category as an output.
  • The hybrid feature extraction model 401 is a combination of different models that are selected based on the grain category generated by the prediction model. Based on the nuances, in terms of complexity and variation, of the predicted output, an appropriate grain model is selected to improve the accuracy of quality-parameter prediction at high processing speed.
  • FIG. 4 depicts a scenario where the hybrid feature extraction model 401 is shown as a combination of object detection and classification models that further analyze the combined input generated by the prediction model 301 and the sensor unit 103, applying various image classification techniques to give a final output in the form of a quality prediction 401Y related to the grains, including but not limited to the count, size, quality parameters such as the percentages of foreign matter, broken seeds, damaged seeds, immature seeds, shrunken/shriveled seeds, weevilled seeds, green seeds, split grains, etc., moisture content, and quality of the batch with the total weight of the good grains. Quality parameters tend to vary from commodity to commodity; broadly, their number ranges from 5 to 10.
  • This design provides a better approach for the identification of different types of grains based on color and geometrical features using probabilistic neural networks and image processing concepts. More than 120 images were used to test the system, and the accuracy of grain identification was found to be 100%, whereas the accuracies of identifying the quality of the grains and their grade were 92% and 91%, respectively, for each grain type.
  • The hybrid feature extraction model acts as a combined feature model, i.e. a combination of a detection model and a classification model.
  • The hybrid feature extraction model uses an edge detection function, illustrated in FIG. 5, which is applied over the image to measure the sizes, count, and quality parameters of the respective grains.
  • The edge detection algorithm converts raw images into sketches of the edges of the grains. It consists of the following steps: noise reduction, gradient calculation, non-maximum suppression, and double thresholding.
  • The edge detection function produces the missing edges with the help of a generator G1 500, as depicted in FIG. 5, which takes a grain image with missing edges as an input 501 and performs edge generation 502 on it using an end-to-end trainable network.
  • The edge detection algorithm illustrated in FIG. 5 is based on a two-stage process, the first stage being Edge Detection 503 and Edge Generation 504, which is capable of hallucinating edges in the missing regions of the grain image from the grayscale pixel intensities of the rest of the image.
  • The second stage uses an image completion network for filling in the missing edges; it takes the hallucinated edges generated by the first stage and estimates the RGB pixel intensities of the missing regions.
  • The image completion network combines the edges in the missing regions with color and texture information from the rest of the image to fill in the missing regions. Both stages follow a machine learning based adversarial framework to ensure that the hallucinated edges are generated.
  • The adversarial network is based on an unsupervised learning model that has a generator G1 500, as depicted in FIG. 5, which produces the missing edges 502 as output based on its own analysis of the input dataset 501, and a discriminator, which receives input either from the generator or from a real dataset and has to distinguish between the two, treating one as real and the other as fake.
  • An end-to-end trainable network is provided that combines the edge generation and image completion networks to fill in missing regions exhibiting fine details.
  • The generator G1, as shown in FIG. 5, follows an architecture similar to the method proposed by Johnson, which has achieved good results for style transfer, super-resolution, and image-to-image translation. Specifically, the generators consist of encoders that down-sample twice, followed by eight residual blocks, and decoders that up-sample images back to their original size. Dilated convolutions with a dilation factor of two are used instead of regular convolutions in the residual layers, resulting in an enlarged receptive field at the final residual block. For the discriminators, a 70x70 PatchGAN architecture is used, which determines whether or not overlapping image patches of size 70x70 are real. Instance normalization is used across all layers of the network.
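The effect of the dilated convolutions on the receptive field can be illustrated with a small calculator: each layer with kernel k, stride s, and dilation d has an effective kernel k_eff = d*(k-1)+1 and enlarges the receptive field by (k_eff-1) times the cumulative stride of the preceding layers. The layer list below (two stride-2 down-sampling 3x3 convolutions followed by eight residual blocks with one dilated 3x3 convolution each) is an assumption made for illustration, not the exact disclosed architecture.

```python
def receptive_field(layers):
    """layers: iterable of (kernel, stride, dilation) tuples, input to output."""
    rf, jump = 1, 1
    for k, s, d in layers:
        k_eff = d * (k - 1) + 1          # dilation widens the effective kernel
        rf += (k_eff - 1) * jump          # growth scales with cumulative stride
        jump *= s
    return rf

# Sanity check: two 3x3 stride-1 convolutions stack to a 5x5 receptive field.
simple = receptive_field([(3, 1, 1), (3, 1, 1)])

# Assumed encoder (two stride-2 convs) plus 8 residual blocks whose 3x3
# convolutions use dilation 2, versus the same stack without dilation.
dilated = receptive_field([(3, 2, 1), (3, 2, 1)] + [(3, 1, 2)] * 8)
plain = receptive_field([(3, 2, 1), (3, 2, 1)] + [(3, 1, 1)] * 8)
```

With these assumed layers, the dilated stack sees a 135-pixel field against 71 pixels for regular convolutions, showing why dilation is preferred in the residual layers: the field widens without extra parameters or further down-sampling.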
  • The training labels, i.e. the edge maps, are generated using the Canny edge detector, an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in images.
  • The sensitivity of the Canny edge detector is controlled by the standard deviation σ of the Gaussian smoothing filter. For the tests conducted, it was empirically found that σ ≈ 2 yields the best results.
  • the effect of the quality of edge maps on overall image completion has been investigated.
  • Two kinds of masks, regular and irregular, have been used. Regular masks are square masks of a fixed size (25% of the total image pixels) centered at a random location within the image.
  • Irregular masks are augmented by introducing four rotations (0°, 90°, 180°, 270°) and a horizontal reflection for each mask. They are classified based on their sizes relative to the entire image in increments of 10% (e.g., 0-10%, 10-20%, etc.).
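The mask augmentation and binning described above can be sketched as follows: four rotations plus a horizontal reflection of each rotation give eight variants per mask, and a coverage ratio bins masks in 10% increments. The mask shape and the example mask are illustrative assumptions.

```python
import numpy as np

def augment(mask):
    """Return the 8 rotation/reflection variants of a binary mask."""
    variants = []
    for k in range(4):                       # rotations: 0°, 90°, 180°, 270°
        rotated = np.rot90(mask, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))  # horizontal reflection
    return variants

def coverage_bin(mask):
    """Bin a mask by masked-area ratio in 10% increments (0-10%, 10-20%, ...)."""
    ratio = mask.mean()
    return min(int(ratio * 10), 9)

# Example mask: 8 of 64 pixels masked (12.5% coverage, i.e. the 10-20% bin).
mask = np.zeros((8, 8), dtype=int)
mask[0:2, 0:4] = 1
variants = augment(mask)
```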
  • Ground truth refers to information collected about the original image.
  • The reference images of the original image captured by the capture device are called "ground truth images".
  • Ground truth allows image data to be related to real features and materials.
  • Let Igt denote the ground truth images.
  • Their edge map and grayscale counterpart are denoted by Cgt and Igray, respectively.
  • ⊙ denotes the Hadamard product.
  • the generator predicts the edge map for the masked region.
  • The proposed model can be implemented on the PyTorch or TensorFlow Lite platform.
  • the system could be trained using 256 x 256 images with a batch size of eight.
  • The generator G1 500 is trained separately using Canny edges with a learning rate of 10⁻⁴ until the losses plateau. The learning rate can then be lowered to 10⁻⁵ to continue training G1 500.
  • The proposed system for grain quality evaluation can be effectively used to monitor grain quality during processing and for grading applications, by applying novel methods for combining predictors: one for the task of object detection and the other for the task of object classification, as explained above.
  • The goal of combining the predictions of a set of models is to form an improved predictor.
  • One of the objectives of the disclosed embodiments is to provide a better approach for the identification of the quality of different types of grains based on color and geometrical features using probabilistic neural networks and image processing concepts.
  • Different food grains, such as wheat, corn, and rice, were considered in the study.
  • Both the prediction model and the hybrid feature extraction model work on a multi-neural-network process and are focused mainly on the speed and accuracy of evaluating quality in terms of the size, count, and quality parameters of the grains.
  • FIG. 6 illustrates a grain image as an input 601 and, as outputs, the grain size 602, count 603, detected grains with labels 604, detected grains with percentages 605, and quality parameters 606.
  • The pixels-per-metric ratio describes the number of pixels that fit into a given unit of measurement, such as an inch, millimeter, or meter.
  • A reference box is used to predict the size of the food grains and serves as the reference for each grain in order to give an accurate size as a result.
  • In order to compute this ratio, the reference box should have known dimensions (such as width or height) in terms of a measurable unit (inches, millimeters, etc.), and it should be easy to find, either by its location or by its appearance. In this manner, the reference box can be used to calibrate the pixels_per_metric variable, and the size of the other objects in an image can then be computed. As per the experimental results, the accuracy of size prediction without the reference box is around 80%, whereas with the reference box the accuracy of grain size prediction is around 97%, as shown in Table 1, which compares the system measurements against a manual validation sheet based on Vernier caliper measurements.
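The calibration just described reduces to two divisions: the reference box's known real-world width fixes the pixels-per-metric ratio, and every other measured pixel width is converted through it. All numeric values below are illustrative assumptions.

```python
def pixels_per_metric(ref_width_px, ref_width_mm):
    """Calibrate the ratio from a reference box of known real-world width."""
    return ref_width_px / ref_width_mm

def grain_size_mm(grain_width_px, ratio):
    """Convert a measured pixel width into millimeters via the ratio."""
    return grain_width_px / ratio

# Assumed example: a reference box known to be 20.0 mm wide appears as
# 200 px in the image, giving 10 px per mm; a grain spanning 48 px is
# therefore 4.8 mm wide.
ratio = pixels_per_metric(200.0, 20.0)
size = grain_size_mm(48.0, ratio)
```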
  • Table 1 shows an exemplary size prediction for two types of pulses: green gram and Bengal gram. A deviation of up to +/- 0.25 mm is acceptable, and an average accuracy of 97% for size prediction is achieved.
  • The prediction model automatically predicts the type of grain, and the hybrid feature extraction model accordingly gives the parameters related to grain quality evaluation by detecting good grains and classifying them to give a total quality index, which is used to provide real-time pricing of the food grains after ascertaining defects in the grains.
  • Quality parameters tend to vary from commodity to commodity; broadly, their number ranges from 5 to 10, and moisture content is also one of the parameters. Table 3 illustrates the various quality parameters considered by the system, such as good grains, foreign material, broken seeds, immature seeds, shrunken seeds, husk, split grains, damaged grains, etc. The quality parameters for two types of pulses, Bengal gram and green gram, were evaluated, and an average quality of 95.88% was achieved. In order to test the speed of prediction, the complete system was set up, including a cloud-based GPU server, a UNIX-based job scheduler (cron), and a TensorFlow environment with an object detection API based on machine learning and AI. The open-source TensorFlow library is used for classification and prediction.
  • The image is uploaded to the cloud GPU server. A cron job on the GPU server runs every 30 seconds and checks for new ticket availability. As soon as a new ticket or sample image is uploaded, it takes up the request, processes it, and completes it in less than 2 minutes.
  • Table 4 depicts the experimental data on the time taken for image uploading, scheduler processing, and server processing over a month. This is how the speed of the system, based on deep learning prediction models with image processing, was tested.
  • FIG. 7 depicts the network flow diagram of the series of operations performed by the prediction model for evaluating the grain quality parameters.
  • The grains or grain package are placed 701 on the package holding module, one or more views of the grains are captured as images, and these are converted into a 3D image.
  • The image capture device then obtains 702 the images of the grain package placed on the package holding module, and the one or more processors process 703 the captured grain images and store 703 them in the database system.
  • The method further comprises a neural network based prediction model receiving and analyzing an input from the image identification unit, which includes input parameters such as the processed grain image and an optional text input 704, selecting 705 the appropriate grain model based on the input, and generating a prediction output in the form of a grain type.
  • The method includes providing, by a sensor unit, a sensor input with the help of one or more sensors placed inside the package holder and positioned to measure the weight and humidity of the grain package.
  • The sensor input generated by the sensor unit and the prediction output based on the selected grain model are combined and fed into the hybrid feature extraction model 706.
  • The hybrid feature extraction model analyzes the combined input, applies 707 various image detection and classification techniques over the combined input generated by the sensor unit and the prediction model, and generates the quality parameters related to the grains 708.
  • The predicted results are further fed back into the prediction model, which undergoes continuous learning by training with the deep learning based complex network that learns through a feedback mechanism performed iteratively.
  • FIG. 8 is an exemplary system for evaluating quality parameters based on neural networks models in both online and offline state.
  • The user can use the system directly in an offline state 850 via the offline NN model 820, or in an online state 840, using the network, via the online NN model 810.
  • When the online model 810 generates the quality measures in the form of grain count, size, and other quality parameters, it stores the quality measures in a database 830 via a cloud network.
  • The online model 810 receives feedback 860 via the internet from the output generated by the system and performs continuous learning to improve its predictions. After the online model receives updates through feedback 860 over its iterations, the latest version of the quality measures is updated in the offline model 820 as soon as it is exposed to the network, so that the offline model also provides better results.
  • Table. 5 depicts the experimental data related to time taken for finding the count of various commodities in offline state using offline model.
  • The NN based prediction model takes a grain image as an input, selects an appropriate grain model based on the input image to predict the grain category, and, based on the grain category, predicts the quality measures after evaluating the food grains.
  • The NN model can use, or be trained by, any deep learning or machine learning operation, or any combination of machine learning operations, for the prediction of quality measures.
  • NNs are mathematical models consisting of an interconnected network of nodes, where each node represents a neuron. The neurons play an important role in a network: they accept and process the inputs and create the outputs, and the connection between two neurons carries a weight in which information is implicitly encoded. The values stored in these weights enable the network to exhibit capabilities such as learning, generalization, and imagination, and to create relationships within the network.
  • A system based on an NN model with deep learning for predicting grain quality measures is shown in FIG. 9; it operates in a feed-forward mode from the input layer 910 through the hidden layers 920 to the output layer 930.
  • The last layer, the output layer 930, consists of nodes usually computed as a non-linear combination of the nodes of the input layer 910 and hidden layers 920.
  • The user input, in the form of the grain image, grain density, grain texture, etc., is fed into the input layer 910, undergoes a series of mathematical transformations through the hidden layers 920 of the machine learning algorithm, and produces an output from the output layer 930 in the form of quality measures such as the grain count, size, and other quality parameters, together with the total percentage of good grains.
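The feed-forward flow from input layer through hidden layer to a non-linear output layer can be sketched as below. The layer sizes, random weights, and sigmoid non-linearity are illustrative assumptions; the disclosed system's architecture and training are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    """Non-linearity applied at each layer."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w_hidden, w_out):
    hidden = sigmoid(x @ w_hidden)   # hidden layer 920
    return sigmoid(hidden @ w_out)   # output layer 930

# Assumed input vector standing in for image statistics, grain density,
# texture, etc. (input layer 910), and randomly initialized weights.
x = rng.random(4)
w_hidden = rng.standard_normal((4, 3))
w_out = rng.standard_normal((3, 2))
scores = forward(x, w_hidden, w_out)
```

Training would adjust w_hidden and w_out from the feedback 940 to minimize errors over many iterations; only the forward pass is shown.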
  • The output data generated is further fed back 940 into the system as input in the next iteration, which alters the weights to minimize errors and subsequently generates more accurate results.
  • This process continues for numerous iterations until the desired accuracy, defined in terms of a threshold (the deviation in this case), is achieved.
  • A threshold for predicting any category is set. Only if the prediction score is greater than the threshold does the system accept the category as correct; otherwise, the system reports that the category is not trained, and the sample goes into the continued learning process.
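The acceptance rule above reduces to a single comparison. The threshold value and category names below are illustrative assumptions.

```python
def accept_category(category, score, threshold=0.8):
    """Accept a predicted category only if its score beats the threshold;
    otherwise flag the sample for the continued-learning process."""
    if score > threshold:
        return category
    return "not trained"

high = accept_category("bengal gram", 0.93)  # accepted
low = accept_category("bengal gram", 0.55)   # routed to continued learning
```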
  • the neural network based prediction model with deep learning undergoes continuous learning to improve the prediction results.
  • The prediction model dynamically selects an appropriate hybrid feature extraction model as per the nuances generated in the previous stage.
  • The product quality of the grain type is determined in order to fix the pricing of the commodity based on the derived quality in terms of count, size, and other quality parameters.
  • The proposed approach is based on the multi-level representation and abstraction of grain data. The system demonstrates the hierarchical transformation of grain-related features from lower-level to higher-level abstraction, corresponding to various species and classes of grains.
  • The disclosed deep learning based prediction model generates findings based on various nuances representing the variations and complexity of the grain quality parameters, and shows that these findings fit the hierarchical feature learning definitions of grain characteristics.
  • The deep learning based prediction model provides insights into the design of new hybrid feature extraction models that are able to further improve the discriminative power of grain quality parameter systems.
  • The hybrid feature extraction model, by combining approaches like object detection and object classification, can rely on the stability of the consensus opinion and, at the same time, rely upon singular value decomposition as a tool for filtering out the redundancy and noise in the predictions of the learned models and for characterizing the areas of the sample space where each model is superior.
  • The measures generated by combining approaches such as object detection and object classification aid in avoiding false predictions by discarding any learned models that tend to give results with accuracy below certain acceptable levels. In this way, the unique contributions of each model can still be discovered and exploited to generate prediction results with high accuracy.
  • Reliable and accurate food grain quality analysis is a deciding factor in assessing quality parameters; on the basis of moisture content and humidity, the storage stability of the food grains can be assessed so that arrangements for storing them can be made in advance.
  • Such prediction will help in ascertaining customer buying behavior, evaluating customer satisfaction by means of reviews, and keeping a record of customer reviews for future reference.
  • Such a model for grain quality evaluation can be integrated with a third-party pricing application to provide the real-time price of the assessed grains and will assist in gauging the market acceptability of the food grains.
  • The grade of the grains can be predicted. Based on the predicted grade, live pricing data from the pricing application can be used to determine the best price of the grains. In this manner, the prediction model supports grade-based market price integration.
  • The deep learning based prediction model performs quality prediction based on video processing as well.
  • Certain applications, for example farm lots, support the prediction of quality measures for bigger lots instantaneously, simply by placing the image capture device on top of the grain lots and predicting through video processing.
  • Such prediction models can act as a catalyst in various supply chain based application areas, such as procurement and the retail supply chain, by supporting sellers in determining quality before procurement of grains and buyers in assessing the quality of grains before purchasing. Therefore, systems based on deep learning prediction models have been effectively used to monitor grain quality during processing and for grading applications, and have also shown successful results for product- and variety-based classification of food grains.
  • Hybrid feature extraction models give more accuracy compared to existing models in the market.
  • An added advantage of the embodiments obtained by combining approaches is that they are not limited to models generated by a single algorithm; they may be applied to model sets generated by a diverse collection of computer vision and machine learning techniques.
  • a computer program is provided, and the program is embodied on a computer- readable medium.
  • the system is executed on a single computer system, without requiring a connection to a server computer.
  • the system is being run in a Windows® environment.
  • the system is run on a mainframe environment and a UNIX® server environment.
  • the application is flexible and designed to run in various different environments without compromising any major functionality.
  • the system includes multiple components distributed among a plurality of computing devices. One or more components may be in the form of computer-executable instructions embodied in a computer-readable medium.
  • non-transitory computer-readable media is intended to be representative of any tangible computer-based device implemented in any method or technology for short-term and long-term storage of information, such as, computer-readable instructions, data structures, program modules and submodules, or other data in any device. Therefore, the methods described herein may be encoded as executable instructions embodied in a tangible, non-transitory, computer readable medium, including, without limitation, a storage device and/or a memory device. Such instructions, when executed by a processor, cause the processor to perform at least a portion of the methods described herein.
  • non-transitory computer-readable media includes all tangible, computer-readable media, including, without limitation, non-transitory computer storage devices, including, without limitation, volatile and nonvolatile media, and removable and non-removable media such as a firmware, physical and virtual storage, CD-ROMs, DVDs, and any other digital source such as a network or the Internet, as well as yet to be developed digital means, with the sole exception being a transitory, propagating signal.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Strategic Management (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • General Business, Economics & Management (AREA)
  • Chemical & Material Sciences (AREA)
  • Food Science & Technology (AREA)
  • Finance (AREA)
  • Databases & Information Systems (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • Accounting & Taxation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Game Theory and Decision Science (AREA)
  • Biochemistry (AREA)
  • Primary Health Care (AREA)
  • Human Resources & Organizations (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Mining & Mineral Resources (AREA)
  • Agronomy & Crop Science (AREA)
  • Marine Sciences & Fisheries (AREA)
  • Animal Husbandry (AREA)
  • Medicinal Chemistry (AREA)
  • Analytical Chemistry (AREA)
  • Tourism & Hospitality (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)

Abstract

The present invention relates to a system and method for analyzing quality measures associated with grains using a prediction model based on image-classification convolutional neural networks, in order to provide real-time price prediction based on defects related to the size, count, and other quality parameters of the grains. The system comprises: a plurality of databases, a package holding module, a sensor unit, an image identification unit interconnected with the package holding module, and a deep learning based prediction model connected with a plurality of database systems via a cloud network. The method consists of obtaining, processing, and storing images in the database system; receiving and analyzing an input image with a deep learning model; selecting an appropriate model on the basis of the processed image; generating a prediction output based on the selected model in the form of a grain type; and combining the sensor input and the prediction output generated by the sensor unit and the prediction model.
PCT/IN2022/050993 2021-11-12 2022-11-11 Système et procédé pour tirer parti d'un modèle d'extraction de caractéristique hybride basé sur un réseau neuronal pour une analyse de qualité de grains Ceased WO2023084543A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202141051970 2021-11-12
IN202141051970 2021-11-12

Publications (1)

Publication Number Publication Date
WO2023084543A1 true WO2023084543A1 (fr) 2023-05-19

Family

ID=86335223

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2022/050993 Ceased WO2023084543A1 (fr) 2021-11-12 2022-11-11 Système et procédé pour tirer parti d'un modèle d'extraction de caractéristique hybride basé sur un réseau neuronal pour une analyse de qualité de grains

Country Status (1)

Country Link
WO (1) WO2023084543A1 (fr)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116629690A (zh) * 2023-05-29 2023-08-22 荆州洗涮涮环保科技有限公司 基于大数据分析的制药信息化全流程管理系统
CN116704247A (zh) * 2023-06-05 2023-09-05 中南林业科技大学 一种基于透射偏振光图像的谷物不完善粒识别方法及系统
CN116805239A (zh) * 2023-06-30 2023-09-26 昆明黑马软件股份有限公司 一种基于大数据的生鲜商品安全智能检测管理方法及系统
CN116958066A (zh) * 2023-07-03 2023-10-27 上海悠络客电子科技股份有限公司 一种基于视觉算法的糕点蛋挞品质检测方法
CN117036672A (zh) * 2023-07-04 2023-11-10 张家口卷烟厂有限责任公司 基于图像识别剔除烟叶中烟梗的方法
CN117132828A (zh) * 2023-08-30 2023-11-28 常州润来科技有限公司 一种铜管加工过程固体废料的自动分类方法及系统
CN117437459A (zh) * 2023-10-08 2024-01-23 昆山市第一人民医院 基于决策网络实现用户膝关节髌骨软化状态分析方法
CN117829698A (zh) * 2024-03-06 2024-04-05 成都运荔枝科技有限公司 一种食品供应链调度管理系统
CN118015365A (zh) * 2024-02-19 2024-05-10 英飞智信(苏州)科技有限公司 基于深度学习的固体颗粒物特征识别方法
CN118506008A (zh) * 2024-07-15 2024-08-16 安徽高哲信息技术有限公司 热损伤玉米颗粒检测方法、装置、电子设备和介质
CN118506349A (zh) * 2024-07-18 2024-08-16 安徽高哲信息技术有限公司 谷物识别模型的训练方法、谷物识别方法、设备和介质
CN118749987A (zh) * 2024-06-06 2024-10-11 齐鲁工业大学(山东省科学院) 基于改进编码器解码器结构的心电信号多波形检测方法
CN119672688A (zh) * 2024-11-28 2025-03-21 云南农业大学 一种基于SCConv轻量化改进YOLOv8模型的谷物流量检测方法
CN120196911A (zh) * 2025-05-26 2025-06-24 北京麦麦趣耕科技有限公司 一种基于大数据的油菜种子品质评估模型
CN120508024A (zh) * 2025-05-09 2025-08-19 广东穗方源实业有限公司 一种用于大米精加工设备的智能控制方法及系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190122411A1 (en) * 2016-06-23 2019-04-25 LoomAi, Inc. Systems and Methods for Generating Computer Ready Animation Models of a Human Head from Captured Data Images
WO2019168855A2 (fr) * 2018-02-27 2019-09-06 TeleSense, Inc. Procédé et appareil pour la surveillance et la gestion à distance d'un contenant à l'aide de l'apprentissage machine et de l'analyse des données
WO2019177663A1 (fr) * 2018-03-13 2019-09-19 Jiddu, Inc. Appareil basé sur l'ido permettant d'évaluer la qualité de produits alimentaires
US20200193368A1 (en) * 2018-12-12 2020-06-18 Aptiv Technologies Limited Transporting objects using autonomous vehicles


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GE QIAN; LOBATON EDGAR: "Consensus-Based Image Segmentation via Topological Persistence", 2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), IEEE, 26 June 2016 (2016-06-26), pages 1050 - 1057, XP033027931, DOI: 10.1109/CVPRW.2016.135 *
P. POON, W. NG, VARUN SRIDHARAN: "Image Denoising with Singular Value Decomposition and Principal Component Analysis", IMAGE DENOISING WITH SINGULAR VALUE DECOMPOSITION AND PRINCIPAL COMPONENT ANALYSIS, pages 1 - 29, XP009545708, Retrieved from the Internet <URL:https://web.archive.org/web/20170829232425/https://www.u.arizona.edu/~ppoon/ImageDenoisingWithSVD.pdf> [retrieved on 20230209] *
YASOTHAI R.: "Factors Affecting Grain Quality: A Review", INTERNATIONAL JOURNAL OF CURRENT MICROBIOLOGY AND APPLIED SCIENCES, EXCELLENT PUBLISHERS, INDIA, vol. 9, no. 9, 20 September 2020 (2020-09-20), India , pages 205 - 210, XP093067023, ISSN: 2319-7692, DOI: 10.20546/ijcmas.2020.909.026 *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116629690B (zh) * 2023-05-29 2023-12-15 北京金安道大数据科技有限公司 Pharmaceutical informatization full-process management system based on big data analysis
CN116629690A (zh) * 2023-05-29 2023-08-22 荆州洗涮涮环保科技有限公司 Pharmaceutical informatization full-process management system based on big data analysis
CN116704247A (zh) * 2023-06-05 2023-09-05 中南林业科技大学 Method and system for identifying imperfect grain kernels based on transmitted polarized-light images
CN116805239A (zh) * 2023-06-30 2023-09-26 昆明黑马软件股份有限公司 Big-data-based intelligent detection and management method and system for fresh-produce safety
CN116958066A (zh) * 2023-07-03 2023-10-27 上海悠络客电子科技股份有限公司 Vision-algorithm-based quality detection method for pastry egg tarts
CN117036672A (zh) * 2023-07-04 2023-11-10 张家口卷烟厂有限责任公司 Image-recognition-based method for removing tobacco stems from tobacco leaves
CN117132828B (zh) * 2023-08-30 2024-03-19 常州润来科技有限公司 Automatic classification method and system for solid waste from copper-tube processing
CN117132828A (zh) * 2023-08-30 2023-11-28 常州润来科技有限公司 Automatic classification method and system for solid waste from copper-tube processing
CN117437459A (zh) * 2023-10-08 2024-01-23 昆山市第一人民医院 Decision-network-based method for analyzing the chondromalacia patellae state of a user's knee joint
CN117437459B (zh) * 2023-10-08 2024-03-22 昆山市第一人民医院 Decision-network-based method for analyzing the chondromalacia patellae state of a user's knee joint
CN118015365A (zh) * 2024-02-19 2024-05-10 英飞智信(苏州)科技有限公司 Deep-learning-based feature recognition method for solid particles
CN117829698A (zh) * 2024-03-06 2024-04-05 成都运荔枝科技有限公司 Food supply chain scheduling and management system
CN118749987A (zh) * 2024-06-06 2024-10-11 齐鲁工业大学(山东省科学院) Multi-waveform ECG signal detection method based on an improved encoder-decoder structure
CN118506008A (zh) * 2024-07-15 2024-08-16 安徽高哲信息技术有限公司 Heat-damaged maize kernel detection method, apparatus, electronic device and medium
CN118506349A (zh) * 2024-07-18 2024-08-16 安徽高哲信息技术有限公司 Training method for a grain recognition model, grain recognition method, device and medium
CN119672688A (zh) * 2024-11-28 2025-03-21 云南农业大学 Grain flow detection method based on an SCConv-lightweight improved YOLOv8 model
CN120508024A (zh) * 2025-05-09 2025-08-19 广东穗方源实业有限公司 Intelligent control method and system for rice finishing equipment
CN120196911A (zh) * 2025-05-26 2025-06-24 北京麦麦趣耕科技有限公司 Big-data-based rapeseed quality assessment model

Similar Documents

Publication Publication Date Title
WO2023084543A1 (fr) System and method for leveraging a neural network based hybrid feature extraction model for grain quality analysis
Cinar et al. Identification of rice varieties using machine learning algorithms
Kalantar et al. A deep learning system for single and overall weight estimation of melons using unmanned aerial vehicle images
Bhatt et al. Automatic apple grading model development based on back propagation neural network and machine vision, and its performance evaluation
Khojastehnazhand et al. Development of a lemon sorting system based on color and size
JP7131617B2 (ja) Method, apparatus, system and program for setting illumination conditions, and storage medium
Eshkevari et al. Automatic dimensional defect detection for glass vials based on machine vision: A heuristic segmentation method
Zhang et al. Computer vision estimation of the volume and weight of apples by using 3d reconstruction and noncontact measuring methods
Sharma et al. Image processing techniques to estimate weight and morphological parameters for selected wheat refractions
JP6749655B1 (ja) Inspection apparatus, anomaly detection method, computer program, learning model generation method, and learning model
Szczypiński et al. Computer vision algorithm for barley kernel identification, orientation estimation and surface structure assessment
Fermo et al. Development of a low-cost digital image processing system for oranges selection using hopfield networks
Goel et al. An efficient approach for to predict the quality of apple through its appearance
CN110095436A (zh) Classification method for slightly damaged apples
CN118154562A (zh) Deep-learning online metal-surface defect detection system based on YOLO7
CN119780119A (zh) Multispectral-imaging-based method and system for detecting appearance defects in electronic components
Peng et al. Defects recognition of pine nuts using hyperspectral imaging and deep learning approaches
CN119131006A (zh) Deep-learning-based intelligent ceramic defect detection method, system, device and storage medium
Sugadev et al. Computer vision based automated billing system for fruit stores
Samaniego et al. Image Processing Model for Classification of Stages of Freshness of Bangus using YOLOv8 Algorithm
Srinivasaiah et al. Analysis and prediction of seed quality using machine learning.
Gao et al. Mass detection of walnut based on X‐ray imaging technology
Nguyen et al. Rating pome fruit quality traits using deep learning and image processing
Huang et al. Evaluating and deploying Large Vision-Language Models for fruit quality assessment in smart agriculture systems
Xu et al. Developing a machine vision system for real-time, automated quality grading of sweetpotatoes

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22892299; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 22892299; Country of ref document: EP; Kind code of ref document: A1)