
WO2025056990A1 - Sorter devices using machine learning object sorter models, and related methods - Google Patents

Sorter devices using machine learning object sorter models, and related methods

Info

Publication number
WO2025056990A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
sorter
machine learning
computing device
cause
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IB2024/056637
Other languages
English (en)
Inventor
Gianluca MONTANARI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cimbria SRL
Original Assignee
Cimbria SRL
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cimbria SRL filed Critical Cimbria SRL
Publication of WO2025056990A1
Legal status: Pending

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 - Systems specially adapted for particular applications
    • G01N21/85 - Investigating moving fluids or granular solids
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B07 - SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07C - POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00 - Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/34 - Sorting according to other particular properties
    • B07C5/342 - Sorting according to other particular properties according to optical properties, e.g. colour
    • B07C5/3425 - Sorting according to other particular properties according to optical properties, e.g. colour of granular material, e.g. ore particles, grain
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B07 - SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07C - POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00 - Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/36 - Sorting apparatus characterised by the means used for distribution
    • B07C5/363 - Sorting apparatus characterised by the means used for distribution by means of air
    • B07C5/367 - Sorting apparatus characterised by the means used for distribution by means of air using a plurality of separation means
    • B07C5/368 - Sorting apparatus characterised by the means used for distribution by means of air using a plurality of separation means actuated independently
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17 - Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25 - Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N21/31 - Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry
    • G01N21/35 - Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light
    • G01N21/3563 - Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light for analysing solids; Preparation of samples therefor
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17 - Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25 - Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N21/31 - Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry
    • G01N21/35 - Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light
    • G01N21/359 - Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light using near infrared light
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 - Systems specially adapted for particular applications
    • G01N2021/8466 - Investigation of vegetal material, e.g. leaves, plants, fruits
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 - Systems specially adapted for particular applications
    • G01N21/85 - Investigating moving fluids or granular solids
    • G01N2021/8592 - Grain or other flowing solid samples
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2201/00 - Features of devices classified in G01N21/00
    • G01N2201/12 - Circuits of general importance; Signal processing
    • G01N2201/129 - Using chemometrical methods
    • G01N2201/1296 - Using chemometrical methods using neural networks

Definitions

  • Embodiments of the present disclosure relate generally to optical bulk sorter devices. More particularly, embodiments of the present disclosure relate to sorter devices that use machine learning object sorter models, and to related methods of training such sorter devices to sort a bulk product.
  • Sorter devices are conventionally utilized to sort bulk products.
  • Sorter devices are typically utilized to separate specific pieces (e.g., grains) from a bulk product so that the pieces can be sorted and/or discarded.
  • Sorter devices generally include a conveyor system to create a stream of the bulk product, which may include nuts, grain, seeds, or plastic pieces.
  • The sorter device typically includes optical detection systems arranged to acquire and analyze images of the stream of the bulk product.
  • The detection systems are typically configured to provide the image data to a controller, which, based on information determined from the acquired images, sends control signals to an expulsion device to remove selected pieces (e.g., grains) from the bulk product, such as with air jets produced by air nozzles.
  • a sorter device includes an infeed system configured to provide a stream of a commodity proximate at least one detection system including at least one emitting device configured to emit electromagnetic radiation and at least one optical sensor configured to measure electromagnetic radiation reflected from the stream, an ejector configured to selectively expose the stream to air to sort the commodity, and a computing device operably coupled to the detection system and the ejector.
  • the computing device includes at least one processor, and at least one non-transitory computer-readable storage medium storing instructions thereon that, when executed by at least one processor, cause the computing device to cause an image processing system to generate image data of the stream based on the measured electromagnetic radiation, cause a communications interface to display an output image based on the image data, and cause the communications interface to receive a user input including labeling data of at least some pixels defining objects of the commodity in the output image to generate training data, and train a machine learning object sorter model based on the training data.
  • the computing device includes instructions thereon that, when executed by the at least one processor, cause the ejector to sort the commodity with the machine learning object sorter model.
  • the instructions may cause the ejector to selectively expose objects of the commodity to air based on the machine learning object sorter model.
  • the instructions may, when executed by the at least one processor, cause the image processing system to perform a background thresholding operation on the image data to generate modified image data.
  • The instructions, when executed by the at least one processor, may cause the image processing system to perform one or more of a background thresholding operation, a contouring operation, an erosion operation, a growing operation, an area thresholding operation, and a feature extraction operation on the image data to generate modified image data.
  • the training data may be generated based on a color of the commodity.
  • the sorter device may further include a machine learning object sorter system configured to receive additional image data during a sorting operation, and cause an ejector controller to control the ejector based on the additional image data and the machine learning object sorter model.
  • the machine learning object sorter system may update the machine learning object sorter model based on additional user input.
  • the computing device may include instructions thereon that, when executed by at least one processor, cause a machine learning object sorter training system to train the machine learning object sorter model based on only a color of the at least some pixels defining objects of the commodity in the output image.
  • a sorter device includes an infeed system configured to provide a stream of a commodity, at least one emitting device configured to emit electromagnetic radiation at the stream, at least one optical sensor configured to measure electromagnetic radiation reflected from the stream, an ejector configured to selectively expose the stream to air to sort the commodity, and a computing device operably coupled to the detection system and the ejector.
  • the computing device includes at least one processor, and at least one non-transitory computer-readable storage medium storing instructions thereon that, when executed by at least one processor, cause the computing device to cause an image processing system to generate image data of the stream based on the measured electromagnetic radiation, apply a machine learning object sorter model to the image data to generate a labeling decision for objects of the commodity in the stream, and control the ejector based on the labeling decision.
  • the computing device may include instructions thereon that, when executed by the at least one processor, cause the computing device to generate the labeling decision based on a color of objects in the image data.
  • the instructions when executed by the at least one processor, cause the computing device to modify the image data using one or more of a thresholding operation, an area thresholding operation, and a contouring operation.
  • the sorter device further includes a machine learning object sorter training system configured to generate the machine learning object sorter model based on user input received by a communications interface of the computing device.
  • A method of operating a sorter device includes obtaining image data of a commodity within a sorter device, displaying, on a communications interface, an output image based on the image data, receiving, on the communications interface, a user input for each of a plurality of selected objects in the output image, the user input indicating a label for at least some pixels of each of the plurality of selected objects, creating training data from the labels for the at least some pixels of each of the plurality of selected objects, training a machine learning object sorter model to generate a sorting decision for a given object based on an input image of the given object, wherein the machine learning object sorter model is trained based on the labels and associated image data of the plurality of selected objects within the training data, and sorting the commodity with the trained machine learning sorter model.
  • the method may further include performing one or more image processing operations on the image data prior to displaying the output image on the communications interface.
  • the modified image may be displayed on the communications interface prior to receiving the user input.
  • Performing the one or more image processing operations on the image data may include performing one or more of a background thresholding operation, a contouring operation, an erosion operation, a growing operation, an area thresholding operation, and a feature extraction operation on the image data to create modified image data.
  • Displaying the output image may include displaying the output image based on the modified image data.
  • Sorting the commodity with the machine learning sorter model based on the training set may include selectively controlling operation of an ejector based on sorting decisions generated by the trained machine learning sorter model while continuously receiving additional image data.
  • sorting the commodity with the machine learning sorter model includes receiving additional image data of a stream of the commodity within the sorter device, generating a labeling decision for objects of the commodity within the stream based on the additional image data using the machine learning object sorter model, and providing the labeling decision to an ejector controller.
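The method summarized above can be pictured as a single train-then-sort loop. The following Python sketch is purely illustrative; the `sorter` object and every method on it (`acquire_image_data`, `collect_user_labels`, and so on) are assumed names, not elements of the disclosed device.

```python
# Illustrative outline (not claim language) of the train-then-sort workflow.
def train_then_sort(sorter):
    image_data = sorter.acquire_image_data()              # obtain image data of the commodity
    sorter.display(image_data)                            # display an output image on the interface
    labels = sorter.collect_user_labels()                 # operator marks objects/pixels as pass or fail
    training_data = sorter.build_training_data(image_data, labels)
    model = sorter.train_object_sorter_model(training_data)

    while sorter.is_running():                            # sorting with the trained model
        frame = sorter.acquire_image_data()               # continuously receive additional image data
        decisions = model.predict_objects(frame)          # labeling/sorting decisions per object
        sorter.eject(decisions)                           # selectively control the ejector
    return model
```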
  • FIG. 1 is a simplified schematic representation of a sorter device, according to one or more embodiments of the disclosure.
  • FIG. 2A is a simplified schematic illustrating a machine learning object sorter training system, in accordance with embodiments of the disclosure.
  • FIG. 2B is an example graphical user interface of the sorter device, in accordance with embodiments of the disclosure.
  • FIG. 3 is a simplified schematic illustrating a machine learning object sorter system, in accordance with embodiments of the disclosure.
  • FIG. 4 is a simplified flow chart illustrating a method of performing a training operation to create a trained machine learning object sorter model, in accordance with embodiments of the disclosure.
  • FIG. 5 is a simplified flow chart illustrating a method of sorting a bulk product using a trained machine learning object sorter model, in accordance with one or more embodiments of the disclosure.
  • FIG. 6 is a schematic of a computer-readable storage medium including processor-executable instructions configured to embody one or more of the methods of operating the sorter device.
  • the term “configured” refers to a size, shape, material composition, and arrangement of one or more of at least one structure and at least one apparatus facilitating operation of one or more of the structure and the apparatus in a predetermined way.
  • spatially relative terms such as “beneath,” “below,” “lower,” “bottom,” “above,” “upper,” “top,” “front,” “rear,” “left,” “right,” and the like, may be used for ease of description to describe one element's or feature's relationship to another element(s) or feature(s) as illustrated in the figures. Unless otherwise specified, the spatially relative terms are intended to encompass different orientations of the materials in addition to the orientation depicted in the figures.
  • the term "substantially" in reference to a given parameter, property, or condition means and includes to a degree that one of ordinary skill in the art would understand that the given parameter, property, or condition is met with a degree of variance, such as within acceptable manufacturing tolerances.
  • the parameter, property, or condition may be at least 90.0% met, at least 95.0% met, at least 99.0% met, or even at least 99.9% met.
  • ranges are used as shorthand for describing each and every value that is within the range. Any value within the range can be selected as the terminus of the range.
  • The term "spectral band" means and includes a range of wavelengths of the electromagnetic spectrum. Spectral bands may overlap with one another and/or spectral bands may include non-overlapping ranges of wavelengths.
  • Non-limiting examples of spectral bands include infrared spectral bands (e.g., with wavelengths ranging from about 0.7 µm to about 1.0 mm), the far-infrared region (e.g., with wavelengths ranging from about 10 µm to about 1.0 mm), the mid-infrared region (e.g., with wavelengths ranging from about 2.5 µm to about 10 µm), the near-infrared region (e.g., with wavelengths ranging from about 700 nm to about 2.5 µm), the short-wave infrared (SWIR) region (e.g., with wavelengths ranging from about 900 nm to about 1.7 µm, or from about 700 nm to about 2.5 µm), and the ultraviolet region (e.g., with wavelengths ranging from about 10 nm to about 400 nm).
  • a sorter device is configured to sort a bulk product (e.g., grain, nuts, seeds, rice, plastic) by selectively removing undesired components from the bulk product.
  • the sorter device includes at least one radiation emitting device (also referred to as an "emitter” or an “emitting device”) configured to expose a stream of the bulk product to electromagnetic radiation, and at least one optical sensor configured to measure electromagnetic radiation reflected from and/or transmitted through the stream while exposing the stream to the electromagnetic radiation from the at least one emitting device.
  • the sorter device further includes an ejector configured to selectively introduce pulses of air (e.g., air jets) to the stream to remove undesired components from the stream to sort the bulk product.
  • the sorter device further includes a computing device in operable communication with the ejector and with an image processing system.
  • the image processing system is configured to receive image data from the at least one optical sensor (e.g., an intensity of electromagnetic radiation measured by the at least one optical sensor).
  • the image data may correspond to electromagnetic radiation received by the optical sensor during an exposure of the stream to one or more particular spectral bands.
  • the computing device may include a machine learning (ML) object sorter model training system configured to train a ML object sorter model, and a ML object sorter system configured to implement the trained ML object sorter model to facilitate sorting of the bulk product.
  • the ML object sorter training system may implement the image processor to receive the image data received by the optical sensor(s), perform one or more operations on the image data to generate modified image data, and display an output image on a communications interface based on the modified image data.
  • the ML object sorter training system may cause the communications interface to receive a user input indicative of a label (e.g., classification) of objects (which may be identified as groups of pixels having a similar color (e.g., measured intensity)) and/or of one or more pixels of an object (e.g., only a defect area of an object) of the bulk product (e.g., individual components of the bulk product) in the output image to assign a label to each object and/or pixel(s) selected by the user.
  • The labeled objects and/or pixel(s) may be provided to a ML model as training data (e.g., ground truth data), and the ML object sorter training system may cause the ML model to be trained to create the trained ML object sorter model based on the user input.
  • the ML object sorter model may be configured to identify one or more color features in the image data and a correlation between the one or more color features and the assigned label. Accordingly, the ML object sorter model may be trained to generate a sorting decision based on one or more color features of the objects (including only parts of the objects, such as defect areas) in the bulk product, the objects defined within the image data.
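As an illustration of how operator-labeled pixels and their measured color values could be collected into ground-truth arrays for such a color-based model, consider the following minimal Python sketch; `labeled_pixels` and `CHANNELS` are assumed names, not terms from the disclosure.

```python
# Hypothetical sketch: turn user-labeled pixels into (features, labels) training arrays.
import numpy as np

CHANNELS = 3  # e.g., one measured intensity per spectral band or color channel (assumed)

def build_training_arrays(labeled_pixels):
    """labeled_pixels: iterable of (intensity_vector, label) pairs,
    where label is 'pass' or 'fail' as assigned by the operator."""
    X = np.asarray([p for p, _ in labeled_pixels], dtype=np.float32).reshape(-1, CHANNELS)
    y = np.asarray([1 if lbl == "fail" else 0 for _, lbl in labeled_pixels], dtype=np.int64)
    return X, y
```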
  • the ML object sorter system may cause the image processor to receive image data received by the optical sensor(s) and, optionally, perform one or more operations on the image data.
  • the ML object sorter system may cause the ML object sorter model to receive the image data and classify objects (e.g., groups of pixels) in the image data as either a pass (e.g., do not separate from the bulk product) or a fail (e.g., separate from the bulk product).
  • The classification may be based on an entirety of the object(s), may be based on only defect areas of the object(s), or both.
  • training with the user input may facilitate on-site training of the ML object sorter model based on the actual bulk product to be sorted.
  • conventional sorter devices may include a sorting algorithm based on the particular characteristics of the particular bulk product to be sorted. For example, rice may be separated based on a different sorting algorithm than almonds.
  • conventional sorting operations may require analyzing the image data in real time, which may limit the speed of the sorting operation.
  • Since the training data is based on objects in an output image generated from the modified image data, the training data may exhibit less noise and have fewer outliers compared to training data formed from the raw image data.
  • the training data may be based on raw image data that has been passed through one or more of a thresholding operation, a contouring operation, an erosion operation, a growing operation, and an area thresholding operation to remove noise from the raw image data.
  • the ML object sorter model may facilitate sorting the commodity based only on the color and, therefore, does not require performing object detection on the image data, further improving the processing speed of the sorting decision.
  • the ML object sorter model training system may be configured to create the ML object sorter model for different commodities, without requiring a separate sorting algorithm preloaded on the sorter device.
  • the computing device 112 includes an image processing system 114, an ejector controller 115 operably coupled to the image processing system 114, a machine learning (ML) object sorter model training system 117, and a machine learning (ML) object sorter system 121.
  • the sorter device 100 may be utilized to sort bulk product (e.g., granular product) 118 such as, for example, nuts, seeds, grain, plastic pieces, etc.
  • The sorter device 100 may sort the pieces of the bulk product 118 (referred to hereinafter as "grains") based on one or more of the color and the type of materials of the grains.
  • the sorter device 100 may sort the grains of the bulk product 118 according to preselected sorting criteria.
  • the sorter device 100 may be utilized to sort grain based on the quality of the grain, which, in some instances, may be determined by a color of the grain. As another non-limiting example, the sorter device 100 may be utilized to sort bulk plastic pieces based on plastic type, which may be determined based on the color of the plastic.
  • the infeed system 104 of the sorter device 100 may include a hopper 119 and a chute and/or belt 120.
  • the hopper 119 may define a pathway to the chute and/or belt 120, and in some embodiments, the hopper 119 may include one or more vibrators (e.g., a vibrator feeder), augers, or other feeders to feed the bulk product 118 from the hopper 119 to the chute and/or belt 120.
  • The chute and/or belt 120 may be sized, shaped, and oriented to cause the bulk product 118 to descend due to gravity and/or to travel along a conveyor belt so as to pass in front of the at least one detection system 106.
  • the chute and/or belt 120 may be configured to produce a stream 122 of the bulk product 118 to pass in front of the at least one detection system 106 according to a selected velocity (e.g., speed).
  • the at least one detection system 106 may include a plurality of emitting devices 124, one or more background elements 126, one or more optical sensors 128, a plurality of intensity sensors 130, and a plurality of reference elements 132.
  • the plurality of emitting devices 124 may be configured to emit electromagnetic radiation at the stream 122 of the bulk product 118 as the stream 122 passes in front of the at least one detection system 106.
  • The plurality of emitting devices 124 may include one or more light-emitting diodes (LEDs) for emitting light.
  • the plurality of emitting devices 124 may emit one or more of visible light, short-wave infrared light (SWIR light), near infrared light (NIR light), infrared (IR) light, or ultra-violet (UV) light.
  • the plurality of emitting devices 124 are configured to emit light within a specific (e.g., selected) spectral band of the electromagnetic spectrum.
  • the plurality of emitting devices 124 includes at least four emitting devices.
  • At least one of the at least four emitting devices 124 is configured to emit a first type of electromagnetic radiation (e.g., UV light), and at least one of the at least four emitting devices is configured to emit a second type of electromagnetic radiation (e.g., NIR light).
  • each of the emitting devices 124 is configured to emit electromagnetic radiation within a different spectral band within the infrared (IR) region.
  • The emitting devices 124 may individually be configured to emit electromagnetic radiation within a particular spectral band within the infrared (IR) region (such as within the IR-A band having wavelengths within the range of from about 1.4 µm to about 3.0 µm). In some embodiments, one or more of the emitting devices 124 is configured to emit electromagnetic radiation within the short-wave IR (SWIR) spectrum, such as within the range of from about 900 nm to about 2.5 µm.
  • The one or more optical sensors 128 may include one or more of a charge-coupled device (CCD) camera, an IR camera, a UV camera, or an RGB camera. During use and operation, the one or more optical sensors 128 may be oriented and configured to detect (e.g., capture) electromagnetic radiation reflected from the stream 122 and/or electromagnetic radiation transmitted through the stream 122 of the bulk product 118 due to the plurality of emitting devices 124. For instance, the field of view of the one or more optical sensors 128 may include at least a portion of the stream 122 of bulk product 118.
  • the at least one detection system 106 may further include one or more optical filters for filtering (e.g., narrowing) the reflected light being detected (e.g., captured) by the one or more optical sensors 128.
  • the one or more optical filters may narrow the reflected light into specific (e.g., selected) wavelengths that may accentuate sorting criteria (e.g., criteria distinguishing grades or types of bulk products).
  • the sorter device 100 includes a monochromatic sorter device and the at least one detection system 106 includes a single optical filter between the stream 122 of the bulk product 118 and a respective optical sensor 128.
  • the single optical filter may produce, for example, a light/dark separation.
  • Where the sorter device 100 includes a bi-chromatic sorter device, the at least one detection system 106 includes two optical filters between the stream 122 of bulk product 118 and a respective optical sensor 128.
  • the at least one detection system 106 may include any two conventional optical filters.
  • the at least one detection system 106 may include one or more background elements 126, and the one or more background elements 126 may be disposed and oriented behind the stream 122 relative to the one or more optical sensors 128.
  • the background elements 126 may facilitate better detection and imaging of the individual grains of the bulk product 118 compared to sorter devices 100 not including the background elements 126.
  • the one or more background elements 126 may include any known background elements used in sorter devices.
  • the plurality of intensity sensors 130 may be oriented relative to the plurality of emitting devices 124 such that the plurality of intensity sensors 130 may be utilized to measure an intensity of the electromagnetic radiation emitted by the plurality of emitting devices 124.
  • the at least one detection system 106 may include a plurality of reference elements 132. Each reference element 132 may be disposed within a field of view of at least one optical sensor 128.
  • The intensity sensors 130 and the reference elements 132 may individually be substantially similar to (e.g., the same as) the respective intensity sensors and reference sensors described in European Patent Application No. EP23425017.3, "Sorter Devices, Detection Systems of Sorter Devices, and Related Methods," filed April 28, 2023.
  • the sorter device may include a plurality of current sensors, as described in European Patent Application No. EP23425017.3.
  • each of the infeed system 104, the at least one detection system 106, and the at least one ejector 108 may be operably coupled to and at least partially operated by the computing device 112.
  • the computing device 112 may be configured to provide control signals to the infeed system 104 to cause the infeed system 104 to feed the bulk product 118 to the chute and/or belt 120 and to generate the stream 122.
  • the computing device 112 may be configured to provide control signals to the plurality of emitting devices 124 and the optical sensors 128 to control operation of the plurality of emitting devices 124 and the optical sensors 128.
  • the computing device 112 may include the image processing system 114 that may receive image data (e.g., sets of image data) from the plurality of optical sensors 128, analyze the image data, and generate output image data that is received by one or both of the ML object sorter model training system 117 and the ML object sorter system 121 (e.g., depending on whether the sorter device 100 is performing a training operation or a sorting operation). For example, during a training operation, the computing device 112 may cause the ML object sorter model training system 117 to receive the image data from the image processing system 114 to generate training data (e.g., a training set of data) and train a ML object sorter model based on the training data.
  • the computing device 112 may cause the ML object sorter system 121 to receive image data from the image processing system 114 and to classify objects in the image data (corresponding to objects of the bulk product 118 in the stream 122) based on one or more color properties of the objects.
  • the computing device 112 may cause the ML object sorter system 121 to receive image data from the image processing system 114 and to generate label decisions for each object in the image data using the ML object sorter model.
  • the label decisions may be received by the ejector controller 115 and cause the ejector controller 115 to cause the at least one ejector 108 to operate (e.g., selectively open one or more nozzles thereof to selectively remove one or more pieces of the bulk product 118 as the bulk product 118 passes by the respective one or more nozzles) based on the label decisions.
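One way (a sketch under assumptions, not the disclosed control logic) to translate per-object label decisions into nozzle activations is to map the pixel-column span of each failing object onto the nozzle array, as below; `NOZZLE_COUNT`, `IMAGE_WIDTH`, and `fire_nozzle` are hypothetical names.

```python
# Illustrative mapping from label decisions to ejector nozzle activations.
NOZZLE_COUNT = 64     # assumed number of air nozzles across the stream
IMAGE_WIDTH = 2048    # assumed sensor width in pixels

def nozzles_for_object(x_min, x_max):
    """Map an object's pixel-column span to the nozzles that cover it."""
    first = int(x_min * NOZZLE_COUNT / IMAGE_WIDTH)
    last = int(x_max * NOZZLE_COUNT / IMAGE_WIDTH)
    return range(first, last + 1)

def actuate(label_decisions, fire_nozzle):
    for obj in label_decisions:        # each obj: {"label": ..., "x_min": ..., "x_max": ...}
        if obj["label"] == "fail":     # only failing pieces are blown out of the stream
            for nozzle in nozzles_for_object(obj["x_min"], obj["x_max"]):
                fire_nozzle(nozzle)
```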
  • FIG. 2A is a simplified schematic illustrating the ML object sorter model training system 117 and generating a ML object sorter model 252 (also referred to as a "trained ML object sorter model 252"), in accordance with embodiments of the disclosure.
  • the system 200 includes the image processing system 114 configured to receive input image data 202 from the optical sensors 128.
  • the image data 202 includes video data including two or more frames of image data.
  • the image data 202 may include data received from the optical sensors 128 during operation of the sorter device 100.
  • the image data 202 includes an intensity of electromagnetic radiation measured (e.g., reflected and/or transmitted through the stream 122) with the optical sensors 128 during exposure of the stream 122 of bulk product 118 to electromagnetic radiation from the emitting device(s) 124.
  • the image data 202 may be in the form of pixels, each pixel indicative of an intensity of measured electromagnetic radiation and corresponding to a particular location within the sorter device 100.
  • the image data 202 may include single channel image data or may include multichannel image data.
  • the image data 202 includes multilayer image data, each layer including single channel image data indicative of a different color (e.g., RGB).
  • the image data 202 may include RGB image data.
  • the image processing system 114 may be configured to analyze the image data 202 and generate modified image data 206.
  • The image processing system 114 may include one or more computer vision operations 204 for modifying the image data 202 (e.g., preprocessing the image data 202) to generate the modified image data 206.
  • The one or more computer vision operations 204 may include a thresholding operation 208, a contouring operation 210, an erosion operation 212, a growing operation 214, an area thresholding operation 216, and a feature extraction operation 217.
  • The thresholding operation 208 may be configured to perform a thresholding operation on the image data 202.
  • the thresholding operation may be performed on single channel image data (e.g., on individual channels of the image data 202) or may be performed on multichannel image data.
  • the thresholding operation is performed on RGB image data, such as on each component of RGB image data.
  • the thresholding operation may include a segmentation technique configured to facilitate separating groups of pixels defining an object of the bulk product 118 from the background.
  • the thresholding operation may be based on one or more of a histogram-based thresholding method, a clustering-based thresholding method, an entropy-based thresholding method, an object attribute-based thresholding method, and a spatial thresholding method.
  • the thresholding operation includes a background thresholding operation configured to exclude the background image data from the image data 202.
  • the groups of pixels may share one or more color characteristics.
  • the groups of pixels may be defined by an intensity value within a predetermined range (corresponding to a predetermined range of color shades), each pixel within the predetermined range neighbored by at least another pixel defined by an intensity within the predetermined range.
  • the contouring operation 210 may include a contour detection operation configured to identify edges of groups of pixels defining objects of the bulk product 118 in the image data 202, and may also be referred to as an edge detection operation.
  • the contouring operation may identify pixels in the image data 202 corresponding to (e.g., defining) a contour of the groups of pixels in the image data 202.
  • The contouring operation 210 may be configured to selectively remove the pixels defining the contour of each group of pixels defining an object of the bulk product 118 from the image data 202.
  • the erosion operation 212 may be configured to perform an erode operation to erode boundaries of regions of foreground pixels, which may shrink a size (a number of pixels) of foreground pixels in the image data 202 and increase the size of holes within those areas.
  • The structuring element (also referred to as the "kernel") of the erosion operation 212 is a square with 9 pixels (e.g., a 3x3 square; 8 pixels surrounding an input pixel (e.g., anchor point or anchor pixel)) or a cross with 5 pixels (e.g., 4 pixels surrounding the input pixel: one above, one below, one to the right, and one to the left).
  • the growing operation 214 may include a growing operation (also referred to as a dilation operation) configured to replace an input pixel with a maximum pixel value of pixels in a kernel as the kernel is scanned across the image data, which may cause bright regions in the image data to grow.
  • one or both of the erosion operation 212 and the growing operation 214 may be applied to the image data 202 to remove noise in the image data 202 and/or to isolate individual groups of pixels (e.g., corresponding to objects) of the bulk product 118 in the image data 202.
  • the image processing system 114 performs an opening operation (an erosion operation followed by a growing operation) to generate the modified image data 206.
  • In other embodiments, the image processing system 114 performs a closing operation (e.g., a growing operation followed by an erosion operation) to generate the modified image data 206.
  • the combination of the erosion operation and the growing operation removes defect areas from the image data 202.
  • the area thresholding operation 216 may be configured to perform an area thresholding operation on the image data 202 to remove groups of pixels in the image data 202 defined by less than a predetermined number of pixels and/or to remove groups of pixels from the image data 202 defined by more than a different predetermined number of pixels.
  • the feature extraction operation 217 may be configured to process the image data 202 to enhance differences in one or more features (e.g., colors, the measured intensity) of the pixels in the image data 202. In some embodiments, the feature extraction operation 217 is configured to determine a correlation between the one or more features (e.g., colors, measured intensity) of the pixels in the image data 202. In some embodiments, the feature extraction operation 217 includes a principal component analysis (PCA) algorithm.
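A preprocessing chain of the kind described above could be sketched with OpenCV and scikit-learn as follows. This is a hedged illustration only: the exact operations, their order, and the thresholds used by the sorter device are not specified here, and `MIN_AREA`/`MAX_AREA` are assumed values.

```python
# Illustrative preprocessing: background thresholding, opening (erode + grow),
# contouring with area thresholding, and PCA-based feature extraction.
import cv2
import numpy as np
from sklearn.decomposition import PCA

MIN_AREA, MAX_AREA = 50, 5000  # assumed area-threshold bounds (in pixels)

def preprocess(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Background thresholding (Otsu) to separate product pixels from the background.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Opening: an erosion followed by a growing (dilation) to remove small noise.
    kernel = np.ones((3, 3), np.uint8)            # 3x3 square structuring element
    mask = cv2.erode(mask, kernel, iterations=1)
    mask = cv2.dilate(mask, kernel, iterations=1)

    # Contouring + area thresholding: keep only plausibly sized groups of pixels.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    kept = [c for c in contours if MIN_AREA <= cv2.contourArea(c) <= MAX_AREA]

    # Feature extraction: PCA over the colors of the retained foreground pixels.
    foreground = image_bgr[mask > 0].astype(np.float32)
    features = PCA(n_components=2).fit_transform(foreground) if len(foreground) > 2 else foreground
    return mask, kept, features
```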
  • FIG. 2B is an example of the GUI 222 that is displayed by the communications interface 218 (FIG. 2A).
  • the GUI 222 includes the output image 220.
  • the GUI 222 includes the menu 224.
  • the communications interface 218 may be configured to receive user input 226 via the GUI 222, such as via the menu 224.
  • the menu 224 includes multiple toolbars, such as a labeling toolbar 227, a modification toolbar 228, a filtering toolbar 230, and an additional toolbar 232.
  • a user may select "mark good” (e.g., pass) on the first selector 234 and select (e.g., by touching) an object and/or one or more pixels on the GUI 222 to add the object and/or one or more pixels to a passing group; and the user may select "mark bad” (e.g., fail) on the first selector 234 and select (e.g., by touching) an object and/or one or more pixels on the GUI 222 to add the object and/or one or more pixels to a failing group.
  • the labeling toolbar 227 may include a second selector 236 configured to facilitate toggling between labeling an object or one or more pixels (e.g., a group of pixels) when assigning a label to an object and/or pixel(s) using the first selector 234.
  • The second selector 236 may be configured to toggle between "mark object" and "mark pixel."
  • the GUI 222 may include one or more selectors in the labeling toolbar 227 configured to facilitate selection of the desired pixels to assign the label.
  • the user may select multiple objects and/or multiple pixel(s) in the output image 220 and classify each object and/or pixel(s) as either passing or failing (good or bad) based on visual inspection by the user.
  • the user may provide the user input 226 to the communications interface 218 via the GUI 222 to indicate which objects and/or pixel(s) pass and would not be separated from the bulk product 118 and which objects and/or pixel(s) fail and would be separated from the bulk product 118.
  • The user may determine a quality of the objects and/or pixel(s) of the commodity based on one or more features of the object and/or pixel(s), such as one or more of a quality, a color, a size, a contour, a shape, and a chemical characteristic.
  • the modification toolbar 228 may include a defect toolbar 240 and a pass toolbar 242.
  • the communications interface 218 may be configured to receive instructions to perform a grow operation and/or an erode operation on pixels and may further be configured to identify the color(s) of the object and/or pixel(s).
  • The communications interface 218 may be configured to perform a grow operation and/or an erode operation on the pixel(s) (e.g., of the object as a whole or on only the selected pixel(s)) and may further be configured to define the color(s) thereof.
  • the modification toolbar 228 facilitates receipt of user input 226 to cause the image processing system 114 to perform one or more image processing techniques on the image data and generate modified image data, as described in additional detail herein.
  • a user may label objects and/or pixel(s) within the output image 220 that pass and/or objects and/or pixel(s) that fail via the GUI 222, such as with one or more of the first selector 234, the second selector 236, the defect toolbar 240, and the pass toolbar 242.
  • the ML object sorter model training system 117 may be configured to train a ML object sorter model to classify objects and/or pixel(s) within image data (e.g., different output images obtained during operation of the sorter device 100) based on the user input 226.
  • the user input 226 may label each object (e.g., pass or fail) and the ML object sorter model training system 117 may use the labeled objects as training data to train the ML object sorter model.
  • the user input 226 may label only some pixels of objects (e.g., pass or fail), such as only defect areas of the objects, and the ML object sorter model training system 117 may use only labeled pixels as training data to train the ML object sorter model.
  • the boundaries of the object may be defined using the defect toolbar 240 and/or the pass toolbar 242, as described in further detail herein.
  • the filtering toolbar 230 may include one or more selectors configured for performing one or more operations on the output image 220, such as for applying one or more filters on the output image 220.
  • The additional toolbar 232 may include one or more selectors configured to facilitate operation of the sorter device 100.
  • the system 200 is configured to generate training data (e.g., a training set of data) 244 based on the user input 226.
  • The training data 244 may include labeled samples, each including image data of an object and metadata indicating the label of that object in the image data.
  • the training data 244 may include labeled objects and/or labeled pixel(s) based on the user input 226.
  • the training data 244 may include, for example, a pass label 246 (also referred to as a "good label”) including objects and/or pixel(s) labeled as passing, and a fail label 248 (also referred to as a "bad label”) including objects and/or pixel(s) labeled as failing.
  • the training set of data 244 may also be referred to herein as "ground truth data” and may be used for training a ML object sorter model.
  • the training data 244 additionally includes the content of the corresponding objects, such as image data or other characteristics derived from the image data generated and/or presented to the user.
  • The system 200 further includes a machine learning system including the ML object sorter model 252 generated, at least in part, based on the training data 244.
  • the ML object sorter model 252 includes a supervised ML model.
  • the ML object sorter model 252 may include a classification ML model configured to classify (e.g., label) an object and/or pixel(s) based on one or more features thereof.
  • The ML object sorter model 252 may include one or more of a decision tree algorithm, a random forest algorithm, a Gaussian mixture model (GMM), a logistic regression algorithm (e.g., linear regression, lasso regression, ridge regression, polynomial regression), a naive Bayes algorithm, a K-nearest neighbors (KNN) algorithm, a support vector machine (SVM) algorithm, a stochastic gradient descent (SGD) algorithm, and a half of threshold (sHoT) training algorithm.
  • the ML object sorter model 252 includes one or more of a decision tree model, a random forest model, and a Gaussian mixture model.
  • The ML object sorter model 252 receives the training data 244, including the pass labels 246, the fail labels 248, and the associated image data, and the ML object sorter model 252 is trained based on the training data 244.
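As a minimal, hypothetical training sketch (assuming `X` holds one color-feature vector per labeled object or pixel group and `y` holds the operator's pass/fail labels encoded as 0/1), a random forest classifier could be fit and a held-out portion reserved as test data:

```python
# Hedged example: fit one of the classifier types named above (a random forest).
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def train_sorter_model(X, y):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    return model, (X_test, y_test)   # the held-out portion can serve as test data
```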
  • the ML object sorter model 252 may be configured to facilitate classification and sorting of multiple types of commodities (bulk products) based (e.g., based only) on image data (e.g., the color of pixels in the image data).
  • the ML object sorting model 252 may include multiple classification algorithms 254, such as a classification algorithm 254 for each type of bulk product that may be sorted by the sorter device 100.
  • The ML model 252 may include a classification algorithm for one or more of grain (e.g., each of wheat, barley, oats, corn, rye, quinoa, bulgur), nuts (e.g., each of peanuts, almonds, cashews, etc.), seeds, rice, and plastic.
  • The ML object sorter model 252 may be trained with different sets of training data 244 specific to each of the classification algorithms 254 (e.g., the ML object sorting model 252 for classifying rice may be trained with image data 202 obtained while the stream 122 includes rice).
  • Training the ML object sorter model 252 includes determining a correlation between one or more color features of the objects (e.g., the pixels defining the objects) and/or pixel(s) in the training data 244 and the labels assigned thereto.
  • the color features may include an average intensity of measured radiation in one or more (e.g., each) channel of image data defining a group of pixels, a number of pixels in a group of pixels having an intensity outside of a predetermined range (e.g., corresponding to pixels in the groups of pixels having an undesired color), or another color feature.
  • the ML object sorter model 252 is trained to predict a class of objects and/or pixel(s) (e.g., pass or fail) based on pixel data including the measured electromagnetic radiation and corresponding to one or more color features of the objects and/or pixel(s) in the image data.
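The two color features mentioned above could, for example, be computed per group of pixels as in the sketch below; `group` is assumed to be an (N, channels) array of measured intensities, and the `LO`/`HI` bounds are illustrative values, not values from the disclosure.

```python
# Illustrative color features: per-channel mean intensity and count of off-color pixels.
import numpy as np

LO, HI = 60, 200  # assumed "acceptable" intensity range per channel

def color_features(group):
    mean_per_channel = group.mean(axis=0)                                    # average intensity per channel
    n_out_of_range = int(((group < LO) | (group > HI)).any(axis=1).sum())    # pixels with any channel out of range
    return np.concatenate([mean_per_channel, [n_out_of_range]])
```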
  • a portion of the training data 244 is provided to the ML object sorter model 252 as test data 256 configured to test the ML object sorter model 252.
  • the test data 256 includes labeled data (the label defined by the user input 226).
  • the test data 256 may be received by the ML object sorter model 252 and the ML object sorter model 252 performs a classification of the test data 256 to determine an accuracy of the ML object sorter model 252.
  • The test data 256 may be used to determine one or more of the precision of the ML object sorter model 252 for the particular classification algorithm 254 (e.g., the percentage of correct classifications), the recall of the ML object sorter model 252 (e.g., the percentage of bad labels in the test data 256 identified by the ML object sorter model 252), and the F1-score (e.g., an average metric accounting for the precision and the recall).
  • the test data 256 may be displayed on the GUI 222 so that the user may determine a suitability of the ML object sorter model 252 prior to use of the ML object sorter model 252 in a sorting operation.
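A hedged example of scoring the model on held-out labeled test data, reporting the precision, recall, and F1-score described above (standard scikit-learn definitions are used here):

```python
# Evaluate the trained sorter model on labeled test data.
from sklearn.metrics import precision_score, recall_score, f1_score

def evaluate(model, X_test, y_test):
    y_pred = model.predict(X_test)
    return {
        "precision": precision_score(y_test, y_pred),  # of the predicted fails, how many are true fails
        "recall": recall_score(y_test, y_pred),        # of the true fails, how many were caught
        "f1": f1_score(y_test, y_pred),                # harmonic mean of precision and recall
    }
```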
  • the ML object sorter model 252 may be continuously or intermittently updated with additional training data.
  • the ML object sorter model 252 may receive incremental learning data (e.g., additional user input including labeled image data of image data obtained during a sorting operation) and/or may receive online machine learning data.
  • FIG. 3 is a schematic representation of the ML object sorter system 121, in accordance with embodiments of the disclosure.
  • the ML object sorter system 121 includes image data 302 received from one or more of the optical sensors 128.
  • the image data 302 may be substantially the same as the image data 202, except that the image data 302 may be obtained during a sorting operation rather than during a training operation.
  • the image data 302 may be obtained during sorting of a bulk product 118 after the ML object sorter model 252 has been trained using the ML object sorter model training system 117 (FIG. 2A).
  • the image processing system 114 may receive the image data 302 and analyze the image data to identify one or more regions within the image data 302 corresponding to one or more objects (e.g., one or more objects of the bulk product 118) in the stream 122.
  • the image processing system 114 may be configured to identify groups 304 of pixels (also referred to as "clusters" of pixels) in the image data 302 (e.g., pixels neighboring one another) having the same color (e.g., substantially the same color) and/or having an intensity value in one or more channels within a predetermined range.
  • the groups 304 of pixels may include pixels having an intensity within a predetermined range, corresponding to a shade of one or more colors. In some such embodiments, the groups 304 of pixels may be identified based on the detected color of the bulk product 118 in the stream 122. In some embodiments, a number of the groups 304 of pixels may correspond to a number of objects of the bulk product 118 captured in the image data 302. One or more of the groups 304 of pixels may correspond to an object and/or one or more of the groups 304 of pixels may correspond to an area (e.g., a defect area) of an object of the bulk product 118.
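Identifying groups (clusters) of neighbouring pixels whose intensities fall within a predetermined range could be sketched with OpenCV's connected-component analysis, as below; the intensity bounds and minimum area are assumptions for illustration only.

```python
# Illustrative grouping of pixels by intensity range using connected components.
import cv2
import numpy as np

def find_pixel_groups(image_bgr, lower=(0, 0, 0), upper=(120, 120, 120), min_area=50):
    in_range = cv2.inRange(image_bgr, np.array(lower, np.uint8), np.array(upper, np.uint8))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(in_range, connectivity=8)
    groups = []
    for i in range(1, n):                      # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            groups.append(labels == i)         # boolean mask of one group of pixels
    return groups
```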
  • the image processing system 114 may perform one or more image processing operations on the image data 302, as described above with reference to FIG. 2A. In some embodiments, during the sorting operation, the image processing system 114 performs one or more different operations and/or sequences of operations on the image data 302 compared to the operations performed on the image data 202 during the training operation described above with reference to FIG. 2A. By way of non-limiting example, in some embodiments, the image processing system 114 performs an area thresholding operation 216 on the image data 302 to identify groups 304 of pixels in the image data 302 defined by more than a predetermined number of pixels. The image processing system 114 may further perform one or more of a background thresholding operation 208, a contouring operation 210, an erosion operation 212, a growing operation 214, and a feature extraction operation 217 on the image data 302.
  • the image processing system 114 may be configured to perform one or more object detection techniques to identify the objects of the bulk product 118 within the stream 122 rather than or in addition to identifying the groups 304 of pixels.
  • the ML object sorter system 121 may cause the ML model 252 to receive each group 304 of image data and analyze each group 304 of image data to generate a labeling decision 306, which may also be referred to herein as a "sorting decision".
  • the ML model 252 may create the labeling decision 306 based on, for example, one or more color features of the pixels in the group 304 of pixels.
  • the ML object sorter model 252 may determine an average intensity value of the pixels of a group 304 of pixels and compare the average intensity value of the pixels of the group 304 to the training set of data.
  • the labeling decision 306 may include, for example, a decision to label the object and/or pixel(s) defined by the group 304 of pixels as passing (e.g., good) or to label the object and/or pixel(s) as failing (e.g., bad).
  • the labeling decision 306 may be made based on the ML object sorter model 252.
  • the labeling decision 306 may be received by the ejector controller 115 and may cause the ejector controller 115 to cause the ejector 108 to operate.
  • the ejector controller 115 may cause the ejector 108 to direct a stream of air to the object defined by the group 304 of pixels to separate the object from the stream 122 for groups 304 of pixels labeled as failing (or bad).
  • For groups 304 of pixels labeled as passing (or good), the ejector controller 115 may cause the ejector 108 not to direct a stream of air at the object.
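Putting the pieces together, a per-frame sorting step might look like the following sketch, reusing the illustrative helpers introduced above (`find_pixel_groups`, `color_features`, `nozzles_for_object`); none of these names come from the disclosure.

```python
# Illustrative per-frame sorting step: classify each pixel group and eject failing pieces.
import numpy as np

def sort_frame(frame, model, fire_nozzle):
    for mask in find_pixel_groups(frame):
        ys, xs = np.nonzero(mask)
        feats = color_features(frame[mask].astype(np.float32))
        decision = model.predict([feats])[0]   # 1 = fail, 0 = pass
        if decision == 1:                      # failing piece: blow it out of the stream
            for nozzle in nozzles_for_object(xs.min(), xs.max()):
                fire_nozzle(nozzle)
```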
  • the ML object sorter system 121 may be configured to provide additional training data to the ML object sorter model 252 for further training thereof.
  • at least some of the groups 304 of pixels may be received by a communications interface 218.
  • the communications interface may display the image data 302 as an output image 320 on a GUI 222, as described above with reference to FIG. 2A.
  • a predetermined percentage of the groups 304 of pixels may be received by the communications interface 218.
  • a user may view the output image 320 and provide user input 326 to the ML object sorter system 121 and/or the ML object sorter model training system 117 via the communications interface 218.
  • the user input 326 may include a label for the output image 320, such as whether the object and/or pixel(s) of the bulk product 118 in the output image 320 should be labeled as a pass or a fail.
  • the ML object sorter model training system 117 or the ML object sorter system 121 may perform a compare operation 308 to compare the user input 326 to the labeling decision 306 and determine whether the ML object sorter model 252 correctly labeled the image data 302.
  • An output 330 of the compare operation 308 may be received by the ML object sorter model 252 and may be configured to facilitate continuous training of the ML object sorter model 252.
  • In other embodiments, the ML object sorter model 252 is not continuously trained and the labeling decision 306 is not compared to the user input 326.
  • FIG. 4 is a simplified flow diagram illustrating a method 400 of training a ML object sorter model, in accordance with embodiments of the disclosure.
  • the method 400 includes receiving image data with one or more optical sensors 128, as shown in act 402.
  • the image data may include, for example, the image data 202.
  • the image data may be obtained during operation of the sorter device 100, such as during flow of a stream 122 of the bulk product 118 in front of the at least one detection system 106.
  • the method 400 may further include processing the image data to create modified image data, as shown in act 404.
  • the image data may be processed by performing one or more operations on the image data, such as one or more of a thresholding operation 208, a contouring operation 210, an erosion operation 212, a growing operation 214, an area thresholding operation 216, and a feature extraction operation 217, as described above with reference to FIG. 2A.
  • Creating the modified image data may include sequentially performing a background thresholding operation 208 on the image data, performing a contouring operation 210 on the image data, and performing a feature extraction operation 217 on the image data.
  • creating the modified image data may include performing one or both of an erosion operation 212 and a growing operation 214 on the image data, followed by performing an area thresholding operation on the image data.
  • the method 400 may further include displaying an output image based on the modified image data, as shown in act 406.
  • Displaying the output image may include displaying an output image on a communications interface 222, such as a display (e.g., a monitor).
  • the method 400 may further include receiving a user input indicative of a label of an object and/or pixel(s) in the output image and creating training data, as shown in act 408.
  • a user may select an object and/or pixel(s) in the output image, such as by touching the object and/or pixel(s) on a display to select the object and/or pixel(s).
  • the user may label the object and/or pixel(s) (e.g., either as a pass or a fail) based on the appearance thereof.
  • the user may label the object and/or pixel(s) based on one or more features thereof, such as one or more of a color, a variation in color, a shape, a size, and a texture thereof.
  • the user input may be provided to the communications interface 218 and may be configured to classify the object and/or pixel(s). For example, as described above with reference to FIG. 2A and FIG. 2B, responsive to receiving the user input, the ML object sorter model training system 117 may create training data. Responsive to receiving user input from a plurality of output images, the method 400 may include creating a set of training data.
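As a sketch only, the user-labelled selections could be accumulated into a set of training data roughly as follows; LabelRecord and build_training_set are assumed names, not part of the disclosed ML object sorter model training system 117.

```python
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class LabelRecord:
    """One user-labelled object or pixel region from an output image."""
    features: np.ndarray   # color features of the selected object and/or pixel(s)
    label: int             # 1 for a pass label, 0 for a fail label

def build_training_set(records: List[LabelRecord]) -> Tuple[np.ndarray, np.ndarray]:
    """Assemble the set of training data from labelled records (act 408)."""
    X = np.vstack([r.features for r in records])
    y = np.array([r.label for r in records])
    return X, y
```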
  • the method 400 may include training a machine learning model based on the training data to create a trained ML object sorter model 252, as shown in act 410.
  • the ML object sorter model 252 may include a decision tree model, a random forest model, or a Gaussian mixture model.
  • the ML object sorter model training system 117 may train a ML model to create the ML sorting model 252 based on the training data.
  • the ML sorting model 252 may be trained to sort the same type of object from which the training data is generated.
  • the ML sorting model 252 may be trained based on only a color feature of the labeled objects and/or pixel(s) in the training data.
  • the ML object sorter model 252 may identify patterns in the color of the objects and/or pixel(s) with a passing label 246 and the objects and/or pixel(s) with a failing label 248 (e.g., the intensity of the pixels defining the objects and/or pixel(s), an average intensity of the pixels defining the objects and/or pixel(s), or one or more other patterns in the color data of the labeled objects and/or pixel(s)) to create the ML sorting model 252.
  • the ML object sorter model 252 may be trained based on user input defining the labels of the objects and/or pixel(s).
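Act 410 could then be realised with any of the model families named above (a decision tree model, a random forest model, or a Gaussian mixture model). The sketch below assumes scikit-learn is available and picks a random forest; a decision tree or Gaussian mixture model could be substituted.

```python
from sklearn.ensemble import RandomForestClassifier

def train_object_sorter_model(X, y):
    """Fit a classifier on color features of user-labelled groups (act 410).
    X: feature matrix, e.g., from build_training_set; y: 1 = pass, 0 = fail."""
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)
    return model
```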
  • the input to the ML object sorter model training system 117 may be more accurate and of higher quality than input data including raw image data (e.g., the image data 202).
  • Training the ML object sorter model 252 with higher quality training data may facilitate creating the ML object sorter model 252 to exhibit a higher accuracy than a model trained with only the raw image data 202.
  • the ML object sorter model 252 may classify (label) objects and/or pixel(s) faster than conventional methods of sorting a bulk product. In addition, in some embodiments, because only some pixels of an object (e.g., only the regions of the object having defect areas) may be labeled and provided as training data, the ML object sorter model 252 may be trained with better training data than would result from labeling the entire object, which may include only small or minor defect areas while other regions (e.g., a majority) of the object would be passing (e.g., have passing colors).
  • the labeling decision may be based on color data (e.g., intensity data), and some objects may include only some areas that include defects.
  • Training the ML object sorter model 252 with fail labels 248 defined only by the defect areas and not the entire object (which may include some pixels that would otherwise be labeled with a pass label 246) facilitates training the ML object sorter model 252 with improved training data and improved performance of the ML object sorter model 252.
  • FIG. 5 is a simplified flow diagram illustrating a method 500 of sorting a bulk product, in accordance with embodiments of the disclosure. The method 500 may optionally include creating modified image data, as shown in act 504.
  • Creating the modified image data may include performing one or more of a thresholding operation 208, a contouring operation 210, an erosion operation 212, a growing operation 214, an area thresholding operation 216, and a feature extraction operation 217 on the image data, as described above with reference to FIG. 3.
  • the image processing system 114 performs an area thresholding operation 216 on the image data to identify groups 304 of pixels in the image data defined by more than a predetermined number of pixels and further performs one or more of a background thresholding operation 208, a contouring operation 210, an erosion operation 212, and a growing operation 214, as described with reference to FIG. 3.
  • the image processing system 114 may be configured to perform one or more object detection techniques to identify the particles of the bulk product 118 within the stream 122 rather than or in addition to identifying the groups of pixels.
  • the method 500 may further include analyzing groups of pixels of the image data using a ML object sorting model to generate a labeling decision for each group of pixels, as shown in act 506.
  • the ML object sorting model 252 may analyze groups (e.g., groups 304) of image data to identify one or more color features of the groups of image data and generate the labeling decision based on the one or more color features.
  • the ML sorting model 252 may determine an average intensity value of the pixels of a group of pixels and compare the average intensity value of the pixels of the group 304 to the training set of data.
  • the labeling decision may include an identification of whether each object defined by each group of pixels is a pass (e.g., should remain with the bulk product 118) or a fail (e.g., should be separated from the bulk product 118) and may be determined based on the ML object sorting model 252.
  • the method 500 includes receiving the labeling decision with an ejector controller 115, as shown in act 508.
  • the ejector controller 115 may receive the labeling decision from the ML object sorter model 252.
  • the method 500 includes controlling an ejector 108 based on the labeling decision, as shown in act 510.
  • the ejector 108 receives a control signal from the ejector controller 115 responsive to the ejector controller 115 receiving the labeling decision.
  • responsive to receiving a fail labeling decision for an object from the ML object sorting model 252, the ejector controller 115 may send a signal to the ejector 108 to expose the object to a stream (e.g., pulse) of air to separate the object from the bulk product 118; and responsive to receiving a pass labeling decision for an object from the ML object sorting model 252, the ejector controller 115 may control the ejector 108 to not expose the object to a stream (e.g., pulse) of air (e.g., by not sending a signal to the ejector 108).
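Tying the preceding sketches together, one iteration of the sorting operation (acts 504 through 510) might look as follows. It reuses the illustrative find_pixel_groups, classify_group, and EjectorController helpers from the earlier snippets, and the mapping from pixel columns in the mask to ejector lanes is an assumption rather than part of the disclosure.

```python
import numpy as np

def sorting_step(frame_bgr, frame_gray, model, controller):
    """One pass over a frame: find groups, label them, and drive the ejector."""
    for group_mask in find_pixel_groups(frame_gray):                # act 504 (optional)
        decision = classify_group(model, frame_bgr, group_mask)     # act 506
        lane = int(np.argmax(group_mask.sum(axis=0)))               # crude lane estimate from mask position
        controller.handle_decision(decision, lane)                  # acts 508 and 510
```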
  • the sorter device 100 may be configured to receive training data from a user and to train a ML object sorting model to label objects to be sorted based on one or more color properties of the objects and/or one or more color properties of one or more regions (e.g., defect areas) of the objects.
  • the sorter device 100 may be configured to perform a sorting operation of the bulk product 118 based on the ML object sorting model 252 (and without a predefined algorithm for sorting the particular bulk product 118). Training the ML object sorting model 252 to facilitate the sorting operation may provide advantages compared to conventional methods of sorting a bulk product. For example, conventional methods of sorting may sort based on wavelengths of electromagnetic radiation that are reflected from the bulk product.
  • the color of the bulk product may depend on the type of bulk product to be sorted. As one example, different types of rice may exhibit different color properties.
  • the sorter device 100 may not require a different algorithm for each type of product to be sorted. Rather, the sorter device 100 may be trained to sort each unique bulk product based on the user input that defines the training data used to train the ML object sorting model for each unique type of bulk product. Further, the labeling decision (and the sorting of the objects) may be based solely on the image data and the ML object sorting model 252 without modification of the image data, which may be required for some conventional sorting operations.
  • although the machine learning object sorter model training system 117 and the ML object sorter system 121 have been described and illustrated as being located on the same computing device 112 of the sorter device 100, the disclosure is not so limited. In other embodiments, the machine learning object sorter model training system 117 may be located remotely from the sorter device 100. In some such embodiments, the user input 226 may be received at the sorter device 100 and communicated to the machine learning object sorter model training system 117 over a network. The machine learning object sorter model training system 117 may train the ML object sorter model 252 (as described above with reference to FIG. 2A and FIG. 2B).
  • FIG. 6 is a schematic view of a computer device 602, in accordance with embodiments of the disclosure.
  • the computing device 112 includes a computer device such as the computer device 602 of FIG. 6.
  • the computing device 602 may include a communication interface 604, at least one processor 606, a memory 608, a storage device 610, an input/output device 612, and a bus 614.
  • the computing device 602 may be used to implement various functions, operations, acts, processes, and/or methods disclosed herein, such as the method 400 and/or the method 500.
  • the communication interface 604 may include hardware, software, or both.
  • the communication interface 604 may provide one or more interfaces for communication (such as, for example, packet-based communication) between the computing device 602 and one or more other computing devices or networks (e.g., a server).
  • the communication interface 604 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a Wi-Fi network.
  • the at least one processor 606 may include hardware for executing instructions, such as those making up a computer program.
  • the at least one processor 606 may retrieve (or fetch) the instructions from an internal register, an internal cache, the memory 608, or the storage device 610 and decode and execute them.
  • the at least one processor 606 includes one or more internal caches for data, instructions, or addresses.
  • the at least one processor 606 may include one or more instruction caches, one or more data caches, and one or more translation look aside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in the memory 608 or the storage device 610.
  • the memory 608 may be coupled to the at least one processor 606.
  • the memory 608 may be used for storing data, metadata, and programs for execution by the processor(s).
  • the memory 608 may include one or more of volatile and non-volatile memories, such as Random-Access Memory (“RAM”), Read-Only Memory (“ROM”), a solid state disk (“SSD”), Flash, Phase Change Memory (“PCM”), or other types of data storage.
  • the memory 608 may be internal or distributed memory.
  • the storage device 610 may include storage for storing data or instructions. As an example, and not by way of limitation, storage device 610 may include a non-transitory storage medium described above.
  • the storage device 610 may include a hard disk drive (HDD), Flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these.
  • the storage device 610 may include removable or non-removable (or fixed) media, where appropriate.
  • the storage device 610 may be internal or external to the computing device 602.
  • the storage device 610 is non-volatile, solid-state memory.
  • the storage device 610 includes read-only memory (ROM).
  • this ROM may be mask programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or Flash memory or a combination of two or more of these.
  • the storage device 610 may include machine-executable code stored thereon.
  • the storage device 610 may include, for example, a non-transitory computer-readable storage medium.
  • the machine-executable code includes information describing functional elements that may be implemented by (e.g., performed by) the at least one processor 606.
  • the at least one processor 606 is adapted to implement (e.g., perform) the functional elements described by the machine-executable code.
  • the at least one processor 606 may be configured to perform the functional elements described by the machine-executable code sequentially, concurrently (e.g., on one or more different hardware platforms), or in one or more parallel process streams.
  • the machine-executable code When implemented by the at least one processor 606, the machine-executable code is configured to adapt the at least one processor 606 to perform operations of embodiments disclosed herein.
  • the machine-executable code may be configured to adapt the at least one processor 606 to perform at least a portion or a totality of the method 400 of FIG. 4 and/or the method 500 of FIG. 5.
  • the machine-executable code may be configured to adapt the at least one processor 606 to perform at least a portion or a totality of the operations discussed for the sorter device 100 of FIG. 1.
  • the machine-executable code may be configured to adapt the at least one processor 606 to cause the ML object sorter model training system 117 to train a ML object sorter model 252 and/or to cause the ML object sorter model 252 to sort a bulk product, as described above with reference to FIG. 1 through FIG. 5.
  • the input/output device 612 may allow an operator of the sorter device 100 to provide input to, and receive output from, the computing device 602.
  • the input/output device 612 may include a mouse, a keypad or a keyboard, a joystick, a touch screen, a camera, an optical scanner, network interface, modem, other known I/O devices, or a combination of such I/O interfaces.
  • the input/output device 612 may include one or more devices for the operator to toggle between various displays of the output image.
  • the bus 614 may include hardware, software, or both that couples components of the computing device 602 to each other and to external components, such as via a Controller Area Network (CAN) bus or an ISO 11783 Compliant Implement Control (ISOBUS) bus.

Landscapes

  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)

Abstract

A sorter device includes a feed system, a detection system including at least one emission device and at least one optical sorter, an ejector, and a computing device operably coupled to the detection system and the ejector. The computing device includes instructions that, when executed by at least one processor, cause an image processing system to generate image data, cause a communications interface to display an output image based on the image data, cause the communications interface to receive a user input including labeling data of at least some pixels defining objects of the product in the output image to generate training data, and train a machine learning object sorter model based on the training data. Related sorter devices and methods are also disclosed.
PCT/IB2024/056637 2023-09-11 2024-07-08 Dispositifs trieurs utilisant des modèles de trieur d'objets par apprentissage automatique, et procédés associés Pending WO2025056990A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP23425047 2023-09-11
EP23425047.0 2023-09-11

Publications (1)

Publication Number Publication Date
WO2025056990A1 true WO2025056990A1 (fr) 2025-03-20

Family

ID=89029604

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2024/056637 Pending WO2025056990A1 (fr) 2023-09-11 2024-07-08 Dispositifs trieurs utilisant des modèles de trieur d'objets par apprentissage automatique, et procédés associés

Country Status (1)

Country Link
WO (1) WO2025056990A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160136693A1 (en) * 2014-06-27 2016-05-19 Key Technology, Inc. Method and apparatus for sorting
WO2022170273A1 (fr) * 2021-02-08 2022-08-11 Sortera Alloys, Inc. Tri de matières plastiques de couleurs foncées et noires

Similar Documents

Publication Publication Date Title
Luo et al. Classification of weed seeds based on visual images and deep learning
Blasco et al. Recognition and classification of external skin damage in citrus fruits using multispectral data and morphological features
AU2013347861B2 (en) Scoring and controlling quality of food products
Shahin et al. A machine vision system for grading lentils
CA2230784C (fr) Appareil de triage d'aliments en masse se deplacant a grande vitesse destine a l'inspection et au tri optique de produits alimentaires en vrac
Kleynen et al. Development of a multi-spectral vision system for the detection of defects on apples
Al Ohali Computer vision based date fruit grading system: Design and implementation
Zareiforoush et al. Design, development and performance evaluation of an automatic control system for rice whitening machine based on computer vision and fuzzy logic
US20090050540A1 (en) Optical grain sorter
JP7497760B2 (ja) 被選別物の識別方法、選別方法、選別装置、および識別装置
US20250058360A1 (en) Devices, systems and methods for sorting and labelling food products
Neelamegam et al. Analysis of rice granules using image processing and neural network
Liang et al. A high-throughput maize kernel traits scorer based on line-scan imaging
Wang et al. Separation and identification of touching kernels and dockage components in digital images
Alfatni et al. Colour feature extraction techniques for real time system of oil palm fresh fruit bunch maturity grading
Shajahan et al. Identification and counting of soybean aphids from digital images using shape classification
Bautista et al. Automated sorter and grading of tomatoes using image analysis and deep learning techniques
US10902575B2 (en) Automated grains inspection
Aznan et al. Rice seed varieties identification based on extracted colour features using image processing and artificial neural network (ANN)
WO2025056990A1 (fr) Dispositifs trieurs utilisant des modèles de trieur d'objets par apprentissage automatique, et procédés associés
JP7512853B2 (ja) 被選別物の識別方法、選別方法及び選別装置
US20240428388A1 (en) Soybean Quality Assessment
Kanjanawanishkul et al. Design and assessment of an automated sweet pepper seed sorting machine
KR101673056B1 (ko) 곡물불량검출방법
Lim Jr et al. De-husked Coconut Quality Evaluation using Image Processing and Machine Learning Techniques

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24745531

Country of ref document: EP

Kind code of ref document: A1