
WO2025123132A1 - System and method for autonomously inspecting object closures - Google Patents


Info

Publication number
WO2025123132A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
processing unit
bag
sealing element
imaging apparatus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CA2024/051645
Other languages
French (fr)
Inventor
Mathieu JUNKER
Martin CABOTTE
Billy Lapointe
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Premier Tech Technologies Ltd
Original Assignee
Premier Tech Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Premier Tech Technologies Ltd filed Critical Premier Tech Technologies Ltd
Publication of WO2025123132A1 publication Critical patent/WO2025123132A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/90 Investigating the presence of flaws or contamination in a container or its contents
    • G01N21/9054 Inspection of sealing surface and container finish
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8883 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges involving the calculation of gauges, generating models
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]

Definitions

  • Materials such as grains, seeds, animal food, biomass fuels, flours, pellets, plastics polymers, mineral powders, food produce, and milled products, etc. are typically stored and packaged in bags.
  • the bags are typically constructed using materials such as paper, polyester (PET), woven polypropylene (WPP), polyethylene (PE), metalized polyester (MET PET), burlap, cotton or other natural or synthetic fabric. These materials are used due to their durability and strength, ensuring that the products inside the bags remain dry and are not destroyed during transport.
  • the materials used to construct the bags are also convenient for sealing the bags easily.
  • the bags can be sealed by sewing or fastening the opening shut.
  • the bags can also be resealable by using zippers or slider fasteners to allow an end-user to open and re-seal the bags easily.
  • the zippers or fasteners may also have to be sewn onto or into the bags.
  • One limitation of the sewing method used to seal the bags is that sewing may have to be done by an operator. This may be difficult since the bags containing the products may be heavy and difficult to maneuver when sewing. Additionally, it may be difficult to identify and inspect bags that were sealed using different methods, or by different operators. As such, there may be a large variability in the quality of bag closures.
  • an imaging system for inspecting an object having at least one sealing element.
  • the system can comprise: an imaging apparatus having an optical axis for capturing an image; a light source located proximal to the imaging apparatus; and a processing unit coupled to the imaging apparatus configured to receive image data from the imaging apparatus.
  • the object having the at least one sealing element can be appropriately disposed to permit a portion of the object to pass in front of the imaging apparatus transversely to the optical axis of the imaging apparatus, the light source illuminates the sealing element, and the imaging apparatus is configured to capture an image of the at least one sealing element.
  • the processing unit receives the image from the imaging apparatus and identifies parameters of the at least one sealing element with respect to the object.
  • the imaging apparatus can be formed within an enclosure, such that the enclosure blocks external light and provides contrast in the image of the sealing element.
  • the imaging system further can comprise a separator positioned between the imaging apparatus and the light source, such that the separator blocks out excessive light from the light source.
  • the enclosure can comprise an inlet for receiving the object and outlet for retrieving the object after the image from the imaging apparatus is captured.
  • the processing unit can comprise an inspection model for analyzing the image using a trained machine learning algorithm and identifying, using the image, the parameters of the at least one sealing element with respect to the object.
  • the machine learning algorithm can comprise at least one of: a convolutional neural network (CNN), and semantic segmentation.
  • the semantic segmentation can comprise at least one of: U-Nets, PSP-Nets, Deeplab, and Parse Nets.
  • the inspection model can be trained to analyse the image and determine whether the sealing element of the object is faulty or proper.
  • the sealing element can comprise a seam line and/or a closing strip.
  • the inspection model can comprise: identifying, by the processing unit, the seam line; identifying, by the processing unit, an outline or outer boundary of the object, identifying an upper limit of the object; identifying, by the processing unit, the closing strip; identifying, by the processing unit, a bottom of the closing strip; determining, by the processing unit, a distance between the upper limit of the object and the bottom of the closing strip; and determining, by the processing unit, the distance between the seam line and a top or bottom limit of the closing strip.
  • the processor can determine that the object is faulty if one or more of the following parameters are met: an upper limit of the object is located less than a first threshold distance above the sealing element, the sealing element is absent, incomplete, or crooked; the sealing element is located at a second threshold distance from the bottom of the closing strip; the seam line is located at a third threshold distance from the top of the closing strip; and the upper limit of the object is located less than a fourth threshold distance above the lower limit of the closing strip.
  • if the processing unit determines that the sealing element of the object is faulty, the processor can actuate an actuator to separate the faulty object from another object which is determined to be proper.
  • a method for inspecting an object can comprise: capturing an image of the object; providing the image of the object to a processing unit; and applying, via the processing unit, an inspection model to the image of the object.
  • the inspection model can comprise at least one of: identifying, by the processing unit, using the image of the object, an outline of the object; identifying, by the processing unit, using the image of the object, an upper limit of the object; identifying, by the processing unit, using the image of the object, at least a first element on the object; determining, by the processing unit, whether the first element on the object is positioned within an acceptable tolerance with respect to the outline of the object, and the upper limit of the object; and determining, by the processing unit, whether the first element on the object is positioned within an acceptable tolerance with respect to the outline of the object, and the upper limit of the object.
  • the method can further comprise: locating, by the processor, using the image of the object, a second element on the object; the second element having a bottom edge connected to the object, and a top edge opposing the bottom edge; determining, by the processor, using the image of the object, a distance between the upper limit of the object and the bottom edge of the second element; and identifying, by the processor, using the image of the object, a distance between the first element and the bottom edge of the second element.
  • the method can further comprise: rejecting an object if at least one or more of the following conditions are met: the first element is absent; the second element is absent; the upper limit of the object is located less than a first threshold distance above the first element; the first element is located at a distance less than a second threshold distance from the bottom edge of the second element; the first element is located at a distance less than a third threshold distance from the top edge of the second element; and the upper limit of an object is located less than a fourth threshold distance below the top edge of the second element.
  • the method further comprises a pre-sorting step after the image capturing step, and before the application of the inspection model step.
  • the pre-sorting step can comprise: identifying images that are obvious fails; and sending the remaining images to the inspection system for application of the inspection model step.
  • FIG. 1 is a block diagram of a system for autonomously inspecting a bag
  • FIGS. 2A to 2E are diagrams of alternative embodiments of the imaging system
  • FIG. 3 is a diagram of the bag sealing system, imaging system, and computing unit
  • FIGS. 4A and 4B are diagrams showing the bag within the imaging system for inspection
  • FIG. 5A is a flowchart showing a complete method of filling, sealing and inspecting a bag
  • FIG. 5B is a flowchart showing a method of autonomously inspecting a bag
  • FIG. 8A is a photograph showing a folded bag
  • FIG. 8B is a photograph showing a folded bag
  • FIG. 9B is a diagram showing a folded bag having a label element
  • FIGS. 10A and 10B are photographs showing pre-sorted bags which have failed the pre-sorting test.
  • The terms “coupled” or “coupling” can have several different meanings depending on the context in which they are used.
  • the terms coupled or coupling can have a mechanical, electrical or communicative connotation.
  • the terms coupled or coupling can indicate that two elements or devices can be directly connected to one another or connected to one another through one or more intermediate elements or devices via an electrical element, an electrical signal, a light signal or a mechanical element depending on the particular context.
  • X and/or Y is intended to mean X or Y or both X and Y, for example.
  • X, Y, and/or Z is intended to mean X or Y or Z or any combination thereof.
  • communicative as in “communicative pathway”, “communicative coupling”, and in variants such as “communicatively coupled” is generally used to refer to any engineered arrangement for transferring and/or exchanging information.
  • communicative pathways include, but are not limited to, electrically conductive pathways (e.g., electrically conductive wires, physiological signal conduction), electromagnetically radiative pathways (e.g., radio waves, optical signals, etc.), or any combination thereof.
  • communicative couplings include, but are not limited to, electrical couplings, magnetic couplings, radio couplings, optical couplings or any combination thereof.
  • At least some of the software programs used to implement at least one of the embodiments described herein may be stored on a storage media or a device that is readable by a general or special purpose programmable device.
  • the software program code when read by the programmable device, which may also be referred to as a computing device, configures the programmable device to operate in a new, specific and predefined manner in order to perform at least one of the methods described herein.
  • programs associated with the systems and methods of the embodiments described herein may be capable of being distributed in a computer program product comprising a computer readable medium that bears computer usable instructions, such as program code, for one or more processors.
  • the program code may be preinstalled and embedded during manufacture and/or may be later installed as an update for an already deployed computing system.
  • the medium may be provided in various forms, including non-transitory forms such as, but not limited to, one or more diskettes, compact disks, tapes, memory chips, and magnetic and electronic storage.
  • the medium may be transitory in nature such as, but not limited to, wire-line transmissions, satellite transmissions, internet transmissions (e.g., downloads), media, digital and analog signals, and the like.
  • the computer useable instructions may also be in various formats, including compiled and non-compiled code.
  • Any module, unit, component, server, computer, terminal or computing device described herein that executes software instructions in accordance with the teachings herein may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape.
  • Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information, and which can be accessed by an application, module, or both. Any such computer storage media may be part of the device or accessible or connectable thereto.
  • the various embodiments described herein generally relate to a method of autonomously inspecting a bag containing a material to ensure a proper sealing operation.
  • the method relates to the use of artificial intelligence (AI), among others, to scan closed bags containing bulk material as they come out of the sealing process.
  • the system can capture images of the bag to identify whether a first element such as a seam line, or sealing line is accurately positioned.
  • the system can also capture images of the bag to identify whether a second element such as edge banding, which can be a crepe tape or a paper strip or a WPP strip that lies across the top edge of the bag, is well positioned, and that the sealing of the bag is adequate, in that there are no openings, and that the seam line is within certain tolerances.
  • the system 100 comprises a bag production and filling system 101 coupled to a bag sealing system 102.
  • the bag sealing system 102 can comprise, for example, a sewing machine such as an industrial sewing machine, a heat-sealing apparatus, or a folding apparatus.
  • the system 100 further comprises an imaging system 103.
  • the imaging system 103 can include at least one imaging apparatus, such as a camera or image sensor.
  • the imaging system 103 can be coupled to a processing unit or computing unit 104 located on a mobile device or computer.
  • the processing unit may also comprise a graphics processing unit (GPU) for processing images that are captured by the imaging system 103.
  • the imaging system 103 can include at least one imaging apparatus 202, such as a camera or image sensor.
  • the imaging system 103 can further include a lighting system 204 which can include a light or lamp 201.
  • the light or lamp 201 can provide a backlight, or direct light.
  • the imaging system 103 can be suspended on a frame or chassis 211 to maintain or ensure a consistent field-of-view.
  • the imaging system 103 can be positioned or fastened on a frame or chassis 211.
  • the imaging system 103 can optionally be enclosed or formed within an enclosure 208 to block external light and provide darkness or contrast in the image.
  • the suspension frame 211 or enclosure 208 can comprise an inlet 209 and an outlet 210.
  • the inlet and outlet can be configured to receive a conveyor belt or another automated track passing therethrough.
  • Sealed bags or objects can be conveyed or otherwise transported through the enclosure 208, and the imaging apparatus appropriately disposed to permit a portion of the sealed bags, such as the top portion, or the entire sealed bag, to pass in front of the imaging apparatus transversely to the optical axis of the imaging apparatus.
  • the camera 202 can be placed such that the imaging bed 206 is within a camera frame.
  • the conveyor belt may act as the imaging bed 206.
  • the background can act as the imaging bed 206.
  • the camera can be a digital camera, video camera, low light, infrared (IR) camera, depth sensor, ultraviolet (UV) camera, multispectral camera or image sensor.
  • the camera can capture a full frame image or line images.
  • the light or lamp 201 can provide additional lighting to the imaging bed 206 as the enclosure may block out external light from the surroundings.
  • the imaging system or enclosure 208 can further comprise a separator 205 which separates the camera from the light source, so that the light source does not saturate the camera, and blocks out any excessive light coming from the light source.
  • the imaging system can include a plurality of imaging apparatuses 202a, 202b.
  • the imaging system can further include a plurality of lighting systems 204a, 204b which can include a light or lamp 201a, 201b.
  • a first light or lamp 201a can be provided opposite a second light or lamp 201b, providing additional lighting on an imaging bed 206.
  • the imaging bed can be placed between the plurality of imaging apparatuses 202 and light sources 201, such that the imaging bed receives an equal amount of light from each light source 201.
  • Each of the light sources 201 can provide a backlight, or direct light. Alternatively, the light sources 201 can be used to provide backlight or direct light interchangeably.
  • the imaging system 103 can further comprise a separator 205 which separates the camera from the light source 201, so that the light source does not saturate the camera. In the case of a plurality of cameras, a plurality of separators 205a and 205b can be used.
  • the imaging system 103 can comprise a relatively larger imaging bed 206, allowing a plurality of objects to be imaged.
  • a plurality of imaging apparatuses 202a-202d can be used to image the entirety of the imaging bed 206.
  • a plurality of light sources 201a-201d can also be used to provide sufficient lighting for the entirety of the imaging bed 206.
  • Each of the lighting sources 201 can be separated from the imaging apparatuses 202 with separators 205.
  • the imaging system 103 can comprise a relatively larger imaging bed 206, allowing a plurality of objects to be imaged.
  • the lighting sources can be placed on a first side of the imaging bed 206, and the imaging apparatuses 202 can be placed on a second side of the imaging bed, opposite the first side.
  • Each of the lighting sources 201 can be separated from the remaining lighting sources with separators 205.
  • the imaging apparatuses 202 can also optionally be separated from the remaining imaging apparatuses with separators 205.
  • the imaging system 103 can comprise a transparent imaging bed 206 having the light sources 201 embedded within, allowing a plurality of objects to be underlit when being imaged.
  • the lighting sources can be placed underneath the imaging bed, providing up-lighting or backlighting to the objects to be imaged.
  • the lighting source could be placed above the imaging bed, providing downward-lighting or backlighting to the objects to be imaged.
  • the imaging bed 206 can comprise a rigid transparent or translucent material to keep the objects spaced apart from the lighting sources 201.
  • the imaging apparatuses 202 can be placed, for example, above the imaging bed 206.
  • the imaging apparatuses 202 can optionally be separated from the remaining imaging apparatuses with separators 205. It can be understood that the configurations of the imaging apparatus 202 and the lighting sources 201 can be interchanged and configured to image the objects in an efficient and accurate manner.
  • a method of sealing a bag, using the bag sealing system 102 is included herein.
  • the bag may comprise a first part including a top part, or area of the bag to be closed; and a second part, including the body of the bag.
  • the method of sealing the bag can include, optionally, trimming the bag where the bag is to be sealed.
  • the method of sealing the bag can further include applying or contacting a second element such as the edge banding or closing strip 302 to the first part of the bag; and sewing the edge banding or closing strip 302 to the first part of the bag.
  • the bags can be guided or supported from above, by way of bag upper section conveyance means.
  • second element can refer to edge banding or closing strip 302, or to a secondary seam line.
  • the bag sealing system 102 can comprise a sealing apparatus including, but not limited to: a sewing machine such as an industrial sewing machine, a heat-sealing apparatus, or a folding apparatus, or any other suitable sealing apparatus to seal or close the bag.
  • the bag sealing system 102 can be configured to attach a closing strip 302 to a first or top edge of a bag 304 by sewing, heat-sealing, folding-over, or otherwise closing a bag.
  • the sealing apparatus 303 affixes the closing strip 302 to the bag 304, for example by sewing, gluing or adhering, as shown for the first bag 304a, second bag 304b, and third bag 304c.
  • the closing strip 302 can be affixed along the top edge of the bag 304. In alternative embodiments, the closing strip can be affixed to any edge or opening of the bag 304.
  • the bag can exit the bag sealing system 102 once the bag is sealed, as shown by bag 304c. As the bag with the sewn portion 302 exits the bag sealing system 102, it enters the enclosure 208 of the imaging system 103 via the inlet 209.
  • the bag sealing apparatus 102 can comprise a conveyor belt or automated track for moving the objects from the sewing machine 303 to the inlet of the imaging system 103 and therethrough.
  • FIG. 4A provides a diagram showing the object within the imaging system 103 for image acquisition and inspection.
  • the bag 304 can be passed through the enclosure 208 via the inlet 209, and be placed on the imaging bed 206. Optionally, just a portion of the bag can be inspected, for example the top portion.
  • the lighting system 204 illuminates the bag with light sources 201.
  • the bag or object can be backlit or directly lit with the light sources 201 .
  • the bag or object is translucent such that it is possible to see the sewing patterns through the object or bag.
  • the lighting source can provide backlighting such that a first element such as an adhesion line or seam line 407, created during the bag sealing step, is visible by the imaging apparatus 202.
  • the imaging apparatus 202 can then capture an image or video of the bag or object 304.
  • the imaging apparatus can take an image of the complete object, or of a portion of the object using an apparatus such as a 2D camera.
  • a bag or object 304 can sometimes be warped during imaging.
  • when the bag is filled, it can be deposited on a conveyor towards the sewing/closing system.
  • the bag or object 304 can be held under tension when being filled.
  • the bag or object 304 can then be deposited to the conveyor optionally without tension toward the sewing/closing system.
  • the bag or object 304 can be tensioned for sewing/closing.
  • the bag or object 304 can be placed upright on the conveyor towards the imaging apparatus.
  • the bag or object may become warped by the different tensions previously applied or by movement of material inside the bag.
  • the image may also be blurry due to vibrations in the environment.
  • the bag or object can be outside of the focus of the imaging apparatus thus creating a blurring in the image, and negatively affecting the quality of the image taken.
  • the bag can be held up or kept under tension during the image acquisition steps.
  • the quality of the image can be controlled/normalized.
  • a suitable tensioning mechanism can be used to keep the bag or object under tension.
  • the tensioning mechanism can include: guidance rails on the conveyor, metal bars, robotic arms, pincers, railings, and the like.
  • the tensioning mechanism can be momentarily disengaged while the image is captured to prevent the tensioning mechanism from appearing in the image.
  • only a portion of the bag or object is in the frame of the imaging apparatus 202.
  • a line-by-line or line scan of the object or bag is taken by the imaging apparatus 202.
  • the bags being imaged may be quite large so it may not be convenient to image the complete bag.
  • a portion of the bag having at least one seam line, or at least a portion of the seam line can be imaged. This provides an advantage in terms of space required for the imaging system as bagging lines may not have the adequate space required for the imaging system.
  • the image or video taken by the imaging apparatus 202 can be sent, in real-time or near-real time, to the computing unit or processor 104 for processing.
  • once an image is received, the processor can apply an inspection model to identify accurate positioning of the seam line 407 with respect to the bag 304.
  • first element can refer to the adhesion line, seam line, fold line, edge banding or closing strip 302.
  • the inspection model applied by the processor can include: identifying the seam line 407, identifying the outline of the bag, identifying the upper limit of the bag; identifying the closing strip 302; identifying a bottom of the closing strip; measuring/determining the distance between the upper limit of the bag 304 and the bottom of the closing strip 302, across the entire width of the bag 304, and measuring the distance between the seam line 407 and the top/bottom limit of the closing strip 302.
  • a closing strip may not be necessary.
  • the inspection model applied by the processor can include identifying the outline of the bag, or identifying the upper limit of the bag; and identifying a secondary element.
  • secondary elements can include: a seam line without a closing strip, a simple fold, a double fold, or an adhesive closing strip, as shown in FIGS. 8A, 8B, 9A and 9B.
  • the processor can further be adapted to capture the image, control the lighting and image capture, analyse the image by applying the inspection model, and can actuate an actuator on the conveyor belt to discard an improperly sewn bag or object.
  • the inspection model can be a machine learning model, which can be trained to analyse the images and determine which bags are faulty and which are not.
  • the processor can determine that a bag is faulty if one or more of the following parameters are met: the upper limit of a bag is located less than a first threshold distance above the seam line 407; the upper limit of a bag is located less than 2 mm above the seam line 407; the closing strip 302 is absent; the seam line 407 is absent, incomplete, or crooked; the seam line 407 is located at a distance less than 1 mm from the bottom of the closing strip 302; the seam line 407 is located at a distance less than 2 mm from the top of the closing strip 302; the upper limit of a bag is located less than 2 mm above the lower limit of the closing strip 302. It can be understood that the distances can be amended as needed for various bag sizes and shapes.
  • the parameters can be met across the entire length of the bag for it to be considered "not faulty". If one or more of the listed inspected elements are torn, stretched, unravelled or otherwise damaged, the bag may be considered “faulty”.
  • when a bag enters the inspection system via the inlet 209, a camera can sense a leading edge of the bag and send a signal for the camera to start recording or capturing what is in its field of view.
  • An image or a plurality of images, or a video of the bag can be acquired by the camera.
  • the image data can be communicated to a processor 104 relying on the connection architecture.
  • the image can be passed to or through the inspection model, which may include a trained machine learning algorithm.
  • the machine learning algorithm used may be a convolutional neural network (CNN), or an algorithm designed toward semantic segmentation, including but not limited to: U-Nets, PSP-Nets, Deeplab, and Parse Nets, etc.
  • the algorithm can perform semantic segmentation according to the training it has been subjected to. Images with localized elements can be output from the algorithm. The distance measurement metrics between the detected elements and the established criteria for pass or fail can then be applied to determine if a particular bag is faulty or not.
  • training an algorithm can comprise the steps of gathering previous or historic images or image data; and sorting the previously captured or historic images as faulty or not faulty.
  • the training step can further comprise segmenting specific elements on both the faulty sample set and the non-faulty sample set.
  • the specific elements of the bag may include: bag portion, closing strip, seam line, label, bar code.
  • the specific elements of the bag may then be labeled or classified on the images.
  • the training step may further consist of feeding the untouched images to the algorithm along with the labeled and/or segmented images, allowing the algorithm to adjust and determine the difference between the two sets of samples in a process referred to, by a person skilled in the art, as training.
  • Training can be an iterative process in which the first cycles of training involve a limited number of segmented images. Once the algorithm has completed these first phases of training, more significant amounts of data are used in a process called reinforcement training, in which the initial capabilities of the algorithm are exploited to generate a preliminary segmentation of untouched images. Defects found in the automatically analysed bags can then be used to reinforce the training toward defective bags.
  • the outcome of the training step is an algorithm that can identify or determine specific trained elements on an image and return segmented images for the element detected.
  • FIG. 4B an example of a misplaced closing strip 302 is shown therein.
  • the closing strip 302 has been incorrectly placed on the bag 304.
  • the seam line 407 can be seen by the imaging apparatus.
  • the processor can then identify the outline of the bag, locate the upper limit of the bag; identify the closing strip 302; measure the distance between the upper limit of the bag 304 and the bottom of the closing strip 302, across the entire width of the bag 304, and measure the distance between the seam line 407 and the top/bottom limit of the closing strip 302.
  • the processor can determine that the closing strip 302 has been incorrectly placed on the bag 304 as the distance between the upper limit of the bag 304 and the bottom of the closing strip 302, is not consistent across the entire width of the bag 304.
  • the parameters can be met across the entire length of the bag for it to be considered "good". For example, in FIG. 4B the aforementioned parameters would be met for the right-hand side of the bag while not for the left-hand side, and that particular bag would thus be rejected.
  • FIG. 5A provides a flowchart showing a complete method of filling, sealing and inspecting a bag.
  • the method begins by filling the bag at step 501 .
  • the bag can be filled by any suitable filling techniques and with any suitable materials.
  • the bag is then sealed at step 502 by any of the sealing techniques previously described.
  • the image acquisition is completed using the imaging system 103.
  • the processor 410 can be configured to capture the image. Once an image is captured, the image can be pre-processed by a sorting algorithm at step 504.
  • the pre-sorting step 504 can involve using the output of the imaging system 103 to conduct a sorting algorithm.
  • an optional pre-sorting step 504 can be completed after the image acquisition step 503, and before the segmentation step 505.
  • the image can be first acquired using the imaging system, then classified or sorted by the sorting algorithm, and the pre-classified images can be sent to the inspection model for analysis.
  • An image sorting algorithm in machine learning can comprise a computer-implemented method for categorizing or arranging digital images based on specific characteristics, such as metadata, pixel patterns, and/or features extracted via convolutional neural networks (CNNs).
  • the sorting algorithm can receive a plurality of digital images as input and process each image to extract feature vectors using a trained sorting model.
  • the feature vectors can represent numerical representations of the image attributes, such as bag outer edges, bag outline, straightness of the seam line, etc.
  • the method further includes organizing the images into clusters, categories, or a ranked order based on the identified similarity or predefined sorting criteria.
  • the sorting algorithm can employ reinforcement learning or supervised learning to optimize sorting accuracy over successive iterations by adjusting weighting parameters or feature extraction methods based on user feedback or performance metrics.
  • the sorting algorithm can provide a relatively fast classification of images, so that the failed images do not need to enter the inspection system and segmentation steps.
  • the inspection method 505 can be trained to give adequate analysis of rejected bags and flag any trends to be corrected in the production line (e.g., skewed labels or crepe tapes). Therefore, the sorting algorithm 504 can be used to obtain a faster and more accurate result with the inspection software.
  • the quality of the inspection system can be dependent on the quality and/or quantity of the input data and training images.
  • FIG. 5B provides a flowchart showing the steps taken by the processor to analyse the image and conduct the image inspection 505.
  • the processor can identify the outline of the bag.
  • the processor can locate the upper limit of the bag.
  • the processor can identify a first element, such as a seam line.
  • the processor can identify a second element, such as but not limited to a closing strip, or label.
  • the processor can measure the distance between the upper limit of the bag 304 and the bottom of the second element such as closing strip 302. In a further embodiment, the measurements can be taken across the entire width of the bag 304.
  • FIGS. 6A to 6E provide photographs showing inspected bags which have failed the inspection.
  • the processor may determine that either: the upper limit of a bag is located less than a first threshold distance above the first element or seam line 407; the upper limit of a bag is located less than 2 mm above the first element or seam line 407; the second element or closing strip 302 is absent; the first element or seam line 407 is absent, incomplete, or crooked; the seam line 407 is located at a distance less than 1 mm from the bottom of the closing strip 302; the seam line 407 is located at a distance less than 2 mm from the top of the closing strip 302; or that the upper limit of a bag is located more than 2 mm above the lower limit of the closing strip 302.
  • the imaging system can be used for a plurality of auxiliary applications, including but not limited to: determining correct location of a label 902 sewn into the closure; reading, interpretation and confirmation of the contents of the label 902 such as date, SKU, barcode, QR code, etc.
  • the imaging system can also be used to determine the location of a bar code, or QR code.
  • the imaging system can be used to read the contents of the barcode or QR code and confirm that the code is accurate.
  • the imaging system can also be used to determine the location and reading and confirmation of conformity of information on the bag.
  • the label 902 can be on top of or underneath the closing strip, as shown in FIGS. 7A and 9B.


Abstract

Various embodiments are described herein for systems and methods for inspecting the seal on objects such as bags. An imaging system can be used for inspecting an object having at least one sealing element. The system can comprise an imaging apparatus having an optical axis for capturing an image; a light source located proximal to the imaging apparatus; and a processing unit coupled to the imaging apparatus configured to receive image data from the imaging apparatus. When the object having the at least one sealing element is appropriately disposed to permit a portion of the object to pass in front of the imaging apparatus transversely to the optical axis of the imaging apparatus, the light source illuminates the sealing element, and the imaging apparatus is configured to capture an image of the at least one sealing element.

Description

SYSTEM AND METHOD FOR AUTONOMOUSLY INSPECTING OBJECT CLOSURES
CROSS REFERENCE TO RELATED APPLICATION
[0001] This application claims priority from United States Provisional Patent Application No. 63/608,611 filed on December 11, 2023, which is incorporated herein by reference in its entirety.
FIELD
[0002] The various embodiments described herein generally relate to methods of autonomously inspecting bag closure systems.
BACKGROUND
[0003] Materials such as grains, seeds, animal food, biomass fuels, flours, pellets, plastics polymers, mineral powders, food produce, and milled products, etc. are typically stored and packaged in bags. The bags are typically constructed using materials such as paper, polyester (PET), woven polypropylene (WPP), polyethylene (PE), metalized polyester (MET PET), burlap, cotton or other natural or synthetic fabric. These materials are used due to their durability and strength, ensuring that the products inside the bags remain dry and are not destroyed during transport.
[0004] The materials used to construct the bags are also convenient for sealing the bags easily. For example, the bags can be sealed by sewing or fastening the opening shut. The bags can also be resealable by using zippers or slider fasteners to allow an end-user to open and re-seal the bags easily. The zippers or fasteners may also have to be sewn onto or into the bags.
[0005] One limitation of the sewing method used to seal the bags is that sewing may have to be done by an operator. This may be difficult since the bags containing the products may be heavy and difficult to maneuver when sewing. Additionally, it may be difficult to identify and inspect bags that were sealed using different methods, or by different operators. As such, there may be a large variability in the quality of bag closures.
[0006] Automated sealing techniques can also be used to seal the bags. However, these techniques can sometimes result in improperly sealed bags, which leads to leakage and loss of products during transit, or storage. This may result in fines, returns, complaints, reputational damage or issues, operational issues such as spills, health and safety hazards, downtime, or contamination. Therefore, there is a need for inspecting the bag closures prior to shipping or storing a bag.
SUMMARY OF VARIOUS EMBODIMENTS
[0007] In an aspect, in at least one embodiment described herein, there is provided an imaging system for inspecting an object having at least one sealing element. The system can comprise: an imaging apparatus having an optical axis for capturing an image; a light source located proximal to the imaging apparatus; and a processing unit coupled to the imaging apparatus configured to receive image data from the imaging apparatus. In at least one embodiment, when the object having the at least one sealing element is appropriately disposed to permit a portion of the object to pass in front of the imaging apparatus transversely to the optical axis of the imaging apparatus, the light source illuminates the sealing element, and the imaging apparatus is configured to capture an image of the at least one sealing element. In at least one embodiment, the processing unit receives the image from the imaging apparatus and identifies parameters of the at least one sealing element with respect to the object.
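By way of a purely illustrative sketch in Python (the camera, light and object-detection trigger below are hypothetical stand-ins and not part of this disclosure), the capture-and-hand-off arrangement described above could be organized as follows:

    from dataclasses import dataclass
    from typing import Callable, Optional
    import numpy as np

    @dataclass
    class CapturedImage:
        frame: np.ndarray   # pixel data from the imaging apparatus
        object_id: str      # identifier for the bag or object in view

    def acquisition_loop(camera, light,
                         detect_object: Callable[[], Optional[str]],
                         send_to_processing: Callable[[CapturedImage], None]) -> None:
        """Illuminate the sealing element, capture a frame whenever an object passes
        in front of the imaging apparatus, and forward the image data to the processing unit."""
        light.on()                           # backlight or direct light, per embodiment
        try:
            while True:
                object_id = detect_object()  # e.g. a trigger at the enclosure inlet
                if object_id is None:        # nothing in the field of view yet
                    continue
                frame = camera.grab_frame()  # full-frame or line-scan image
                send_to_processing(CapturedImage(frame=frame, object_id=object_id))
        finally:
            light.off()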
[0008] In at least one embodiment, the imaging apparatus can be formed within an enclosure, such that the enclosure blocks external light and provides contrast in the image of the sealing element. In at least one embodiment, the imaging system further can comprise a separator positioned between the imaging apparatus and the light source, such that the separator blocks out excessive light from the light source. In at least one embodiment, the enclosure can comprise an inlet for receiving the object and outlet for retrieving the object after the image from the imaging apparatus is captured.
[0009] In at least one embodiment, the processing unit can comprise an inspection model for analyzing the image using a trained machine learning algorithm and identifying, using the image, the parameters of the at least one sealing element with respect to the object. In at least one embodiment, the machine learning algorithm can comprise at least one of: a convolutional neural network (CNN), and semantic segmentation. In at least one embodiment, the semantic segmentation can comprise at least one of: U-Nets, PSP-Nets, Deeplab, and Parse Nets. In at least one embodiment, the inspection model can be trained to analyse the image and determine whether the sealing element of the object is faulty or proper. In at least one embodiment, the sealing element can comprise a seam line and/or a closing strip.
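As a minimal sketch only (the layer sizes, single-channel input and four class labels are assumptions for illustration, not the network actually used), a small U-Net-style segmentation model of the kind referred to above could be expressed in PyTorch as:

    import torch
    import torch.nn as nn

    def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )

    class TinySegNet(nn.Module):
        """Labels each pixel with a class such as background, bag, closing strip, or seam line."""
        def __init__(self, num_classes: int = 4):
            super().__init__()
            self.enc1 = conv_block(1, 16)          # grayscale input image
            self.pool = nn.MaxPool2d(2)
            self.enc2 = conv_block(16, 32)
            self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
            self.dec1 = conv_block(32, 16)         # concatenated skip connection
            self.head = nn.Conv2d(16, num_classes, kernel_size=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            e1 = self.enc1(x)                      # (B, 16, H, W)
            e2 = self.enc2(self.pool(e1))          # (B, 32, H/2, W/2)
            d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
            return self.head(d1)                   # per-pixel class logits

For example, logits = TinySegNet()(torch.randn(1, 1, 256, 512)) returns per-pixel class scores for one 256x512 image.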
[0010] In at least one embodiment, the inspection model can comprise: identifying, by the processing unit, the seam line; identifying, by the processing unit, an outline or outer boundary of the object, identifying an upper limit of the object; identifying, by the processing unit, the closing strip; identifying, by the processing unit, a bottom of the closing strip; determining, by the processing unit, a distance between the upper limit of the object and the bottom of the closing strip; and determining, by the processing unit, the distance between the seam line and a top or bottom limit of the closing strip.
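The measurements listed above can be sketched as follows, assuming per-pixel masks such as those produced by a segmentation network; the class identifiers, the millimetre-per-pixel calibration and the per-column handling are illustrative assumptions rather than the method as claimed:

    import numpy as np

    MM_PER_PIXEL = 0.2   # hypothetical calibration of the imaging apparatus

    def column_profile(mask: np.ndarray, take_top: bool) -> np.ndarray:
        """Per image column, return the top-most (or bottom-most) row where the mask is set;
        NaN where the element is absent in that column (row 0 is the top of the image)."""
        if take_top:
            rows = np.argmax(mask, axis=0).astype(float)
        else:
            rows = (mask.shape[0] - 1 - np.argmax(mask[::-1, :], axis=0)).astype(float)
        rows[~mask.any(axis=0)] = np.nan
        return rows

    def measure(seg: np.ndarray) -> dict:
        """seg: (H, W) array of class ids (0 background, 1 bag, 2 closing strip, 3 seam line)."""
        bag, strip, seam = (seg == 1), (seg == 2), (seg == 3)
        bag_top   = column_profile(bag,   take_top=True)    # upper limit of the bag
        strip_top = column_profile(strip, take_top=True)    # top of the closing strip
        strip_bot = column_profile(strip, take_top=False)   # bottom of the closing strip
        seam_row  = column_profile(seam,  take_top=True)    # seam line position
        return {
            "seam_below_bag_top_mm":      (seam_row - bag_top) * MM_PER_PIXEL,
            "bag_top_to_strip_bottom_mm": (strip_bot - bag_top) * MM_PER_PIXEL,
            "seam_to_strip_bottom_mm":    (strip_bot - seam_row) * MM_PER_PIXEL,
            "seam_to_strip_top_mm":       (seam_row - strip_top) * MM_PER_PIXEL,
        }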
[0011] In at least one embodiment, the processor can determine that the object is faulty if one or more of the following parameters are met: an upper limit of the object is located less than a first threshold distance above the sealing element, the sealing element is absent, incomplete, or crooked; the sealing element is located at a second threshold distance from the bottom of the closing strip; the seam line is located at a third threshold distance from the top of the closing strip; and the upper limit of the object is located less than a fourth threshold distance above the lower limit of the closing strip.
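A hedged sketch of the corresponding pass/fail decision, consuming the dictionary of per-column distances from the measurement sketch above; the numeric thresholds simply echo the 2 mm and 1 mm examples given elsewhere in this description and would in practice be tuned per bag size and shape:

    import numpy as np

    def is_faulty(m: dict) -> tuple:
        """Return (faulty, reasons), checking every measured column of the bag."""
        reasons = []
        if any(np.isnan(v).any() for v in m.values()):
            reasons.append("an inspected element is absent or incomplete in part of the image")
        if np.nanmin(m["seam_below_bag_top_mm"]) < 2.0:
            reasons.append("upper limit of the bag less than 2 mm above the seam line")
        if np.nanmin(m["seam_to_strip_bottom_mm"]) < 1.0:
            reasons.append("seam line less than 1 mm from the bottom of the closing strip")
        if np.nanmin(m["seam_to_strip_top_mm"]) < 2.0:
            reasons.append("seam line less than 2 mm from the top of the closing strip")
        return (len(reasons) > 0, reasons)

An object flagged this way could then be diverted by the actuator described in the next paragraph.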
[0012] In at least one embodiment, if the processing unit determines that the sealing element of the object is faulty, the processor actuating an actuator to separate the faulty object from another object which is determined to be proper.
[0013] In an aspect, in at least one embodiment described herein, there is provided a method for inspecting an object. The method can comprise: capturing an image of the object; providing the image of the object to a processing unit; and applying, via the processing unit, an inspection model to the image of the object. The inspection model can comprise at least one of: identifying, by the processing unit, using the image of the object, an outline of the object; identifying, by the processing unit, using the image of the object, an upper limit of the object; identifying, by the processing unit, using the image of the object, at least a first element on the object; determining, by the processing unit, whether the first element on the object is positioned within an acceptable tolerance with respect to the outline of the object, and the upper limit of the object; and determining, by the processing unit, whether the first element on the object is positioned within an acceptable tolerance with respect to the outline of the object, and the upper limit of the object.
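Tying the steps of this method together, the following sketch assumes the hypothetical helpers from the earlier sketches (TinySegNet, measure, is_faulty) and a grayscale frame from the imaging apparatus; it illustrates the flow only and is not the claimed implementation:

    import numpy as np
    import torch

    def inspect_object(frame: np.ndarray, model: torch.nn.Module) -> dict:
        """Capture -> segment -> measure -> decide, for one imaged object."""
        x = torch.from_numpy(frame).float().unsqueeze(0).unsqueeze(0) / 255.0  # (1, 1, H, W)
        with torch.no_grad():
            seg = model(x).argmax(dim=1).squeeze(0).numpy()   # per-pixel class ids
        measurements = measure(seg)          # distances defined in the earlier sketch
        faulty, reasons = is_faulty(measurements)
        return {"faulty": faulty, "reasons": reasons, "measurements": measurements}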
[0014] In at least one embodiment, the method can further comprise: locating, by the processor, using the image of the object, a second element on the object; the second element having a bottom edge connected to the object, and a top edge opposing the bottom edge; determining, by the processor, using the image of the object, a distance between the upper limit of the object and the bottom edge of the second element; and identifying, by the processor, using the image of the object, a distance between the first element and the bottom edge of the second element.
[0015] In at least one embodiment, the method can further comprise: rejecting an object if at least one or more of the following conditions are met: the first element is absent; the second element is absent; the upper limit of the object is located less than a first threshold distance above the first element; the first element is located at a distance less than a second threshold distance from the bottom edge of the second element; the first element is located at a distance less than a third threshold distance from the top edge of the second element; and the upper limit of an object is located less than a fourth threshold distance below the top edge of the second element.
[0016] In at least one embodiment, the inspection model can comprise at least one of: a convolutional neural network (CNN), and semantic segmentation. In at least one embodiment, the semantic segmentation can comprise at least one of: U-Nets, PSP-Nets, Deeplab, and Parse Nets.
[0017] In at least one embodiment, the method can further comprise: training the processing unit to analyse the image and determine whether the sealing element of the object is faulty or proper.
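A minimal training sketch, assuming PyTorch and pixel-level label masks prepared as described elsewhere in this document (historic images sorted as faulty or not faulty and segmented into elements such as bag, closing strip and seam line); it is not the training procedure of the application itself:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader

    def train(model: nn.Module, loader: DataLoader, epochs: int = 10, lr: float = 1e-3) -> None:
        """Fit the segmentation network on (image, label_mask) pairs."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()           # per-pixel classification loss
        model.train()
        for epoch in range(epochs):
            for images, masks in loader:          # images: (B,1,H,W) float, masks: (B,H,W) long class ids
                opt.zero_grad()
                loss = loss_fn(model(images), masks)
                loss.backward()
                opt.step()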
[0018] In at least one embodiment, if the processing unit determines that the sealing element of the object is faulty, the processor actuating an actuator to separate the faulty object from another object which is determined to be proper.
[0019] In at least one embodiment, the processing unit determines that the sealing element of the object is proper if at least: the identified sealing element is parallel to the upper limit of the object; the top edge of the second element is aligned with the outline of the object; and/or the top edge of the second element is aligned with the upper limit of the object.
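The "parallel" and "aligned" conditions above can be sketched as simple geometric tests on the per-column row profiles from the earlier measurement sketch; the pixel tolerances are illustrative assumptions:

    import numpy as np

    def is_parallel(profile_a: np.ndarray, profile_b: np.ndarray, tol_px: float = 3.0) -> bool:
        """Two element profiles are treated as parallel if their vertical gap is
        nearly constant across the width of the object."""
        gap = profile_a - profile_b
        gap = gap[~np.isnan(gap)]
        return gap.size > 0 and (gap.max() - gap.min()) <= tol_px

    def is_aligned(profile_a: np.ndarray, profile_b: np.ndarray, tol_px: float = 3.0) -> bool:
        """Two profiles are aligned if they sit at nearly the same height everywhere."""
        diff = np.abs(profile_a - profile_b)
        diff = diff[~np.isnan(diff)]
        return diff.size > 0 and diff.max() <= tol_px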
[0020] In at least one embodiment, the method further comprises a pre-sorting step after the image capturing step, and before the application of the inspection model step. In at least one embodiment, the pre-sorting step can comprise: identifying images that are obvious fails; and sending the remaining images to the inspection system for application of the inspection model step.
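As a hedged sketch of this optional pre-sorting step, a lightweight classifier can screen out images that are obvious fails so that only the remaining images reach the full inspection model; the tiny network below and its decision threshold are assumptions, not the sorting model of the application:

    import torch
    import torch.nn as nn

    class PreSorter(nn.Module):
        """Fast binary screen: 'obvious fail' vs 'send to the inspection model'."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
                nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(16, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            f = self.features(x).flatten(1)                       # (B, 16) pooled features
            return torch.sigmoid(self.classifier(f)).squeeze(1)   # probability of an obvious fail

    def pre_sort(images: torch.Tensor, sorter: PreSorter, threshold: float = 0.9):
        """Split a batch into obvious fails and images forwarded to inspection."""
        with torch.no_grad():
            p_fail = sorter(images)
        return images[p_fail >= threshold], images[p_fail < threshold]

BRIEF DESCRIPTION OF THE DRAWINGS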
[0021] For a better understanding of the various embodiments described herein, and to show more clearly how these various embodiments may be carried into effect, reference will be made, by way of example, to the accompanying drawings which show at least one example embodiment, and in which:
FIG. 1 is a block diagram of a system for autonomously inspecting a bag;
FIGS. 2A to 2E are diagrams of alternative embodiments of the imaging system;
FIG. 3 is a diagram of the bag sealing system, imaging system, and computing unit;
FIGS. 4A and 4B are diagrams showing the bag within the imaging system for inspection;
FIG. 5A is a flowchart showing a complete method of filling, sealing and inspecting a bag;
FIG. 5B is a flowchart showing a method of autonomously inspecting a bag;
FIGS. 6A to 6E are photographs showing inspected bags which have failed the inspection;
FIGS. 7A to 7E are photographs showing inspected bags which have passed the inspection;
FIG. 8A is a photograph showing a folded bag;
FIG. 8B is a photograph showing a folded bag;
FIG. 9A is a diagram showing a folded bag;
FIG. 9B is a diagram showing a folded bag having a label element; and
FIGS. 10A and 10B are photographs showing pre-sorted bags which have failed the pre-sorting test.
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0022] Various apparatuses or processes will be described below to provide an example of an embodiment of each claimed invention. No embodiment described below limits any claimed invention and any claimed invention may cover processes or apparatuses that differ from those described below. The claimed inventions are not limited to apparatuses or processes having all of the features of any one apparatus or process described below or to features common to multiple or all of the apparatuses or processes described below. It is possible that an apparatus or process described below is not an embodiment of any claimed invention. Any invention disclosed in an apparatus or process described below that is not claimed in this document may be the subject matter of another protective instrument, for example, a continuing patent application, and the applicants, inventors or owners do not intend to abandon, disclaim or dedicate to the public any such invention by its disclosure in this document.
[0023] Furthermore, it will be appreciated that for simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the embodiments described herein. Also, the description is not to be considered as limiting the scope of the embodiments described herein.
[0024] The following detailed description is merely exemplary in nature and is not intended to limit the described embodiments of the application and uses of the described embodiments. As used herein, the word “exemplary” or “illustrative” means “serving as an example, instance, or illustration.” Any implementation described herein as “exemplary” or “illustrative” is not necessarily to be construed as preferred or advantageous over other implementations. All of the implementations described below are exemplary implementations provided to enable persons skilled in the art to practice the disclosure and are not intended to limit the scope of the appended claims. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description.
[0025] It should also be noted that the terms “coupled” or “coupling” as used herein can have several different meanings depending on the context in which these terms are used. For example, the terms coupled or coupling can have a mechanical, electrical or communicative connotation. For example, as used herein, the terms coupled or coupling can indicate that two elements or devices can be directly connected to one another or connected to one another through one or more intermediate elements or devices via an electrical element, an electrical signal, a light signal or a mechanical element depending on the particular context.
[0026] It should also be noted that, as used herein, the wording “and/or” is intended to represent an inclusive-or. That is, “X and/or Y” is intended to mean X or Y or both X and Y, for example. As a further example, “X, Y, and/or Z” is intended to mean X or Y or Z or any combination thereof.
[0027] It should be noted that terms of degree such as “substantially”, “about” and “approximately” as used herein mean a reasonable amount of deviation of the modified term such that the end result is not significantly changed. These terms of degree may also be construed as including a deviation of the modified term, such as by 1%, 2%, 5% or 10%, for example, if this deviation does not negate the meaning of the term it modifies.

[0028] Furthermore, the recitation of numerical ranges by endpoints herein includes all numbers and fractions subsumed within that range (e.g., 1 to 5 includes 1, 1.5, 2, 2.75, 3, 3.90, 4, and 5). It is also to be understood that all numbers and fractions thereof are presumed to be modified by the term “about” which means a variation of up to a certain amount of the number to which reference is being made if the end result is not significantly changed, such as 1%, 2%, 5%, or 10%, for example.
[0029] Reference throughout this specification to “one embodiment”, “an embodiment”, “at least one embodiment” or “some embodiments” means that one or more particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments, unless otherwise specified to be not combinable or to be alternative options.
[0030] Similarly, throughout this specification and the appended claims the term “communicative” as in “communicative pathway”, “communicative coupling”, and in variants such as “communicatively coupled” is generally used to refer to any engineered arrangement for transferring and/or exchanging information. Examples of communicative pathways include, but are not limited to, electrically conductive pathways (e.g., electrically conductive wires, physiological signal conduction), electromagnetically radiative pathways (e.g., radio waves, optical signals, etc.), or any combination thereof. Examples of communicative couplings include, but are not limited to, electrical couplings, magnetic couplings, radio couplings, optical couplings or any combination thereof.
[0031] A portion of the example embodiments of the systems, devices, or methods described in accordance with the teachings herein may be implemented as a combination of hardware or software. For example, a portion of the embodiments described herein may be implemented, at least in part, by using one or more computer programs, executing on one or more programmable devices comprising at least one processing element, and at least one data storage element (including volatile and/or non-volatile memory). These devices may also have at least one input device (e.g., a keyboard, a mouse, a touchscreen, an input pin, an input port and the like for providing at least one input such as an input signal, for example) and at least one output device (e.g., a display screen, a printer, a wireless radio, an output port, an output pin and the like for providing at least one output such as an output signal, for example) depending on the nature of the device.
[0032] It should also be noted that there may be some elements that are used to implement at least part of the embodiments described herein that may be implemented via software that is written in a high-level procedural or object-oriented programming language. The program code may be written in C, C++ or any other suitable programming language and may comprise modules or classes, as is known to those skilled in object-oriented programming. Alternatively, or in addition thereto, some of these elements implemented via software may be written in assembly language, machine language, or firmware as needed.
[0033] At least some of the software programs used to implement at least one of the embodiments described herein may be stored on a storage media or a device that is readable by a general or special purpose programmable device. The software program code, when read by the programmable device, which may also be referred to as a computing device, configures the programmable device to operate in a new, specific and predefined manner in order to perform at least one of the methods described herein.
[0034] Furthermore, at least some of the programs associated with the systems and methods of the embodiments described herein may be capable of being distributed in a computer program product comprising a computer readable medium that bears computer usable instructions, such as program code, for one or more processors. The program code may be preinstalled and embedded during manufacture and/or may be later installed as an update for an already deployed computing system. The medium may be provided in various forms, including non- transitory forms such as, but not limited to, one or more diskettes, compact disks, tapes, memory chips, and magnetic and electronic storage. In alternative embodiments, the medium may be transitory in nature such as, but not limited to, wire-line transmissions, satellite transmissions, internet transmissions (e.g., downloads), media, digital and analog signals, and the like. The computer useable instructions may also be in various formats, including compiled and non-compiled code.
[0035] Any module, unit, component, server, computer, terminal or computing device described herein that executes software instructions in accordance with the teachings herein may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information, and which can be accessed by an application, module, or both. Any such computer storage media may be part of the device or accessible or connectable thereto.
[0036] The various embodiments described herein generally relate to a method of autonomously inspecting a bag containing a material to ensure a proper sealing operation. In one embodiment, the method relates to the use of artificial intelligence (AI), among other techniques, to scan closed bags containing bulk material as they come out of the sealing process. The system can capture images of the bag to identify whether a first element, such as a seam line or sealing line, is accurately positioned. The system can also capture images of the bag to identify whether a second element, such as edge banding, which can be a crepe tape, a paper strip or a WPP strip that lies across the top edge of the bag, is well positioned, and whether the sealing of the bag is adequate, in that there are no openings and the seam line is within certain tolerances.
[0037] Referring now to FIG. 1, shown therein is a block diagram of a system 100 for autonomously inspecting a bag. The system 100 comprises a bag production and filling system 101 coupled to a bag sealing system 102. The bag sealing system 102 can comprise, for example, a sewing machine such as an industrial sewing machine, a heat-sealing apparatus, or a folding apparatus. The system 100 further comprises an imaging system 103. The imaging system 103 can include at least one imaging apparatus, such as a camera or an image sensor. The imaging system 103 can be coupled to a processing unit or computing unit 104 located on a mobile device or computer. The processing unit may also comprise a graphics processing unit (GPU) for processing images that are captured by the imaging system 103.
[0038] Turning now to FIG. 2A, shown therein is a block diagram of an imaging system 103. The imaging system 103 can include at least one imaging apparatus 202, such as a camera or an image sensor. The imaging system 103 can further include a lighting system 204 which can include a light or lamp 201. In some examples, the light or lamp 201 can provide a backlight or direct light.
[0039] The imaging system 103 can be suspended on a frame or chassis 211 to maintain or ensure a consistent field-of-view. In one embodiment, the imaging system 103 can be positioned or fastened on a frame or chassis 211. The imaging system 103 can optionally be enclosed or formed within an enclosure 208 to block external light and provide darkness or contrast in the image.
[0040] The suspension frame 211 or enclosure 208 can comprise an inlet 209 and an outlet 210. The inlet and outlet can be configured to receive a conveyor belt or another automated track passing therethrough. Sealed bags or objects can be conveyed or otherwise transported through the enclosure 208, with the imaging apparatus appropriately disposed to permit a portion of the sealed bags, such as the top portion, or the entire sealed bag, to pass in front of the imaging apparatus transversely to the optical axis of the imaging apparatus. The camera 202 can be placed such that the imaging bed 206 is within a camera frame.
[0041] In the embodiment where the bag or object is lying down in a horizontal plane, the conveyor belt may act as the imaging bed 206. In other embodiments, such as when the bag or object is standing or in a vertical position, the background can act as the imaging bed 206.
[0042] The camera can be a digital camera, video camera, low-light camera, infrared (IR) camera, depth sensor, ultraviolet (UV) camera, multispectral camera or image sensor. The camera can capture a full frame image or line images. The light or lamp 201 can provide additional lighting to the imaging bed 206 as the enclosure may block out external light from the surroundings. The imaging system or enclosure 208 can further comprise a separator 205 which separates the camera from the light source, so that the light source does not saturate the camera, and blocks out any excessive light coming from the light source.
[0043] Turning now to FIG. 2B, shown therein is a block diagram of an alternative embodiment of imaging system 103. In another embodiment, the imaging system can include a plurality of imaging apparatuses 202a, 202b. The imaging system can further include a plurality of lighting systems 204a, 204b which can include a light or lamp 201a, 201b. In some examples, a first light or lamp 201a can be provided opposite a second light or lamp 201b, providing additional lighting on an imaging bed 206. In one example, the imaging bed can be placed between the plurality of imaging apparatuses 202 and light sources 201, such that the imaging bed receives an equal amount of light from each light source 201. Each of the light sources 201 can provide a backlight or direct light. Alternatively, the light sources 201 can be used to provide backlight or direct light interchangeably. The imaging system 103 can further comprise a separator 205 which separates the camera from the light source 201, so that the light source does not saturate the camera. In the case of a plurality of cameras, a plurality of separators 205a and 205b can be used.

[0044] In another embodiment, and as shown in FIG. 2C, the imaging system 103 can comprise a relatively larger imaging bed 206, allowing a plurality of objects to be imaged. In this embodiment, a plurality of imaging apparatuses 202a-202d can be used to image the entirety of the imaging bed 206. A plurality of light sources 201a-201d can also be used to provide sufficient lighting for the entirety of the imaging bed 206. Each of the lighting sources 201 can be separated from the imaging apparatuses 202 with separators 205.
[0045] In another embodiment, and as shown in FIG. 2D, the imaging system 103 can comprise a relatively larger imaging bed 206, allowing a plurality of objects to be imaged. In this embodiment, the lighting sources can be placed on a first side of the imaging bed 206, and the imaging apparatuses 202 can be placed on a second side of the imaging bed, opposite the first side. Each of the lighting sources 201 can be separated from the remaining lighting sources with separators 205. The imaging apparatuses 202 can also optionally be separated from the remaining imaging apparatuses with separators 205.
[0046] In another embodiment, and as shown in FIG. 2E, the imaging system 103 can comprise a transparent imaging bed 206 having the light sources 201 embedded within, allowing a plurality of objects to be underlit when being imaged. In this embodiment, the lighting sources can be placed underneath the imaging bed, providing up-lighting or backlighting to the objects to be imaged. In another embodiment, the lighting source could be placed above the imaging bed, providing downward-lighting or backlighting to the objects to be imaged. The imaging bed 206 can comprise a rigid transparent or translucent material to keep the objects spaced apart from the lighting sources 201. The imaging apparatuses 202 can be placed, for example, above the imaging bed 206. The imaging apparatuses 202 can optionally be separated from the remaining imaging apparatuses with separators 205.

[0047] It can be understood that the configurations of the imaging apparatus 202 and the lighting sources 201 can be interchanged and configured to image the objects in an efficient and accurate manner.
[0048] Turning now to FIG. 3, the bag sealing system 102, imaging system 103, and computing unit 104 are shown. In one embodiment, a method of sealing a bag using the bag sealing system 102 is described herein. The bag may comprise a first part, including a top part or area of the bag to be closed, and a second part, including the body of the bag. The method of sealing the bag can include, optionally, trimming the bag where the bag is to be sealed. The method of sealing the bag can further include applying or contacting a second element such as the edge banding or closing strip 302 to the first part of the bag; and sewing the edge banding or closing strip 302 to the first part of the bag. In one embodiment, the bags can be guided or supported from above, by way of bag upper section conveyance means.
[0049] The term “second element” can refer to edge banding or closing strip 302, or to a secondary seam line.
[0050] The bag sealing system 102 can comprise a sealing apparatus including, but not limited to: a sewing machine such as an industrial sewing machine, a heat-sealing apparatus, or a folding apparatus, or any other suitable sealing apparatus to seal or close the bag. The bag sealing system 102 can be configured to attach a closing strip 302 to a first or top edge of a bag 304 by sewing, heat-sealing, folding-over, or otherwise closing a bag. As the bag moves through the bag sealing system, it can pass under the sealing apparatus 303, which affixes the closing strip 302 to the bag 304, for example by sewing, gluing or adhering, as shown for the first bag 304a, second bag 304b, and third bag 304c. In one embodiment, the closing strip 302 can be affixed along the top edge of the bag 304. In alternative embodiments, the closing strip can be affixed to any edge or opening of the bag 304.

[0051] The bag can exit the bag sealing system 102 once the bag is sealed, as shown by bag 304c. As the bag with the sewn portion 302 exits the bag sealing system 102, it enters the enclosure 208 of the imaging system 103 via the inlet 209. In one embodiment, the bag sealing system 102 can comprise a conveyor belt or automated track for moving the objects from the sewing machine 303 to the inlet of the imaging system 103 and therethrough.
[0052] FIG. 4A provides a diagram showing the object within the imaging system 103 for image acquisition and inspection. The bag 304 can be passed through the enclosure 208 via the inlet 209, and be placed on the imaging bed 206. Optionally, just a portion of the bag can be inspected, for example the top portion. The lighting system 204 illuminates the bag with light sources 201. The bag or object can be backlit or directly lit with the light sources 201. In one embodiment, the bag or object is translucent such that it is possible to see the sewing patterns through the object or bag. When the bag 304 or object is translucent, the lighting source can provide backlighting such that a first element, such as an adhesion line or seam line 407 created during the bag sealing step, is visible to the imaging apparatus 202.
[0053] The imaging apparatus 202 can then capture an image or video of the bag or object 304. In one embodiment, the imaging apparatus can take an image of the complete object, or of a portion of the object using an apparatus such as a 2D camera.
[0054] When installing the inspection system on an existing production line, a bag or object 304 can sometimes be warped during imaging. When the bag is filled, it can be deposited on a conveyor towards the sewing/closing system. In one embodiment, the bag or object 304 can be held under tension when being filled. The bag or object 304 can then be deposited on the conveyor, optionally without tension, toward the sewing/closing system. The bag or object 304 can be tensioned for sewing/closing. Afterwards, the bag or object 304 can be placed upright on the conveyor towards the imaging apparatus. In some instances, the bag or object may become warped by the different tensions previously applied or by movement of material inside the bag. The image may also be blurry due to vibrations in the environment. Additionally, the bag or object can be outside of the focus of the imaging apparatus, thus creating blurring in the image and negatively affecting the quality of the image taken. In some embodiments, to overcome the blurring in the image, the bag can be held up or kept under tension during the image acquisition steps. Thus, by reducing the vibrations or isolating the vibrations, the quality of the image can be controlled/normalized.
[0055] A suitable tensioning mechanism can be used to keep the bag or object under tension. In some embodiments, the tensioning mechanism can include: guidance rails on the conveyor, metal bars, robotic arms, pincers, railings, and the like. In some embodiments the tensioning mechanism can be momentarily disengaged while the image is captured to prevent the tensioning mechanism from appearing in the image.
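As a non-limiting illustration of how the blurring problem can also be detected in software, the following sketch flags frames whose variance of the Laplacian falls below a threshold. The use of OpenCV, the grayscale input and the threshold value are assumptions made for this example only; the disclosure does not prescribe a particular focus metric.

```python
import cv2

def is_blurry(image_path: str, threshold: float = 100.0) -> bool:
    """Return True if the image appears too blurry to inspect reliably.

    The variance of the Laplacian is a common focus metric: sharp edges
    produce a high variance, while vibration or out-of-focus capture
    flattens it. The threshold is illustrative and would be tuned per line.
    """
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise FileNotFoundError(image_path)
    focus_measure = cv2.Laplacian(image, cv2.CV_64F).var()
    return focus_measure < threshold
```

A blurry frame could, for example, trigger a re-capture or route the bag to manual review rather than proceeding to inspection.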
[0056] In one embodiment, only a portion of the bag or object is in the frame of the imaging apparatus 202. In this embodiment, a line-by-line or line scan of the object or bag is taken by the imaging apparatus 202. In some examples, the bags being imaged may be quite large, so it may not be convenient to image the complete bag. In these cases, a portion of the bag having at least one seam line, or at least a portion of the seam line, can be imaged. This provides an advantage in terms of the space required for the imaging system, as bagging lines may not have adequate space for a system that images the complete bag. Once the bag or object has been imaged by the imaging apparatus 202, the bag or object can pass through the enclosure 208 and exit via the outlet 210.
[0057] The image or video taken by the imaging apparatus 202 can be sent, in real-time or near-real time, to the computing unit or processor 104 for processing. The processor, once an image is received, can apply an inspection model to identify accurate positioning of the seam line 407 with respect to the bag 304.

[0058] The term “first element” can refer to the adhesion line, seam line, fold line, edge banding or closing strip 302.
[0059] In one embodiment, the inspection model applied by the processor can include: identifying the seam line 407; identifying the outline of the bag; identifying the upper limit of the bag; identifying the closing strip 302; identifying a bottom of the closing strip; measuring/determining the distance between the upper limit of the bag 304 and the bottom of the closing strip 302, across the entire width of the bag 304; and measuring the distance between the seam line 407 and the top/bottom limit of the closing strip 302.
[0060] In some embodiments, and as shown in FIG. 8, a closing strip may not be necessary. In this embodiment, the inspection model applied by the processor can include identifying the outline of the bag, or identifying the upper limit of the bag; and identifying a secondary element. Examples of secondary elements can include: a seam line without a closing strip, a simple fold, a double fold, or an adhesive closing strip, as shown in FIGs. 8A, 8B, 9A and 9B.
[0061] The processor can further be adapted to capture the image, control the lighting and image capture, analyse the image by applying the inspection model, and can actuate an actuator on the conveyor belt to discard an improperly sewn bag or object. The inspection model can be a machine learning model, which can be trained to analyse the images and determine which bags are faulty and which are not. In one embodiment, the processor can determine that a bag is faulty if one or more of the following parameters are met: the upper limit of a bag is located less than a first threshold distance above the seam line 407; the upper limit of a bag is located less than 2 mm above the seam line 407; the closing strip 302 is absent; the seam line 407 is absent, incomplete, or crooked; the seam line 407 is located at a distance less than 1 mm from the bottom of the closing strip 302; the seam line 407 is located at a distance less than 2 mm from the top of the closing strip 302; the upper limit of a bag is located less than 2 mm above the lower limit of the closing strip 302. It can be understood that the distances can be amended as needed for various bag sizes and shapes. In one embodiment, the parameters must be met across the entire length of the bag for it to be considered “not faulty”. If one or more of the listed inspected elements are torn, stretched, unravelled or otherwise damaged, the bag may be considered “faulty”.
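As a non-limiting illustration of the pass/fail criteria listed above, the following sketch applies the example tolerances to distances that are assumed to have already been measured from the image. The field names and the 1 mm / 2 mm thresholds mirror the examples in this paragraph and would be adjusted for other bag sizes and shapes.

```python
from dataclasses import dataclass

@dataclass
class ClosureMeasurements:
    # Distances in millimetres, measured at one position across the bag width.
    bag_top_above_seam: float          # upper limit of bag relative to seam line
    seam_to_strip_bottom: float        # seam line to bottom of closing strip
    seam_to_strip_top: float           # seam line to top of closing strip
    bag_top_above_strip_bottom: float  # upper limit of bag to lower limit of strip
    seam_present: bool = True
    strip_present: bool = True

def is_faulty(m: ClosureMeasurements) -> bool:
    """Apply the example tolerances from the text; any violation fails the bag."""
    if not m.seam_present or not m.strip_present:
        return True
    if m.bag_top_above_seam < 2.0:            # bag edge too close to the seam
        return True
    if m.seam_to_strip_bottom < 1.0:          # seam too close to strip bottom
        return True
    if m.seam_to_strip_top < 2.0:             # seam too close to strip top
        return True
    if m.bag_top_above_strip_bottom < 2.0:    # bag edge too close to strip bottom
        return True
    return False

def bag_passes(measurements_across_width: list[ClosureMeasurements]) -> bool:
    """The criteria must hold across the entire width for a bag to pass."""
    return all(not is_faulty(m) for m in measurements_across_width)
```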
[0062] In one embodiment, when a bag enters the inspection system via the inlet 209, a camera can sense a leading edge of the bag and send a signal for the camera to start recording or capturing what is in its field of view. An image, a plurality of images, or a video of the bag can be acquired by the camera. The image data can be communicated to a processor 104 over the connection architecture. The image can be passed to or through the inspection model, which may include a trained machine learning algorithm. In one embodiment, the machine learning algorithm used may be a convolutional neural network (CNN), or an algorithm designed for semantic segmentation, including but not limited to: U-Nets, PSP-Nets, Deeplab, and Parse Nets. The algorithm can perform semantic segmentation according to the training it has been subjected to. Images with localized elements can be output from the algorithm. The distance measurement metrics between the detected elements and the established criteria for pass or fail can then be applied to determine whether a particular bag is faulty.
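As a non-limiting illustration of the segmentation step, the following sketch runs a trained semantic segmentation network over one captured frame and converts the per-pixel class scores into binary masks for the bag, the seam line and the closing strip. The PyTorch framework, the class indices and the preprocessing are assumptions made for this example; the disclosure does not mandate a specific framework or network.

```python
import numpy as np
import torch

CLASS_BACKGROUND, CLASS_BAG, CLASS_SEAM, CLASS_STRIP = 0, 1, 2, 3

def segment_frame(model: torch.nn.Module, frame: np.ndarray) -> dict[str, np.ndarray]:
    """Run a trained segmentation model (e.g. a U-Net) on one grayscale frame.

    `frame` is an HxW uint8 image; the returned masks are boolean arrays of
    the same shape, one per element of interest.
    """
    model.eval()
    x = torch.from_numpy(frame).float().div(255.0).unsqueeze(0).unsqueeze(0)  # 1x1xHxW
    with torch.no_grad():
        logits = model(x)                     # 1 x num_classes x H x W
    labels = logits.argmax(dim=1).squeeze(0).numpy()
    return {
        "bag": labels == CLASS_BAG,
        "seam_line": labels == CLASS_SEAM,
        "closing_strip": labels == CLASS_STRIP,
    }
```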
[0063] In one embodiment, training an algorithm can comprise the steps of gathering previous or historic images or image data, and sorting the previously captured or historic images as faulty or not faulty. The training step can further comprise segmenting specific elements on both the faulty sample set and the non-faulty sample set. The specific elements of the bag may include: the bag portion, closing strip, seam line, label, and bar code. The specific elements may then be labeled or classified on the images. The training step may further consist of feeding the untouched images to the algorithm along with the labeled and/or segmented images, allowing the algorithm to adjust and determine the difference between the two sets of samples in a process referred to, by a person skilled in the art, as training. Training can be an iterative process in which the first cycles of training involve a limited number of segmented images. Once the algorithm has completed these first phases of training, more significant amounts of data are used in a process called reinforcement training, in which the initial capabilities of the algorithm are exploited to generate a preliminary segmentation of untouched images. Defects found in the automatically analysed bags can then be used to reinforce the training toward defective bags. The outcome of the training step is an algorithm that can identify or determine specific trained elements on an image and return segmented images for the elements detected.
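As a non-limiting illustration of the supervised portion of this training process, the following sketch iterates over pairs of raw images and their labelled segmentation masks and updates a segmentation network. The optimiser, loss function and epoch count are illustrative assumptions, not requirements of the disclosure.

```python
import torch
from torch.utils.data import DataLoader

def train_segmenter(model: torch.nn.Module,
                    loader: DataLoader,
                    epochs: int = 20,
                    lr: float = 1e-3) -> torch.nn.Module:
    """Supervised training on (image, per-pixel label) pairs.

    Each batch is (images: Bx1xHxW float, masks: BxHxW long). The loss
    compares predicted class scores against the hand-labelled masks so the
    network learns to separate bag, seam line and closing strip pixels.
    """
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, masks in loader:
            optimiser.zero_grad()
            logits = model(images)                                  # BxCxHxW
            loss = torch.nn.functional.cross_entropy(logits, masks)
            loss.backward()
            optimiser.step()
    return model
```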
[0064] Turning now to FIG. 4B, an example of a misplaced closing strip 302 is shown therein. In this example, the closing strip 302 has been incorrectly placed on the bag 304. The seam line 407 can be seen by the imaging apparatus. The processor can then identify the outline of the bag, locate the upper limit of the bag; identify the closing strip 302; measure the distance between the upper limit of the bag 304 and the bottom of the closing strip 302, across the entire width of the bag 304, and measure the distance between the seam line 407 and the top/bottom limit of the closing strip 302. In this case, the processor can determine that the closing strip 302 has been incorrectly placed on the bag 304 as the distance between the upper limit of the bag 304 and the bottom of the closing strip 302, is not consistent across the entire width of the bag 304. In one embodiment, the parameters can be met across the entire length of the bag for it to be considered "good". For example, in FIG. 4B the aforementioned parameters would be met for the right-hand side of the bag while not for the left-hand side, and that particular bag would thus be rejected.
[0065] FIG. 5A provides a flowchart showing a complete method of filling, sealing and inspecting a bag. The method begins by filling the bag at step 501. The bag can be filled by any suitable filling technique and with any suitable materials. The bag is then sealed at step 502 by any of the sealing techniques previously described. At step 503, the image acquisition is completed using the imaging system 103. In one embodiment, the processor 410 can be configured to capture the image. Once an image is captured, the image can be pre-processed by a sorting algorithm at step 504. The pre-sorting step 504 can involve running a sorting algorithm on the output of the imaging system 103.
[0066] In one embodiment, an optional pre-sorting step 504 can be completed after the image acquisition step 503, and before the segmentation step 505. As such, the image can be first acquired using the imaging system, then classified or sorted by the sorting algorithm, and the pre-classified images can be sent to the inspection model for analysis.
[0067] Once an image is acquired, it can be sorted using a sorting algorithm. An image sorting algorithm in machine learning can comprise a computer-implemented method for categorizing or arranging digital images based on specific characteristics, such as metadata, pixel patterns, and/or features extracted via convolutional neural networks (CNNs). The sorting algorithm can receive a plurality of digital images as input and process each image to extract feature vectors using a trained sorting model. The feature vectors are numerical representations of image attributes, such as the bag outer edges, the bag outline, the straightness of the seam line, etc. The method further includes organizing the images into clusters, categories, or a ranked order based on the identified similarity or predefined sorting criteria. Optionally, the sorting algorithm can employ reinforcement learning or supervised learning to optimize sorting accuracy over successive iterations by adjusting weighting parameters or feature extraction methods based on user feedback or performance metrics.
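As a non-limiting illustration of the feature extraction underlying such a sorting algorithm, the following sketch reduces each grayscale frame to a small feature vector and builds a reference centroid from known-good training images. The hand-crafted features are an assumption chosen to keep the example short; CNN embeddings could be used instead, as noted above.

```python
import cv2
import numpy as np

def feature_vector(gray: np.ndarray) -> np.ndarray:
    """Summarise one grayscale (uint8) frame as a small numeric vector.

    Edge density responds to missing or torn closures; intensity statistics
    respond to gross lighting or positioning problems. A real system would
    typically use CNN embeddings instead of these hand-crafted features.
    """
    edges = cv2.Canny(gray, 50, 150)
    return np.array([
        edges.mean() / 255.0,   # fraction of edge pixels
        gray.mean() / 255.0,    # overall brightness
        gray.std() / 255.0,     # contrast
    ])

def build_reference(good_images: list[np.ndarray]) -> tuple[np.ndarray, np.ndarray]:
    """Centroid and spread of feature vectors from known-good training images."""
    feats = np.stack([feature_vector(im) for im in good_images])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-6
```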
[0068] In one embodiment, the pre-sorting algorithm can be trained on images and output a pass/fail or yes/no verdict. The pre-sorting algorithm can be useful for identifying obvious fails that, if sent to the inspection system for further inspection, would take up unnecessary resources or time. In another embodiment, the sorting algorithm can determine if the new input image is different from or similar to its training data set. In this embodiment, the training dataset includes a set of average and above-average images.

[0069] FIGs. 10a and 10b provide examples of obviously failed images 1000. The sorting algorithm can eliminate the images corresponding to bags that are currently unable to be segmented or grossly defective (e.g., warped bags (see FIG. 10a), missing elements, a hole in the bag, an opened bag, etc.). The sorting algorithm can provide a relatively fast classification of images, so that the failed images do not need to enter the inspection system and segmentation steps. In one embodiment, the inspection method 505 can be trained to give adequate analysis of rejected bags and flag any trends to be corrected in the production line (e.g., skewed labels or crepe tapes). Therefore, the sorting algorithm 504 can be used to obtain a faster and more accurate result with the inspection software. The quality of the inspection system can be dependent on the quality and/or quantity of the input data and training images.
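Building on the feature vectors sketched above, the pre-sorting gate can be expressed as an outlier check against the training distribution, so that obvious fails never reach the heavier segmentation model. The z-score form and the threshold are illustrative assumptions.

```python
import numpy as np

def presort_pass(features: np.ndarray,
                 centroid: np.ndarray,
                 spread: np.ndarray,
                 max_z: float = 4.0) -> bool:
    """Return False for obvious fails: frames far from the training distribution.

    A frame whose features deviate from the reference centroid by more than
    `max_z` standard deviations in any dimension is treated as an obvious
    fail (warped bag, missing element, opened bag) and skips full inspection.
    """
    z = np.abs(features - centroid) / spread
    return bool(np.all(z <= max_z))
```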
[0070] Details of the inspection method 505 and system are provided with reference to FIG. 5B. Once the inspection method 505 is completed, the system can make a determination at step 506 with regards to the quality of the bag and bag sealing.
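As a non-limiting illustration of how the stages of FIG. 5A fit together, the following sketch chains the pre-sorting, segmentation, measurement and decision steps for one captured frame through to the determination at step 506. The helper callables are placeholders standing in for the steps described in the surrounding paragraphs; nothing about this structure is mandated by the disclosure.

```python
from typing import Callable
import numpy as np

def inspect_bag(frame: np.ndarray,
                presort: Callable[[np.ndarray], bool],
                segment: Callable[[np.ndarray], dict],
                measure: Callable[[dict], list],
                decide: Callable[[list], bool]) -> str:
    """Return 'pass', 'fail' or 'presort-fail' for one captured frame."""
    if not presort(frame):
        return "presort-fail"            # obvious defect, skip full inspection
    masks = segment(frame)               # pixel masks for bag, seam, strip
    measurements = measure(masks)        # distances across the bag width
    return "pass" if decide(measurements) else "fail"
```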
[0071] FIG. 5B provides a flowchart showing the steps taken by the processor to analyse the image and conduct the image inspection 505. At step 510, the processor can identify the outline of the bag. At step 520, the processor can locate the upper limit of the bag. At step 530, the processor can identify a first element, such as a seam line. At step 540, the processor can identify a second element, such as but not limited to a closing strip, or label. At step 550, the processor can measure the distance between the upper limit of the bag 304 and the bottom of the second element, such as the closing strip 302. In a further embodiment, the measurements can be taken across the entire width of the bag 304. In a further embodiment, the processor can determine the distance between the seam line 407 and the top/bottom limit of the closing strip 302. The measured parameters can be provided to the algorithm as the distance measurement metrics between the detected elements and the established criteria for pass or fail. The algorithm can then be applied to classify or determine if a particular bag is faulty or not.
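As a non-limiting illustration of the measuring step, the following sketch derives per-column distances from segmentation masks such as those produced by the earlier segmentation sketch: for each pixel column of the bag, it locates the highest bag pixel, the seam pixels and the strip pixels, and converts pixel gaps to millimetres. The mask keys and the calibration factor are assumptions for this example.

```python
import numpy as np

def column_distances(masks: dict[str, np.ndarray], mm_per_pixel: float = 0.2) -> list[dict]:
    """Per-column distances (in mm) between the bag top, seam line and closing strip.

    `masks` holds boolean HxW arrays keyed 'bag', 'seam_line', 'closing_strip'.
    Columns where the seam or strip is missing are reported with None so the
    caller can treat them as absent or incomplete elements.
    """
    bag, seam, strip = masks["bag"], masks["seam_line"], masks["closing_strip"]
    results = []
    for col in np.flatnonzero(bag.any(axis=0)):       # only columns containing bag pixels
        bag_rows = np.flatnonzero(bag[:, col])
        seam_rows = np.flatnonzero(seam[:, col])
        strip_rows = np.flatnonzero(strip[:, col])
        if not (seam_rows.size and strip_rows.size):
            results.append({"bag_top_above_seam": None, "seam_to_strip_bottom": None})
            continue
        bag_top = bag_rows.min()                      # smallest row index = highest pixel
        seam_row = seam_rows.mean()
        strip_bottom = strip_rows.max()
        results.append({
            "bag_top_above_seam": (seam_row - bag_top) * mm_per_pixel,
            "seam_to_strip_bottom": (strip_bottom - seam_row) * mm_per_pixel,
        })
    return results
```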
[0072] FIGS. 6A to 6E provide photographs showing inspected bags which have failed the inspection. In FIGS. 6A to 6E, the processor may determine that either: the upper limit of a bag is located less than a first threshold distance above the first element or seam line 407; the upper limit of a bag is located less than 2 mm above the first element or seam line 407; the second element or closing strip 302 is absent; the first element or seam line 407 is absent, incomplete, or crooked; the seam line 407 is located at a distance less than 1 mm from the bottom of the closing strip 302; the seam line 407 is located at a distance less than 2 mm from the top of the closing strip 302; or that the upper limit of a bag is located more than 2 mm above the lower limit of the closing strip 302. Once the processor has determined that a particular bag has not passed the inspection, the processor can actuate an actuator such as a paddle to separate a faulty bag or object from the passed bags or objects. For example, for a bag that has not passed the inspection, the bag can be ejected with a kicker on the conveyor, ejected later in the layout, or can fall at the end of the conveyor. The bags which have passed the inspection can be pushed onto another conveyor by a kicker or actuator, or can be sorted by falling into a bin or container with other bags that have passed the inspection.
[0073] FIGS. 7A to 7E are photographs showing inspected bags which have passed the inspection. In FIGS. 7A to 7E, the processor may have determined that: the upper limit of a bag is located at the appropriate distance above the seam line 407; the closing strip 302 is present, the seam line 407 is present, complete, and straight; the seam line 407 is located at an appropriate distance from the bottom of the closing strip 302; the seam line 407 is located at the appropriate distance from the top of the closing strip 302; or that the upper limit of a bag is at the appropriate distance above the lower limit of the closing strip 302.
[0074] In other embodiments, the imaging system can be used for a plurality of auxiliary applications, including but not limited to: determining the correct location of a label 902 sewn into the closure; and reading, interpreting and confirming the contents of the label 902, such as a date, SKU, barcode, QR code, etc. The imaging system can also be used to determine the location of a bar code, or QR code. Furthermore, the imaging system can be used to read the contents of the barcode or QR code and confirm that the code is accurate. The imaging system can also be used to locate, read, and confirm the conformity of information on the bag. The label 902 can be on top of or underneath the closing strip, as shown in FIGS. 7A and 9B.
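As a non-limiting illustration of the auxiliary code-reading application, the following sketch uses OpenCV's built-in QR detector to decode a code in the frame and compare it against an expected value. A production system might instead use a dedicated barcode engine; the expected-value check is illustrative only.

```python
import cv2

def confirm_qr(image_path: str, expected: str) -> bool:
    """Decode a QR code in the frame and confirm it matches the expected content."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    detector = cv2.QRCodeDetector()
    data, points, _ = detector.detectAndDecode(image)
    if points is None or not data:
        return False                       # no readable code found
    return data == expected
```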
[0075] While the applicant's teachings described herein are in conjunction with various embodiments for illustrative purposes, it is not intended that the applicant's teachings be limited to such embodiments. On the contrary, the applicant's teachings described and illustrated herein encompass various alternatives, modifications, and equivalents, without departing from the embodiments, the general scope of which is defined in the appended claims.

Claims

What is claimed is:
1. An imaging system for inspecting an object having at least one sealing element, the system comprising: an imaging apparatus having an optical axis for capturing an image; a light source located proximal to the imaging apparatus; and a processing unit coupled to the imaging apparatus configured to receive image data from the imaging apparatus; such that when the object having the at least one sealing element is appropriately disposed to permit a portion of the object to pass in front of the imaging apparatus transversely to the optical axis of the imaging apparatus, the light source illuminates the sealing element, and the imaging apparatus is configured to capture an image of the at least one sealing element; and such that the processing unit receives the image from the imaging apparatus and identifies parameters of the at least one sealing element with respect to the object.
2. The imaging system of claim 1, wherein the imaging apparatus is formed within an enclosure, such that the enclosure blocks external light and provides contrast in the image of the sealing element.
3. The imaging system of any one of claims 1 or 2, wherein the imaging system further comprises a separator positioned between the imaging apparatus and the light source, such that the separator blocks out excessive light from the light source.
4. The imaging system of claim 2, wherein the enclosure comprises an inlet for receiving the object and an outlet for retrieving the object after the image from the imaging apparatus is captured.
5. The imaging system of any one of claims 1 to 4, wherein the processing unit comprises an inspection model for analyzing the image using a trained machine learning algorithm and identifying, using the image, the parameters of the at least one sealing element with respect to the object.
6. The imaging system of claim 5, wherein the machine learning algorithm comprises at least one of: a convolutional neural network (CNN), and semantic segmentation.
7. The imaging system of claim 6, wherein the semantic segmentation comprises at least one of: U-Nets, PSP-Nets, Deeplab, and Parse Nets.
8. The imaging system of claim 5, wherein the inspection model is trained to analyse the image and determine whether the sealing element of the object is faulty or proper.
9. The imaging system of any one of claims 1 to 8, wherein the sealing element comprises a seam line and a closing strip.
10. The imaging system of claim 9, wherein the inspection model comprises: identifying, by the processing unit, the seam line; identifying, by the processing unit, an outline or outer boundary of the object; identifying, by the processing unit, an upper limit of the object; identifying, by the processing unit, the closing strip; identifying, by the processing unit, a bottom of the closing strip; determining, by the processing unit, a distance between the upper limit of the object and the bottom of the closing strip; and determining, by the processing unit, the distance between the seam line and a top or bottom limit of the closing strip.
11. The imaging system of claim 10, wherein the processor can determine that the object is faulty if one or more of the following parameters are met: an upper limit of the object is located less than a first threshold distance above the sealing element, the sealing element is absent, incomplete, or crooked; the sealing element is located at a second threshold distance from the bottom of the closing strip; the seam line is located at a third threshold distance from the top of the closing strip; and the upper limit of the object is located less than a fourth threshold distance above the lower limit of the closing strip.
12. The imaging system of claim 10, wherein if the processing unit determines that the sealing element of the object is faulty, the processor actuates an actuator to separate the faulty object from another object which is determined to be proper.
13. A method for inspecting an object, the method comprising: capturing an image of the object; providing the image of the object to a processing unit; applying, via the processing unit, an inspection model to the image of the object; and wherein the inspection model comprises at least one of: identifying, by the processing unit, using the image of the object, an outline of the object; identifying, by the processing unit, using the image of the object, an upper limit of the object; identifying, by the processing unit, using the image of the object, at least a first element on the object; and determining, by the processing unit, whether the first element on the object is positioned within an acceptable tolerance with respect to the outline of the object, and the upper limit of the object.
14. The method according to claim 13, further comprising: locating, by the processor, using the image of the object, a second element on the object; the second element having a bottom edge connected to the object, and a top edge opposing the bottom edge; determining, by the processor, using the image of the object, a distance between the upper limit of the object and the bottom edge of the second element; and identifying, by the processor, using the image of the object, a distance between the first element and the bottom edge of the second element.
15. The method according to any one of claims 13 or 14, further comprising rejecting an object if at least one or more of the following conditions are met: the first element is absent; the second element is absent; the upper limit of the object is located less than a first threshold distance above the first element; the first element is located at a distance less than a second threshold distance from the bottom edge of the second element; the first element is located at a distance less than a third threshold distance from the top edge of the second element; and the upper limit of an object is located less than a fourth threshold distance below the top edge of the second element.
16. The method of any one of claims 13 to 15, wherein the inspection model comprises at least one of: a convolutional neural network (CNN), and semantic segmentation.
17. The method of claim 16, wherein the semantic segmentation comprises at least one of: U-Nets, PSP-Nets, Deeplab, and Parse Nets.
18. The method of any one of claims 13 to 17, further comprising training the processing unit to analyse the image and determine whether the sealing element of the object is faulty or proper. The method of claim 18, wherein if the processing unit determines that the sealing element of the object is faulty, the processor actuates an actuator to separate the faulty object from another object which is determined to be proper.
19. The method of claim 18, wherein the processing unit determines that the sealing element of the object is proper if at least: the identified sealing element is parallel to the upper limit of the object; the top edge of the second element is aligned with the outline of the object; and/or the top edge of the second element is aligned with the upper limit of the object.
20. The method of any one of claims 13 to 19, further comprising a pre-sorting step after the image capturing step, and before the application of the inspection model step; and wherein the pre-sorting step comprises: identifying images that are obvious fails; and sending the remaining images to the inspection system for application of the inspection model step.
PCT/CA2024/051645 2023-12-11 2024-12-11 System and method for autonomously inspecting object closures Pending WO2025123132A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363608611P 2023-12-11 2023-12-11
US63/608,611 2023-12-11

Publications (1)

Publication Number Publication Date
WO2025123132A1 true WO2025123132A1 (en) 2025-06-19

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190259143A1 (en) * 2017-02-22 2019-08-22 Yushin Co., Ltd. System for checking package body with image
US20220214243A1 (en) * 2019-04-11 2022-07-07 Cryovac, Llc System for in-line inspection of seal integrity


