WO2022093207A1 - Computer vision model generation - Google Patents
Computer vision model generation
- Publication number
- WO2022093207A1 (PCT/US2020/057644)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- processor
- examples
- computing device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B33—ADDITIVE MANUFACTURING TECHNOLOGY
- B33Y—ADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
- B33Y50/00—Data acquisition or data processing for additive manufacturing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
Definitions
- Three-dimensional (3D) solid parts may be produced from a digital model using additive manufacturing.
- Additive manufacturing may be used in rapid prototyping, mold generation, mold master generation, and short-run manufacturing.
- Additive manufacturing involves the application of successive layers of build material. This is unlike some machining processes that often remove material to create the final part.
- the build material may be cured or fused.
- FIG. 1 illustrates an example of a computing device for computer vision (CV) model generation
- FIG. 2 illustrates an example of a server and a computing device for CV model generation
- FIG. 3 is a block diagram illustrating an example of a computer-readable medium encoded with instructions for CV model generation
- FIG. 4 is a flow diagram illustrating an example method for CV model generation
- 3D printing is an example of additive manufacturing.
- 3D solid objects (e.g., parts) may be produced from a digital model (referred to herein as a 3D printer file) using an additive printing process.
- 3D printing may be used in rapid prototyping, mold generation, mold master generation, and short-run manufacturing.
- Some 3D-printing techniques are considered additive processes because they involve the application of successive layers of material.
- some 3D printing systems may concurrently build multiple 3D objects in the build volume as part of a build operation.
- the techniques described herein may be utilized in various examples of additive manufacturing. For instance, some examples may be utilized for plastics, polymers, semi-crystalline materials, metals, biological materials, etc.
- Some additive manufacturing techniques may be powder-based and driven by powder fusion.
- Some examples of the approaches described herein may be applied to area-based powder bed fusion-based additive manufacturing, such as Stereolithography (SLA), Multi Jet Fusion (MJF), Metal Jet Fusion, Selective Laser Melting (SLM), Selective Laser Sintering (SLS), liquid resin-based printing, etc.
- a post-printing operation may be performed on a resulting 3D object.
- a 3D printer may produce a number of different types of 3D objects in one printing run.
- multiple 3D objects may be taken from the build volume and sorted.
- 3D objects may be binned (e.g., based on sorting).
- binning refers to the placement of a 3D object in a particular location.
- the 3D objects may be inspected for defects and/or to determine that the 3D parts meet quality control standards.
- the 3D objects may be cleaned of residue (e.g., powder caking) from the 3D printing process.
- the post-printing operations may be performed by a human.
- the post-printing operations may be labor intensive and slow. For example, a person may unpack the 3D parts from the 3D printer, clean the 3D parts, examine the 3D parts for defects, and sort the 3D parts.
- CV refers to acquiring, processing, and responding to visual information by a computing device.
- a CV operation device may be an electronic device that uses CV to perform an operation.
- a CV operation device may be an electronic device that includes circuitry to perform an operation based on visual information.
- a CV operation device may be a mechanism that includes an image sensor component (e.g., camera) for capturing visual information about the output of a 3D printing process.
- a CV operation device may also include an actuator component (e.g., motor, solenoid, etc.) for performing an operation based on the visual information.
- the CV operation device may be a tablet computer with a camera and a display.
- the CV operation device may be augmented reality (AR) glasses with a camera.
- the CV operation device may be a robot with a camera and an actuator mechanism.
- the CV operation device may perform post-printing operations on the output of a 3D printer. For example, a CV operation device may sort 3D parts based on visual information. In another example, a CV operation device may bin 3D parts based on visual information. In another example, a CV operation device may detect defects in 3D parts based on visual information. In yet another example, a CV operation device may clean 3D parts based on visual information. In some examples, the CV operation device (e.g., a mechanical device, robot, etc.) may autonomously perform a CV-based post-printing operation. In other examples, the CV operation device may perform a CV-based post-printing operation with the assistance of a human operator.
- CV operations described herein may use a CV model.
- a model is a file or data structure that has been trained using machine learning (ML) to detect patterns in information.
- the CV model may be trained using ML to detect patterns in visual information.
- a 3D print includes the output of a 3D printing process.
- a 3D print may include a 3D object, multiple 3D objects and/or printing artifacts (e.g., residue, powder caking, etc.) from the 3D printing process.
- a 3D printer file may be used to generate the CV model.
- FIG. 1 illustrates an example of a computing device 102 for computer vision (CV) model generation.
- Examples of the computing device 102 may include workstations, controllers, laptop computers, desktop computers, servers, tablet devices, cellular phones, smartphones, wireless communication devices, etc.
- the computing device 102 may include a processor 104.
- the processor 104 may be any of a microcontroller (e.g., embedded controller), a central processing unit (CPU), a semiconductor-based microprocessor, a graphics processing unit (GPU), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a circuit, a chipset, and/or another hardware device suitable for retrieval and execution of instructions stored in a memory 116.
- the processor 104 may fetch, decode, and/or execute instructions stored in memory 116. While a single processor 104 is shown in FIG. 1, in other examples, the processor 104 may include multiple processors (e.g., a CPU and a GPU).
- the memory 116 of the computing device 102 may be any electronic, magnetic, optical, and/or other physical storage device that contains or stores electronic information (e.g., instructions and/or data).
- the memory 116 may be, for example, Random Access Memory (RAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Dynamic Random Access Memory (DRAM), Synchronous DRAM (SDRAM), magnetoresistive random-access memory (MRAM), phase change RAM (PCRAM), non-volatile random-access memory (NVRAM), memristor, flash memory, a storage device, and/or an optical disc, etc.
- the memory 116 may be a non-transitory tangible computer-readable storage medium, where the term “non-transitory” does not encompass transitory propagating signals.
- the processor 104 may be in electronic communication with the memory 116.
- the computing device 102 may communicate with a 3D printer 118.
- the computing device 102 may control the 3D printer 118.
- the computing device 102 may send instructions to a 3D printer 118 to generate a 3D print 120.
- the computing device 102 may be a controller for the 3D printer 118 and may be referred to as the compute node of the 3D printer 118.
- the computing device 102 may communicate (e.g., send and/or receive information) with the 3D printer 118, but the 3D printer 118 may include a controller for processing instructions for generating the 3D print 120. In yet other examples, the computing device 102 may not communicate with the 3D printer 118.
- the computing device 102 may receive a 3D printer file 106 for a 3D print 120.
- the 3D printer file 106 may include instructions for creating the 3D print 120.
- the 3D printer file 106 may include a 3D model of an object that is to be printed by the 3D printer 118.
- the 3D printer file 106 may include geometry, color, texture, and/or materials for the 3D print 120.
- the 3D printer file 106 may further include instructions for how the 3D printer 118 is to generate the 3D print 120.
- the 3D printer file 106 may include sequences for applying a print material to build the 3D print 120.
- Some examples of 3D printer file formats include STEP, STL, OBJ, AMF, 3MF, VRML, X3D, FBX, and IGES.
- the processor 104 of the computing device 102 may receive the 3D printer file 106 from another (e.g., remote) computing device (not shown).
- the 3D printer file 106 may be downloaded to the computing device 102 for printing by the 3D printer 118.
- the computing device 102 may receive the 3D printer file 106 through creation of the 3D printer file 106 on the computing device 102.
- a user may design a 3D part using a program on the computing device 102, which creates a 3D printer file 106.
- the 3D printer file 106 may be stored in memory 116 on the computing device 102.
- generating a computer vision (CV) model 110 for the 3D print 120 may include training by a machine learning (ML) network based on the 3D printer file 106.
- the processor 104 may include a CV model generator 108 to train a CV model 110 using the 3D printer file 106 for input to a ML network.
- the CV model 110 may be used by a CV operation device 122 to detect patterns in visual information of the 3D print 120.
- Examples of the machine learning networks described herein may include neural networks, deep neural networks, spatio-temporal neural networks, etc.
- model data may define a node or nodes, a connection or connections between nodes, a network layer or network layers, and/or a neural network or neural networks.
- Examples of neural networks include convolutional neural networks (CNNs) (e.g., basic CNN, deconvolutional neural network, inception module, residual neural network, etc.) and recurrent neural networks (RNNs) (e.g., basic RNN, multi-layer RNN, bi-directional RNN, fused RNN, clockwork RNN, etc.).
- Some approaches may utilize a variant or variants of RNN (e.g., Long Short Term Memory Unit (LSTM), peephole LSTM, no input gate (NIG), no forget gate (NFG), no output gate (NOG), no input activation function (NIAF), no output activation function (NOAF), no peepholes (NP), coupled input and forget gate (CIFG), full gate recurrence (FGR), gated recurrent unit (GRU), etc.).
- ML training of the CV model 110 may be based on simulated visual information created from the 3D printer file 106.
- a 3D rendering may be created from the 3D printer file 106.
- a 3D rendering includes a generated image of a 3D object from a particular perspective.
- a 3D rendering may include the geometry of a 3D object, surface details, color, texture, a simulated focal point, a distance from a simulated camera, etc.
- a plurality of different 3D renderings may be generated to illustrate different aspects of a 3D object.
- the processor 104 may generate multiple 3D renderings showing the 3D object in different orientations.
- the processor 104 may generate multiple 3D renderings showing the 3D object with different lighting conditions.
- the processor 104 may generate multiple 3D renderings showing the 3D object with different surface conditions.
- the processor 104 may generate multiple 3D renderings showing the 3D object with different printing artifacts (e.g., powder caking) of the printing process.
- the processor 104 may use the 3D printer file 106 to generate multiple 3D renderings that include different 3D objects.
- a 3D print 120 may include multiple 3D objects.
- the different 3D objects may have different geometry.
- the processor 104 may generate a plurality of 3D renderings for the different 3D objects.
- the multiple 3D renderings may show the different 3D objects in different orientations. Therefore, the processor 104 may create a plurality of 3D renderings of the 3D print 120 (e.g., 3D objects included in the 3D print 120) from different visual perspectives.
- the processor 104 may generate multiple 3D renderings of a 3D object included in the 3D printer file 106 positioned in different orientations with respect to a simulated camera perspective.
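Sampling the different orientations can be sketched as generating rotation pairs that sweep a simulated camera around the object. This is a minimal stdlib sketch under the assumption that orientations are parameterized by yaw and tilt angles; the function names are illustrative, and a real renderer would apply these rotations to the mesh before rasterizing.

```python
import math

def rotation_z(theta):
    """3x3 rotation matrix about the z axis (yaw)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rotation_x(theta):
    """3x3 rotation matrix about the x axis (tilt)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def mat_vec(m, v):
    """Apply a 3x3 matrix to a 3-vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def orientation_samples(n_yaw, n_tilt):
    """Yield (yaw, tilt) angle pairs covering n_yaw * n_tilt simulated
    camera perspectives around the object."""
    for i in range(n_yaw):
        for j in range(n_tilt):
            yaw = 2.0 * math.pi * i / n_yaw
            tilt = math.pi * j / max(n_tilt - 1, 1) - math.pi / 2
            yield yaw, tilt
```

Each (yaw, tilt) pair would produce one rendering of the 3D object in a distinct orientation with respect to the simulated camera.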
- the processor 104 may create the plurality of 3D renderings of the 3D print 120 with simulated defects.
- the processor 104 may use the 3D printer file 106 to generate multiple 3D renderings of a 3D object in the 3D print 120 that includes a defect.
- a 3D part of a particular geometry may be likely to have a defect.
- a defect or multiple defects may be simulated (e.g., generated) in a plurality of 3D renderings.
- the multiple 3D renderings may show a 3D object with a defect in different orientations.
- the simulated defects may be based on knowledge of the type of 3D printer 118 and/or 3D printing process. For example, a particular 3D printing process or particular 3D printer 118 may be likely to result in a particular defect.
- the particular defect may be simulated based on the 3D printing process and/or 3D printing device.
- the processor 104 may label the plurality of 3D renderings with labels. For example, as the processor 104 creates a 3D rendering with a particular property, the processor 104 may include the property as a label to the 3D rendering. For instance, the property may include orientation information for a rendered 3D object. Other properties may include object type, object name, surface effects, surface conditions, simulated defects, simulated printing artifacts, and/or lighting conditions.
- the CV model 110 may be generated by performing ML training based on the plurality of 3D renderings.
- the ML training may also include providing labels of the 3D renderings to the ML model.
- the labels may aid training a CV model 110 for different properties (e.g., object orientation, object type, object name, surface effects, surface conditions, simulated defects, simulated printing artifacts, lighting conditions, etc.).
- the CV model 110 may be used for a post-printing operation.
- the CV model generator 108 may generate the CV model 110 to perform an operation on the 3D print 120 using visual information of the 3D print 120.
- the computing device 102 may send the CV model 110 to a CV operation device 122.
- the CV operation device 122 may be a mechanism that includes an image sensor component (e.g., camera) for capturing visual information about the 3D print 120.
- the CV operation device 122 may also include an actuator component (e.g., motor, solenoid, pneumatic device, etc.) for performing a physical operation on the 3D print 120 based on the visual information.
- the post-printing operation performed by the CV operation device 122 may include sorting and/or binning of the 3D print 120.
- the 3D print 120 may include one or multiple 3D objects.
- the CV operation device 122 may sort the one or multiple 3D objects of the 3D print 120 based on visually observable properties of the 3D objects.
- the CV operation device 122 may sort a printed 3D object based on shape, size, color, texture or other observable property of the 3D object. Therefore, the CV operation device 122 may use the CV model 110 for CV-based sorting of the 3D print 120.
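The sorting/binning decision itself reduces to mapping each detected object to a bin location. A minimal sketch, assuming the CV model has already produced a predicted class per object (the function and bin names are hypothetical):

```python
def bin_objects(detections, bin_map, default_bin="inspect"):
    """Assign each detected 3D object to a bin based on the class the CV
    model predicted for it. `detections` is a list of (object_id,
    predicted_class) pairs; `bin_map` maps predicted classes to bin
    locations. Unrecognized classes go to a default bin for review."""
    return {obj_id: bin_map.get(cls, default_bin) for obj_id, cls in detections}
```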
- the processor 104 may generate the CV model 110 for orientation detection of the 3D print 120.
- orientation detection may include determining the position in space of a 3D object or multiple 3D objects within a 3D print 120.
- the 3D print 120 may have a particular orientation that is observable by the CV operation device 122.
- the 3D print 120 may be removed from the 3D printer 118 and placed on a surface (e.g., a bin, table, etc.) for sorting, binning or other post-printing operation.
- the CV model 110 may be trained for detecting the orientation of a 3D object in the 3D print 120 using the plurality of 3D renderings of the 3D object in different orientations.
- the post-printing operation performed by the CV operation device 122 may include detecting defects in the 3D print 120.
- the CV operation device 122 may observe the 3D print 120 to detect whether a 3D object includes a defect.
- the CV operation device 122 may remove the defective 3D object from further post-printing operations.
- the processor 104 may generate the CV model 110 for defect detection of the 3D print 120.
- the CV operation device 122 may use this CV model 110 to detect defects in the 3D print 120.
- the post-printing operation performed by the CV operation device 122 may include cleaning printing artifacts (e.g., powder caking, printing residue, etc.) from the 3D print 120.
- the CV operation device 122 may use the CV model 110 for CV-based cleaning of the 3D print 120.
- the processor 104 may generate the CV model 110 for printing artifact detection of the 3D print 120.
- the processor 104 may perform ML training of the CV model 110 using a plurality of 3D renderings with simulated printing artifacts.
- the processor 104 may create the plurality of 3D renderings of the 3D print 120 (e.g., a 3D object in the 3D print 120) with simulated printing artifacts.
- the simulated printing artifacts may be generated based on the type of 3D printing performed by the 3D printer 118 and/or known performance characteristics of the 3D printer 118.
- the CV operation device 122 may use this CV model 110 to detect and clean printing artifacts in the 3D print 120.
- the CV operation device 122 may include a tablet computing device and/or augmented reality glasses used by a human user.
- the CV operation device 122 may include a camera for observing the 3D print 120.
- the CV operation device 122 may process the visual information captured by the camera based on the CV model 110 to present information to the user for a post-printing operation.
- the CV operation device 122 may detect 3D objects, defects and/or printing artifacts in the 3D print 120.
- the CV operation device 122 may highlight (e.g., present information on a display) observed 3D objects for sorting, binning, cleaning and/or removal due to defects.
- the processor 104 may generate the CV model 110 during printing of the 3D print 120.
- the printing process may take some time to complete. In the case of some 3D printing technologies, and based on the size of the 3D print 120, printing may take many minutes or hours.
- the processor 104 may generate the CV model 110. This approach may reserve computing resources for when they are to be used (e.g., upon completion of the 3D print 120).
- the processor 104 may generate the CV model before the 3D printing process begins.
- the computing device 102 may be instructed to generate the CV model 110.
- the processor 104 may generate the CV model after the 3D printing process completes.
- a 3D print 120 may be completed and, at some point in time, a post-printing operation on the 3D print 120 may be desired.
- the 3D printer file 106 used to print the 3D print 120 may be loaded to generate the CV model 110 as described above.
- the CV model 110 may be stored in memory 116 for future printing of the 3D print 120 based on the 3D printer file 106. For example, if a future printing of the 3D print 120 is to be performed, the saved CV model 110 may be used for a CV-based post-printing operation.
- the processor 104 may store (e.g., cache) the CV model 110 in memory 116. In other examples, the processor 104 may store the CV model 110 in a database of CV models.
- the computing device 102 may send the CV model 110 to a remote computing device (e.g., a server). For example, upon generating the CV model 110, the computing device 102 may send the CV model 110 to the remote computing device for storage.
- the CV model 110 may be shared as part of a cloud or edge database.
- the computing device 102 (or the server) may also send the CV model 110 to other computing devices that control 3D printers, thus reducing the computing load for repeated 3D printing of a particular 3D print 120.
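The caching behavior described above can be sketched as keying saved CV models by a hash of the 3D printer file contents, so a repeated print of the same file skips retraining. This is an in-memory stand-in under assumed names; a real system would use the memory/database storage the patent describes.

```python
import hashlib

def model_cache_key(printer_file_bytes):
    """Stable cache key derived from the 3D printer file contents."""
    return hashlib.sha256(printer_file_bytes).hexdigest()

class ModelCache:
    """Illustrative cache: train_fn runs only on a cache miss, so a
    repeated printing of the same file reuses the saved CV model."""
    def __init__(self):
        self._models = {}

    def get_or_train(self, printer_file_bytes, train_fn):
        key = model_cache_key(printer_file_bytes)
        if key not in self._models:
            self._models[key] = train_fn(printer_file_bytes)
        return self._models[key]
```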
- the examples described herein may increase the speed of post-printing operations.
- the CV-based post-printing operations may be fully automated by a CV operation device 122.
- human operators may be presented with CV-based information to assist in post-printing operations.
- the described examples may also improve the accuracy of the post-printing operations.
- FIG. 2 illustrates an example of a server 212 and a computing device 202 for computer vision (CV) model generation.
- the server 212 may be a computing device as described in FIG. 1.
- the server 212 may include a processor 204 and memory 216 as described in FIG. 1.
- the server 212 may include a communication component 214 for communicating with a computing device 202.
- the communication component 214 may facilitate network communication.
- the communication component 214 may be a network interface device to establish wireless or wired communication on a network.
- the server 212 may be on the same local network as the computing device 202. In other examples, the server 212 may be on a different local network than the computing device 202. In some examples, the server 212 may communicate with the computing device 202 over an internet connection.
- the server 212 may manage a number of computing devices 202 for 3D printing.
- the server 212 may be referred to as a build manager or a universal build manager (UBM).
- the server 212 may communicate with multiple computing devices 202 that control printing by 3D printers 218.
- the computing device 202 and 3D printer 218 may be implemented as described in FIG. 1.
- the server 212 may assign 3D printing jobs to the computing devices 202.
- the processor 204 of the server 212 may receive a 3D printer file 206 for a 3D print 220.
- the 3D printer file 206 may be received from a remote computing device, may be generated on the server 212, and/or may be stored in memory 216 on the server 212.
- the server 212 may include a database of 3D objects for printing.
- the 3D printer file 206 may be for a 3D print 220 that is to be printed by a 3D printer 218 controlled by another computing device 202.
- the processor 204 may generate a CV model 210 for the 3D print 220 using ML training based on the 3D printer file 206. This may be accomplished as described in FIG. 1.
- the processor 204 may include a CV model generator 208 to train a CV model 210 using the 3D printer file 206 for input to a ML network.
- the CV model 210 may be used for CV-based sorting of the 3D print 220.
- the processor 204 may generate (e.g., train) the CV model 210 for orientation detection and/or object detection of the 3D print 220.
- the CV model 210 may be used for CV-based defect detection of the 3D print 220.
- the processor 204 may generate (e.g., train) the CV model 210 for defect detection of the 3D print 220.
- the CV model 210 may be used for CV-based cleaning of the 3D print 220.
- the processor 204 may generate (e.g., train) the CV model 210 for printing artifact detection of the 3D print 220.
- the server 212 may allocate generating the CV model 210 to a plurality of computing devices 202. For example, the server 212 may assign different computing devices 202 to train different aspects of the CV model 210. In another example, the server 212 may assign different computing devices 202 to generate 3D renderings that are used for the ML training of the CV model 210. In this example, the plurality of computing devices 202 may return their assigned parts of the CV model generation to the server 212. The server 212 may combine the received parts into the CV model 210.
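The allocation described above can be sketched as partitioning rendering/training tasks across the available computing devices and merging the returned parts. A minimal round-robin sketch with hypothetical helper names; the patent does not specify a scheduling policy.

```python
def allocate_rendering_tasks(task_ids, device_ids):
    """Round-robin the rendering/training tasks across the available
    computing devices; returns {device_id: [task_id, ...]}."""
    assignments = {d: [] for d in device_ids}
    for i, task in enumerate(task_ids):
        assignments[device_ids[i % len(device_ids)]].append(task)
    return assignments

def combine_parts(parts_per_device):
    """Merge the parts each device returns into one collection the
    server can use to assemble the CV model."""
    combined = []
    for parts in parts_per_device:
        combined.extend(parts)
    return combined
```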
- the server 212 may send the CV model 210 to a computing device 202.
- the server 212 may send the CV model 210 to a computing device 202 that controls the 3D printer 218 for a 3D print 220.
- the computing device 202 may forward the CV model 210 to a CV operation device for a post-printing operation.
- the server 212 may send the CV model 210 directly to the CV operation device.
- the server 212 may save the CV model 210 in memory 216 for a future printing of the 3D print 220 based on the 3D printer file 206. For example, if the server 212 assigns a future printing of the 3D print 220 using the 3D printer file 206, the server 212 may provide the generated CV model 210 to a computing device 202 (or CV operation device) that is to manage post-printing operations of the 3D print 220.
- FIG. 3 is a block diagram illustrating an example of a computer-readable medium 323 encoded with instructions for CV model generation.
- the computer-readable medium 323 may be a non-transitory, tangible computer-readable medium 323.
- the computer-readable medium 323 may be, for example, RAM, EEPROM, a storage device, an optical disc, and the like.
- the computer-readable medium 323 may be volatile and/or non-volatile memory, such as DRAM, EEPROM, MRAM, PCRAM, memristor, flash memory, and the like.
- the computer-readable medium 323 described in FIG. 3 may be an example of memory for a computing device 102 described in FIG. 1 or a server 212 described in FIG. 2.
- the computer-readable medium 323 may be transferred and/or loaded to memory or memories of the computing device.
- the computer-readable medium 323 may include code (e.g., data and/or executable code or instructions).
- the computer-readable medium 323 may include 3D rendering creation instructions 324, 3D rendering labeling instructions 326, and CV model training instructions 328.
- the 3D rendering creation instructions 324 may be instructions that when executed cause the processor of the computing device to create a plurality of three-dimensional (3D) renderings of a 3D print using a 3D printer file.
- the processor may create the plurality of 3D renderings of the 3D print from different visual perspectives.
- the processor may create the plurality of 3D renderings of the 3D print with simulated printing artifacts.
- the processor may create the plurality of 3D renderings of the 3D print with simulated defects. In some examples, this may be accomplished as described in FIG. 1.
- the 3D rendering labeling instructions 326 may be instructions that when executed cause the processor of the computing device to label the plurality of 3D renderings with labels. For example, as the processor creates a 3D rendering with a particular property, the processor may tag the 3D rendering with a label or multiple labels.
- a label may be a property of the 3D rendering.
- a property may include orientation information for a rendered 3D object, object type, object name, surface effects, surface conditions, simulated defects, simulated printing artifacts, and/or lighting conditions. In some examples, this may be accomplished as described in FIG. 1.
- the CV model training instructions 328 may cause the processor to perform machine learning (ML) training of a computer vision (CV) model for the 3D print based on the plurality of 3D renderings and labels.
- the processor may provide the plurality of 3D renderings and labels to a ML network to train the CV model.
- the ML network may train the CV model to detect visual patterns derived from the 3D renderings.
- the labels may further guide the training of the CV model by the ML network. In some examples, this may be accomplished as described in FIG. 1.
- FIG. 4 is a flow diagram illustrating an example method 400 for CV model generation.
- the method 400 and/or an element or elements of the method 400 may be implemented by the computing device 102 of FIG. 1 or the server 212 of FIG. 2.
- the method 400 may be described with reference to the computing device 102.
- the computing device 102 may receive a 3D printer file 106 for a 3D print 120.
- the 3D printer file 106 may include a 3D model of an object that is to be printed by the 3D printer 118.
- the computing device 102 may create a plurality of 3D renderings of the 3D print 120 using the 3D printer file 106.
- the computing device 102 may generate a visual representation of a 3D object in the 3D print 120.
- the computing device 102 may create the plurality of 3D renderings of the 3D print from different visual perspectives.
- the computing device 102 may create multiple 3D renderings of a 3D object in different orientations with respect to a simulated camera view.
- the CV model 110 may be used for CV-based sorting of the 3D print 120.
- the computing device 102 may create the plurality of 3D renderings of the 3D print with simulated defects.
- the CV model 110 may be used for CV-based defect detection of the 3D print 120.
- the computing device 102 may create the plurality of 3D renderings of the 3D print with simulated printing artifacts.
- the CV model 110 may be used for CV-based cleaning of the 3D print 120.
- the computing device 102 may label the plurality of 3D renderings with labels. For example, as the computing device 102 creates a 3D rendering with a particular property, the processor may tag the 3D rendering with a label of that property.
- a property may include orientation information for a rendered 3D object, object type, object name, surface effects, surface conditions, simulated defects, simulated printing artifacts, and/or lighting conditions.
- the computing device 102 may perform machine learning (ML) training of the CV model for the 3D print 120 based on the plurality of 3D renderings and labels.
- the computing device 102 may include a ML network (e.g., CNN).
- the plurality of 3D renderings may be provided to the ML network to train the CV model 110 based on the visual information included in the 3D renderings.
- the labels may further guide the training of the CV model 110.
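The stages of method 400 can be sketched as a single orchestration function: create renderings from the 3D printer file, label each rendering, then run ML training on the rendering/label pairs. The three callables are placeholders for the stages described above, not a prescribed API.

```python
def generate_cv_model(printer_file, render_fn, label_fn, train_fn):
    """Sketch of method 400. `render_fn` creates the plurality of 3D
    renderings from the printer file, `label_fn` produces the property
    labels for one rendering, and `train_fn` performs ML training on
    the rendering/label pairs to produce the CV model."""
    renderings = render_fn(printer_file)        # e.g., different perspectives
    labels = [label_fn(r) for r in renderings]  # per-rendering properties
    return train_fn(renderings, labels)         # trained CV model
```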
Abstract
Examples of computing devices for computer vision (CV) model generation are described. In some examples, a computing device may include a processor to receive a three-dimensional (3D) printer file for a 3D print. In some examples, the processor may generate a CV model for the 3D print using machine learning (ML) training based on the 3D printer file.
Description
COMPUTER VISION MODEL GENERATION
BACKGROUND
[0001] Three-dimensional (3D) solid parts may be produced from a digital model using additive manufacturing. Additive manufacturing may be used in rapid prototyping, mold generation, mold master generation, and short-run manufacturing. Additive manufacturing involves the application of successive layers of build material. This is unlike some machining processes that often remove material to create the final part. In some additive manufacturing techniques, the build material may be cured or fused.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] Various examples will be described below by referring to the following figures.
[0003] FIG. 1 illustrates an example of a computing device for computer vision (CV) model generation;
[0004] FIG. 2 illustrates an example of a server and a computing device for CV model generation;
[0005] FIG. 3 is a block diagram illustrating an example of a computer-readable medium encoded with instructions for CV model generation; and
[0006] FIG. 4 is a flow diagram illustrating an example method for CV model generation.
[0007] Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements. The figures are not necessarily to scale, and the size of some parts may be exaggerated to more clearly
illustrate the example shown. Moreover, the drawings provide examples and/or implementations in accordance with the description; however, the description is not limited to the examples and/or implementations provided in the drawings.
DETAILED DESCRIPTION
[0008] The examples described herein relate to three-dimensional (3D) printing. 3D printing is an example of additive manufacturing. In 3D printing, 3D solid objects (e.g., parts) may be produced from a digital model (referred to herein as a 3D printer file) using an additive printing process. In some examples, 3D printing may be used in rapid prototyping, mold generation, mold master generation, and short-run manufacturing. Some 3D-printing techniques are considered additive processes because they involve the application of successive layers of material. Moreover, some 3D printing systems may concurrently build multiple 3D objects in the build volume as part of a build operation.
[0009] While plastics (e.g., polymers) may be utilized as a way to illustrate some of the approaches described herein, the techniques described herein may be utilized in various examples of additive manufacturing. For instance, some examples may be utilized for plastics, polymers, semi-crystalline materials, metals, biological materials, etc. Some additive manufacturing techniques may be powder-based and driven by powder fusion. Some examples of the approaches described herein may be applied to area-based powder bed fusion-based additive manufacturing, such as Stereolithography (SLA), Multi Jet Fusion (MJF), Metal Jet Fusion, Selective Laser Melting (SLM), Selective Laser Sintering (SLS), liquid resin-based printing, etc.
[0010] Upon completion of a 3D printing process, a post-printing operation may be performed on a resulting 3D object. In some examples, a 3D printer may produce a number of different types of 3D objects in one printing run. In an example of a post-printing operation, multiple 3D objects may be taken from the build volume and sorted. In other examples, 3D objects may be binned (e.g., based on sorting). As used herein, binning refers to the placement of a 3D
object in a particular location. In other examples, the 3D objects may be inspected for defects and/or to determine that the 3D parts meet quality control standards. In yet other examples of a post-printing operation, the 3D objects may be cleaned of residue (e.g., powder caking) from the 3D printing process.
[0011] In some cases, the post-printing operations may be performed by a human. In this case, the post-printing operations may be labor intensive and slow. For example, a person may unpack the 3D parts from the 3D printer, clean the 3D parts, examine the 3D parts for defects, and sort the 3D parts.
[0012] In some examples, computer vision (CV) may be used to perform post-printing operations. As used herein, CV refers to acquiring, processing, and responding to visual information by a computing device.
[0013] In an example, a CV operation device may be an electronic device that uses CV to perform an operation. In some examples, a CV operation device may be an electronic device that includes circuitry to perform an operation based on visual information. In an example, a CV operation device may be a mechanism that includes an image sensor component (e.g., camera) for capturing visual information about the output of a 3D printing process. In some examples, a CV operation device may also include an actuator component (e.g., motor, solenoid, etc.) for performing an operation based on the visual information. In some examples, the CV operation device may be a tablet computer with a camera and a display. In other examples, the CV operation device may be augmented reality (AR) glasses with a camera. In yet other examples, the CV operation device may be a robot with a camera and an actuator mechanism.
[0014] In some examples, the CV operation device may perform post-printing operations on the output of a 3D printer. For example, a CV operation device may sort 3D parts based on visual information. In another example, a CV operation device may bin 3D parts based on visual information. In another example, a CV operation device may detect defects in 3D parts based on visual information. In yet another example, a CV operation device may clean 3D parts based on visual information. In some examples, the CV operation device (e.g., a mechanical device, robot, etc.) may autonomously perform a CV-based post-
printing operation. In other examples, the CV operation device may perform a CV-based post-printing operation with the assistance of a human operator.
[0015] The CV operations described herein may use a CV model. As used herein, a model is a file or data structure that has been trained using machine learning (ML) to detect patterns in information. In the case of a CV model, the CV model may be trained using ML to detect patterns in visual information.
[0016] Examples are described herein for generating a CV model that is trained for a post-printing operation that is to be performed on a 3D print. As used herein, a 3D print includes the output of a 3D printing process. For example, a 3D print may include a 3D object, multiple 3D objects and/or printing artifacts (e.g., residue, powder caking, etc.) from the 3D printing process. In the described examples, a 3D printer file may be used to generate the CV model.
[0017] FIG. 1 illustrates an example of a computing device 102 for computer vision (CV) model generation. Examples of the computing device 102 may include workstations, controllers, laptop computers, desktop computers, servers, tablet devices, cellular phones, smartphones, wireless communication devices, etc.
[0018] In some examples, the computing device 102 may include a processor 104. The processor 104 may be any of a microcontroller (e.g., embedded controller), a central processing unit (CPU), a semiconductor-based microprocessor, a graphics processing unit (GPU), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a circuit, a chipset, and/or another hardware device suitable for retrieval and execution of instructions stored in a memory 116. The processor 104 may fetch, decode, and/or execute instructions stored in memory 116. While a single processor 104 is shown in FIG. 1, in other examples, the processor 104 may include multiple processors (e.g., a CPU and a GPU).
[0019] The memory 116 of the computing device 102 may be any electronic, magnetic, optical, and/or other physical storage device that contains or stores electronic information (e.g., instructions and/or data). The memory 116 may be, for example, Random Access Memory (RAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Dynamic Random Access
Memory (DRAM), Synchronous DRAM (SDRAM), magnetoresistive random-access memory (MRAM), phase change RAM (PCRAM), non-volatile random-access memory (NVRAM), memristor, flash memory, a storage device, and/or an optical disc, etc. In some examples, the memory 116 may be a non-transitory tangible computer-readable storage medium, where the term “non-transitory” does not encompass transitory propagating signals. The processor 104 may be in electronic communication with the memory 116.
[0020] The computing device 102 may communicate with a 3D printer 118. In some examples, the computing device 102 may control the 3D printer 118. For example, the computing device 102 may send instructions to a 3D printer 118 to generate a 3D print 120. In this case, the computing device 102 may be a controller for the 3D printer 118 and may be referred to as the compute node of the 3D printer 118.
[0021] In other examples, the computing device 102 may communicate (e.g., send and/or receive information) with the 3D printer 118, but the 3D printer 118 may include a controller for processing instructions for generating the 3D print 120. In yet other examples, the computing device 102 may not communicate with the 3D printer 118.
[0022] The computing device 102 may receive a 3D printer file 106 for a 3D print 120. The 3D printer file 106 may include instructions for creating the 3D print 120. For example, the 3D printer file 106 may include a 3D model of an object that is to be printed by the 3D printer 118. In some examples, the 3D printer file 106 may include geometry, color, texture, and/or materials for the 3D print 120. The 3D printer file 106 may further include instructions for how the 3D printer 118 is to generate the 3D print 120. For example, the 3D printer file 106 may include sequences for applying a print material to build the 3D print 120. Some examples of 3D printer file formats include STEP, STL, OBJ, AMF, 3MF, VRML, X3D, FBX, and IGES.
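For illustration only (the examples described herein do not prescribe an implementation), the binary STL format mentioned in the list above has a fixed layout: an 80-byte header, a 32-bit little-endian triangle count, then 50 bytes per triangle (a normal vector, three vertices, and an attribute count). A minimal reader might look like the following sketch; the function name is hypothetical:

```python
import struct

def read_binary_stl(data: bytes):
    """Parse binary STL bytes into a header and a list of triangles.

    Binary STL layout: 80-byte header, uint32 triangle count, then
    50 bytes per triangle (12 little-endian floats + uint16 attribute count).
    """
    header = data[:80]
    (count,) = struct.unpack_from("<I", data, 80)
    triangles = []
    offset = 84
    for _ in range(count):
        values = struct.unpack_from("<12fH", data, offset)
        normal = values[0:3]
        vertices = [values[3:6], values[6:9], values[9:12]]
        triangles.append({"normal": normal, "vertices": vertices})
        offset += 50
    return header, triangles

# Build a one-triangle STL in memory to exercise the parser.
tri = struct.pack("<12fH", 0, 0, 1,  0, 0, 0,  1, 0, 0,  0, 1, 0,  0)
blob = b"\x00" * 80 + struct.pack("<I", 1) + tri
header, tris = read_binary_stl(blob)
print(len(tris))               # 1
print(tris[0]["vertices"][1])  # (1.0, 0.0, 0.0)
```

Other formats in the list (e.g., 3MF, AMF) are XML- or archive-based and would require different parsing.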
[0023] In some examples, the processor 104 of the computing device 102 may receive the 3D printer file 106 from another (e.g., remote) computing device (not shown). For example, the 3D printer file 106 may be downloaded to the computing device 102 for printing by the 3D printer 118. In other examples, the
computing device 102 may receive the 3D printer file 106 through creation of the 3D printer file 106 on the computing device 102. For example, a user may design a 3D part using a program on the computing device 102, which creates a 3D printer file 106. In yet other examples, the 3D printer file 106 may be stored in memory 116 on the computing device 102.
[0024] In some examples, generating a computer vision (CV) model 110 for the 3D print 120 may include training by a machine learning (ML) network based on the 3D printer file 106. For example, the processor 104 may include a CV model generator 108 to train a CV model 110 using the 3D printer file 106 for input to a ML network. The CV model 110 may be used by a CV operation device 122 to detect patterns in visual information of the 3D print 120.
[0025] Examples of the machine learning networks described herein may include neural networks, deep neural networks, spatio-temporal neural networks, etc. For instance, model data may define a node or nodes, a connection or connections between nodes, a network layer or network layers, and/or a neural network or neural networks. Examples of neural networks include convolutional neural networks (CNNs) (e.g., basic CNN, deconvolutional neural network, inception module, residual neural network, etc.) and recurrent neural networks (RNNs) (e.g., basic RNN, multi-layer RNN, bi-directional RNN, fused RNN, clockwork RNN, etc.). Some approaches may utilize a variant or variants of RNN (e.g., Long Short Term Memory Unit (LSTM), peephole LSTM, no input gate (NIG), no forget gate (NFG), no output gate (NOG), no input activation function (NIAF), no output activation function (NOAF), no peepholes (NP), coupled input and forget gate (CIFG), full gate recurrence (FGR), gated recurrent unit (GRU), etc.). Different depths of a neural network or neural networks may be utilized.
[0026] In some examples, ML training of the CV model 110 may be based on simulated visual information created from the 3D printer file 106. For example, a 3D rendering may be created from the 3D printer file 106. As used herein, a 3D rendering includes a generated image of a 3D object from a particular perspective. In some examples, a 3D rendering may include the geometry of a
3D object, surface details, color, texture, a simulated focal point, a distance from a simulated camera, etc.
[0027] A plurality of different 3D renderings may be generated to illustrate different aspects of a 3D object. For example, the processor 104 may generate multiple 3D renderings showing the 3D object in different orientations. In other examples, the processor 104 may generate multiple 3D renderings showing the 3D object with different lighting conditions. In other examples, the processor 104 may generate multiple 3D renderings showing the 3D object with different surface conditions. In other examples, the processor 104 may generate multiple 3D renderings showing the 3D object with different printing artifacts (e.g., powder caking) of the printing process.
[0028] In some examples, the processor 104 may use the 3D printer file 106 to generate multiple 3D renderings that include different 3D objects. For example, a 3D print 120 may include multiple 3D objects. In some examples, the different 3D objects may have different geometry. In this case, the processor 104 may generate a plurality of 3D renderings for the different 3D objects. In some examples, the multiple 3D renderings may show the different 3D objects in different orientations. Therefore, the processor 104 may create a plurality of 3D renderings of the 3D print 120 (e.g., 3D objects included in the 3D print 120) from different visual perspectives. For example, the processor 104 may generate multiple 3D renderings of a 3D object included in the 3D printer file 106 positioned in different orientations with respect to a simulated camera perspective.
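As a hedged sketch (not the implementation described by the examples), rendering a 3D object "in different orientations with respect to a simulated camera perspective" amounts to rotating the object's vertices and projecting them through a simulated pinhole camera. The helper names and camera parameters below are illustrative assumptions:

```python
import math

def rotate_y(vertex, angle):
    """Rotate a 3D point about the y-axis by `angle` radians."""
    x, y, z = vertex
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def project(vertex, focal=2.0, camera_z=5.0):
    """Pinhole projection onto the image plane of a camera on the +z axis."""
    x, y, z = vertex
    depth = camera_z - z          # distance from the simulated camera
    return (focal * x / depth, focal * y / depth)

# One mesh vertex rendered from several simulated camera perspectives:
vertex = (1.0, 0.0, 0.0)
views = []
for step in range(4):             # 0, 90, 180, 270 degrees
    angle = step * math.pi / 2
    views.append(project(rotate_y(vertex, angle)))

print(views[0])  # (0.4, 0.0)
```

A full renderer would apply this per vertex, rasterize the triangles, and add shading for the lighting conditions mentioned above.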
[0029] In some examples, the processor 104 may create the plurality of 3D renderings of the 3D print 120 with simulated defects. For example, the processor 104 may use the 3D printer file 106 to generate multiple 3D renderings of a 3D object in the 3D print 120 that includes a defect. For example, a 3D part of a particular geometry may be likely to have a defect. A defect or multiple defects may be simulated (e.g., generated) in a plurality of 3D renderings. In some examples, the multiple 3D renderings may show a 3D object with a defect in different orientations. In some examples, the simulated defects may be based on knowledge of the type of 3D printer 118
and/or 3D printing process. For example, a particular 3D printing process or particular 3D printer 118 may be likely to result in a particular defect. The particular defect may be simulated based on the 3D printing process and/or 3D printing device.
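One simple way to simulate a surface defect, shown here purely as an illustrative toy (the examples do not prescribe a method), is to apply small deterministic random displacements to mesh vertices before rendering:

```python
import random

def simulate_surface_defect(vertices, magnitude=0.05, seed=0):
    """Return a copy of `vertices` with small random displacements.

    A toy stand-in for defect simulation: a real system would model
    printer- or process-specific failure modes instead of noise.
    """
    rng = random.Random(seed)     # seeded, so renderings are reproducible
    defective = []
    for x, y, z in vertices:
        defective.append((
            x + rng.uniform(-magnitude, magnitude),
            y + rng.uniform(-magnitude, magnitude),
            z + rng.uniform(-magnitude, magnitude),
        ))
    return defective

clean = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
defect = simulate_surface_defect(clean)
print(defect[0] != clean[0])  # True
```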
[0030] In some examples, the processor 104 may label the plurality of 3D renderings with labels. For example, as the processor 104 creates a 3D rendering with a particular property, the processor 104 may include the property as a label to the 3D rendering. For instance, the property may include orientation information for a rendered 3D object. Other properties may include object type, object name, surface effects, surface conditions, simulated defects, simulated printing artifacts, and/or lighting conditions.
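The labeling step above might be represented by a record attached to each rendering; the schema below is a hypothetical sketch, not a format defined by the examples:

```python
from dataclasses import dataclass, field

@dataclass
class RenderingLabel:
    """Labels attached to one generated 3D rendering (illustrative schema)."""
    object_name: str
    orientation_deg: tuple                       # rotation applied when rendering
    lighting: str = "default"
    simulated_defects: list = field(default_factory=list)
    simulated_artifacts: list = field(default_factory=list)

label = RenderingLabel(
    object_name="bracket",
    orientation_deg=(0, 90, 0),
    simulated_defects=["surface_pit"],
)
print(label.object_name, label.orientation_deg)
```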
[0031] The CV model 110 may be generated by performing ML training based on the plurality of 3D renderings. In some examples, the ML training may also include providing labels of the 3D renderings to the ML model. In some examples, the labels may aid training a CV model 110 for different properties (e.g., object orientation, object type, object name, surface effects, surface conditions, simulated defects, simulated printing artifacts, lighting conditions, etc.).
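To make the training step concrete, the toy below fits a linear classifier on two hand-picked rendering features. It stands in for the ML network described above (a real CV model 110 would be, e.g., a CNN trained on full rendered images), and the feature names are assumptions:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Train a toy linear classifier on per-rendering feature vectors."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):        # y is 0 or 1
            score = sum(w * v for w, v in zip(weights, x)) + bias
            err = y - (1 if score > 0 else 0)
            weights = [w + lr * err * v for w, v in zip(weights, x)]
            bias += lr * err
    return weights, bias

# Toy features: [mean brightness, edge roughness]; label 1 = defect.
samples = [[0.2, 0.9], [0.3, 0.8], [0.8, 0.1], [0.9, 0.2]]
labels = [1, 1, 0, 0]
w, b = train_perceptron(samples, labels)
predict = lambda x: 1 if sum(wi * v for wi, v in zip(w, x)) + b > 0 else 0
print(predict([0.25, 0.85]), predict([0.85, 0.15]))  # 1 0
```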
[0032] The CV model 110 may be used for a post-printing operation. For example, the CV model generator 108 may generate the CV model 110 to perform an operation on the 3D print 120 using visual information of the 3D print 120. In some examples, the computing device 102 may send the CV model 110 to a CV operation device 122.
[0033] In some examples, the CV operation device 122 may be a mechanism that includes an image sensor component (e.g., camera) for capturing visual information about the 3D print 120. In some examples, the CV operation device 122 may also include an actuator component (e.g., motor, solenoid, pneumatic device, etc.) for performing a physical operation on the 3D print 120 based on the visual information.
[0034] In some examples, the post-printing operation performed by the CV operation device 122 may include sorting and/or binning of the 3D print 120. For example, the 3D print 120 may include one or multiple 3D objects. The CV
operation device 122 may sort the one or multiple 3D objects of the 3D print 120 based on visually observable properties of the 3D objects. For example, the CV operation device 122 may sort a printed 3D object based on shape, size, color, texture or other observable property of the 3D object. Therefore, the CV operation device 122 may use the CV model 110 for CV-based sorting of the 3D print 120.
[0035] In some examples, to facilitate identification of a 3D object in the 3D print 120, the processor 104 may generate the CV model 110 for orientation detection of the 3D print 120. As used herein, orientation detection may include determining the position in space of a 3D object or multiple 3D objects within a 3D print 120. For example, after printing, the 3D print 120 may have a particular orientation that is observable by the CV operation device 122. In an example, the 3D print 120 may be removed from the 3D printer 118 and placed on a surface (e.g., a bin, table, etc.) for sorting, binning or other post-printing operation. In some examples, the CV model 110 may be trained for detecting the orientation of a 3D object in the 3D print 120 using the plurality of 3D renderings of the 3D object in different orientations.
[0036] In some examples, the post-printing operation performed by the CV operation device 122 may include detecting defects in the 3D print 120. For example, the CV operation device 122 may observe the 3D print 120 to detect whether a 3D object includes a defect. In some examples, the CV operation device 122 may remove the defective 3D object from further post-printing operations. In this case, the processor 104 may generate the CV model 110 for defect detection of the 3D print 120. The CV operation device 122 may use this CV model 110 to detect defects in the 3D print 120.
[0037] In some examples, the post-printing operation performed by the CV operation device 122 may include cleaning printing artifacts (e.g., powder caking, printing residue, etc.) from the 3D print 120. In this case, the CV operation device 122 may use the CV model 110 for CV-based cleaning of the 3D print 120. In this example, the processor 104 may generate the CV model 110 for printing artifact detection of the 3D print 120. For example, the processor 104 may perform ML training of the CV model 110 using a plurality of 3D
renderings with simulated printing artifacts. Therefore, the processor 104 may create the plurality of 3D renderings of the 3D print 120 (e.g., a 3D object in the 3D print 120) with simulated printing artifacts. In some examples, the simulated printing artifacts may be generated based on the type of 3D printing performed by the 3D printer 118 and/or known performance characteristics of the 3D printer 118. The CV operation device 122 may use this CV model 110 to detect and clean printing artifacts in the 3D print 120.
[0038] In some examples, the CV operation device 122 may include a tablet computing device and/or augmented reality glasses used by a human user. For example, the CV operation device 122 may include a camera for observing the 3D print 120. The CV operation device 122 may process the visual information captured by the camera based on the CV model 110 to present information to the user for a post-printing operation. For example, the CV operation device 122 (e.g., tablet, AR glasses) may detect 3D objects, defects and/or printing artifacts in the 3D print 120. The CV operation device 122 (e.g., tablet, AR glasses) may present information to the user about what the user is to do with the 3D print 120. For example, the CV operation device 122 may highlight (e.g., present information on a display) observed 3D objects for sorting, binning, cleaning and/or removal due to defects.
[0039] In some examples, the processor 104 may generate the CV model 110 during printing of the 3D print 120. For example, the printing process may take some time to complete. In the case of some 3D printing technologies, and based on the size of the 3D print 120, printing may take many minutes or hours. During the time that the 3D printer 118 is printing, the processor 104 may generate the CV model 110. This approach may reserve computing resources for when they are to be used (e.g., upon completion of the 3D print 120).
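The idea of generating the CV model while the 3D printer is running can be sketched with two concurrent tasks; the sleep durations and function names below are placeholders, not measurements of a real print:

```python
import threading
import time

def print_job(done: threading.Event):
    """Simulated 3D print run (placeholder for the real printer job)."""
    time.sleep(0.2)
    done.set()

def generate_cv_model():
    """Simulated CV model generation (placeholder for ML training)."""
    time.sleep(0.05)
    return {"trained": True}

printed = threading.Event()
printer = threading.Thread(target=print_job, args=(printed,))
printer.start()

model = generate_cv_model()      # trains while the printer is running
printer.join()                   # model is ready by the time the print finishes
print(model, printed.is_set())
```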
[0040] In other examples, the processor 104 may generate the CV model before the 3D printing process begins. For example, in preparation for printing the 3D print 120, the computing device 102 may be instructed to generate the CV model 110.
[0041] In yet other examples, the processor 104 may generate the CV model after the 3D printing process completes. For example, a 3D print 120 may be
completed and, at some point in time, a post-printing operation on the 3D print 120 may be desired. In this case, the 3D printer file 106 used to print the 3D print 120 may be loaded to generate the CV model 110 as described above.
[0042] In some examples, the CV model 110 may be stored in memory 116 for future printing of the 3D print 120 based on the 3D printer file 106. For example, if a future printing of the 3D print 120 is to be performed, the saved CV model 110 may be used for a CV-based post-printing operation. In some examples, the processor 104 may store (e.g., cache) the CV model 110 in memory 116. In other examples, the processor 104 may store the CV model 110 in a database of CV models.
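A minimal sketch of the caching idea, assuming the cache key is a digest of the 3D printer file's contents (the `train_model` stand-in and the file contents are hypothetical):

```python
import hashlib

_model_cache = {}

def cv_model_for(printer_file: bytes):
    """Return a cached CV model for this exact printer file, training once.

    The key is a digest of the file contents, so reprinting the same
    3D printer file reuses the stored model instead of retraining.
    """
    key = hashlib.sha256(printer_file).hexdigest()
    if key not in _model_cache:
        _model_cache[key] = train_model(printer_file)
    return _model_cache[key]

calls = []
def train_model(printer_file):
    calls.append(printer_file)    # count "expensive" training runs
    return {"model_for": hashlib.sha256(printer_file).hexdigest()[:8]}

file_a = b"example printer file contents"
m1 = cv_model_for(file_a)
m2 = cv_model_for(file_a)         # cache hit: no retraining
print(m1 is m2, len(calls))       # True 1
```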
[0043] In some examples, the computing device 102 may send the CV model 110 to a remote computing device (e.g., a server). For example, upon generating the CV model 110, the computing device 102 may send the CV model 110 to the remote computing device for storage. In some examples, the CV model 110 may be shared as part of a cloud or edge database. The computing device 102 (or the server) may also send the CV model 110 to other computing devices that control 3D printers, thus reducing the computing load for repeated 3D printing of a particular 3D print 120.
[0044] The examples described herein may increase the speed of post-printing operations. For example, the CV-based post-printing operations may be fully automated by a CV operation device 122. In other examples, human operators may be presented with CV-based information to assist in post-printing operations. The described examples may also improve the accuracy of the post-printing operations.
[0045] FIG. 2 illustrates an example of a server 212 and a computing device 202 for computer vision (CV) model generation. In some examples, the server 212 may be a computing device as described in FIG. 1. For example, the server 212 may include a processor 204 and memory 216 as described in FIG. 1.
[0046] In some examples, the server 212 may include a communication component 214 for communicating with a computing device 202. The communication component 214 may facilitate network communication. For example, the communication component 214 may be a network interface device
to establish wireless or wired communication on a network. In some examples, the server 212 may be on the same local network as the computing device 202. In other examples, the server 212 may be on a different local network than the computing device 202. In some examples, the server 212 may communicate with the computing device 202 over an internet connection.
[0047] In some examples, the server 212 may manage a number of computing devices 202 for 3D printing. The server 212 may be referred to as a build manager or a universal build manager (UBM). For example, the server 212 may communicate with multiple computing devices 202 that control printing by 3D printers 218. The computing device 202 and 3D printer 218 may be implemented as described in FIG. 1. The server 212 may assign 3D printing jobs to the computing devices 202.
[0048] The processor 204 of the server 212 may receive a 3D printer file 206 for a 3D print 220. In some examples, the 3D printer file 206 may be received from a remote computing device, may be generated on the server 212, and/or may be stored in memory 216 on the server 212. For example, the server 212 may include a database of 3D objects for printing. In this case, the 3D printer file 206 may be for a 3D print 220 that is to be printed by a 3D printer 218 controlled by another computing device 202.
[0049] The processor 204 may generate a CV model 210 for the 3D print 220 using ML training based on the 3D printer file 206. This may be accomplished as described in FIG. 1. For example, the processor 204 may include a CV model generator 208 to train a CV model 210 using the 3D printer file 206 for input to a ML network.
[0050] In some examples, the CV model 210 may be used for CV-based sorting of the 3D print 220. In this case, the processor 204 may generate (e.g., train) the CV model 210 for orientation detection and/or object detection of the 3D print 220. In other examples, the CV model 210 may be used for CV-based defect detection of the 3D print 220. In this case, the processor 204 may generate (e.g., train) the CV model 210 for defect detection of the 3D print 220. In yet other examples, the CV model 210 may be used for CV-based cleaning of
the 3D print 220. In this case, the processor 204 may generate (e.g., train) the CV model 210 for printing artifact detection of the 3D print 220.
[0051] In some examples, the server 212 may allocate generating the CV model 210 to a plurality of computing devices 202. For example, the server 212 may assign different computing devices 202 to train different aspects of the CV model 210. In another example, the server 212 may assign different computing devices 202 to generate 3D renderings that are used for the ML training of the CV model 210. In this example, the plurality of computing devices 202 may return their assigned parts of the CV model generation to the server 212. The server 212 may combine the received parts into the CV model 210.
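The allocation described above can be sketched by splitting a rendering sweep across workers and recombining the results. Threads stand in for the remote computing devices 202 here, and the function names are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def render_orientation(angle_deg):
    """Stand-in for one computing device rendering one orientation."""
    return {"angle": angle_deg, "image": f"render@{angle_deg}"}

# The server splits the orientation sweep across workers and recombines
# the returned parts; map() preserves the submission order.
angles = range(0, 360, 45)
with ThreadPoolExecutor(max_workers=4) as pool:
    renderings = list(pool.map(render_orientation, angles))

print(len(renderings), renderings[0]["angle"])  # 8 0
```

In a distributed deployment, the executor would be replaced by remote calls to the computing devices 202, with the server 212 merging the returned renderings (or trained model parts) into the CV model 210.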
[0052] Upon generating the CV model 210, the server 212 may send the CV model 210 to a computing device 202. For example, the server 212 may send the CV model 210 to a computing device 202 that controls the 3D printer 218 for a 3D print 220. The computing device 202 may forward the CV model 210 to a CV operation device for a post-printing operation. In another example, the server 212 may send the CV model 210 directly to the CV operation device.
[0053] In some examples, the server 212 may save the CV model 210 in memory 216 for a future printing of the 3D print 220 based on the 3D printer file 206. For example, if the server 212 assigns a future printing of the 3D print 220 using the 3D printer file 206, the server 212 may provide the generated CV model 210 to a computing device 202 (or CV operation device) that is to manage post-printing operations of the 3D print 220.
[0054] FIG. 3 is a block diagram illustrating an example of a computer-readable medium 323 encoded with instructions for CV model generation. The computer-readable medium 323 may be a non-transitory, tangible computer-readable medium 323. The computer-readable medium 323 may be, for example, RAM, EEPROM, a storage device, an optical disc, and the like. In some examples, the computer-readable medium 323 may be volatile and/or non-volatile memory, such as DRAM, EEPROM, MRAM, PCRAM, memristor, flash memory, and the like. In some examples, the computer-readable medium 323 described in FIG. 3 may be an example of memory for a computing device 102 described in FIG. 1 or a server 212 described in FIG. 2, any of which may
be referred to generally as a “computing device” in FIG. 3. In some examples, code (e.g., data and/or executable code or instructions) of the computer-readable medium 323 may be transferred and/or loaded to memory or memories of the computing device.
[0055] The computer-readable medium 323 may include code (e.g., data and/or executable code or instructions). For example, the computer-readable medium 323 may include 3D rendering creation instructions 324, 3D rendering labeling instructions 326, and CV model training instructions 328.
[0056] In some examples, the 3D rendering creation instructions 324 may be instructions that when executed cause the processor of the computing device to create a plurality of three-dimensional (3D) renderings of a 3D print using a 3D printer file. For example, the processor may create the plurality of 3D renderings of the 3D print from different visual perspectives. In other examples, the processor may create the plurality of 3D renderings of the 3D print with simulated printing artifacts. In yet other examples, the processor may create the plurality of 3D renderings of the 3D print with simulated defects. In some examples, this may be accomplished as described in FIG. 1.
[0057] In some examples, the 3D rendering labeling instructions 326 may be instructions that when executed cause the processor of the computing device to label the plurality of 3D renderings with labels. For example, as the processor creates a 3D rendering with a particular property, the processor may tag the 3D rendering with a label or multiple labels. In some examples, a label may be a property of the 3D rendering. In some examples, a property may include orientation information for a rendered 3D object, object type, object name, surface effects, surface conditions, simulated defects, simulated printing artifacts, and/or lighting conditions. In some examples, this may be accomplished as described in FIG. 1.
[0058] In some examples, the CV model training instructions 328 may cause the processor to perform machine learning (ML) training of a computer vision (CV) model for the 3D print based on the plurality of 3D renderings and labels. For example, the processor may provide the plurality of 3D renderings and labels to a ML network to train the CV model. The ML network may train the CV
model to detect visual patterns derived from the 3D renderings. The labels may further guide the training of the CV model by the ML network. In some examples, this may be accomplished as described in FIG. 1.
[0059] FIG. 4 is a flow diagram illustrating an example method 400 for CV model generation. The method 400 and/or an element or elements of the method 400 may be implemented by the computing device 102 of FIG. 1 or the server 212 of FIG. 2. The method 400 may be described with reference to the computing device 102.
[0060] At 402, the computing device 102 may receive a 3D printer file 106 for a 3D print 120. For example, the 3D printer file 106 may include a 3D model of an object that is to be printed by the 3D printer 118.
[0061] At 404, the computing device 102 may create a plurality of 3D renderings of the 3D print 120 using the 3D printer file 106. For example, the computing device 102 may generate a visual representation of a 3D object in the 3D print 120. In some examples, the computing device 102 may create the plurality of 3D renderings of the 3D print from different visual perspectives. In other words, the computing device 102 may create multiple 3D renderings of a 3D object in different orientations with respect to a simulated camera view. In this case, the CV model 110 may be used for CV-based sorting of the 3D print 120.
[0062] In other examples, the computing device 102 may create the plurality of 3D renderings of the 3D print with simulated defects. In this case, the CV model 110 may be used for CV-based defect detection of the 3D print 120.
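One way to simulate a defect in a rendering is to corrupt a localized region of the rendered image. The sketch below is a hypothetical illustration rather than the patent's method: it zeroes out a square patch of a grayscale image to mimic a void.

```python
import numpy as np

def inject_defect(image, center, size, value=0.0):
    """Return a copy of `image` with a square patch overwritten,
    mimicking a localized print defect such as a void."""
    out = np.array(image, dtype=float, copy=True)
    r, c = center
    h = size // 2
    out[max(0, r - h):r + h + 1, max(0, c - h):c + h + 1] = value
    return out

clean = np.ones((8, 8))  # stand-in for a defect-free rendering
defective = inject_defect(clean, center=(4, 4), size=3)
```

A training set for defect detection would pair such defective renderings, labeled with the defect type, against their clean counterparts.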
[0063] In yet other examples, the computing device 102 may create the plurality of 3D renderings of the 3D print with simulated printing artifacts. In this case, the CV model 110 may be used for CV-based cleaning of the 3D print 120.
[0064] At 406, the computing device 102 may label the plurality of 3D renderings with labels. For example, as the computing device 102 creates a 3D rendering with a particular property, the processor may tag the 3D rendering with a label of that property. In some examples, a property may include orientation information for a rendered 3D object, object type, object name,
surface effects, surface conditions, simulated defects, simulated printing artifacts, and/or lighting conditions.
[0065] At 408, the computing device 102 may perform machine learning (ML) training of the CV model for the 3D print 120 based on the plurality of 3D renderings and labels. For example, the computing device 102 may include an ML network (e.g., a convolutional neural network (CNN)). The plurality of 3D renderings may be provided to the ML network to train the CV model 110 based on the visual information included in the 3D renderings. The labels may further guide the training of the CV model 110.
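The training step can be illustrated with a toy stand-in. The patent contemplates an ML network such as a CNN; the sketch below instead trains a simple logistic-regression classifier on synthetic "renderings" so the example stays self-contained. All data, shapes, and hyperparameters are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for flattened 8x8 renderings: class 0 is "clean",
# class 1 carries a bright band standing in for a simulated defect.
n = 100
clean = rng.normal(0.0, 0.1, size=(n, 64))
defective = rng.normal(0.0, 0.1, size=(n, 64))
defective[:, 20:24] += 1.0
X = np.vstack([clean, defective])
y = np.array([0] * n + [1] * n)

# Minimal gradient-descent training of a logistic-regression "CV model";
# the labels y guide the training, as in the text.
w, b, lr = np.zeros(64), 0.0, 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * X.T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = float(np.mean(pred == y))
```

The same loop shape (forward pass, loss gradient, parameter update) scales up to the CNN case, where a deep-learning framework would compute the gradients.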
[0066] It should be noted that while various examples of systems and methods are described herein, the disclosure should not be limited to the examples. Variations of the examples described herein may be implemented within the scope of the disclosure. For example, functions, aspects, or elements of the examples described herein may be omitted or combined.
Claims
1. A computing device, comprising: a processor to: receive a three-dimensional (3D) printer file for a 3D print; and generate a computer vision (CV) model for the 3D print using machine learning (ML) training based on the 3D printer file.
2. The computing device of claim 1, wherein the processor is to generate the CV model during printing of the 3D print.
3. The computing device of claim 1, wherein the processor is to generate the CV model for orientation detection of the 3D print.
4. The computing device of claim 1, wherein the processor is to generate the CV model for defect detection of the 3D print.
5. The computing device of claim 1, wherein the processor is to generate the CV model for printing artifact detection of the 3D print.
6. The computing device of claim 1, wherein the processor to generate the CV model comprises the processor to: create a plurality of 3D renderings of the 3D print using the 3D printer file; and perform the ML training of the CV model based on the plurality of 3D renderings.
7. A server, comprising: a processor to: receive a three-dimensional (3D) printer file for a 3D print; and generate a computer vision (CV) model for the 3D print using machine learning (ML) training based on the 3D printer file; and
a communication component to send the CV model to a computing device.
8. The server of claim 7, wherein the processor to generate the CV model comprises the processor to allocate generating the CV model to a plurality of computing devices.
9. The server of claim 7, further comprising a memory to save the CV model for a future printing of the 3D print based on the 3D printer file.
10. The server of claim 7, wherein a CV operation device is to use the CV model for CV-based sorting of the 3D print.
11. The server of claim 7, wherein a CV operation device is to use the CV model for CV-based cleaning of the 3D print.
12. A non-transitory tangible computer-readable medium comprising instructions that, when executed, cause a processor of a computing device to: create a plurality of three-dimensional (3D) renderings of a 3D print using a 3D printer file; label the plurality of 3D renderings with labels; and perform machine learning (ML) training of a computer vision (CV) model for the 3D print based on the plurality of 3D renderings and labels.
13. The non-transitory tangible computer-readable medium of claim 12, wherein the processor is to create the plurality of 3D renderings of the 3D print from different visual perspectives.
14. The non-transitory tangible computer-readable medium of claim 12, wherein the processor is to create the plurality of 3D renderings of the 3D print with simulated defects.
15. The non-transitory tangible computer-readable medium of claim 12, wherein the processor is to create the plurality of 3D renderings of the 3D print with simulated printing artifacts.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/US2020/057644 WO2022093207A1 (en) | 2020-10-28 | 2020-10-28 | Computer vision model generation |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2022093207A1 (en) | 2022-05-05 |
Family
ID=81383099
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2020/057644 Ceased WO2022093207A1 (en) | 2020-10-28 | 2020-10-28 | Computer vision model generation |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2022093207A1 (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20170293286A1 (en) * | 2016-01-06 | 2017-10-12 | Wiivv Wearables Inc. | Generating of 3d-printed custom wearables |
| US20180079125A1 (en) * | 2013-08-07 | 2018-03-22 | Massachusetts Institute Of Technology | Automatic process control of additive manufacturing device |
| US20190054700A1 (en) * | 2017-08-15 | 2019-02-21 | Cincinnati Incorporated | Machine learning for additive manufacturing |
| WO2019203851A1 (en) * | 2018-04-20 | 2019-10-24 | Hewlett-Packard Development Company, L.P. | Three-dimensional shape classification and retrieval using convolutional neural networks and majority vote |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20960122; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 20960122; Country of ref document: EP; Kind code of ref document: A1 |