
WO2025102077A1 - Machine-learning enabled DAC - Google Patents

Machine-learning enabled DAC

Info

Publication number
WO2025102077A1
Authority
WO
WIPO (PCT)
Prior art keywords
analog
machine
digital
learning
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2024/055542
Other languages
English (en)
Inventor
Luke Urban
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Magnolia Electronics Inc
Original Assignee
Magnolia Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Magnolia Electronics Inc filed Critical Magnolia Electronics Inc
Publication of WO2025102077A1

Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M1/00 Analogue/digital conversion; Digital/analogue conversion
    • H03M1/10 Calibration or testing
    • H03M1/1009 Calibration
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N3/065 Analogue means
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Definitions

  • the present disclosure relates generally to the field of digital-to-analog conversion and machine-learning, and more specifically to digital-to-analog converters (DACs) used for converting digital signals into corresponding analog outputs.
  • DACs digital-to-analog converters
  • Digital-to-analog converters are essential components in modern electronic devices, enabling digital signals to be converted into analog form for various applications, including audio playback, video display, and control systems.
  • Conventional DACs including those used in audio and video systems, typically function by employing various architectures such as resistor-ladder, current-steering, or sigma-delta modulation techniques. Each architecture has specific strengths and weaknesses, particularly concerning accuracy, speed, noise, and power efficiency.
  • resistor-ladder DACs offer a relatively simple structure and can achieve high linearity, but they often struggle with high-speed applications due to the resistive load.
  • Current-steering DACs are known for their speed and high-resolution potential, making them common in high-performance audio systems.
  • these DACs are sensitive to process variations and can consume significant power, especially at higher resolutions and frequencies.
  • sigma-delta modulation DACs employ oversampling and noise-shaping to achieve high resolution with reduced distortion, often in audio applications where precision is paramount.
  • sigma-delta DACs typically require higher power consumption and complex circuitry, which can limit their applicability in battery-powered or low-cost devices.
  • a machine-learning enabled DAC is an apparatus that can perform digital-to-analog conversion.
  • the system comprises a machine-learning unit, an array of analog feature generators, and a combining unit.
  • the machine-learning unit receives a digital signal and produces a set of instructions in a high-dimensional digital format based at least in part on the digital signal.
  • the array of analog feature generators receives the instructions, and each analog feature generator produces an analog feature based on its respective instruction.
  • the combining unit receives the analog features and produces an analog output signal based at least in part on the analog features.
  • the analog output signal represents the digital samples in the digital input signal.
  • the analog output signal can be processed downstream as if it were generated through standard DAC approaches.
  • the advantage of this approach comes from the ability of machine-learning to automatically translate between complex data formats. This ability allows for the digital-to-analog elements of the system to be built of parts that are easy to manufacture, but have complex or nonlinear characteristics.
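  • As a high-level illustration of this data flow, the sketch below models the three blocks numerically. It is illustrative only: the fixed linear map standing in for the calibrated machine-learning unit, the sinusoidal feature generators, and the summation combiner are hypothetical stand-ins, not the disclosed hardware.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8          # number of analog feature generators in the array
P = 4          # length of the digital input vector

# Hypothetical stand-in for the calibrated machine-learning unit:
# a fixed linear map from the digital signal to an N-dimensional instruction set.
W = rng.normal(size=(N, P))

def ml_unit(digital_signal):
    """Produce an N-dimensional instruction set from the digital input."""
    return W @ digital_signal

def feature_generator(instruction, i, t):
    """Each generator turns its scalar instruction into an analog feature;
    here, a gain-controlled sinusoid at a generator-specific frequency."""
    return instruction * np.sin(2.0 * np.pi * (i + 1) * t)

def combining_unit(features):
    """Simplest combiner: linear summation of the analog features."""
    return features.sum(axis=0)

t = np.linspace(0.0, 1.0, 1000)                  # "analog" time axis
digital_signal = np.array([0.5, -1.0, 0.25, 0.8])

instructions = ml_unit(digital_signal)            # instruction set
features = np.stack([feature_generator(instructions[i], i, t) for i in range(N)])
analog_output = combining_unit(features)          # analog output signal
```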
  • the machine-learning unit includes a machine-learning algorithm.
  • the machine-learning algorithm, having been previously calibrated, operates in inference mode, in which it receives a digital signal in the standard digital Shannon-Nyquist sample format.
  • the machine-learning enabled DAC can also be calibrated to perform signal processing techniques or feature extraction on the digital signal that result in a customized analog output that differs from the standard Shannon-Nyquist format.
  • the machine-learning enabled DAC can also be configured to accept multiple digital input signals and/or generate multiple analog output signals.
  • the device can also include a range of sensors to monitor the environment and behavior of the device, which can be fed as additional inputs into the device’s machine-learning unit.
  • the device can also incorporate onboard calibration and/or real-time dimensionality reduction units to optimize the performance of the device after manufacturing.
  • FIG. 3 illustrates an example of a multiplexed digital-to-analog converter that generates a respective analog feature, according to an implementation of the present disclosure.
  • FIG. 4 illustrates an example of an array of multiplexed digital-to-analog converters, each converter tuned to a respective carrier frequency, according to an implementation of the present disclosure.
  • FIG. 5 illustrates an example of a Monte Carlo digital-to-analog converter, which produces a pulse of voltage at a fixed amplitude, duration, and timing offset, based on a respective instruction, according to an implementation of the present disclosure.
  • FIG. 6 illustrates an example of an array of Monte Carlo digital-to-analog converters, each configured with a respective amplitude, duration, and timing offset, according to an implementation of the present disclosure.
  • FIG. 7 illustrates an example of a heterogeneous array of digital-to-analog converters, wherein at least one analog feature is produced with a method that differs from a method used to produce another analog feature.
  • FIG. 8 illustrates an example of a combining unit that linearly adds together a plurality of analog features to produce an analog output, according to an implementation of the present disclosure.
  • FIG. 10 illustrates a machine-learning unit configured to produce an instruction set based at least in part on an environmental sensor input, according to an implementation of the present disclosure.
  • FIG. 11 illustrates a combining unit configured to produce a second analog output signal, based at least in part on a plurality of analog features, according to an implementation of the present disclosure.
  • FIG. 12 illustrates a machine-learning unit configured to produce a digital output based at least in part on an input first digital signal, according to an implementation of the present disclosure.
  • FIG. 13 illustrates a machine-learning enabled digital-to-analog converter, wherein the instruction set from the machine-learning unit is stored in a storage system, and retrieved later to be applied to the array of DACs, allowing for asynchronous operation, according to an implementation of the present disclosure.
  • FIG. 15 illustrates a flow-chart outlining production of the plurality of analog features using an array of multiplex digital-to-analog converters, according to an implementation of the present disclosure.
  • FIG. 16 illustrates a flow-chart outlining an algorithm for calibrating a machine-learning algorithm to operate the machine-learning enabled digital-to-analog converter, according to an implementation of the present disclosure.
  • FIG. 17 illustrates a flow-chart outlining an algorithm for creating a first machine-learning algorithm to produce an instruction set, based on a digital signal, that can cause the machine-learning enabled DAC to produce an analog output signal that represents the digital signal, according to an implementation of the present disclosure.
  • FIG. 18 illustrates a flow-chart outlining an algorithm for creating a second machine-learning algorithm to produce an instruction set, based on a digital signal, that can cause the machine-learning enabled DAC to produce an analog output signal that represents a user-defined process applied to the digital signal, according to an implementation of the present disclosure.
  • FIG. 19 illustrates an example of an operation in which a machine-learning enabled DAC produces a PAM4 voltage signal given a stream of digital numbers, according to an implementation of the present disclosure.
  • FIG. 20 illustrates an example of operation with an environmental sensor input in which machine-learning enabled DAC produces a PAM4 voltage signal given a stream of digital numbers and a device temperature reading, according to an implementation of the present disclosure. This allows the machine-learning unit to compensate for temperature related changes in the behavior of the array of DACs and the combining unit by adjusting the instruction set such that the device produces the correct PAM4 signal.
  • FIG. 21 illustrates an example of operating with a user-defined process in which a machine-learning enabled DAC produces an analog output signal used to control a motor in a robotic system based on a user-defined process applied to a digital image, according to an implementation of the present disclosure.
  • FIG. 23 illustrates an example of operating with a digital input flag in which a machine-learning enabled DAC produces audio of English text converted into the Greek language, according to an implementation of the present disclosure.
  • FIG. 24 illustrates an example of operating with a second digital signal in which a machine-learning enabled DAC produces an analog output music signal based on the music represented by the digital samples in an MP3 file, the first digital signal, with the lyrics in the music replaced by the lyrics in a text file, the second digital signal, according to an implementation of the present disclosure.
  • FIG. 25 illustrates an example of operating with multiple analog outputs in which a machine-learning enabled DAC produces separate tracks from music contained in an MP3 file, according to an implementation of the present disclosure.
  • a machine-learning enabled DAC 100 comprises a machine-learning unit 110, an array of digital-to-analog converters (DACs) 125, and a combining unit 145.
  • the machine-learning unit 110 contains a previously calibrated machine-learning algorithm that accepts a first digital signal 105 and produces an N-dimensional instruction set 115, where N is equal to the number of digital-to-analog converters in the DAC array 125.
  • Each instruction 120 of the instruction set 115 drives a respective digital-to-analog converter 130 of the DAC array 125 to produce an analog feature 140.
  • the instruction set 115 drives the array of digital-to-analog converters 125 to produce a number N of analog features 135.
  • the combining unit 145 produces a first analog output signal 150 based at least in part on the N analog features 135.
  • An instruction set comprises instructions, which are digital values. Unlike a mathematical set, here the term “set” is used to mean an ordered collection that permits multiple instances of the same value to exist within the set. Here, the term “ordered” means that the order of elements matters: {0, 1} ≠ {1, 0}.
  • the digital values can be Booleans, integers, floating point numbers, complex numbers with real and imaginary components, pairs, lists or any other kind of information that can be represented digitally.
  • An instruction can be a composite data structure, for example a pair of pairs of integers.
  • the implementation of the DAC that receives the instruction determines which types of information are suitable as instruction values. The overall structure of the instruction set and the validity of information types for each value are together the format of the instruction set.
  • the configuration of the DAC array 125 determines the format of the instruction set.
  • an individual Monte Carlo DAC which is configured to generate a pulse with a fixed amplitude, duration, and timing offset, can use a single Boolean bit instruction to determine whether it triggers on a given clock strike.
  • a multiplexer unit DAC which generates an analog feature that it frequency-shifts into a predetermined range, can use an integer or a floating point number instruction to set the variable amplitude gain on its feature.
  • a spectral generator DAC which generates a sine wave and a cosine wave at a frequency that is set by a local oscillator, can use a pair of integers to set the variable gain on the sine and cosine waves.
  • a nested DAC comprising multiple Monte Carlo DACs configured to produce the analog feature for a multiplexer unit can use an instruction including a pair in which the first half is an ordered collection of bits and the second half is an integer.
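  • As a concrete illustration of these instruction-set formats, the sketch below encodes one hypothetical instruction set for a four-converter heterogeneous array using ordinary Python types; the particular values and layout are assumptions for illustration, not part of the disclosure.

```python
# One hypothetical instruction set for a four-converter heterogeneous array.
# The "set" is an ordered collection, so position i addresses converter i.
instruction_set = [
    True,                # Monte Carlo DAC: single Boolean -- trigger on this clock strike?
    0.75,                # multiplexer unit DAC: floating-point amplitude gain
    (3, -2),             # spectral generator DAC: integer gains for the sine and cosine waves
    ((1, 0, 1, 1), 5),   # nested DAC: (ordered bits for inner Monte Carlo DACs, outer gain)
]
```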
  • the instruction set 115 can drive the array of digital-to-analog converters 125 to generate analog features 135, which, when combined by the combining unit 145, produce the first analog output signal 150.
  • the analog output signal 150 is the analog signal which corresponds to digital samples in Shannon-Nyquist form $x_1, x_2, \ldots, x_i$.
  • This mapping is defined by a set of parameters that are specific to the chosen machine-learning implementation, such as the weights in a neural network.
  • the exact mathematical nature of the mapping between the input and the output is determined through a calibration procedure, where the machine-learning unit 110 is provided with example inputs and their desired outputs as detailed later in connection with Fig. 16.
  • the parameters are adjusted through automatic techniques, such as backpropagation, until the machine-learning unit 110 produces the desired output within an acceptable tolerance.
  • This calibration procedure occurs before manufacturing, using numerical simulation, and/or after manufacturing, on test devices. Active learning, which is a slow and power-intensive process, does not need to occur on the device.
  • the parameters can be hard-coded onto the device (updatable through firmware) or hardwired (permanent), whereupon the machine-learning unit 110 can process data in inference mode, without any further learning.
  • “Inference mode” refers to an operational state of a machine-learning algorithm in which the machine-learning algorithm executes its computations without adjusting its settings.
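  • A minimal sketch of this calibrate-then-infer split is shown below, assuming a differentiable linear simulator of the DAC array and combining unit, and plain gradient descent in place of whatever optimizer a real calibration flow would use; all names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, T = 8, 4, 200                      # converters, input length, output samples

basis = rng.normal(size=(N, T))          # simulated analog feature per unit instruction
W_true = rng.normal(size=(N, P))         # behavior the calibration should recover
W = np.zeros((N, P))                     # machine-learning unit parameters, uncalibrated

# Calibration: example inputs paired with their desired analog outputs.
inputs = rng.normal(size=(100, P))
targets = (inputs @ W_true.T) @ basis

lr = 1e-3
for _ in range(2000):
    pred = (inputs @ W.T) @ basis
    err = pred - targets
    # dL/dW for L = 0.5 * mean over examples of ||pred - targets||^2
    grad = ((err @ basis.T).T @ inputs) / len(inputs)
    W -= lr * grad                       # adjust parameters until within tolerance

# Inference mode: parameters frozen; the unit only maps inputs to instruction sets.
instruction_set = W @ rng.normal(size=P)
```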
  • the machine-learning unit 110 can be implemented 1) in software, 2) on a field-programmable gate array (FPGA), or 3) as an application-specific integrated circuit (ASIC). The choice of implementation depends on a tradeoff between flexibility and performance, with software having the most flexibility and an ASIC having the best performance.
  • a first implementation of the machine-learning unit 110 is a feedforward neural network (FIG. 2).
  • the machine-learning unit 110 includes a digital neural network 210 that receives the first digital signal 105 at the network’s input and produces the instruction set 115 at the network’s output.
  • This approach involves a collection of nodes that form weighted connections with each other, arranged in sequential layers.
  • the neural network 210 includes an input layer 220, one or more hidden layers 230, and an output layer 240.
  • the nodes in the input layer 220 of the neural network 210 receive the digital input signal 105 in the form of a vector of digital values of length P.
  • Each node $j$ in one of the layers 220, 230, 240 sums the products of the weights ($w_{jk}$) and the digital values ($x_k$) that are inputs to the node, applies an activation function ($\varphi$) to that sum, and outputs the result of that activation function as the output of the node: $o_j = \varphi\left(\sum_k w_{jk} x_k\right)$.
  • the activation function is an open design choice in various implementations. Activation functions such as rectified linear unit, hyperbolic tangent, or sigmoid each have advantages, but the rectified linear unit is particularly suitable because it is relatively simple to calculate.
  • the input layer 220 passes its outputs to the nodes in a hidden layer from among the hidden layers 230 in the neural network 210.
  • Each neuron in the hidden layer applies the activation function to the sums of its weighted inputs, and passes its output to the next layer.
  • the number of hidden layers, the topology of the connections between nodes, and the number of nodes in each layer are open design choices in assorted implementations.
  • the final hidden layer passes its outputs to the output layer 240.
  • the neurons in the output layer 240 apply the same weighted sum and activation process, and each of their outputs is a digital value that represents an instruction 120 among the instruction set 115.
  • the instructions 120 are passed to the DAC array 125, which produces a plurality of analog features 135, each respective feature 140 based at least in part on a respective instruction 120 from the instruction set 115.
  • the nodes in the network form a set of sequential layers. As a result, there are no loops in the system, such that the output of one node is never fed backwards to the same or a lower layer.
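  • A minimal sketch of such a feedforward pass, assuming a single hidden layer and the rectified linear unit mentioned above; the weights are random placeholders rather than calibrated parameters:

```python
import numpy as np

def relu(x):
    # Rectified linear unit, noted above as relatively simple to calculate.
    return np.maximum(0.0, x)

def layer(weights, inputs):
    # Each node j computes o_j = phi(sum_k w_jk * x_k).
    return relu(weights @ inputs)

rng = np.random.default_rng(2)
P, H, N = 4, 16, 8                        # input length, hidden width, instruction count

W_hidden = rng.normal(size=(H, P))        # input layer 220 -> hidden layer 230
W_out = rng.normal(size=(N, H))           # hidden layer 230 -> output layer 240

digital_signal = rng.normal(size=P)       # vector of digital values of length P
hidden = layer(W_hidden, digital_signal)
instruction_set = layer(W_out, hidden)    # one digital value per instruction 120
```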
  • A second implementation of the machine-learning unit 110 is a recurrent neural network (RNN); the most general form of RNN is the fully recurrent neural network.
  • Other recurrent neural network architectures such as Long Short-Term Memory, Jordan, Elman, or Hopfield networks, can be implemented as fully recurrent neural networks in which certain weights w are set to zero, effectively disconnecting those edges.
  • a possible advantage is that the recurrent loops allow the network to maintain a form of memory, which can affect future results.
  • FIR finite impulse response filter
  • IIR infinite impulse response filter
  • An RNN can be advantageous when applying a particular filter to the expected digital output, or if using a collection of non-time-invariant elements in the DAC array 125.
  • the machine-learning unit 110 includes a digital recurrent neural network that receives the first digital signal 105 at the network’s input and produces, at the network’s output, the instruction set 115. At least one node within at least one layer passes its output to a node in the same layer or a previous layer in the network, such that at least one node’s output is based at least in part on the output of a node in the same or a subsequent layer.
  • the machine-learning algorithm is not limited to being a neural network.
  • Other implementations include support vector machine (SVM), ridge regression, hidden Markov models, clustering algorithms, and a naive Bayes classifier.
  • the digital-to-analog converters 130 in the DAC array 125 are analog feature generators that provide the building blocks of the first analog output signal 150. These circuits all receive a respective digital instruction 120 and produce an analog feature 140 as a response, and the method by which they achieve that response differs with the category of converter.
  • the categories of approaches vary in their accuracy, power efficiency, manufacturing complexity, and response times, so different approaches to generating analog features are suitable in different applications.
  • the array 125 can be constructed from DACs selected to make best use of the characteristics of one or more approaches.
  • the DAC array 125 can include digital-to-analog converters that implement a multiplexing approach.
  • FIG. 3 illustrates an example of a multiplexed digital-to-analog converter that generates a respective analog feature, according to an implementation of the present disclosure.
  • the converter 130 is implemented by a Multiplexer Unit 310 that comprises a digital-to-analog converter 320 and a mixer circuit 340.
  • the Multiplexer Unit 310 passes an instruction 120 to the digital-to-analog converter 320, resulting in an intermediate analog feature 325.
  • the Multiplexer Unit 310 then feeds this intermediate analog feature 325, $y_i(t)$, into a respective mixer circuit 340 with a local oscillator 330 that produces an oscillating signal $LO_i(t)$ at a fixed carrier frequency $\omega_{LO}$ that shifts the intermediate analog feature 325 into a different frequency range, producing analog feature 140 $x_i(t)$.
  • the DAC array 125 can be implemented using multiple Multiplexer Units 310, each tuned with a respective local oscillator, as illustrated in FIG. 4.
  • this mixer circuit 340 within the Multiplexer Unit 310 can be implemented using a pure sinusoidal waveform, a digital clocking signal, or another periodic oscillator as the local oscillator 330.
  • the local oscillator 330 of each Multiplexer Unit 310 can be tuned with a carrier frequency such that each intermediate analog feature 325 is shifted into a unique frequency band in the DAC array 125 of Multiplexer Units.
  • the plurality of carrier frequencies are selected such that the DAC array 125 of Multiplexer Units is tuned to achieve sufficient coverage over the entire frequency range of the analog output signal 150.
  • the output of each Multiplexer Unit 310 is then an analog feature 140.
  • the DAC array 125 includes a digital-to-analog converter 130 that includes a digital-to-analog converter 320 that produces an intermediate analog feature 325, based at least in part on a respective instruction 120 from the instruction set 115; and a multiplexing mixer circuit 340 that produces the respective analog feature 140 by frequency shifting the intermediate analog feature 325 to a frequency band centered on a predetermined frequency.
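  • A minimal numerical sketch of one such Multiplexer Unit, assuming a sinc-shaped intermediate feature and a cosine local oscillator purely for illustration:

```python
import numpy as np

fs = 10_000.0                              # assumed simulation sample rate, Hz
t = np.arange(0.0, 0.01, 1.0 / fs)

def multiplexer_unit(gain, f_lo):
    # DAC 320: gain-controlled intermediate analog feature y_i(t) (a baseband pulse here).
    y = gain * np.sinc(200.0 * (t - t.mean()))
    # Local oscillator 330 and mixer 340: shift y_i(t) up to the carrier f_lo.
    return y * np.cos(2.0 * np.pi * f_lo * t)   # analog feature x_i(t)

# An array of units, each tuned so its feature lands in its own frequency band.
carriers = [500.0, 1000.0, 1500.0, 2000.0]
gains = [1.0, 0.5, -0.3, 0.8]
features = [multiplexer_unit(g, f) for g, f in zip(gains, carriers)]
```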
  • the DAC array 125 configuration can implement mathematics similar to the Taylor series approximation.
  • the Taylor Series theory states that a smooth function f(x) is equivalent to the sum of an infinite series of terms comprising the function’s derivatives.
  • the function can be approximated near a point a, by summing a subset of these terms.
  • Each term is the nth derivative of the function $f(x)$ evaluated at point $a$, multiplied by the nth power of the difference between $x$ and $a$, divided by $n$ factorial:
  • $T_n = \frac{f^{(n)}(a)}{n!}(x - a)^n$.
  • Taylor Series theory can be reformulated to state that an analog signal x(t) can be created near a time t by summing an infinite number of the signal’s time derivatives.
  • an analog signal can be approximated by summing the first k terms of the signal’s time derivatives.
  • the DAC array 125 can be built from an array of $k$ voltage sources, $v_n(t)$, that create voltage outputs of increasing polynomial order.
  • Each voltage source is paired with a gain-control circuit that can adjust the amplitude of the respective voltage source.
  • the DAC array 125 in a machine-learning enabled digital-to-analog converter 100 includes a digital-to-analog converter 130, which produces a polynomial voltage source that is gain-controlled based at least in part on a respective instruction 120 from the instruction set 115.
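  • The sketch below illustrates the principle, assuming a target signal $f(t) = e^t$ expanded at $a = 0$ so that the ideal gains are the known Taylor coefficients $1/n!$; in the disclosed device those gains would instead come from the instruction set.

```python
import numpy as np
from math import factorial

t = np.linspace(-0.5, 0.5, 1000)          # time near the expansion point a = 0
k = 6                                      # number of polynomial voltage sources

# Voltage sources of increasing polynomial order: 1, t, t^2, ..., t^(k-1).
sources = np.stack([t ** n for n in range(k)])

# Instructions act as gains; for f(t) = exp(t) at a = 0 the ideal gains
# are the Taylor coefficients f^(n)(0) / n! = 1 / n!.
gains = np.array([1.0 / factorial(n) for n in range(k)])

approx = gains @ sources                   # combining unit: gain-weighted summation
max_error = np.max(np.abs(approx - np.exp(t)))   # small near a, as the theory predicts
```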
  • 1.2.4 DAC Array - Spectral Generator
  • Another approach to generating an analog output signal 150 is to produce the signal’s frequency content.
  • the frequency content of an analog signal is assumed to have infinite resolution.
  • the frequency content of a sine-wave is a pair of perfect delta functions.
  • when the signal is restricted in time, the perfect delta functions of the sine-wave become sinc functions.
  • This blurring process arising from time-restricting the signal is identical to the smoothing effect caused by frequency-bounding the signal in standard Shannon-Nyquist sampling.
  • frequency-bounding a signal allows the continuous-time signal to be defined by a set of discrete points
  • time-restricting the signal allows the continuous-frequency content of the signal to be defined by a set of discrete frequencies.
  • FFT fast Fourier transform
  • This finite sequence of time domain data can generate a corresponding finite sequence of frequency data, i.e., the results from the FFT. If the original time domain data is discarded, and the inverse fast Fourier transform “iFFT” is applied to the finite frequency data, the original time domain data can be recovered. This shows that a finite set of frequency points can generate the information to create an analog output.
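  • This roundtrip can be verified in a few lines; the sketch below is a plain numerical check of the FFT/iFFT property, not part of the disclosed device:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=256)                # finite sequence of time-domain samples

X = np.fft.fft(x)                       # the corresponding finite frequency data
# Discard the time-domain data; the frequency points alone carry the information.
x_recovered = np.fft.ifft(X).real       # the iFFT recovers the original sequence

assert np.allclose(x, x_recovered)
```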
  • the parallel spectral generating approach simply skips the stage of creating the signal in the time domain, and instead generates the elements of the Fourier transform.
  • the basic method for generating the elements of the Fourier transform is to build an array of paired spectral generator circuits.
  • the first circuit of each pair generates a local sine wave oscillating at the desired frequency $\omega_i$.
  • the gain of this local sine-wave is controlled by an instruction and variable gain amplifier.
  • the second circuit of each pair can generate a second local sine wave oscillating at the same frequency as the first, but 90 degrees out of phase.
  • the gain of this second local sine wave can be controlled by a second instruction and a second variable gain amplifier.
  • the two resulting sine-waves are output as analog features x and y .
  • $x_i(t) = A_i \cos(\omega_i t)$ and $y_i(t) = B_i \sin(\omega_i t)$.
  • the local oscillators in the array of spectral generator circuits are tuned to generate each frequency in the analog output signal 150.
  • the array 125 of digital-to-analog converters in a machine-learning enabled digital-to-analog converter 100 includes at least two digital-to-analog converters 130, each of which produces an analog feature 140 that is a sine wave oscillating at a fixed frequency with an amplitude based at least in part on its respective instruction, wherein the second sine wave is ninety degrees out of phase with the first.
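  • A minimal sketch of one such quadrature pair, assuming gains $A_i$ and $B_i$ supplied as instructions; the identity in the comment shows why two 90-degree-shifted features suffice to set any amplitude and phase at that frequency:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
f = 7.0                                  # fixed frequency of this generator pair, Hz

def spectral_pair(gain_a, gain_b):
    x = gain_a * np.cos(2.0 * np.pi * f * t)   # first circuit's feature
    y = gain_b * np.sin(2.0 * np.pi * f * t)   # second circuit, 90 degrees out of phase
    # A*cos(wt) + B*sin(wt) = sqrt(A^2 + B^2) * cos(wt - atan2(B, A))
    return x + y

feature_sum = spectral_pair(1.0, 1.0)    # amplitude sqrt(2), phase 45 degrees
```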
  • the local oscillator in the previous implementation requires perfect sine-waves, such that each circuit produces only two frequency components of the Fourier transform. Generating perfect sine waves can be challenging to implement. Instead, a randomized spectral generator circuit uses multiple signal generators, each of which generates an unknown waveform that is periodic over a known period. This unknown waveform will cause the spectral generator circuit to create an analog feature containing an unknown mixture of frequency components. Each unknown oscillator in the array should have a different shape, such that no two spectral generator circuits produce the same analog feature. Instead, during calibration, the machine-learning unit 110 learns, through trial and error, to make linear combinations of these unknown waveforms to create the final desired output waveform.
  • the array 125 of digital-to-analog converters in a machine-learning enabled digital-to-analog converter 100 includes a plurality of digital-to-analog converters 130, each of which produces a periodic analog feature with a fixed frequency, whose amplitude is based at least in part on that converter’s respective instruction.
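  • A minimal sketch of this randomized approach, with ordinary least squares standing in for the machine-learning unit's trial-and-error calibration, and random harmonic mixtures standing in for the unknown-but-periodic waveforms:

```python
import numpy as np

rng = np.random.default_rng(4)
T, M = 1000, 32                          # samples per period, number of generators
t = np.linspace(0.0, 1.0, T, endpoint=False)

# M "unknown" waveforms: each periodic over the known period, arbitrary in shape,
# and no two alike (random harmonic mixtures mimic manufacturing variation).
waveforms = np.stack([
    sum(rng.normal() * np.sin(2.0 * np.pi * h * (t + rng.uniform()))
        for h in range(1, 12))
    for _ in range(M)
])

target = np.sin(2.0 * np.pi * 3.0 * t) ** 3      # desired analog output signal

# Least squares stands in for calibration: find the linear combination of the
# unknown waveforms that best reproduces the desired output.
gains, *_ = np.linalg.lstsq(waveforms.T, target, rcond=None)
residual = np.max(np.abs(gains @ waveforms - target))
```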
  • the methods described so far involve the high-dimensional instruction set driving an array 125 of digital-to-analog converters 130 to produce intermediate analog features 325. These approaches rely on the digital-to-analog converters 130 to produce a feature with known characteristics.
  • the machine-learning enabled DAC 100 can break away from this constraint.
  • the array of digital-to-analog converters includes a very large number of voltage pulse generators.
  • FIG. 6 illustrates an example of an array of Monte Carlo digital-to-analog converters, each configured with a respective amplitude, duration, and timing offset, according to an implementation of the present disclosure.
  • Each Monte Carlo Unit pulse generator 510 can produce an arbitrary but fixed voltage spike that begins and ends at an arbitrary but fixed time point.
  • FIG. 5 An example circuit implementation of the Monte Carlo Unit 510 is shown in FIG. 5, which illustrates an example of a Monte Carlo digital-to-analog converter, which produces a pulse of voltage at a fixed amplitude, duration, and timing offset, based on a respective instruction, according to an implementation of the present disclosure.
  • the Monte Carlo Unit 510 comprises a pair of resistors, $R_1$ and $R_2$, forming a voltage divider, a capacitor $C$, a delay element, and a transistor.
  • the relative resistance values of the resistors forming the voltage divider determine the amplitude of the pulse.
  • the capacitance value of the capacitor determines the duration of the pulse.
  • the delay element and the transistor control the timing of the pulse onset.
  • the resistors’ resistance ratio and capacitor’s capacitance value can be tweaked to produce a variety of voltage pulses. Manufacturing variance on the resistors and capacitor can increase the variation in amplitude and duration of the pulse produced by the Monte Carlo digital-to-analog converter. The greater the number and variation of pulses that the array can generate, the greater the likelihood that some permutation of pulse features exists that approximates (after feature combination in the combining unit 145) the analog output signal 150. For any vector of resistor and capacitor values across the DAC array of Monte Carlo Units 125, and a given variance expected from each fabricated resistor or capacitor, the DAC array’s expected accuracy in producing the analog output signal 150 can be calculated probabilistically. During calibration, the machine-learning unit 110 can learn empirically how to effectively activate this array to produce the analog output signal 150.
  • the array 125 of digital-to-analog converters in a machine-learning enabled digital-to-analog converter 100 includes a Monte Carlo Unit 510 that produces the respective analog feature as a voltage pulse of a fixed amplitude, duration, and time offset, triggered by a respective instruction 120 from the instruction set 115.
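  • A minimal sketch of such a Monte Carlo array, assuming uniformly random (but thereafter fixed) amplitudes, durations, and onsets to mimic manufacturing variance:

```python
import numpy as np

rng = np.random.default_rng(5)
T, M = 1000, 200                          # time samples, number of Monte Carlo units
t = np.linspace(0.0, 1.0, T, endpoint=False)

# Each unit has a fixed but arbitrary amplitude (divider ratio), duration
# (capacitance), and onset (delay element), reflecting manufacturing variance.
amp = rng.uniform(-1.0, 1.0, size=M)
onset = rng.uniform(0.0, 0.9, size=M)
dur = rng.uniform(0.01, 0.1, size=M)
pulses = np.stack([a * ((t >= o) & (t < o + d)) for a, o, d in zip(amp, onset, dur)])

# One Boolean instruction per unit: trigger, or stay silent, on this clock strike.
trigger = rng.random(M) < 0.5
analog_output = pulses[trigger].sum(axis=0)   # combining unit 145: linear summation
```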
  • the machine-learning unit 110 can be unaware of the implementation of any converter 130, or the existence of any common design principle among the DAC array 125. Therefore, the array 125 can include converters 130 that are drawn from different approaches, as illustrated in FIG. 7, which illustrates an example of a heterogeneous array of digital-to-analog converters 125, wherein at least one analog feature is produced by a DAC with a method that differs from a method used by another DAC to produce another analog feature.
  • a design could include pairs of spectral sine wave generators and a spectral generator of unknown morphology.
  • Another more complex array 125 could include multiple multiplexing circuits, a few Taylor Series circuits, and additional Monte Carlo circuits.
  • a Multiplexer Unit 310 can convert an instruction 120 into an analog intermediate feature 325 using a conventional DAC prior to shifting that intermediate analog feature 325 into an analog feature 140 in a desired frequency band.
  • the intermediate analog feature 325 can be generated using another approach, such as a Taylor Series digital-to-analog converter.
  • conversely, the intermediate analog feature that a Taylor Series digital-to-analog converter uses as, for example, its quadratic term could be created by a Multiplexer Unit 310.
  • the hard work of ascertaining which instruction set will properly activate the DAC array 125 to produce the analog output signal 150 can be offloaded to the machine-learning unit 110.
  • the array 125 of digital-to-analog converters in a machine-learning enabled digital-to-analog converter 100 includes at least one converter that produces a respective analog feature utilizing a method that differs from the method used by another converter in the array of digital-to-analog converters.
  • the combining unit 145 combines analog features 135 into an analog output signal 150. In the standard configuration it produces a single analog output signal 150. In an alternative configuration, it can produce multiple analog output signals. Operations conventionally performed in analog circuitry, by which multiple analog features are combined to produce an output based at least in part on each respective analog feature, can form a basis for a combining unit 145. Simple operations such as addition, multiplication, subtraction and division are examples of suitable bases for the combining unit 145.
  • the combining unit 145 also can combine features by performing comparisons, such as selecting the maximum, minimum, mode or median feature from among the available features. These operations can also be nested, for example, by multiplying two features together and then adding that product to a third feature.
  • a combination operation can be performed in conjunction with operations that modify a single feature, such as modulating that feature’s frequency or amplitude, integrating or differentiating that feature.
  • combination operations can be performed with a feature and itself, for example, squaring a feature by multiplying it against itself.
  • a simple configuration to understand is a combining unit 145 including a linear summation circuit 810, as shown in FIG. 8, wherein the linear summation circuit 810 simply adds a plurality of analog features together to produce an analog output, according to an implementation of the present disclosure.
  • the combining unit 145 can apply gain to an individual feature before it is added to the other features.
  • the combining unit 145 of a machine-learning enabled DAC 100 includes an analog summation circuit 810 that produces the first analog output signal 150 based at least in part by linearly adding together the plurality of analog features 135.
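  • The sketch below illustrates a few of the combining operations described above, applied to three synthetic features; the feature shapes are arbitrary placeholders:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
f1 = np.sin(2.0 * np.pi * 3.0 * t)        # analog feature 1
f2 = np.cos(2.0 * np.pi * 5.0 * t)        # analog feature 2
f3 = 0.25 * np.ones_like(t)               # analog feature 3 (a DC feature)

summed = 0.5 * f1 + 1.0 * f2 + f3         # linear summation with per-feature gain
nested = f1 * f2 + f3                     # multiply two features, add to a third
peak = np.maximum.reduce([f1, f2, f3])    # comparison-based: pointwise maximum
squared = f1 * f1                         # combining a feature with itself
```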
  • the standard configuration of the machine-learning enabled DAC 100 is to produce an analog output signal 150 from a digital input according to the Shannon-Nyquist principle.
  • This principle states that the digital code represents instantaneous measurements of the analog output signal 150 taken at a fixed rate at least twice as high as the highest frequency in the analog output signal 150. There may be times when it is desirable for a result to differ from this principle. For example, it might be desirable to apply high-pass filtering to the signal or to isolate specific patterns.
  • Various processes that can be applied to a digital signal can be approximated by a machine-learning algorithm incorporated into the machine-learning unit 110.
  • the standard configuration of the machine-learning enabled DAC 100 is to accept a single digital signal 105 and to translate it into a single analog output signal 150.
  • the apparatus can also accept multiple input digital signals and/or produce multiple analog output signals.
  • FIG. 9 illustrates a machine-learning unit 110 configured to produce the instruction set 115 based at least in part on the first digital signal 105 and a second digital signal 910, according to an implementation of the present disclosure.
  • FIG. 12 illustrates a machine-learning unit 110 configured to produce a digital output 1210 based at least in part on an input first digital signal 105, according to an implementation of the present disclosure.
  • the machine-learning unit 110 produces the digital output signal 1210 in addition to the instruction set 115.
  • FIG. 11 illustrates a combining unit 145 configured to produce a first analog output signal 150 and a second analog output signal 1110, based at least in part on the plurality of analog features 135, according to an implementation of the present disclosure.
  • the device can be configured to accept multiple input signals and generate multiple output signals in a combination of Figs. 9 and 11.
  • an external sensor can be installed. Such an installation can be advantageous if the external environment of the device 100 is unknown or changes over time, for example, in the case of a satellite traveling around the earth. While exposed to sunlight, the satellite is warm, and when out of direct light, the satellite is cold. As a result, the performance of a DAC array 125 and the combining unit 145 mounted on the satellite can differ in the two temperature conditions.
  • the external sensor can be a temperature sensor.
  • FIG. 10 illustrates a machine-learning unit 110 configured to produce an instruction set 115 based at least in part on an additional environmental sensor input 1010, according to an implementation of the present disclosure.
  • the temperature sensor’s output can be the additional digital signal 1010.
  • This temperature sensor can be incorporated into the wafer of the device 100 or be connected externally.
  • the sensor input 1010 can be binary, indicating whether some threshold is exceeded, a range, or a finely graded temperature reading.
  • the machine-learning unit 110 is calibrated by repeating the calibration steps under the various temperatures that the device can experience (see 5.2 Calibrating Under Various Environmental Conditions).
  • a component of the DAC array 125 and the combining unit 145 is a capacitor, which is formed by placing two capacitive plates a precise distance apart. The exact distance between the two plates defines the capacitive value of the capacitor, which is used in the calculations performed by the DAC array 125 and the combining unit 145. If external forces are applied to the microchip containing the device 100, be it through faulty package installation or warping of a PCB including the microchip, then the distances between the two plates in each capacitor can be altered. This alteration can distort the capacitive values of the DAC array 125 and the combining unit 145 and degrade the performance of the device 100.
  • strain gauges can be installed along the major or minor axes of the device 100, and the output of these strain gauges can be incorporated into or as the sensor input 1010 to the device’s machine-learning unit 110.
  • an adjusted calibration phase can be implemented with machine-learning, where the calibration process occurs at the various forces that the device 100 can experience.
  • the device 100 also can be affected by magnetic fields that surround the device.
  • An example case is in an MRI machine, which applies strong magnetic fields to a human body and measures precise changes in response.
  • capacitors are formed by placing charge on two metal plates. This charging is assumed to occur in a static environment, but a changing magnetic field can affect how charge builds up on these metal plates. This affected charge build-up can distort the performance of the underlying capacitors in the DAC array 125 and the combining unit 145 and degrade the performance of the device 100.
  • a Hall effect sensor can be installed on the device 100, and an output of the Hall effect sensor can be incorporated as or into the sensor input 1010 to the machine-learning unit 110 to compensate for this affected charge build-up effect.
  • an adjusted calibration phase can be implemented with machine-learning, where the learning process occurs at the various magnetic fields that the device 100 can experience (see 5.2 Calibrating Under Various Environmental Conditions).
  • an electrical insulator can be inserted between two metal plates to prevent charge from leaking across the gap between the two metal plates.
  • This insulator is made of a dielectric material, which has a specific permittivity value that defines how well the material prevents charge leakage. This permittivity value is critical in defining the capacitive value of the overall capacitor.
  • Evidence shows that, over time, the dielectric material can break down, which alters its permittivity value. This change can affect the capacitor values in the DAC array 125 and the combining unit 145 and thus degrade the device 100 performance. This degradation can limit the ability to deploy a device 100 in a remote location and expect it to operate for years or decades.
  • This dielectric breakdown can be compensated for by monitoring the age of the device 100 since manufacture.
  • the manufacturing date can be hard coded into a memory on the device 100 and can be compared with the current date by a processor during operation.
  • the current date can be supplied to the device 100 either locally, with a battery powered clock, or externally from circuitry outside the device 100. This “time since manufacture” value can then be incorporated as or into the sensor input 1010 to the machine-learning unit 110.
  • an adjusted calibration phase can be implemented with machine-learning, where the learning process occurs at various time points that the device 100 may experience (simulated and/or experimentally tested) (see 5.2 Calibrating Under Various Environmental Conditions).
  • Power line noise is another factor that can degrade the performance of the device 100. It is assumed that the device 100 can be supplied with a constant and known DC voltage. The DAC array 125 and the combining unit 145 use this voltage source as a benchmark for generating internal voltages. If the supplied voltage to the device 100 significantly deviates or fluctuates from the expected DC voltage, the DAC array 125 and the combining unit 145 can produce faulty outputs, and the device 100 performance can suffer. There can be cases where the cost or engineering to stabilize the DC voltage within a desired tolerance is too high. In such a case, the voltage supplied to the device 100 can be measured using an additional analog-to-digital converter (ADC) subunit, and the result can be fed into the machine-learning unit 110 as an additional digital signal 1010. This modification again can be implemented with an adjusted calibration phase and machine-learning, wherein the learning process can occur at various voltage levels that the device 100 is expected to experience (see 5.2 Calibrating Under Various Environmental Conditions).
  • ADC analog-to-digital converter
  • Another factor that is external to the analog output signal 150, but can affect device 100 performance, is clock jitter.
  • the device 100 can produce the analog output signal 150 by changing the voltage at a fixed rate. This fixed rate can be accomplished using a digital clock that oscillates proportionately with the frequency of the desired sample rate. However, a physically realized digital clock can deviate from its ideal frequency, a phenomenon which is known as jitter. As a result, the timing of the analog output signal 150 can deviate according to this jitter. If this jitter is large enough, the overall device 100 performance can degrade, since the output voltage might no longer line up with the expected timing for the device 100.
  • the clock jitter can be measured and fed into the device’s machine-learning unit 110.
  • the machine-learning unit 110 can compensate for deterministic jitter by a processor counting the clock cycles and feeding that clock count as a digital input 910 into the machine-learning unit 110, as shown in Fig. 9. Predictable distortion patterns then can be extracted from this clock count. This compensation can be implemented with an adjusted calibration phase in machine-learning where the learning process occurs at various jitter offsets that the device 100 is expected to experience (see 5.2 Calibrating Under Various Environmental Conditions).
  • the external effects described above involve solutions that are specific to the particular physical phenomenon whose performance degradation they attempt to counteract (e.g., a temperature sensor for temperature, a hall effect sensor for magnetic fields, power-line sensor for voltage fluctuations, etc). However, these physical phenomena share a similar effect on the device 100: distorting the behavior of the DAC array 125 and the combining unit 145.
  • the DAC array 125 and the combining unit 145 primarily include resistors and capacitors. While the exact resistance and capacitance of each element in the DAC array 125 and the combining unit 145 might not be known, the external environment can be expected to have a local effect, such that nearby resistors and capacitors will experience similar distortions.
  • the DAC 100 can be populated with a random collection of resistors and capacitors, of different sizes and orientations, each connected to a measurement unit. These measurement units can monitor the resistance and capacitive values of these components through standard techniques (e.g., Wheatstone Bridge, Wien Bridge, etc). Changes in these measurements can correlate to changes in the performance of the nearby DAC array 125 and the combining unit 145. Thus, as shown in Fig. 10, these random resistor and capacitor sensors can be fed in as a sensor input 1010 to the machine-learning unit 110. Finally, this modification can be implemented with an adjusted calibration phase in machine-learning where the learning process occurs for the various environmental factors that the DAC 100 is expected to experience (through a multiphysics simulator or an experimental test rig) (see 5.2 Calibrating Under Various Environmental Conditions).
  • Some events can cause the machine-learning enabled DAC 100 to produce an analog output signal 150 that fails to match the digital signal 105, which can be identified using a process failure flag. Without this flag, the incorrect analog output signal 150 can propagate to downstream components, causing poor performance of the overall system. These errors can occur as a result of changes in the DAC array 125 or the combining unit 145, or a misconfiguration of the machine-learning unit 110.
  • a change in behavior of the DAC array 125 or the combining unit 145 can occur if the device 100 is placed in an unexpected environment that significantly differs from the calibration assumptions, such as being placed in a very hot environment when calibrated for a very cold environment.
  • a divergence in the behavior of the DAC array 125 or the combining unit 145 from the expected behavior can also result from anomalies in manufacturing.
  • if a DAC array 125 or the combining unit 145 exhibits behavior that differs from that of the devices or algorithms on whose behavior the machine-learning unit 110 has been trained, then the instruction set 115 produced by the machine-learning unit 110 can cause the DAC 100 to output an analog output signal 150 that does not match the digital signal 105.
  • a misconfiguration of the machine-learning unit 110 can also occur through faulty calibration, such as where improper validation testing can cause an overfitting of the machine-learning algorithm.
  • a misconfiguration of the machine-learning unit 110 can also occur if improper calibration signals are used. This improper calibration can happen if the machine-learning unit 110 is trained on simplified calibration signals with different characteristics than the actual digital signal 105 encountered during operation.
  • the machine-learning enabled DAC 100 can produce a faulty analog output signal 150 that does not represent the first digital signal 105.
  • the machine-learning enabled DAC 100 can throw a process failure flag. This flagging can be accomplished by including, within a device containing the DAC 100, an additional independent set of analog circuitry that measures feature(s) of the analog output signal 150.
  • Example fault-check features can include the DC offset, the root-mean-square value, every ‘Nth’ value, etc.
  • the additional independent circuitry can then compare the measured feature(s) against the digital signal 105. Because the check feature is easy to analytically calculate from both the first digital signal 105 and the analog output signal 150, independent fault check circuitry can compare the check values of the two signals. To perform this calculation, a sequence of predetermined digital mathematical operations is performed on the digital signal 105 to predict the measured feature(s). For example, the root-mean-square value of the digital signal 105 can be calculated in digital circuitry and then compared against the analog calculation of the root-mean-square value of the analog output signal 150.
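  • A minimal sketch of such a fault check, using the root-mean-square value as the check feature and a plain tolerance comparison; the threshold and test signal are illustrative assumptions:

```python
import numpy as np

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

def fault_check(digital_signal, analog_output, tolerance=0.05):
    """Compare a feature that is easy to compute on both signals.
    The digital side predicts the RMS; independent analog circuitry
    (modeled here as a plain measurement) measures it on the output."""
    predicted = rms(digital_signal)      # digital-circuit prediction
    measured = rms(analog_output)        # independent analog measurement
    return abs(predicted - measured) > tolerance * max(abs(predicted), 1e-12)

# A faithful output passes; a corrupted one raises the process failure flag.
signal = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
assert not fault_check(signal, signal)
assert fault_check(signal, 3.0 * signal)
```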
  • the machine-learning unit 110 can apply filters or feature extraction to the first digital signal 105.
  • the analog output signal 150 might not represent the analog version of the standard Shannon-Nyquist samples in the first digital signal 105. This non-representation can prevent analytical calculation of the fault check feature(s).
  • the machine-learning unit 110 can be reconfigured to output the instruction set 115 and the expected digital representation of the fault-check feature(s) 1210, as illustrated in Fig. 12.
  • the measurement of the fault-check feature(s) produced by the independent circuitry can be compared against the digital output signal 1210 including predicted feature(s) produced by the machine-learning unit 110. If the predicted feature(s) deviate from the measured feature(s) more than a specified tolerance, a processor can throw a process failure flag.
  • an onboard calibration unit can be installed.
  • This onboard calibration unit can be installed as a separate set of circuitry that performs the optimization of the machine-learning unit 110, or this onboard calibration unit can also be implemented in software that is executed by an external processor.
  • This onboard calibration can occur locally on the same wafer as the DAC 100, externally on a separate unit on the same PCB, or remotely on a distant resource.
  • This calibration can occur as a one-off procedure, to refine the performance of a particular DAC 100, or as an ongoing process, to improve the long-term performance of the DAC 100. This process can occur in the factory prior to deployment and/or out in the field after deployment.
  • the operating principle of the calibration unit is to drive a known N-dimensional instruction set 115 into the DAC array 125 and monitor the resulting analog output 150. This process can be performed across various environmental conditions the device experiences, to capture the full behavior of the device.
  • the instruction set 115 can originate from the calibration unit itself, deriving from a locally stored calibration instruction set pool or generated from a local random number generator. In both cases, the calibration unit can use an analog-to-digital converter (ADC) to measure the analog output signal 150 of the DAC array 125 to the calibration instruction set.
  • ADC analog-to-digital converter
  • the resulting analog output signal 150 can also be measured externally from the calibration unit.
  • the calibration unit can use the external source to receive and record the analog output signal 150 produced by its calibration instruction set. This recorded analog output can be provided as a separate input to the calibration unit. This additional input can be directly connected to the external source, or transmitted through an intermediary (such as via WiFi to a microprocessor or microcontroller).
  • the calibration unit can also access the DAC array 125 of the device 100.
  • the machine-learning enabled DAC 100 can stop operating its machine-learning unit 110, and begin to transmit the instruction sets from the calibration unit directly to the DAC array 125. This transition can occur based on a calibration flag sent to the device.
  • the calibration instruction set can be routed directly to the DAC array 125 through the same circuitry as the machine-learning unit 110, such as by using an autoencoder, or through an independent set of circuitry.
  • the resulting analog output signal 150 can be then fed directly into the calibration unit from the device, or sampled and transmitted through an intermediary, such as via WiFi to a microprocessor or microcontroller.
  • the expected analog output 150 can follow the standard Shannon-Nyquist format based on the digital signal 105.
  • the calibration unit can also train the machine-learning unit 110 to apply a user-defined process to the digital signal 105, and produce an instruction set 115 that results in an analog output signal 150 that represents the result of the user-defined process applied to the digital signal 105.
  • This user-defined process can be provided remotely by the user or stored locally in the calibration unit.
  • the calibration unit can optimize the parameters of the machine-learning unit 110.
  • This optimization process can vary depending on the machine-learning technique that is implemented on the machine-learning unit 110 of the DAC 100. This optimization process can differ from the optimization during manufacturing, and be customized for the specific application of the specific DAC 100 (such as sacrificing resolution to conserve energy, or vice versa). This optimization can happen in the 2-phase calibration process described below.
  • the digitized analog output signal can serve as an intended input of the machine-learning unit 110, and the calibration instruction set that generated the analog result 150 can serve as the expected output of the machine-learning unit 110.
  • the optimized parameters can be installed onto the machine-learning unit 110 of the device. These parameters can be directly sent to the DAC 100 from the calibration unit, or through an intermediary (such as via WiFi to a microprocessor or microcontroller). Once the new parameters are installed, the calibration unit can be deactivated. The machine-learning enabled DAC 100 can be notified by dropping the calibration process flag, and the device's machine-learning unit 110 can be reengaged with the new parameters. The DAC 100 can then operate as normal. In various implementations, the DAC 100 can be recalibrated multiple times, through user request or as notified by the process failure flag.
  • an onboard dimensionality reduction unit can be installed.
  • This unit can be installed as a separate set of circuitry that performs the dimensionality reduction algorithm, or this unit can be implemented in software that is executed by an external processor.
  • This onboard dimensionality reduction can occur locally on the same wafer as the device 100, externally on a separate unit on the same PCB, or remotely on a distant resource.
  • this dimensionality reduction unit assumes the machine-learning enabled DAC 100 correctly maps the digital signal 105 to the appropriate high-dimensional instruction set 115.
  • when the dimensionality reduction flag is thrown, the machine-learning enabled DAC 100 begins outputting the first digital signal 105 in addition to the instruction set 115.
  • This dimensionality reduction flag can be thrown by the user or be triggered by the dimensionality reduction unit itself.
  • a processor of the unit can monitor the analog output signal 150, and look for patterns, such as by using principal component analysis. If a significant pattern is found, indicating that the analog output signal 150 can be produced by a modified subset of the DAC array 125, the processor can throw the flag and engage the dimensionality reduction procedure.
  • the dimensionality reduction unit can passively monitor both the digital signal 105 and the instruction set 115. These streams of information can be received directly from the machine-learning enabled DAC 100, or transmitted through an intermediary, such as via WiFi to a microprocessor or microcontroller. The dimensionality reduction unit can then apply the dimensionality reduction techniques or customized techniques optimized for real-time performance.
  • the dimensionality reduction unit can numerically simulate the performance of the DAC array 125 and the combining unit 145 when excluding at least one converter 130 in the DAC array 125. Which converters get excluded during simulation can be specific to the dimensionality reduction algorithm.
  • the dimensionality reduction unit can create its own machine-learning algorithm, similar to the machine-learning unit 110, to map the digital signal 105 to a reduced instruction set 115 for a reduced DAC array 125. This machine-learning algorithm can be trained on real data transmitted from the machine-learning enabled DAC 100.
  • the dimensionality reduction unit can simulate the performance of a machine-learning algorithm.
  • the dimensionality reduction unit can implement procedures to reduce the complexity of the machine-learning algorithm, such as removing nodes and layers of a neural network in the machine-learning unit 110. If these simulations successfully map the digital input 105 to the correct instruction code 115, then the dimensionality reduction unit can update the parameters of the machine-learning unit 110, allowing the device 100 to operate in a more efficient manner.
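  • One way such a simulation-and-update check could look is sketched below; `predict_instruction_set` and `simulate_array` are assumed stand-ins for the machine-learning unit 110 and a numerical model of the DAC array 125 with the combining unit 145, and the error tolerance is an assumption.

```python
import numpy as np

def reduction_is_safe(examples, predict_instruction_set, simulate_array,
                      excluded_converters, tol=1e-3):
    """Numerically re-run the reduced configuration and accept it only if
    the simulated output still matches the expected output closely."""
    for digital_in, expected_out in examples:
        codes = np.array(predict_instruction_set(digital_in), dtype=float)
        codes[list(excluded_converters)] = 0.0    # deactivate converters
        analog_out = simulate_array(codes)
        if np.mean((analog_out - expected_out) ** 2) > tol:
            return False                          # reduction rejected
    return True                                   # safe to update the device
```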
  • the dimensionality reduction unit can be disengaged after adjusting the machine-learning unit 110.
  • Cases where there are strong temporal patterns in the first digital signal 105 can merit dynamic dimensionality reduction. For example, there could be long sequences of high frequency data followed by long sequences of low frequency data.
  • the parameters of the machine-learning unit 110 can be changed between each output period of the device.
  • the dimensionality reduction unit can monitor the process failure flag after updating the device 100. If the machine-learning enabled DAC 100 throws a process failure flag over a prolonged period of time, the dimensionality reduction unit can be reactivated into a recovery mode. In this condition, the dimensionality reduction unit can return the machine-learning enabled DAC 100 to its original state. A deactivated converter 130 in the DAC array 125 can be reactivated, and the parameters of the machine-learning unit 110 can be returned to their original configuration. The machine-learning enabled DAC 100 can then operate as previously. The dimensionality reduction unit can be disengaged, and the dimensionality reduction unit can passively monitor the analog output signal 150 for new emerging patterns. This process allows the dimensionality reduction unit to capture nonstationarity of the first digital signal 105 over the lifetime of the device.
  • FIG. 14 illustrates an algorithm 1400 performed by various implementations of the present disclosure. Prior to executing the algorithm, the machine-learning unit 110 should already have converged on an appropriate input-output solution.
  • the algorithm 1400 begins at S1405, and the algorithm 1400 advances to S1410.
  • at S1410, the machine-learning enabled DAC 100 receives a digital signal 105 at the machine-learning unit 110.
  • the algorithm 1400 then advances to optional S1415.
  • at optional S1415, a second digital signal 910 is received at the machine-learning unit 110, as discussed in connection with Fig. 9.
  • an example second digital signal 910 is an environmental signal 1010 containing information about environmental conditions affecting at least one of the components in the machine-learning enabled digital-to-analog converter.
  • the algorithm 1400 advances to S1425.
  • the machine-learning unit 110 produces, with a machine-learning algorithm, an instruction set 115 based at least in part on the digital signal 105.
  • a neural network 210 included in the machine-learning unit 110 can receive the digital signal 105 as an input, and produce an instruction set 115 at an output, based on the digital signal 105.
  • the machine-learning algorithm is included in the machine-learning unit 110 in many implementations. Further, in implementations in which S1415 is performed, the machine-learning unit 110 can produce the instruction set 115 based at least in part on this second digital signal 910. The algorithm 1400 then advances to optional S1430.
  • at optional S1430, the machine-learning unit 110 can produce a digital output signal 1210, based at least in part on the digital signal 105, as shown in connection with Fig. 12.
  • the algorithm 1400 advances to S1435.
  • the DAC array 125 produces a plurality of analog features 135, each based at least in part on a respective instruction 120 from the instruction set 115.
  • the details of S1435 are discussed with more detail in connection with Fig. 15.
  • the algorithm 1400 advances to S1440.
  • a combining unit 145 of the device produces a first analog output signal 150 based at least in part on the plurality of analog features 135.
  • the algorithm 1400 then advances to optional S1445.
  • at optional S1445, the combining unit 145 produces a second analog output signal 1110, based at least in part on the plurality of analog features 135, as discussed above in connection with Fig. 11.
  • the algorithm 1400 then advances to S1450 and concludes.
  • the machine-learning algorithm is not limited to being a neural network 210.
  • Other implementations of the machine-learning algorithm include support vector machine (SVM), ridge regression, hidden Markov models, clustering algorithms, and a naive Bayes classifier.
  • FIG. 15 illustrates an implementation of step S1435 in the algorithm 1400.
  • a multiplexed digital-to-analog converter 310 produces a respective analog feature 140 of the plurality of analog features 135, based at least in part on a respective instruction 120.
  • the algorithm 1500 begins at S1510, and the algorithm 1500 advances to S1520.
  • the multiplexed digital-to-analog converter 310 produces an intermediate analog feature 325, based at least in part on a respective instruction 120.
  • the algorithm 1500 advances to S1530.
  • a mixing circuit 340 within the multiplexed digital-to-analog converter 310 produces an analog feature 140, based at least in part on frequency-shifting the respective intermediate analog feature 325 into a predetermined frequency band using a local oscillator 330.
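  • A minimal numerical sketch of this mixing step follows, assuming a baseband intermediate feature and a cosine local oscillator tone; the sample rate, pulse shape, and oscillator frequency are all illustrative assumptions.

```python
import numpy as np

fs = 1e6                                     # assumed sample rate, Hz
t = np.arange(0, 1e-3, 1 / fs)               # 1 ms of samples
intermediate = np.sinc(2e3 * (t - 5e-4))     # intermediate analog feature 325
f_lo = 100e3                                 # local oscillator 330 frequency
feature = intermediate * np.cos(2 * np.pi * f_lo * t)  # shifted feature 140
```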
  • the algorithm 1500 advances to S1540.

3.3 Monte Carlo Operation
  • a second variation of step S1435 involves producing the plurality of analog features 135 by producing each respective analog feature 140 as a voltage pulse with a fixed amplitude, duration, and timing offset, triggered by a respective instruction 120. This variation was discussed above in connection with Fig. 5.
  • a third variation of step S1435 involves producing at least one analog feature 140 from the plurality of analog features 135 using a method that differs from the method used to create another analog feature in the plurality of analog features 135. This variation was discussed above in connection with Fig. 7.
  • a fourth variation of the method of operation S1435 involves the combining unit 145 producing the first analog output signal 150 by linearly adding together the plurality of analog features 135. This variation was discussed above in connection with Fig. 8.
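  • As an illustrative sketch of the second and fourth variations together, the fragment below triggers fixed pulses from 0/1 instructions and linearly adds the resulting features; the amplitudes, durations, and timing offsets are assumptions.

```python
import numpy as np

fs = 1e6
t = np.arange(0, 1e-3, 1 / fs)

def pulse(amplitude, offset, duration):
    """Fixed voltage pulse with a given amplitude, timing offset, and duration."""
    return amplitude * ((t >= offset) & (t < offset + duration))

pulses = [pulse(0.5, i * 1e-4, 5e-5) for i in range(8)]  # one per converter 130
instructions = np.array([1, 0, 1, 1, 0, 0, 1, 0])        # 0/1 triggers 120

features = [code * p for code, p in zip(instructions, pulses)]
analog_output = np.sum(features, axis=0)     # linear addition by combiner 145
```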
  • the machine-learning unit 110 can be implemented with a multi-stage calibration procedure.
  • machine-learning algorithms use a large database of examples for training. This database contains a collection of example inputs to the machine-learning unit 110, each input paired with the expected output. The machine-learning unit 110 then learns to produce the desired output for each example input.
  • the process uses two phases.
  • an arbitrary N-dimensional (where N is the number of DAC elements) instruction set 115 is input to the DAC array 125, such that each converter 130 in the DAC array 125 receives an instruction 120 and produces an analog feature 140 based thereon.
  • the plurality of analog features 135 produced by the DAC array 125 is combined with a combining unit 145, and the analog output signal 150 of this combining unit 145 is digitized by an appropriate analog-to-digital converter.
  • the analog-to-digital converter samples the analog output signal 150, to produce a sampled one-dimensional digital signal representing the analog output signal 150.
  • this first phase creates a mapping between the instruction set 115 and the sampled one-dimensional digital signal representing the analog output signal 150 in Shannon-Nyquist format.
  • the calibration procedure inverts the mapping from instruction set to digital signal from the first phase, and then trains the machine-learning unit 110 to achieve the inverted mapping, from digital signal to instruction set.
  • the sampled digital signal from the first phase is treated as if it were the digital input signal 105 to the machine-learning unit 110, and the arbitrary N-dimensional instruction set 115 that drove the DAC array 125 in the first phase is treated as the desired output of the network.
  • This two-phase process, namely mapping from instruction sets to digital signals in the first phase and training the machine-learning unit 110 to map from digital signals to instruction sets in the second phase, can be thought of as the machine-learning unit 110 “exploring” and then “learning” the behavior of the DAC array 125 and combining unit 145. It is likely similar to how animal brains develop connections to and from the sensory and motor organs in animal bodies. Instructions are sent out, the outcome is recorded, and then the instructions that most closely matched the desired outcome are reinforced as the appropriate instruction set for controlling the appropriate mechanism. Instead of learning to walk like an animal, in this instance, the machine-learning unit 110 is trained to generate an analog output signal 150.
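  • A compact sketch of this explore-then-learn cycle follows, under stated assumptions: the DAC array 125, combining unit 145, and ADC are collapsed into one linear numerical model, and the machine-learning unit is a least-squares fit purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16                                        # number of DAC elements
basis = rng.standard_normal((N, 64))          # assumed per-converter responses

def simulate_dac(codes):
    """Stand-in for DAC array + combining unit + sampling ADC."""
    return codes @ basis

# Phase 1: explore -- drive the array with arbitrary instruction sets.
codes = rng.uniform(-1, 1, size=(1000, N))
signals = np.array([simulate_dac(c) for c in codes])

# Phase 2: learn -- invert the mapping, from digital signal to instruction set.
W, *_ = np.linalg.lstsq(signals, codes, rcond=None)

def machine_learning_unit(digital_signal):
    return digital_signal @ W                 # predicted instruction set 115
```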
  • knowledge about the DAC array 125 is relevant for preparing a testing dataset.
  • the number of DACs, the range of values that each DAC can process, and the smoothness of the analog features produced by the DACs all affect whether the training dataset can yield an algorithm capable of producing the desired range of analog output signal 150 outcomes.
  • a dataset of examples exploring the DAC array 125 and the combining unit 145 is collected, in which each example maps the instruction set 115 to a digital output signal.
  • the process of generating training data can be simulated in software, or tested with a physical device by measuring the output of the DAC array 125 with an ADC.
  • the dataset can be culled to remove non-compliant analog outputs.
  • An instruction code that produces an analog output signal 150 result (and accompanying sampled digital signal) that is not within the operating specifications of the machine-learning enabled digital-to-analog converter 100, with regards to voltage, current, power, or frequency characteristics, can be removed from the training dataset, and relegated to an error dataset.
  • This error dataset can then be used to study the structure of bad instruction code designs, and serve as a penalty dataset to discourage the first phase exploration from producing faulty codes.
  • Examples can also be selected from the dataset according to specific guidelines, such as matching a specific frequency or similarity to specific waveforms. From those selected waveforms, small variations on the instruction codes can be added to tweak the response of the DAC array 125. These tweaks will result in new examples, and the cycle of fleshing out the example database with good waveforms can be repeated. This cycle can help generate a large dataset of meaningful examples.
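  • The culling-and-tweaking cycle could look like the sketch below, where the voltage limit and tweak magnitude are illustrative assumptions and `simulate_dac` is the same kind of stand-in model as above.

```python
import numpy as np

V_MAX = 1.0                                   # assumed operating voltage limit
rng = np.random.default_rng(1)

def cull_and_augment(examples, simulate_dac):
    """Split examples into compliant/error datasets, then grow the
    compliant set with small variations on its instruction codes."""
    good, errors = [], []
    for codes, signal in examples:
        (good if np.max(np.abs(signal)) <= V_MAX else errors).append(
            (codes, signal))
    for codes, _ in list(good):               # tweak only compliant codes
        tweaked = codes + rng.normal(0, 0.01, size=codes.shape)
        good.append((tweaked, simulate_dac(tweaked)))
    return good, errors
```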
  • a database of example mappings can be produced with a generative adversarial network.
  • a generative network is trained to produce an instruction set 115.
  • This instruction set 115 passes through the DAC array 125, and the resulting analog output signal 150 is digitized by an ADC.
  • the digitized signal is passed off to a discriminator network, which judges if the resulting analog output signal 150 matches a database of predetermined signals. If the result fails, an error signal is sent back to the generative network to correct itself, and the process is repeated.
  • once the discriminator network can no longer reliably distinguish the generated signals from the database of predetermined signals, the generator is considered properly calibrated, and the results from the network can be used to build the training database.
  • the database of predetermined signals used by the adversarial network can be supplied directly by the user.
  • the database can be randomly generated, such as using a random number generator to create a random voltage signal at a specific frequency.
  • a parametric model can be used, such as a Gaussian or Poisson process, or a known pattern.
  • Example recordings can also be used as the signals.
  • a database of appropriate digital signals is created.
  • An adversarial network is trained to predict the likelihood that a given input signal was derived from the database.
  • a generative network is created, and seeded with a large number of random numbers. This generative network produces a large number of instruction sets 115 that are applied to the DAC array 125, resulting in an analog output signal 150. That analog output signal 150 is digitized by an ADC, and the results are graded by the adversarial network.
  • the adversarial network is then retrained, through standard optimization techniques, to improve its ability to identify that batch of resulting signals as being fake, i.e., originating from the generative network rather than from the reference set provided by the user.
  • the generative network is then trained in a slightly different way, because the network is not training to match a particular result. Instead, it is being trained to generate a result that the discriminator network cannot easily identify as fake or not fake.
  • Gradients for each parameter in the generative network are calculated as normal via backpropagation, using a mean squared error (MSE) loss function.
  • the mean squared error is not necessarily calculated, because the desired instruction set 115 can be unknown. Instead, the value of that calculation arises from the likelihood produced by the discriminator network.
  • the generative network aims for a predicted likelihood of 50%, meaning that the discriminator network is unable to provide a better identification than a coin flip. If the likelihood is 50%, then the assumed error is zero. The further the likelihood is from 50%, the more confidently the discriminator network can identify the example as fake or not fake, and the bigger the assumed MSE is.
  • the gradients are calculated as if the MSE had been computed, but the assumed MSE value is given by an equation such as the following, where $A$ is a predetermined scaling constant, and $P_{\text{fake}}$ is the discriminator network’s assessed likelihood that the example in question arose from the generator network:

$$\mathrm{MSE}_{\text{assumed}} = A\left(P_{\text{fake}} - 0.5\right)^{2}$$
  • the learning step is performed, using standard gradient descent techniques, with a small learning step multiplied by each gradient.
  • Each parameter in the generative model is updated, and the entire process of reseeding and generating analog signals can be repeated.
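  • A hedged sketch of one such generator update follows; backpropagation is replaced by a finite-difference gradient purely to keep the example self-contained, the assumed loss is A·(P_fake − 0.5)², and every callable passed in is a stand-in assumption.

```python
import numpy as np

A = 4.0                                        # predetermined scaling constant

def assumed_mse(p_fake):
    """Zero at a coin-flip likelihood, growing as the discriminator
    becomes confident either way."""
    return A * (p_fake - 0.5) ** 2

def generator_step(params, seeds, generate, dac_and_adc, discriminator,
                   lr=1e-3, eps=1e-4):
    """One gradient-descent step on the generator parameters."""
    base = assumed_mse(discriminator(dac_and_adc(generate(params, seeds))))
    grad = np.zeros_like(params)
    for i in range(params.size):               # numerical gradient per parameter
        p = params.copy()
        p.flat[i] += eps
        loss = assumed_mse(discriminator(dac_and_adc(generate(p, seeds))))
        grad.flat[i] = (loss - base) / eps
    return params - lr * grad                  # small learning step * gradient
```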
  • FIG. 16 illustrates a flow-chart outlining calibrating 1600 a machine-learning algorithm to operate the machine-learning enabled digital-to-analog converter 100, according to an implementation of the present disclosure.
  • the algorithm 1600 begins at S1605 and advances to S1610.
  • at S1610, a first machine-learning algorithm is produced.
  • the first machine-learning algorithm produces an instruction set, based at least in part on a digital signal 105, to drive the DAC array 125 such that the machine-learning enabled DAC 100 produces an analog output signal 150 that represents the digital signal 105.
  • the algorithm 1600 then advances to optional S1615.
  • a second machine-learning algorithm is produced, based at least in part on the first machine-learning algorithm and a user-defined process.
  • This second machine-learning algorithm produces an instruction set 115, based at least in part on a digital signal 105, to drive the DAC array 125 such that the device 100 produces an analog output signal 150 that represents the result of applying a user-defined process to the digital signal 105.
  • implementations including S1615 can have the same effect as applying a user-defined process to the digital signal 105 before it enters the machine-learning unit 110.
  • the algorithm 1600 then advances to S1620, in which the algorithm 1600 concludes.
  • FIG. 17 illustrates a flow-chart outlining an algorithm 1700 for creating a first machine-learning algorithm to produce an instruction set, based on a digital signal, that can cause the machine-learning enabled DAC to produce an analog output signal that represents the digital signal, according to an implementation of the present disclosure.
  • the algorithm 1700 begins at S1705, and the algorithm 1700 advances to S1710.
  • a first process creates an instruction set 115.
  • the instruction set 115 is a vector that contains N values, where N is equal to the number of DACs in the DAC array 125 that use instructions.
  • An example first process is generating the instruction set 115 with a random number generator. The algorithm 1700 then advances to S1715.
  • a plurality of analog features 135 are produced, with the DAC array 125, each analog feature based at least in part on a respective instruction 120 from the instruction set 115.
  • the plurality of analog features 135 are produced by a physical implementation of the DAC array 125.
  • the plurality of analog features 135 are produced with a numerically simulated DAC array.
  • the algorithm 1700 then advances to S1720.
  • the algorithm 1700 produces, with a combining unit, a first analog output 150, based at least in part on the plurality of analog features 135.
  • the combining unit is implemented physically. In other implementations, the combining unit is numerically simulated.
  • the algorithm 1700 then advances to optional S1725.
  • a second analog output signal 1110 is produced with the combining unit 145, based at least in part on the plurality of analog features 135.
  • the algorithm then advances to optional S1730.
  • the algorithm 1700 produces, with a second process, a sensor input 1010. This process is illustrated in connection with Fig. 10, for example.
  • the analog circuitry in the DAC array 125 or the combining unit 145 is likely to be the most affected by the environmental conditions experienced by the machine-learning enabled DAC 100.
  • An example of this second process is producing the first environmental digital signal 1010 with an environmental sensor.
  • the algorithm 1700 then advances to S1735.
  • the algorithm 1700 produces a first digital signal, based at least in part on the first analog output signal 150, with a first analog-to-digital converter.
  • the first analog-to-digital converter is physically implemented. In other implementations, the first analog-to-digital converter is numerically simulated.
  • the algorithm 1700 then advances to optional S1740.
  • a second digital signal is produced, based at least in part on the second analog output signal 1110, with a second analog-to-digital converter.
  • the second analog output signal 1110 was discussed in connection with Fig. 11.
  • the algorithm 1700 then advances to S1745.
  • the algorithm 1700 produces a first training example based at least in part on the instruction set 115 and the first digital signal.
  • a training example has an input vector, containing the first digital signal, and an output vector, containing the instruction set 115.
  • the first environmental digital signal 1010 can be included in the input vector.
  • the second digital signal 910 can be included in the input vector.
  • the algorithm 1700 then advances to S1750.
  • a test result is produced by applying a test process, based at least in part on the first digital signal.
  • An example test process is determining if the first digital signal is within the operating specifications of the machine-learning enabled DAC 100 with regards to voltage, current, power, or frequency characteristics.
  • the test result can be produced based at least in part on the second digital signal.
  • the algorithm 1700 then advances to S1760.
  • the algorithm 1700 checks if the test result is positive. If the test result is positive, the first training example is stored in a first dataset. If the test result is negative, the training example can be discarded. The algorithm 1700 then advances to S1765.
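  • Steps S1745 through S1760 could be sketched as follows, with assumed voltage and frequency limits standing in for the device's operating specifications and an optional environmental signal appended to the input vector; all names are illustrative.

```python
import numpy as np

V_MAX, F_MAX = 1.0, 250e3                      # assumed operating limits

def within_spec(digital_signal, fs=1e6):
    """Assumed test process: amplitude and dominant-frequency checks."""
    if np.max(np.abs(digital_signal)) > V_MAX:
        return False
    spectrum = np.abs(np.fft.rfft(digital_signal))
    freqs = np.fft.rfftfreq(len(digital_signal), d=1 / fs)
    return freqs[np.argmax(spectrum)] <= F_MAX

first_dataset = []

def process_example(instruction_set, first_digital_signal, env_signal=None):
    """Build a training example and store it only on a positive test result."""
    inputs = (np.concatenate([first_digital_signal, env_signal])
              if env_signal is not None else first_digital_signal)
    example = {"input": inputs, "output": instruction_set}
    if within_spec(first_digital_signal):      # positive test result
        first_dataset.append(example)          # store in the first dataset
    return example
```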
  • a first machine-learning algorithm for the machine-learning unit is trained, with the first dataset, to produce an instruction set based at least in part on a digital signal.
  • the first machine-learning algorithm can be trained to produce the instruction set based at least in part on the first environmental digital signal 1010.
  • Environmental conditions have a large effect on the behavior of analog circuitry, and thus the output of the machine-learning enabled DAC 100 is strongly affected by environmental conditions.
  • the first machine-learning algorithm can be trained to produce an instruction set based at least in part on the second digital signal.
  • the algorithm 1700 then advances to S1770.
  • the first machine-learning algorithm can be installed in the machine-learning unit in the machine-learning enabled DAC 100, which produces the instruction set 115 based at least in part on the digital signal 105.
  • the instruction set can be based at least in part on the first environmental digital signal 1010.
  • the instruction set 115 can be based at least in part on the second digital signal 910. The algorithm 1700 then advances to S1775 and concludes.
  • a further variation of the standard calibration method 1700 for the first machine-learning algorithm involves exposing the DAC array 125 or the combining unit 145 to environmental conditions that vary by more than a certain range. These variable conditions can affect the hardware such that it produces different outputs. For example, many conductive materials have a positive temperature coefficient of resistance, meaning that the resistance of the material increases with temperature. Many insulative materials have a negative temperature coefficient, meaning that the resistance of the material decreases with temperature. Functional components in electrical devices contain conductive and insulative materials. The resistance of materials in the device changes with the temperature. For this reason, the electrical properties of a device can change if the temperature of the device changes.
  • analog circuits are specified to behave in a certain way within a particular range of environmental conditions.
  • the functional circuits that transform signals depend on an assumption that the values of electrical properties in the circuits’ constituent materials relate to each other in fixed proportions. Changes in temperature can change the values of the electrical properties, and can change the ratios between those values, and can break assumptions about the ratios. If these ratio assumptions are false, then the functional circuits operate outside of their specification, and can behave abnormally.
  • the extent to which and the speed with which environmental conditions affect a material within a microchip can depend on, among other things, the material, the amount of material, the magnitude of the environmental phenomena, and relevant laws of thermodynamics and electromagnetism.
  • the machine-learning unit 110 can be calibrated using training data recorded while the DAC array 125 or the combining unit 145 are operated while experiencing a range of environmental conditions. By calibrating across this variety of training examples, the machine-learning unit 110 can find a robust solution that works regardless of the environmental conditions.
  • the DAC array 125 and the combining unit 145 produce the analog features 135 and the first analog output 150 while affected by shifts in at least one environmental condition.
  • the first machine-learning algorithm can be trained on these training examples to minimize the error across all training examples.
  • the machine-learning algorithm can therefore learn to produce an instruction set 115 that produces an analog signal 150 that most closely represents the digital signal 105, regardless of the environmental conditions.
  • the instruction set 115 can drive the DAC array 125 to produce redundant analog features 140 that will average out environmental distortion at the combining unit 145.
  • Another example is an instruction set 115 that drives a converter 130 in the DAC array 125 in a conservative domain that is consistent across environmental conditions.
  • Environmental conditions, and their corresponding ranges, that can be calibrated against include the following: a temperature range of at least ten degrees Celsius, a humidity range of at least five percent relative humidity, an operational age range of at least four hundred hours, an electric field strength range of at least one volt per meter, a magnetic field strength range of at least one microTesla, a variation in the supply voltage of at least one millivolt, or a barometric pressure range of at least 10 kiloPascals.
  • These conditions can be physically applied to a physical DAC array 125 and/or combining unit 145, or the conditions can be numerically simulated for a numerically simulated DAC array 125 and/or numerically simulated combining unit 145.
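  • For the numerically simulated case, one assumed way to sweep a condition is sketched below: a linear temperature coefficient perturbs each converter's gain across a range wider than ten degrees Celsius, so the recorded training examples capture the drift. The coefficient value and model form are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16
basis = rng.standard_normal((N, 64))           # nominal converter responses

def simulate_dac_at(codes, temp_c, t0=25.0, tc=0.002):
    """Assumed linear gain drift with temperature (tc is illustrative)."""
    gain = 1.0 + tc * (temp_c - t0)
    return (codes * gain) @ basis

training_examples = []
for temp in np.linspace(20.0, 35.0, 7):        # 15 degC spread (>= 10 degC)
    codes = rng.uniform(-1, 1, size=N)
    training_examples.append((simulate_dac_at(codes, temp), codes, temp))
```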
  • a second training example can be produced by repeating the algorithm starting at S1710.
  • FIG. 18 illustrates a flow-chart outlining an algorithm 1800 for creating a second machine-learning algorithm.
  • the second machine-learning algorithm produces an instruction set, based on a digital signal, that can cause the machine-learning enabled DAC to produce an analog output signal that represents a user-defined process applied to the digital signal, according to an implementation of the present disclosure.
  • Fig. 18 illustrates an implementation of S1615.
  • the algorithm 1800 begins at S1805, and the algorithm 1800 advances to S1810.
  • the algorithm 1800 produces a first custom digital signal, with a third process.
  • An example third process is generating the first custom digital signal with a random number generator.
  • the algorithm 1800 then advances to optional S1815.
  • the algorithm 1800 can produce a second custom digital signal with a fourth process.
  • An example fourth process is generating the second custom digital signal with a random number generator.
  • the algorithm 1800 advances to S1820.
  • a first user-defined process produces a modified digital signal, based at least in part on the first custom digital signal.
  • the modified digital signal can be produced, based at least in part on the second custom digital signal.
  • the algorithm then advances to optional S1825.
  • a digital output signal is produced based at least in part on the first custom digital signal, using a second user-defined process.
  • the algorithm 1800 advances to S1830.
  • the first machine-learning algorithm produces an instruction set based at least in part on the modified digital signal.
  • the algorithm 1800 advances to S1835.
  • a custom training example is produced, based at least in part on the instruction set and the first custom digital signal.
  • the second custom digital signal can be included in the first custom training example.
  • the first digital output signal can be included in the first custom training example.
  • the algorithm 1800 advances to S1840.
  • the algorithm 1800 stores the first custom training example in a second dataset.
  • the algorithm 1800 advances to S1845.
  • the algorithm 1800 trains a second machine-learning algorithm, with the second dataset, to produce an instruction set based at least in part on a first digital signal.
  • the second machine-learning algorithm can be trained based at least in part on the second custom digital signal.
  • a first digital output can be produced by the second machine-learning algorithm, based at least in part on a digital signal.
  • the algorithm 1800 advances to S1850.
  • the second machine-learning algorithm is installed on the machine-learning enabled DAC 100.
  • the second machine-learning algorithm produces an instruction set 115 based at least in part on a first digital signal 105.
  • the second machine-learning algorithm can produce the instruction set 115 based at least in part on the second digital signal 910.
  • the second machine-learning algorithm produces the instruction set 115 and a first digital output 1210, both based at least in part on the first digital signal 105.
  • the algorithm 1800 passes the instruction set 115 to the DAC array 125, and the resulting analog output signal 150 represents the result of the user process applied to the first digital signal 105.
  • the resulting analog output signal 150 can also represent the result of the user process applied to the second digital signal 910.
  • the first digital output 1210 can be transmitted from the device 100.
  • An example digital output signal 1210 could be a flag that is thrown when a certain type of digital signal 105 is received.
  • the digital output signal 1210 can also be a modified version of the digital input signal 105 based on a user-defined process.
  • FIG. 19 illustrates an example of an operation in which a machine-learning enabled DAC produces a PAM4 voltage signal given a stream of digital numbers, according to an implementation of the present disclosure.
  • FIG. 20 illustrates an example of operation with an environmental sensor input in which machine-learning enabled DAC produces a PAM4 voltage signal given a stream of digital numbers and a device temperature reading, according to an implementation of the present disclosure. This allows the machine-learning unit to compensate for temperature related changes in the behavior of the array of DACs and the combining unit by adjusting the instruction set such that the device produces the correct PAM4 signal.
  • FIG. 21 illustrates an example of operating with a user-defined process in which a machine-learning enabled DAC produces an analog output signal used to control a motor in a robotic system based on a user-defined process applied to a digital image, according to an implementation of the present disclosure.
  • FIG. 22 illustrates an example of operation with a digital output in which a machine-learning enabled DAC produces both an analog output signal that represents the music contained in an MP3 and a digital signal containing the name of the musician, according to an implementation of the present disclosure.
  • the musician’s name is not necessarily contained in the metadata, but rather the machine-learning unit has been trained to recognize the musician's name based on the music contained in the MP3.
  • FIG. 23 illustrates an example of operating with a digital input flag in which a machine-learning enabled DAC produces audio of English text converted into the Greek language, according to an implementation of the present disclosure.
  • the machine-learning unit receives the English text and the language flag set to Greek, and produces the audio translated into Greek.
  • FIG. 24 illustrates an example of operating with a second digital signal in which a machine-learning enabled DAC produces an analog output music signal based on the music represented by the digital samples in an MP3 file, the first digital signal, with the lyrics in the music replaced by the lyrics in a text file, the second digital signal, according to an implementation of the present disclosure.
  • FIG. 25 illustrates an example of operating with multiple analog outputs in which a machine-learning enabled DAC produces separate tracks from music contained in an MP3 file, according to an implementation of the present disclosure.
  • Example AA1 is a machine-learning-enabled digital-to-analog converter that includes a machine-learning unit that receives a first digital signal and is configured to produce an instruction set based at least in part on a machine-learning algorithm and the first digital signal; an array of digital-to-analog converters that produces a plurality of analog features, each based at least in part on a respective instruction from the instruction set; and a combining unit that produces a first analog signal based at least in part on the plurality of analog features.
  • Example AA2 is the machine-learning-enabled digital-to-analog converter of Example AA1, wherein the machine-learning unit includes a digital neural network that receives the first digital signal at the network’s input and produces the instruction set at the network’s output.
  • Example AA3 is the machine-learning-enabled digital-to-analog converter of any of Example AA1 or Example AA2, wherein a converter in the array of digital-to-analog converters includes a digital-to-analog converter that produces an intermediate analog feature, based at least in part on a respective instruction from the instruction set; and a multiplexing circuit that produces the respective analog feature by frequency shifting the intermediate analog feature to a frequency band centered on a predetermined frequency.
  • Example AA4 is the machine-learning-enabled digital-to-analog converter of any of Examples AA1-AA3, wherein a converter in the array of digital-to-analog converters includes a digital-to-analog converter that produces the respective analog feature as a voltage pulse of a fixed amplitude, duration, and time offset, to be triggered by a respective instruction from the instruction set.
  • Example AA5 is the machine-learning-enabled digital-to-analog converter of any of Examples AA1-AA4, wherein the array of digital-to-analog converters includes at least one converter that produces a respective analog feature utilizing a method that differs from a method used by another converter in the array of digital-to-analog converters to produce a respective analog feature.
  • Example AA6 is the machine-learning-enabled digital-to-analog converter of any of Examples AA1-AA5, wherein the combining unit includes an analog summation circuit that produces the first analog signal based at least in part by linearly adding together the plurality of analog features.
  • Example AA7 is the machine-learning-enabled digital-to-analog converter of any of Examples AA1-AA6, wherein the machine-learning unit receives a second digital signal and produces the instruction set based at least in part on the second digital signal.
  • Example AA8 is the machine-learning-enabled digital-to-analog converter of Example AA7, wherein the second digital signal, received by the machine-learning unit, contains information about environmental conditions that affect at least one of the components in the machine-learning-enabled digital-to-analog converter.
  • Example AA9 is the machine-learning-enabled digital-to-analog converter of any of Examples AA1-AA8, wherein the combining unit produces a second analog signal, based at least in part on the plurality of analog features.
  • Example AA10 is the machine-learning-enabled digital-to-analog converter of any of Examples AA1-AA9, wherein the machine-learning unit produces a digital output signal, based at least in part on the first digital signal.
  • Example AM1 is a method to be implemented by a machine-learning-enabled digital-to-analog converter, the method including: receiving a first digital signal at a machine-learning unit; producing, with a machine-learning algorithm contained in the machine-learning unit, an instruction set, based at least in part on the first digital signal; producing, with an array of digital-to-analog converters, a plurality of analog features, each based at least in part on a respective instruction from the instruction set; and producing, with a combining unit, a first analog signal, based at least in part on the plurality of analog features.
  • Example AM2 is the method of Example AM1, wherein the producing the instruction set includes receiving the first digital signal at a digital neural network’s input and producing the instruction set at the network’s output.
  • Example AM3 is the method of any of Example AM1 or Example AM2, wherein producing the plurality of analog features further comprises producing a plurality of intermediate analog features, each based at least in part on a respective instruction from the instruction set; and producing a plurality of respective analog features, each based at least in part on frequency-shifting a respective intermediate analog feature into a predetermined frequency band.
  • Example AM4 is the method of any of Examples AM1-AM3, wherein the producing the plurality of analog features includes producing the respective analog feature based at least in part on a voltage pulse with a fixed amplitude, duration, and time offset, to be triggered by a respective instruction from the instruction set.
  • Example AM5 is the method of any of Examples AM1-AM4, wherein the producing the plurality of analog features includes producing an analog feature of the plurality of analog features using a method that differs from the method of producing another analog feature in the plurality of features.
  • Example AM6 is the method of any of Examples AM1-AM5, wherein producing the analog signal, with the combining unit, includes linearly adding together the plurality of analog features.
  • Example AM7 is the method of any of Examples AM1-AM6, further comprising: receiving a second digital signal at the machine-learning unit; and producing, with the machine-learning unit, the instruction set, based at least in part on the second digital signal.
  • Example AM8 is the method of Example AM7, wherein the second digital signal, received at the machine-learning unit, contains information about environmental conditions that affect at least one of the components in the machine-learning enabled digital-to-analog converter.
  • Example AM9 is the method of any of Examples AM1-AM8, further comprising: producing, with a combining unit, a second analog signal, based at least in part on the plurality of analog features.
  • Example AM10 is the method of any of Examples AM1-AM9, further comprising: producing, with the machine-learning unit, a digital output signal based at least in part on the first digital signal.
  • Example BM1 is a method that includes: producing, with a first process, a first instruction set; producing, with an array of digital-to-analog converters of a machine-learning-enabled digital-to-analog converter, a plurality of analog features, each based at least in part on a respective instruction from the first instruction set; producing, with a combining unit of the machine-learning-enabled digital-to-analog converter, a first analog output signal, based at least in part on the plurality of analog features; producing, with a first analog-to-digital converter, a first digital signal, based at least in part on the first analog output signal; producing a first training example, based at least in part on both the first instruction set and the first digital signal; producing, with a test process, a test result, based at least in part on the first digital signal; storing, if the test result is positive, the first training example in a first dataset; training a first machine-learning algorithm, with the first dataset, to produce an instruction set based at least in part on a digital signal; and producing, with the first machine-learning algorithm, an instruction set based at least in part on a digital signal.
  • Example BM3 is the method of Example BM1 or Example BM2, wherein producing the test result includes determining if the first digital signal is within the operating specifications of the machine-learning enabled digital-to-analog converter, with regards to voltage, current, power, or frequency characteristics.
  • Example BM4 is the method of any of Examples BM1-BM3, further comprising: producing , with a second process, a first environmental digital signal, based at least in part on an environmental condition affecting at least one of the components in the machine-learning enabled digital-to-analog converter; producing the first training example, based at least in part on the first environmental digital signal; training the first machine-learning algorithm, with the first dataset, to produce an instruction set based at least in part on an environmental digital signal; and producing, with the first machine-learning algorithm, an instruction set based at least in part on an environmental digital signal.
  • Example BM5 is the method of any of Examples BM1-BM4, wherein producing at least one of the plurality of analog features, producing the first analog output signal, or producing the first digital signal occurs by numerical simulation.
  • Example BM6 is the method of any of Examples BM1-BM5, further comprising producing, with the combining unit, a second analog signal, based at least in part on the plurality of analog features; producing, with a second analog-to-digital converter, a second digital signal, based at least in part on the second analog signal; producing the first training example, based at least in part on the second digital signal; producing, with the test process, the test result, based at least in part on the second digital signal; training the first machine-learning algorithm, with the first dataset, to produce an instruction set based at least in part on a second digital signal; and producing, with the first machine-learning algorithm, an instruction set based at least in part on a second digital signal.
  • Example BM7 is the method of any of Examples BM1-BM6, further comprising: producing the plurality of analog features, with the array of digital-to-analog converters, or producing the first analog output signal, with the combining unit, while the array of digital-to-analog converters or the combining unit are subject to environmental conditions that vary by at least one of the following: a temperature range of at least ten degrees Celsius, a humidity range of at least five percent relative humidity, an operational age range of at least four hundred hours, an electric field strength range of at least one volt per meter, a variation in supply voltage of at least one millivolt, a magnetic field strength range of at least one microTesla, or a barometric pressure range of at least 10 kiloPascals.
  • Example BM8 is the method of any of Examples BM1-BM7, further comprising: producing the plurality of analog features, with the array of digital-to-analog converters, or producing the first analog output signal, with the combining unit, while the array of digital-to-analog converters or the combining unit are subject to numerically simulated environmental conditions that vary by at least one of the following: a temperature range of at least ten degrees Celsius, a humidity range of at least five percent relative humidity, an operational age range of at least four hundred hours, an electric field strength range of at least one volt per meter, a variation in supply voltage of at least one millivolt, a magnetic field strength range of at least one microTesla, or a barometric pressure range of at least 10 kiloPascals.
  • Example BM9 is the method of any of Examples BM1-BM8, further comprising: producing, with a third process, a first custom digital signal; producing, with a first user-defined process, a processed digital input signal, based at least in part on the first custom digital signal; producing, with the first machine-learning algorithm, an instruction set, based at least in part on the processed digital input signal; producing a first custom training example, based at least in part on the first custom digital signal and the instruction set; storing the first custom training example in a second dataset; training , with the second dataset, a second machine-learning algorithm to produce an instruction set based at least in part on a first digital signal; and producing, with the second machine-learning algorithm, an instruction set based at least in part on a first digital signal.
  • Example BM10 is the method of Example BM9, further comprising: producing, with a fourth process, a second custom digital signal; producing, with the first user-defined process, the processed digital input signal, based at least in part on the second custom digital signal; producing the first custom training example, based at least in part on the second custom digital signal; training, with the second dataset, the second machine-learning algorithm to produce an instruction set based at least in part on a second digital signal; and producing, with the second machine-learning algorithm, an instruction set based at least in part on a second digital signal.
  • Example BM11 is the method of Example BM9 or Example BM10, further comprising: producing, with a second user-defined process, a digital output signal, based at least in part on the first custom digital signal; producing the first custom training example, based at least in part on the digital output signal; training, with the second dataset, the second machine-learning algorithm to produce a digital output signal based at least in part on a first digital signal; and producing, with the second machine-learning algorithm, a digital output signal based at least in part on a first digital signal.
  • Example BM12 is the method of any of Examples BM9-BM11, wherein producing, with a third process, a first custom digital signal includes generating a random number.
  • Example BM13 is the method of Example BM10, wherein producing, with a fourth process, a second custom digital signal includes generating a random number.

Abstract

A machine-learning enabled digital-to-analog converter includes a machine-learning unit that receives a first digital signal and produces an instruction set based, at least in part, on a machine-learning algorithm and the first digital signal; an array of digital-to-analog converters that produces a plurality of analog features, each based at least in part on a respective instruction from the instruction set; and a combining unit that produces a first analog signal based, at least in part, on the plurality of analog features.



