
WO2000072106A1 - Phase extraction in optical processing - Google Patents

Phase extraction in optical processing

Info

Publication number
WO2000072106A1
Authority
WO
WIPO (PCT)
Prior art keywords
data set
image
transform
data
light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IL2000/000284
Other languages
French (fr)
Inventor
Efraim Goldenberg
David Mendlovic
Emanuel Marom
Leonard Bergstein
Aviram Sariel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
JTC 2000 DEVELOPMENT (DELAWARE) Inc
Original Assignee
JTC 2000 DEVELOPMENT (DELAWARE) Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from PCT/IL1999/000479 external-priority patent/WO2000072267A1/en
Application filed by JTC 2000 DEVELOPMENT (DELAWARE) Inc
Priority to AU46079/00A priority Critical patent/AU4607900A/en
Priority to DE60017738T priority patent/DE60017738T2/en
Priority to EP00927694A priority patent/EP1190286B1/en
Priority to AT00927694T priority patent/ATE288098T1/en
Publication of WO2000072106A1 publication Critical patent/WO2000072106A1/en
Anticipated expiration legal-status Critical
Priority to US11/470,800 priority patent/US7515753B2/en
Ceased legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06E: OPTICAL COMPUTING DEVICES; COMPUTING DEVICES USING OTHER RADIATIONS WITH SIMILAR PROPERTIES
    • G06E3/00: Devices not provided for in group G06E1/00, e.g. for processing analogue or hybrid data
    • G06E3/001: Analogue devices in which mathematical operations are carried out with the aid of optical or electro-optical elements
    • G06E3/003: Analogue devices in which mathematical operations are carried out with the aid of optical or electro-optical elements forming integrals of products, e.g. Fourier integrals, Laplace integrals, correlation integrals; for analysis or synthesis of functions using orthogonal functions
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06E: OPTICAL COMPUTING DEVICES; COMPUTING DEVICES USING OTHER RADIATIONS WITH SIMILAR PROPERTIES
    • G06E1/00: Devices for processing exclusively digital data
    • G06E1/02: Devices for processing exclusively digital data operating upon the order or content of the data handled
    • G06E1/04: Devices for processing exclusively digital data operating upon the order or content of the data handled for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06E1/045: Matrix or vector computation
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06E: OPTICAL COMPUTING DEVICES; COMPUTING DEVICES USING OTHER RADIATIONS WITH SIMILAR PROPERTIES
    • G06E3/00: Devices not provided for in group G06E1/00, e.g. for processing analogue or hybrid data
    • G06E3/001: Analogue devices in which mathematical operations are carried out with the aid of optical or electro-optical elements
    • G06E3/005: Analogue devices in which mathematical operations are carried out with the aid of optical or electro-optical elements using electro-optical or opto-electronic means

Definitions

  • the invention relates to optical methods and apparatus for performing computations and in particular to transforming a first data set into a second data set by a linear transformation and determining the phase of data elements in the second data set.
  • Optical data processing can often be used to process data more rapidly and efficiently than conventional computational methods.
  • optical methods can be used to perform linear transformations of data sets rapidly and efficiently.
  • converging lenses can be used to substantially "instantaneously" transform a first image into a second image that is a Fourier transform of the first image.
  • the Fourier transform is a relationship between the complex amplitudes of light in the images and not between the intensities of light in the images.
  • the transformation is a transformation of complex amplitudes of light and not intensities of light.
  • a second image is said to be a Fourier, or other, transform of a first image
  • the spatial pattern of the complex amplitude of light in the second image is the Fourier, or other, transform of the spatial pattern of the complex amplitude of light in the first image.
  • the second image is coded with data that is the Fourier transform of the data in the first image.
  • a suitable optical processor can therefore provide substantial advantages in comparison to a conventional data processor when a spectral analysis of a data set is desired.
  • a Fourier transform of a data set in general involves complex numbers, even if the data set comprises only real numbers. Therefore, in order to properly detect an "optical" Fourier transform of a data set, phase as well as intensity of light of an image representing the Fourier transform must be detected. While this can be accomplished, most light detectors are generally sensitive only to light intensity and are not responsive to phase. It is therefore generally more convenient to determine values for data represented by an image from only the intensity of light in the image.
  • An aspect of some embodiments of the present invention relates to providing a method for determining the sign of data encoded in an output image of a linear optical processor using measurements of intensity of light in the output image, hereinafter referred to as a "data output image".
  • the data output image is assumed to be generated by the processor responsive to an input image, a "data input image”, encoded with input data that is real.
  • the input data is either all positive or all negative. For clarity of presentation it is assumed that the input data is all positive.
  • a reference input image is defined for the optical processor. Magnitude and phase of amplitude of a "reference" output image generated by the processor responsive to the input reference image are used to determine the sign of data represented by the data output image.
  • the operation of a linear optical processor may be described by the equation
  • F(u,v) = O(u,v:x,y)f(x,y).
  • f(x,y) is a complex amplitude of light in an input image, i.e. a data input image, that represents input data, which data input image is located on an input plane of the processor, and x and y are coordinates of the input plane.
  • F(u,v) is a complex amplitude of light in a data output image that the processor generates responsive to f(x,y).
  • the data output image is located on an output plane of the processor having position coordinates u and v corresponding respectively to position coordinates x and y of the input plane. Intensity of light in the data input image is equal to |f(x,y)|2.
  • O(u,v:x,y) represents any continuous or discrete linear operator that transforms a first real data set into a second real data set.
  • O(u,v:x,y) may for example represent the continuous or discrete sine or cosine transform or the Hartley transform.
  • for continuous linear transformations, u, v, x and y are continuous and multiplication in the equation representing operation of the processor represents integration over the x, y coordinates.
  • for discrete linear operators, u, v, x and y are discrete coordinates and multiplication represents summation over the x, y coordinates.
  • the input data is assumed to be real and positive
  • the phase of f(x,y) is constant and input data is represented by the magnitude of f(x,y).
  • F(u,v) also represents a real data set. However F(u,v) may have both positive and negative data. Data having positive values are represented by values of F(u,v) having a same first phase. Data having negative values are represented by values of F(u,v) having a same second phase 180° different from the first phase.
  • the reference input image and its corresponding reference output image are represented by r(x,y) and R(u,v). Both r(x,y), R(u,v) and the intensity of light in the reference output image |R(u,v)|2 are known.
  • R(u,v) is a real data set comprising values all of which have a same sign.
  • the data set comprises one of or a combination of positive, negative and complex values.
  • to determine both the magnitude and sign of F(u,v), the intensity of the data output image |F(u,v)|2 is measured.
  • the sign of F(u,v) can be determined from |C(u,v)|2, |F(u,v)|2, |R(u,v)|2 and R(u,v).
  • in general, (|C(u,v)|2 - |F(u,v)|2 - |R(u,v)|2)/2R(u,v) provides a magnitude and a phase for F(u,v).
  • the phase is known to within an ambiguity, for example, a symmetry ambiguity or a 180° ambiguity.
  • the ambiguity is removed and the phase extracted by determining a combined image C(u,v) for two or more different reference images r(x,y). The phase can be extracted for example by solving for F(u,v) using the two combined and reference images.
  • the reference image is chosen so that |R(u,v)| > |F(u,v)| for all values of u and v for which R(u,v) and F(u,v) have opposite signs.
  • An aspect of some embodiments of the present invention relates to providing an improved method for generating a cosine transform of an "input" image using an optical processor that generates a Fourier transformed output image from an input image.
  • a first Fourier image that is a Fourier transform of the input image is generated by the optical processor and the intensity of the Fourier image measured and stored.
  • a second Fourier image is generated by the optical processor from the input image plus a known first reference image and the intensity of the second Fourier image is measured and stored.
  • the input image is parity transformed to generate a second input image, referred to as a "parity image”.
  • a third Fourier image which is a Fourier transform of the parity image is generated and its intensity measured and stored.
  • a fourth Fourier image is generated which is a Fourier transform of the parity image plus a known second reference image.
  • the intensities of the four Fourier images and the amplitudes of the known reference images are used to determine the cosine transform of the input image.
  • the first and second reference images are the same.
  • a method of optical data processing comprising: providing a first data set to be optically transformed using a transform; combining a reference data set with said first data set to generate a combined data set; optically transforming said combined data set into a transformed combined data set; and extracting a second data set that represents a transform of said first data set, from an amplitude portion of said transformed combined data set, using said reference image to extract a phase of at least one element of said second data set.
  • said transformed combined data set is detected using a power detector.
  • said transformed combined data set is encoded using incoherent light.
  • said transformed combined data set is a discrete data set.
  • said first data set comprises a one-dimensional data set.
  • said first data set comprises a two-dimensional data set.
  • said first data set comprises an image.
  • said first data set comprises at least one positive value.
  • said first data set comprises at least one negative value.
  • said first data set comprises at least one complex value.
  • extracting comprises extracting using electronic processing.
  • combining a reference data set comprises adding at least one additional value to an existing element of said first data set.
  • combining a reference data set comprises replacing at least one existing element of said first data set with an element from a second data set.
  • the method comprises compensating for an effect of said replaced value after said extraction.
  • said compensating comprises compensating using electronic processing.
  • combining a reference data set comprises adding at least one additional value alongside existing elements of said first data set.
  • said at least one additional value is arranged at a corner of a matrix layout of said first data set.
  • the method comprises selecting said reference image to create a desired offset in said transformed combined data set.
  • said selecting takes into account system imperfections.
  • said offset is substantially uniform.
  • said offset is substantially non-uniform.
  • said reference data is at least one delta- function.
  • said reference data comprises a plurality of delta-functions.
  • said at least one delta function has an amplitude substantially greater than that of any of the data elements of said first data set.
  • said at least one delta function has an amplitude substantially greater than that of any of the data elements of said first data set that have a certain phase.
  • said at least one delta function has an amplitude substantially greater than an amplitude of a component of any of the data elements of said first data set that fit in a certain phase range. In an exemplary embodiment of the invention, said at least one delta function has an amplitude not greater than that of any of the data elements of said first data set.
  • said amplitudes are measured as amplitudes of transform elements.
  • combining comprises combining electronically and generating a combined modulated light beam.
  • combining comprises combining optically.
  • combining comprises creating said reference image optically.
  • said reference image is created using a refractive optical element.
  • said reference image is created using a dedicated light source.
  • said transform is a Fourier-derived transform.
  • said transform is a DCT transform.
  • extracting a phase comprises extracting only a sign.
  • Fig. 1 schematically shows an optical processor generating a Fourier transform of an image according to prior art
  • Fig. 2 schematically shows the optical processor shown in Fig. 1 generating a cosine transform of an image in accordance with prior art
  • FIGs. 3A and 3B schematically show an optical processor generating a cosine transform of an image in accordance with an embodiment of the present invention
  • Fig. 4A schematically shows an optical processor that generates a reference image that is a delta function, in accordance with an embodiment of the present invention
  • Fig. 4B schematically shows a lens system for generating a delta function reference image, in accordance with an embodiment of the present invention
  • Figs. 5A-5D schematically illustrate a method of generating a cosine transform of an image, in accordance with an embodiment of the present invention.
  • a real linear transform performed by an optical processor is a cosine transform.
  • the optical processor uses the Fourier transform properties of converging lenses whereby a converging lens transforms an image into its Fourier transform, to generate a cosine transform of an image.
  • the Fourier transform properties of lenses are described in "Introduction to Fourier Optics" by J. W. Goodman, McGraw-Hill Companies, second edition, copyright 1996.
  • Fig. 1 schematically shows an optical processor 20 that functions to transform images into their Fourier transforms according to prior art.
  • Optical processor 20 comprises a converging lens 22, an input plane 24 and an output plane 26. Input and output planes 24 and 26 are coincident with focal planes of lens 22.
  • lens 22 can be used to generate an image on output plane 26 that is a Fourier transform of an image on input plane 24.
  • a spatial light modulator 30 having pixels 32 is located at input plane 24 and that the spatial light modulator is illuminated with collimated coherent light, represented by wavy arrows 34, from a suitable light source.
  • Pixels 32 have transmittances as a function of position that are proportional to a desired function.
  • Spatial light modulator 30 may, for example, be a photographic transparency, a printed half tone image, a liquid crystal array or a multiple quantum well (MQW) modulator.
  • the transmittances are determined so that when spatial light modulator 30 is illuminated by light 34 a happy face 36 is formed at input plane 24.
  • Lens 22 will form an image (not shown) on output plane 26 that is the Fourier transform of the happy face 36 on input plane 24.
  • the Fourier transform of the function (1/4)[f(x,y)+f(-x,y)+f(x,-y)+f(-x,-y)] is the cosine transform of f(x,y).
  • Each of the functions in the square brackets is a parity transform, or a one dimensional reflection in the x or y axis, of the other functions in the brackets. It is therefore seen that the cosine transform of a two dimensional function can be generated by Fourier transforming all possible parity transforms of the function.
  • Fig. 2 illustrates how optical processor 20 shown in Fig. 1 can be used to generate a cosine transform of an image 40 in accordance with prior art by Fourier transforming all of the image's parity transforms.
  • Image 40 may, by way of example, be an 8 by 8 block of pixels from an image that is to be compressed according to the JPEG standard using a discrete cosine transform.
  • positions on input plane 24 and spatial light modulator 30 are defined by coordinates along x and y axes indicated on the spatial light modulator and positions on output plane 26 by coordinates along u and v axes indicated on the output plane.
  • respective origins 25 and 27 of the x, y coordinates and the u, v coordinates be the intersections of the optic axis (not shown) of lens 22 with input and output planes 24 and 26 respectively.
  • Image 40 is formed on the upper right quadrant of spatial light modulator 30 and reflections 42 and 44 of image 40 in the x and y axes are respectively formed in the lower right and upper left quadrants of the spatial light modulator.
  • a reflection 46 of image 40 along a 45° diagonal (not shown) to the x axis through the origin is formed in the lower left quadrant of spatial light modulator 30.
  • Let the amplitude of light in image 40 be represented by f(x,y).
  • f'(x,y) = (1/4)[f(x,y)+f(-x,y)+f(x,-y)+f(-x,-y)]. (The decrease in amplitude by 75%, i.e. the factor 1/4, is not necessary and can of course be achieved by proper control of spatial light modulator 30.) If the amplitude of light in an image formed on output plane 26 by lens 22 responsive to f'(x,y) is represented by F(u,v), then F(u,v) is the Fourier transform of f'(x,y). Because of the symmetry of the image on input plane 24, F(u,v) is also the cosine transform of f(x,y). If F.T. represents the operation of the Fourier transform and C.T. represents the operation of the cosine transform, then F(u,v) = F.T.{f'(x,y)} = C.T.{f(x,y)}.
  • f(x,y) and f'(x,y) represent data that is either all positive or all negative.
  • f(x,y) is assumed to be positive.
  • F(u,v) also represents real data.
  • F(u,v) may have both positive and negative data. Therefore, the cosine transform of image f(x,y) cannot be determined from the image on output plane 26 by measuring only the intensity |F(u,v)|2.
  • Figs. 3A and 3B schematically show an optical processor 50 being used to determine the sign and magnitude of the cosine transform F(u,v) of image 40, i.e. f(x,y), in accordance with an embodiment of the present invention.
  • Optical processor 50 is similar to optical processor 20 and comprises a lens 22, input and output planes 24 and 26.
  • processor 50 preferably comprises an array 52 of rows and columns of photosensors 54.
  • Each photosensor 54 generates a signal responsive to an intensity of light in an image on output plane 26 at a position determined by the row and column of array 52 in which the photosensor 54 is located and a pitch of array 52.
  • Photosensors 54 sample intensity of light at "discrete" positions (u,v) in output plane 26.
  • the number of photosensors 54 is equal to the number of pixels 32 in spatial light modulator 30 and the locations of photosensors 54 are homologous with the locations of pixels 32.
  • spatial light modulator 30 generates a first image at input plane 24 comprising image 40 and its parity reflections 42, 44 and 46.
  • the image is the same as the image comprising image 40 and its reflections shown in Fig. 2.
  • Lens 22 forms an image at output plane 26 having amplitude F(u,v).
  • spatial light modulator 30 generates a second "combined" image at input plane 24 that comprises the image shown on the input plane in Fig. 3A with the addition of a reference image 60 having a known amplitude r(x,y).
  • r(x,y) is chosen so that its Fourier transform is real, i.e. it has a symmetry with respect to the origin of axes x and y which results in its Fourier transform being real.
  • reference image 60 is formed by controlling central pixels 61, 62, 63 and 64 located at the origin of coordinates of input plane 24 to transmit light and appear bright.
  • lens 22 forms an image (not shown) on output plane 26 that is the Fourier transform of c(x,y) and photosensors 54 generate signals responsive to intensity, IC(u,v), of light in the image.
  • C(u,v) represents the Fourier transform of c(x,y)
  • IF(u,v), IC(u,v) and the known Fourier transform of r(x,y) are used to determine the magnitude and sign of F(u,v) and thereby the cosine transform of f(x,y).
  • the magnitude of F(u,v) is determined from the square root of IF(u,v).
  • the sign of F(u,v) can be determined by comparing IF(u,v) and IR(u,v) with IC(u,v). If IF(u,v) > IC(u,v) or IR(u,v) > IC(u,v) then R(u,v) and F(u,v) have opposite sign. Otherwise they have the same sign. Since the sign of R(u,v) is known the sign of F(u,v) is known.
  • reference image 60 is a symmetric image located at the origin of the x,y coordinates; other reference images are possible and can be used in the practice of the present invention.
  • pixels 32 at the corners of spatial light modulator 30 can be used to generate useful reference images.
  • pixels 32 only in certain regions of spatial light modulator 30 are used to represent data. Pixels that are not needed for data are used, in some embodiments of the present invention, to generate reference images.
  • some data pixels are canceled or provided elsewhere in the image, for example as pixels in overlapping blocks.
  • extra pixels are provided for the reference image, for example by inserting one or more rows or columns per block.
  • data pixels may be restricted to alternate rows or columns of pixels.
  • each data pixel may be surrounded by four pixels that are not used for data.
  • 9x9 blocks of data are used for an 8x8 block transform, with at least some of the extra pixels being used as a reference image. Alternatively or additionally, the effect of missing pixels may be corrected using an electronic or optical post-processing step.
  • dark pixels, i.e. pixels that are "turned off" and do not transmit light, can function to generate reference images.
  • if an image on spatial light modulator 30 has bright pixels at the origin of coordinates (i.e. pixels 61, 62, 63 and 64 in Fig. 3B), a reference image can be generated by "turning off" the pixels. Turning off pixels in an image is of course equivalent to adding a reference image to the image for which light at the turned-off pixels has a phase opposite to that of the light in the image.
  • reference image r(x,y) is chosen so that |R(u,v)| > |F(u,v)| for all values of u and v for which R(u,v) and F(u,v) have opposite signs.
  • only the operation in which IC(u,v) is measured is required to determine the magnitude and phase of F(u,v). If at a point (u,v), IC(u,v) - IR(u,v) > 0, then the signs of F(u,v) and R(u,v) are the same at the point; otherwise the signs are opposite.
  • Fig. 4A schematically shows a side view of an optical processor 70, in accordance with an embodiment of the present invention, that generates a reference field for which |R(u,v)| > |F(u,v)| for all values of u and v for which R(u,v) and F(u,v) have opposite signs.
  • Optical processor 70 comprises a "Fourier" lens 22 having an output plane 26 coincident with a focal plane of lens 22, a spatial light modulator 72 and a "beam partitioner” 74.
  • a detector array 76 is located at output plane 26 and measures intensity of light at the output plane.
  • Spatial light modulator 72 defines an input plane for Fourier lens 22 and may be located at substantially any position to the left of output plane 26.
  • spatial light modulator 72 is located by way of example adjacent to lens 22.
  • Beam partitioner 74 preferably receives an incident beam 78 of coherent collimated light generated by an appropriate source (not shown) and focuses a portion of the light to a point 80 and transmits a portion of the light as a transmitted beam of light 82 parallel to the incident beam.
  • Light from transmitted beam 82 illuminates and is transmitted through spatial light modulator 72 and is focused by lens 22 to form a Fourier transform F(u,v) of a transmittance pattern f(x,y) formed on the spatial light modulator. It is assumed that the transmittance pattern has an appropriate symmetry so that the Fourier transform is a cosine transform of a desired image.
  • Point 80 functions substantially as a point source of light and provides a reference image r(x,y) for f(x,y) that is substantially a delta function Aδ(x,y), where A is proportional to an intensity of light focused to point 80.
  • a Fourier image, R(u,v), of light from point 80 is also formed on output plane 26 by lens 22. Since r(x,y) is substantially a delta function, R(u,v) is substantially constant and equal to A.
  • beam partitioner 74 is a diffractive optical element such as a Fresnel zone plate having reduced efficiency.
  • beam partitioner 74 comprises an optical system 90 of a type shown in a side view in Fig. 4B.
  • Optical system 90 comprises a positive lens 92 and a weak negative lens 94.
  • Positive lens 92 is preferably coated with an antireflective coating using methods known in the art to minimize reflections.
  • Weak negative lens 94 is treated so that at its surfaces light is reflected with a reflectivity ρ.
  • Light from incident beam 78, represented by arrowed lines 96, that is transmitted through both positive lens 92 and negative lens 94 without reflections is focused to produce the point reference light source Aδ(x,y) at point 80. If the intensity of light in light beam 78 is "I" the amount of light focused to point 80 is substantially equal to I(1 - ρ)2.
  • Light that undergoes internal reflection twice in negative lens 94 is transmitted as transmitted beam of light 82 substantially parallel to incident beam 78.
  • the amount of energy in transmitted beam 82 is substantially equal to I(1 - ρ)2ρ2.
  • the ratio of energy focused to point 80 to that contained in transmitted beam 82 is therefore equal to 1/ρ2.
  • the cosine transform of f(x,y) can be determined from the intensities IFp(u,v), ICp(u,v) and Ap and IFm(u,v), ICm(u,v) and Am. It should be noted that whereas a delta function has been added as a reference field for f(x,y) and f(x,-y) in the above calculations, similar results can be obtained for other reference functions r(x,y).
  • Figs. 5A-5D illustrate a method, in accordance with an embodiment of the present invention, by which the functions IFp(u,v), ICp(u,v) and Ap and IFm(u,v), ICm(u,v) and Am are evaluated using an optical processor 100 to generate a cosine transform of a function f(x,y).
  • Optical processor 100 is similar to optical processors 50 and 70 and comprises a Fourier lens 22, a photosensor array 52 at an output plane 26, which is located at a focal plane of lens 22 and a spatial light modulator 30.
  • Fig. 5A assume that function f(x,y) is represented by an image 40 formed by spatial light modulator 30.
  • Optical modulator 100 generates the Fourier transform F(u,v) of f(x,y) and acquires values for IFp(u,v).
  • Processor 100 Fourier transforms Cp(x,y) and acquires ICp(u,v).
  • the point light source may be provided using any method known in the art. In some embodiments of the present invention the point light source is provided by methods and apparatus similar to those described in the discussion of Figs. 4A and 4B.
  • spatial light modulator 30 forms an image f(x,-y) and IFm(u,v) is acquired.
  • a delta-function reference Amδ(x,y) is added to f(x,-y) and ICm(u,v) is acquired.
  • a suitable processor (not shown) receives the acquired data and uses it to determine the cosine transform of f(x,y).
  • the present application is related to the following four PCT applications filed on the same date as the instant application in the IL receiving office, by applicant JTC2000 Development (Delaware), Inc.: attorney docket 141/01582, which especially describes matching of discrete and continuous optical elements; attorney docket 141/01541, which especially describes reflective and incoherent optical processor designs; attorney docket 141/01580, which especially describes various architectures for non-imaging or diffractive based optical processing; and attorney docket 141/01542, which especially describes a method of processing by separating a data set into bit-planes and/or using feedback.
  • each of the verbs "comprise", "include" and "have", and conjugates thereof, is used to indicate that the object or objects of the verb are not necessarily a complete listing of members, components, elements or parts of the subject or subjects of the verb.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Optics & Photonics (AREA)
  • Nonlinear Science (AREA)
  • Algebra (AREA)
  • Mathematical Analysis (AREA)
  • Pure & Applied Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Complex Calculations (AREA)
  • Image Analysis (AREA)
  • Optical Modulation, Optical Deflection, Nonlinear Optics, Optical Demodulation, Optical Logic Elements (AREA)
  • Image Processing (AREA)

Abstract

A method for determining the sign of data represented by amplitude of light in a second image that is a linear transform of amplitude of light in a first image, wherein the linear transform that relates the two images transforms real data sets into real data sets, the method comprising: measuring intensity of light in the second image; determining a reference image representing real data and a transform of the reference image generated by the linear transform; generating a third image by transforming the first image plus the reference image with the linear transform; measuring intensity of light in the third image; and determining the sign of the amplitude of light in the second image at a location in the second image using the measured intensities of light in the second and third images and the known amplitude of light in the transform of the reference image at the location.

Description

PHASE EXTRACTION IN OPTICAL PROCESSING
RELATED APPLICATIONS
This application is a continuation in part of PCT application PCT/IL99/00479, filed September 5, 1999, by applicant Lenslet Ltd. in the IL receiving office and designating the US, the disclosure of which is incorporated herein by reference.
FIELD OF THE INVENTION
The invention relates to optical methods and apparatus for performing computations and in particular to transforming a first data set into a second data set by a linear transformation and determining the phase of data elements in the second data set.
BACKGROUND OF THE INVENTION
Optical data processing can often be used to process data more rapidly and efficiently than conventional computational methods. In particular, optical methods can be used to perform linear transformations of data sets rapidly and efficiently.
For example, it is well known that converging lenses can be used to substantially "instantaneously" transform a first image into a second image that is a Fourier transform of the first image. It is to be noted that the Fourier transform is a relationship between the complex amplitudes of light in the images and not between the intensities of light in the images. The same is generally true with respect to other transformations of images, the transformation is a transformation of complex amplitudes of light and not intensities of light. It is therefore to be understood that when a second image is said to be a Fourier, or other, transform of a first image, what is meant is that the spatial pattern of the complex amplitude of light in the second image is the Fourier, or other, transform of the spatial pattern of the complex amplitude of light in the first image.
If the first image is coded with data, the second image is coded with data that is the Fourier transform of the data in the first image. A suitable optical processor can therefore provide substantial advantages in comparison to a conventional data processor when a spectral analysis of a data set is desired. However, a Fourier transform of a data set in general involves complex numbers, even if the data set comprises only real numbers. Therefore, in order to properly detect an "optical" Fourier transform of a data set, phase as well as intensity of light of an image representing the Fourier transform must be detected. While this can be accomplished, most light detectors are generally sensitive only to light intensity and are not responsive to phase. It is therefore generally more convenient to determine values for data represented by an image from only the intensity of light in the image. Consequently, it is usually advantageous to process data optically using methods that generate only real numbers from the data. For example, it is often preferable to optically process data coded in an image in accordance with a cosine transform to perform a spectral analysis of the data rather than a Fourier transform. The cosine transform of a real data set generates real values. However, whereas a cosine transform of a real data set does not generate complex numbers it does, usually, generate both positive and negative numbers. Therefore, while most of the information in an optical cosine transform of an image can be acquired from measurements of intensity of light in the image, sign information is not preserved in the intensity measurements. As a result, an optical processor that transforms an input image into an output image that represents the cosine transform of the input image requires a means for determining which of the numbers represented by the output image are positive and which are negative.
K.W. Wong et al., in an article entitled "Optical cosine transform using microlens array and phase-conjugate mirror", Jpn. J. Appl. Phys. vol. 31, 1672-1676, the disclosure of which is incorporated herein by reference, describes a method of distinguishing positive and negative data in a cosine transform of an image. The problem of distinguishing the sign of numbers represented by an image when only the intensity of light in the image is measured is of course not limited to the case of data optically generated by a cosine transform. The problem affects all real linear transforms, such as for example the sine and discrete sine transforms and the Hartley transform, when the transforms are executed optically and only their intensities are sensed, if they generate both positive and negative values from a real data set.
SUMMARY OF THE INVENTION
An aspect of some embodiments of the present invention relates to providing a method for determining the sign of data encoded in an output image of a linear optical processor using measurements of intensity of light in the output image, hereinafter referred to as a "data output image". The data output image is assumed to be generated by the processor responsive to an input image, a "data input image", encoded with input data that is real. The input data is either all positive or all negative. For clarity of presentation it is assumed that the input data is all positive. According to an aspect of some embodiments of the present invention, a reference input image is defined for the optical processor. Magnitude and phase of amplitude of a "reference" output image generated by the processor responsive to the input reference image are used to determine the sign of data represented by the data output image. The operation of a linear optical processor may be described by the equation
F(u,v) = O(u,v:x,y)f(x,y). In the equation f(x,y) is a complex amplitude of light in an input image, i.e. a data input image, that represents input data, which data input image is located on an input plane of the processor, and x and y are coordinates of the input plane. Similarly, F(u,v) is a complex amplitude of light in a data output image that the processor generates responsive to f(x,y). The data output image is located on an output plane of the processor having position coordinates u and v corresponding respectively to position coordinates x and y of the input plane. Intensity of light in the data input image is equal to |f(x,y)|2 and intensity of light in the data output image is equal to |F(u,v)|2.
O(u,v:x,y) represents any continuous or discrete linear operator that transforms a first real data set into a second real data set. O(u,v:x,y) may for example represent the continuous or discrete sine or cosine transform or the Hartley transform. For continuous linear transformations u, v, x and y are continuous and multiplication in the equation representing operation of the processor represents integration over the x, y coordinates. For discrete linear operators u, v, x, and y are discrete coordinates and multiplication represents summation over the x, y coordinates.
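For the discrete case, the operator O can be pictured as a real matrix applied to a data vector. The following minimal sketch (an illustration only, not part of the disclosure; the helper name and test vector are made up) uses the orthonormal DCT-II, one example of the real transforms listed above, and shows that a real, positive input in general produces output values of both signs, which is the sign ambiguity addressed here:

    import numpy as np

    def dct_ii_matrix(n):
        # Orthonormal DCT-II matrix: one example of a real linear operator O(u:x).
        u = np.arange(n).reshape(-1, 1)      # output index u
        x = np.arange(n).reshape(1, -1)      # input index x
        o = np.sqrt(2.0 / n) * np.cos(np.pi * u * (2 * x + 1) / (2 * n))
        o[0, :] /= np.sqrt(2.0)
        return o

    f = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])   # real, positive input data
    O = dct_ii_matrix(len(f))
    F = O @ f     # discrete transform: the "multiplication" is a summation over x
    print(F)      # real values, in general of both signs; |F|**2 alone loses the signs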
Since, in accordance with embodiments of the present invention, the input data is assumed to be real and positive, the phase of f(x,y) is constant and input data is represented by the magnitude of f(x,y). F(u,v) also represents a real data set. However F(u,v) may have both positive and negative data. Data having positive values are represented by values of F(u,v) having a same first phase. Data having negative values are represented by values of F(u,v) having a same second phase 180° different from the first phase.
Let the reference input image and its corresponding reference output image be represented by r(x,y) and R(u,v). Both r(x,y), R(u,v) and the intensity of light in the reference output image |R(u,v)|2 are known. It is to be noted that it is possible to define and synthesize any predefined reference function r(x,y) and use it for sign reconstruction in accordance with embodiments of the present invention. Whereas descriptions of the present invention assume that r(x,y) is real, the invention is not limited to the reference image being real. Magnitude and phase of R(u,v) are known from the transform that the optical processor executes and can be checked experimentally using methods known in the art. Preferably, r(x,y) is real. Therefore R(u,v) preferably corresponds to a real data set. In some embodiments of the present invention
R(u,v) is a real data set comprising values all of which have a same sign. In some embodiments of the present invention the data set comprises one of or a combination of positive, negative and complex values.
In accordance with an embodiment of the present invention, to determine both the magnitude and sign of F(u,v) the intensity of the data output image |F(u,v)|2 is measured. In addition, in accordance with an embodiment of the present invention, a combined input image c(x,y) = f(x,y) + r(x,y) is processed by the processor to provide a combined output image C(u,v) = F(u,v) + R(u,v). Intensity of light in the combined output image, which is equal to |C(u,v)|2 = |F(u,v)|2 + |R(u,v)|2 + 2F(u,v)R(u,v), is measured. Since |F(u,v)|2, |R(u,v)|2 and R(u,v) are known, the sign of F(u,v) can be determined from the "interference" term 2F(u,v)R(u,v).
It is to be noted that not only the sign of F(u,v) can be determined from |C(u,v)|2, |F(u,v)|2, |R(u,v)|2 and R(u,v). In general, (|C(u,v)|2 - |F(u,v)|2 - |R(u,v)|2)/2R(u,v) provides a magnitude and a phase for F(u,v). In some cases the phase is known to within an ambiguity, for example, a symmetry ambiguity or a 180° ambiguity. In some embodiments of the invention the ambiguity is removed and the phase extracted by determining a combined image C(u,v) for two or more different reference images r(x,y). The phase can be extracted for example by solving for F(u,v) using the two combined and reference images.
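As a numerical illustration of this recovery (again an illustration only, not taken from the disclosure), the sketch below lets a discrete Hartley transform, one of the real transforms mentioned in the background, stand in for the optical transform, uses a single bright input pixel as the reference, and reconstructs the signed transform from intensity measurements alone:

    import numpy as np

    # Real transform standing in for the optical processor: a discrete Hartley matrix.
    n = 8
    k = np.arange(n)
    H = np.cos(2 * np.pi * np.outer(k, k) / n) + np.sin(2 * np.pi * np.outer(k, k) / n)

    f = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])   # data input image (real, positive)
    r = np.zeros(n); r[0] = 10.0                             # reference: one bright input pixel

    F = H @ f              # signed transform of the data (not directly observable)
    R = H @ r              # reference output, known in advance; here a non-zero constant
    IF = F**2              # measured intensity of the data output image
    IC = (H @ (f + r))**2  # measured intensity of the combined output image
    IR = R**2              # known (or measured) reference intensity

    F_recovered = (IC - IF - IR) / (2.0 * R)   # the interference term restores sign and magnitude
    assert np.allclose(F_recovered, F)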
In some embodiments of the present invention the reference image is chosen so that |R(u,v)| > |F(u,v)| for all values of u and v for which R(u,v) and F(u,v) have opposite signs. For these embodiments of the present invention only the combined input image c(x,y) = f(x,y) + r(x,y) is processed by the processor to determine both the magnitude and sign of F(u,v). If the intensity of light in the combined image minus the intensity of light in the reference image at a point (u,v) in the output plane of the processor is greater than zero, the signs of F(u,v) and R(u,v) are the same at the point. If on the other hand the difference is less than zero, the signs of F(u,v) and R(u,v) are opposite. Since the sign of R(u,v) is known, the sign of F(u,v) is known. The magnitude of F(u,v) at the point can be determined from the intensity |C(u,v)|2 and the known magnitude and sign of R(u,v) by solving a quadratic equation.
An aspect of some embodiments of the present invention relates to providing an improved method for generating a cosine transform of an "input" image using an optical processor that generates a Fourier transformed output image from an input image. In accordance with an embodiment of the present invention, a first Fourier image that is a Fourier transform of the input image is generated by the optical processor and the intensity of the Fourier image measured and stored. A second Fourier image is generated by the optical processor from the input image plus a known first reference image and the intensity of the second Fourier image is measured and stored. The input image is parity transformed to generate a second input image, referred to as a "parity image". A third Fourier image, which is a Fourier transform of the parity image is generated and its intensity measured and stored. A fourth Fourier image is generated which is a Fourier transform of the parity image plus a known second reference image. The intensities of the four Fourier images and the amplitudes of the known reference images are used to determine the cosine transform of the input image. In some embodiments of the present invention the first and second reference images are the same.
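The following sketch of this four-measurement procedure is our own numerical illustration with assumed variable names: an FFT stands in for the optical Fourier transform, the reference images are delta functions at the origin of the input plane, and the combination used to assemble the result, half the sum of the two recovered real parts, is verified against a direct cosine sum in the last line.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 8
    f = rng.random((N, N))                       # real, positive input image
    f_parity = np.roll(f[:, ::-1], 1, axis=1)    # parity image f(x,-y), reflected about the origin column

    A_p = A_m = 100.0                            # amplitudes of the delta-function references
    delta = np.zeros((N, N)); delta[0, 0] = 1.0  # delta function at the origin of the input plane

    IF_p = np.abs(np.fft.fft2(f))**2                       # first Fourier image (intensity only)
    IC_p = np.abs(np.fft.fft2(f + A_p * delta))**2         # second: input image plus reference
    IF_m = np.abs(np.fft.fft2(f_parity))**2                # third: parity image
    IC_m = np.abs(np.fft.fft2(f_parity + A_m * delta))**2  # fourth: parity image plus reference

    re_Fp = (IC_p - IF_p - A_p**2) / (2.0 * A_p)  # interference terms give Re of each Fourier image
    re_Fm = (IC_m - IF_m - A_m**2) / (2.0 * A_m)
    cosine_transform = 0.5 * (re_Fp + re_Fm)      # assumed combination; checked directly below

    k = np.arange(N)
    ck = np.cos(2 * np.pi * np.outer(k, k) / N)   # cosine kernel cos(2*pi*k*m/N)
    assert np.allclose(cosine_transform, ck @ f @ ck.T)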
There is thus provided in accordance with an exemplary embodiment of the invention, a method of optical data processing, comprising: providing a first data set to be optically transformed using a transform; combining a reference data set with said first data set to generate a combined data set; optically transforming said combined data set into a transformed combined data set; and extracting a second data set that represents a transform of said first data set, from an amplitude portion of said transformed combined data set, using said reference image to extract a phase of at least one element of said second data set. Optionally, said transformed combined data set is detected using a power detector. Alternatively or additionally, said transformed combined data set is encoded using incoherent light.
In an exemplary embodiment of the invention, said transformed combined data set is a discrete data set. Alternatively or additionally, said first data set comprises a one-dimensional data set. Alternatively, said first data set comprises a two-dimensional data set. Optionally, said first data set comprises an image.
In an exemplary embodiment of the invention, said first data set comprises at least one positive value. Alternatively or additionally, said first data set comprises at least one negative value. Alternatively or additionally, said first data set comprises at least one complex value.
In an exemplary embodiment of the invention, extracting comprises extracting using electronic processing. In an exemplary embodiment of the invention, combining a reference data set comprises adding at least one additional value to an existing element of said first data set. Alternatively or additionally, combining a reference data set comprises replacing at least one existing element of said first data set with an element from a second data set. Optionally, the method comprises compensating for an effect of said replaced value after said extraction. Optionally, said compensating comprises compensating using electronic processing.
In an exemplary embodiment of the invention, combining a reference data set comprises adding at least one additional value alongside existing elements of said first data set. Optionally, said at least one additional value is arranged at a corner of a matrix layout of said first data set.
In an exemplary embodiment of the invention, the method comprises selecting said reference image to create a desired offset in said transformed combined data set. Optionally, said selecting takes into account system imperfections. Alternatively or additionally, said offset is substantially uniform. Alternatively, said offset is substantially non-uniform. In an exemplary embodiment of the invention, said reference data is at least one delta- function. Optionally, said reference data comprises a plurality of delta-functions. Alternatively or additionally, said at least one delta function has an amplitude substantially greater than that of any of the data elements of said first data set.
In an exemplary embodiment of the invention, said at least one delta function has an amplitude substantially greater than that of any of the data elements of said first data set that have a certain phase.
In an exemplary embodiment of the invention, said at least one delta function has an amplitude substantially greater than an amplitude of a component of any of the data elements of said first data set that fit in a certain phase range. In an exemplary embodiment of the invention, said at least one delta function has an amplitude not greater than that of any of the data elements of said first data set.
Optionally, said amplitudes are measured as amplitudes of transform elements.
In an exemplary embodiment of the invention, combining comprises combining electronically and generating a combined modulated light beam. Alternatively, combining comprises combining optically. Optionally, combining comprises creating said reference image optically. Optionally, said reference image is created using a refractive optical element. Alternatively, said reference image is created using a dedicated light source. In an exemplary embodiment of the invention, said transform is a Fourier-derived transform.
In an exemplary embodiment of the invention, said transform is a DCT transform. In an exemplary embodiment of the invention, extracting a phase comprises extracting only a sign.
BRIEF DESCRIPTION OF FIGURES
A description of exemplary embodiments of the present invention follows. In the figures, identical structures, elements or parts that appear in more than one figure are generally labeled with the same numeral in all the figures in which they appear. Dimensions of components and features shown in the figures are chosen for convenience and clarity of presentation and are not necessarily shown to scale. The figures are listed below.
Fig. 1 schematically shows an optical processor generating a Fourier transform of an image according to prior art;
Fig. 2 schematically shows the optical processor shown in Fig. 1 generating a cosine transform of an image in accordance with prior art;
Figs. 3A and 3B schematically show an optical processor generating a cosine transform of an image in accordance with an embodiment of the present invention;
Fig. 4A schematically shows an optical processor that generates a reference image that is a delta function, in accordance with an embodiment of the present invention; Fig. 4B schematically shows a lens system for generating a delta function reference image, in accordance with an embodiment of the present invention; and
Figs. 5A-5D schematically illustrate a method of generating a cosine transform of an image, in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION OF EMBODIMENTS
In the following discussion an embodiment of the present invention is described in which a real linear transform performed by an optical processor is a cosine transform. The optical processor uses the Fourier transform properties of converging lenses, whereby a converging lens transforms an image into its Fourier transform, to generate a cosine transform of an image. The Fourier transform properties of lenses are described in "Introduction to Fourier Optics" by J. W. Goodman, McGraw-Hill Companies, second edition, copyright 1996.
Fig. 1 schematically shows an optical processor 20 that functions to transform images into their Fourier transforms according to prior art. Optical processor 20 comprises a converging lens 22, an input plane 24 and an output plane 26. Input and output planes 24 and 26 are coincident with focal planes of lens 22. It is well known that lens 22 can be used to generate an image on output plane 26 that is a Fourier transform of an image on input plane 24. For example, assume that a spatial light modulator 30 having pixels 32 is located at input plane 24 and that the spatial light modulator is illuminated with collimated coherent light, represented by wavy arrows 34, from a suitable light source. Pixels 32 have transmittances as a function of position that are proportional to a desired function. Spatial light modulator 30 may, for example, be a photographic transparency, a printed half tone image, a liquid crystal array or a multiple quantum well (MQW) modulator. In Fig. 1, by way of example, the transmittances are determined so that when spatial light modulator 30 is illuminated by light 34 a happy face 36 is formed at input plane 24. Lens 22 will form an image (not shown) on output plane 26 that is the Fourier transform of the happy face 36 on input plane 24.
Given a function f(x,y), the Fourier transform of the function (1/4)[f(x,y)+f(-x,y)+f(x,-y)+f(-x,-y)] is the cosine transform of f(x,y). Each of the functions in the square brackets is a parity transform, or a one dimensional reflection in the x or y axis, of the other functions in the brackets. It is therefore seen that the cosine transform of a two dimensional function can be generated by Fourier transforming all possible parity transforms of the function.
Fig. 2 illustrates how optical processor 20 shown in Fig. 1 can be used to generate a cosine transform of an image 40 in accordance with prior art by Fourier transforming all of the image's parity transforms. Image 40 may, by way of example, be an 8 by 8 block of pixels from an image that is to be compressed according to the JPEG standard using a discrete cosine transform. Let positions on input plane 24 and spatial light modulator 30 be defined by coordinates along x and y axes indicated on the spatial light modulator and positions on output plane 26 by coordinates along u and v axes indicated on the output plane. Let respective origins 25 and 27 of the x, y coordinates and the u, v coordinates be the intersections of the optic axis (not shown) of lens 22 with input and output planes 24 and 26 respectively.
Image 40 is formed on the upper right quadrant of spatial light modulator 30 and reflections 42 and 44 of image 40 in the x and y axes are respectively formed in the lower right and upper left quadrants of the spatial light modulator. A reflection 46 of image 40 along a 45° diagonal (not shown) to the x axis through the origin is formed in the lower left quadrant of spatial light modulator 30. Let the amplitude of light in image 40 be represented by f(x,y). Let the amplitude of light in the image formed on input plane 24 comprising image 40 and its parity reflections be f'(x,y). Then f'(x,y) = (1/4)[f(x,y)+f(-x,y)+f(x,-y)+f(-x,-y)]. (The decrease in amplitude by 75%, i.e. the factor 1/4, is not necessary and can of course be achieved by proper control of spatial light modulator 30.) If the amplitude of light in an image formed on output plane 26 by lens 22 responsive to f'(x,y) is represented by F(u,v) then F(u,v) is the Fourier transform of f'(x,y). Because of the symmetry of the image on input plane 24, F(u,v) is also the cosine transform of f(x,y). If F.T. represents the operation of the Fourier transform and C.T. represents the operation of the cosine transform then the relationship between F(u,v), f'(x,y) and f(x,y) is expressed by the equation F(u,v) = F.T.{f'(x,y)} = C.T.{f(x,y)}.
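This symmetry argument can be checked numerically. In the sketch below (our illustration, not the patent's; a 2-D FFT with periodic indexing stands in for lens 22, and the "cosine transform" here is the one built from the FFT's own kernel rather than the JPEG DCT-II), the transform of the symmetrized image is real and equals a cosine transform of the original image:

    import numpy as np

    rng = np.random.default_rng(1)
    N = 8
    f = rng.random((N, N))                       # f(x,y): a real, positive image

    def reflect_x(a):                            # a(-x,y), with periodic indexing about the origin pixel
        return np.roll(a[::-1, :], 1, axis=0)

    def reflect_y(a):                            # a(x,-y)
        return np.roll(a[:, ::-1], 1, axis=1)

    f_sym = 0.25 * (f + reflect_x(f) + reflect_y(f) + reflect_x(reflect_y(f)))
    F = np.fft.fft2(f_sym)                       # stands in for the image formed by lens 22

    k = np.arange(N)
    ck = np.cos(2 * np.pi * np.outer(k, k) / N)  # cosine kernel
    assert np.allclose(F.imag, 0.0)              # the transform of the symmetrized image is real ...
    assert np.allclose(F.real, ck @ f @ ck.T)    # ... and equals a cosine transform of f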
It is to be noted that f(x,y) and f'(x,y) represent data that is either all positive or all negative. For clarity of presentation data represented by f(x,y) is assumed to be positive. Further, since the cosine transform performed by optical processor 20 is a real linear transform, as noted above, F(u,v) also represents real data. However, F(u,v) may have both positive and negative data. Therefore, the cosine transform of image f(x,y) cannot be determined from the image on output plane 26 by measuring only the intensity |F(u,v)|2.
Figs. 3A and 3B schematically show an optical processor 50 being used to determine the sign and magnitude of the cosine transform F(u,v) of image 40, i.e. f(x,y), in accordance with an embodiment of the present invention.
Optical processor 50 is similar to optical processor 20 and comprises a lens 22, input and output planes 24 and 26. At output plane 26, processor 50 preferably comprises an array 52 of rows and columns of photosensors 54. Each photosensor 54 generates a signal responsive to an intensity of light in an image on output plane 26 at a position determined by the row and column of array 52 in which the photosensor 54 is located and a pitch of array 52. Photosensors 54 sample intensity of light at "discrete" positions (u,v) in output plane 26. Preferably, the number of photosensors 54 is equal to the number of pixels 32 in spatial light modulator 30 and the locations of photosensors 54 are homologous with the locations of pixels 32.
In Fig. 3A, in accordance with an embodiment of the present invention, spatial light modulator 30 generates a first image at input plane 24 comprising image 40 and its parity reflections 42, 44 and 46. The image is the same as the image comprising image 40 and its reflections shown in Fig. 2. Lens 22 forms an image at output plane 26 having amplitude F(u,v). Photosensors 54 generate signals responsive to intensity IF(u,v) of light in the image, where IF = |F(u,v)|2, at their respective locations u,v.
In Fig. 3B, in accordance with an embodiment of the present invention, spatial light modulator 30 generates a second "combined" image at input plane 24 that comprises the image shown on the input plane in Fig. 3A with the addition of a reference image 60 having a known amplitude r(x,y). Preferably r(x,y) is chosen so that its Fourier transform is real, i.e. it has a symmetry with respect to the origin of axes x and y which results in its Fourier transform being real. By way of example, in Fig. 3B, reference image 60 is formed by controlling central pixels 61, 62, 63 and 64 located at the origin of coordinates of input plane 24 to transmit light and appear bright.
If c(x,y) = (f'(x,y) + r(x,y)) then lens 22 forms an image (not shown) on output plane 26 that is the Fourier transform of c(x,y) and photosensors 54 generate signals responsive to intensity, IC(u,v), of light in the image. If C(u,v) represents the Fourier transform of c(x,y), then the amplitude of light in the image is C(u,v) and IC(u,v) = |C(u,v)|2.
In accordance with some embodiments of the present invention IF(u,v), IC(u,v) and the known Fourier transform of r(x,y) are used to determine the magnitude and sign of F(u,v) and thereby the cosine transform of f(x,y).
C(u,v) = F.T.{c(x,y)} = F.T.{f'(x,y) + r(x,y)} = F.T.{f'(x,y)} + F.T.{r(x,y)} = F(u,v) + R(u,v), where R(u,v) is the known and/or measured Fourier transform of r(x,y). Therefore, IC(u,v) = [|F(u,v)|2 + |R(u,v)|2 + 2F(u,v)R(u,v)] = IF(u,v) + IR(u,v) + 2F(u,v)R(u,v), where IR(u,v) = |R(u,v)|2. IR(u,v) can be calculated from the known Fourier transform of r(x,y) or measured experimentally. In some embodiments of the present invention the sign and magnitude of F(u,v) are determined from the equation F(u,v) = [IC(u,v) - IF(u,v) - IR(u,v)]/2R(u,v).
In some embodiments of the present invention the magnitude of F(u,v) is determined from the square root of IF(u,v). The sign of F(u,v) can be determined by comparing IF(u,v) and IR(u,v) with IC(u,v). If IF(u,v) > IC(u,v) or IR(u,v) > IC(u,v), then R(u,v) and F(u,v) have opposite signs; otherwise they have the same sign. Since the sign of R(u,v) is known, the sign of F(u,v) is thereby determined.
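A corresponding sketch of this comparison rule, under the same assumptions and again with illustrative names only, is:

```python
import numpy as np

def sign_and_magnitude(I_C, I_F, I_R, sign_R):
    """Alternative recovery: magnitude from IF, sign by comparing IF and IR with IC.

    sign_R : array of +1/-1 giving the known sign of R(u,v).
    F and R have opposite signs wherever IF > IC or IR > IC.
    """
    magnitude = np.sqrt(I_F)
    opposite = (I_F > I_C) | (I_R > I_C)
    sign_F = np.where(opposite, -sign_R, sign_R)
    return sign_F * magnitude
```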
Whereas in Figs. 3A and 3B reference image 60 is a symmetric image located at the origin of the x,y coordinates, other reference images are possible and can be used in the practice of the present invention. For example, pixels 32 at the corners of spatial light modulator 30 can be used to generate useful reference images. In some embodiments of the present invention, only pixels 32 in certain regions of spatial light modulator 30 are used to represent data. Pixels that are not needed for data are used, in some embodiments of the present invention, to generate reference images. In some embodiments, some data pixels are canceled or provided elsewhere in the image, for example as pixels in overlapping blocks. In other examples, extra pixels are provided for the reference image, for example by inserting one or more rows or columns per block. For example, "data" pixels may be restricted to alternate rows or columns of pixels, or each data pixel may be surrounded by four pixels that are not used for data. In an exemplary embodiment, 9x9 blocks of data are used for an 8x8 block transform, with at least some of the extra pixels being used as a reference image. Alternatively or additionally, the effect of missing pixels may be corrected using an electronic or optical post-processing step.
It should also be noted that dark pixels, i.e. pixels that are "turned off" and do not transmit light, can function to generate reference images. For example, if an image on spatial light modulator 30 has bright pixels at the origin of coordinates (i.e. pixels 61, 62, 63 and 64 in Fig. 3B), a reference image can be generated by "turning off" the pixels. Turning off pixels in an image is of course equivalent to adding a reference image to the image, for which light at the turned-off pixels has a phase opposite to that of the light in the image.
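This equivalence follows from the linearity of the transform and can be checked numerically; the following brief sketch, which uses a discrete Fourier transform as a stand-in for the optical transform and an arbitrary pixel pattern, is illustrative only:

```python
import numpy as np

# Turning off pixels p(x,y) inside an image f(x,y) is the same as adding
# a reference -p(x,y): F.T.{f - p} = F.T.{f} - F.T.{p}.
f = np.zeros((8, 8)); f[0:2, 0:2] = 1.0   # arbitrary bright pixels near the origin
p = np.zeros((8, 8)); p[0, 0] = 1.0       # the pixel to be "turned off"
lhs = np.fft.fft2(f - p)                  # image with the pixel darkened
rhs = np.fft.fft2(f) - np.fft.fft2(p)     # image plus an opposite-phase reference
assert np.allclose(lhs, rhs)
```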
In some embodiments of the present invention, reference image r(x,y) is chosen so that |R(u,v)| > |F(u,v)| for all values of u and v for which R(u,v) and F(u,v) have opposite signs. For these embodiments of the present invention it is not necessary to determine IF(u,v), and only the operation shown in Fig. 3B, in which IC(u,v) is measured, is required to determine the magnitude and sign of F(u,v). If at a point (u,v), IC(u,v) - IR(u,v) > 0, then the signs of F(u,v) and R(u,v) are the same at the point; otherwise the signs are opposite. The magnitude of F(u,v) at the point can be determined from IC(u,v) by solving the quadratic equation IC(u,v) = [|F(u,v)|^2 + |R(u,v)|^2 + 2F(u,v)R(u,v)] for F(u,v).
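Under that assumption, F(u,v) + R(u,v) always carries the sign of R(u,v), so the quadratic has a single admissible root. A sketch of this single-measurement recovery, assuming R(u,v) is nonzero everywhere and with illustrative names only, is:

```python
import numpy as np

def recover_from_single_measurement(I_C, R):
    """Recover F(u,v) from IC(u,v) alone, assuming |R| > |F| wherever R and F
    have opposite signs (and R nonzero). Then F + R always has the sign of R,
    so IC = F^2 + R^2 + 2*F*R has the single admissible root below."""
    return np.sign(R) * np.sqrt(I_C) - R
```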
Fig. 4A schematically shows a side view of an optical processor 70, in accordance with an embodiment of the present invention, that generates a reference field for which |R(u,v)| > |F(u,v)| for all values of u and v for which R(u,v) and F(u,v) have opposite signs. Optical processor 70 comprises a "Fourier" lens 22 having an output plane 26 coincident with a focal plane of lens 22, a spatial light modulator 72 and a "beam partitioner" 74. A detector array 76 is located at output plane 26 and measures intensity of light at the output plane. Spatial light modulator 72 defines an input plane for Fourier lens 22 and may be located at substantially any position to the left of output plane 26. In optical processor 70 spatial light modulator 72 is located by way of example adjacent to lens 22.
Beam partitioner 74 preferably receives an incident beam 78 of coherent collimated light generated by an appropriate source (not shown), focuses a portion of the light to a point 80 and transmits a portion of the light as a transmitted beam of light 82 parallel to the incident beam. Light from transmitted beam 82 illuminates and is transmitted through spatial light modulator 72 and is focused by lens 22 to form a Fourier transform F(u,v) of a transmittance pattern f(x,y) formed on the spatial light modulator. It is assumed that the transmittance pattern has an appropriate symmetry so that the Fourier transform is a cosine transform of a desired image.
Point 80 functions substantially as a point source of light and provides a reference image r(x,y) for f(x,y) that is substantially a delta function Aδ(x,y), where A is proportional to an intensity of light focused to point 80. A Fourier image, R(u,v), of light from point 80 is also formed on output plane 26 by lens 22. Since r(x,y) is substantially a delta function, R(u,v) is substantially constant and equal to A.
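As a numerical aside, the discrete analogue of this behaviour is easily verified: a single bright sample of amplitude A at the origin transforms to a constant. The sketch below uses a discrete Fourier transform as a stand-in for the optical transform and is illustrative only:

```python
import numpy as np

# A discrete analogue of the point source: a single bright sample at the origin
# of an N x N input produces a flat (constant) transform, R(u,v) = A.
N, A = 8, 5.0
r = np.zeros((N, N)); r[0, 0] = A
R = np.fft.fft2(r)
assert np.allclose(R, A)   # constant everywhere, as used for the reference
```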
The magnitude of F(u,v) at a point (u,v) is of course proportional to the intensity of light in transmitted beam 82. In accordance with an embodiment of the present invention, beam partitioner 74 is designed so that the relative portions of light focused to point 80 and transmitted in transmitted beam 82 are such that A = |R(u,v)| is greater than |F(u,v)| for all values of u and v for which R(u,v) and F(u,v) have opposite signs.
In some embodiments of the present invention beam partitioner 74 is a diffractive optical element such as a Fresnel zone plate having reduced efficiency. In some embodiments of the present invention, beam partitioner 74 comprises an optical system 90 of a type shown in a side view in Fig. 4B. Optical system 90 comprises a positive lens 92 and a weak negative lens 94. Positive lens 92 is preferably coated with an antireflective coating, using methods known in the art, to minimize reflections. Weak negative lens 94 is treated so that at its surfaces light is reflected with a reflectivity α. Light from incident beam 78, represented by arrowed lines 96, that is transmitted through both positive lens 92 and negative lens 94 without reflections is focused to produce the point reference light source Aδ(x,y) at point 80. If the intensity of light in light beam 78 is I, the amount of light focused to point 80 is substantially equal to I(1-α)^2. Light that undergoes internal reflection twice in negative lens 94 is transmitted as transmitted beam of light 82 substantially parallel to incident beam 78. The amount of energy in transmitted beam 82 is substantially equal to I(1-α)^2·α^2. The ratio of energy focused to point 80 to that contained in transmitted beam 82 is therefore equal to 1/α^2.
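A short illustrative calculation of this energy split, for an assumed reflectivity α = 0.1 (the numbers are examples only, not taken from the disclosure), is:

```python
# Energy bookkeeping for the beam partitioner of Fig. 4B (illustrative numbers only).
alpha, I = 0.1, 1.0                              # surface reflectivity and incident intensity
focused     = I * (1 - alpha) ** 2               # light reaching point 80 directly
transmitted = I * (1 - alpha) ** 2 * alpha ** 2  # light after two internal reflections
print(focused / transmitted)                     # ratio 1/alpha**2, i.e. 100 for alpha = 0.1
```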
In accordance with an embodiment of the present invention R can be chosen so that A = |R(u,v)| is greater than |F(u,v)| for all values of u and v for which R(u,v) and F(u,v) have opposite signs.
Given a function f(x,y), it can be shown that the cosine transform C.T.{f(x,y)} = 1/2[Re F.T.{f(x,y)} + Re F.T.{f(x,-y)}] = 1/2[ReFp(u,v) + ReFm(u,v)], where Re indicates the real part of a complex number and Fp(u,v) and Fm(u,v) are the Fourier transforms of f(x,y) and f(x,-y) respectively. Let cp(x,y) = f(x,y) + Apδ(x,y) and cm(x,y) = f(x,-y) + Amδ(x,y). The Fourier transform, Cp(u,v), of cp(x,y) may be written Cp(u,v) = [Fp(u,v) + Ap] = [ReFp(u,v) + jImFp(u,v) + Ap], where Im indicates the imaginary part of a complex number and Ap is assumed to be real. Similarly, the Fourier transform of cm(x,y) may be written Cm(u,v) = [Fm(u,v) + Am] = [ReFm(u,v) + jImFm(u,v) + Am]. If the "intensities" of the Fourier transforms Fp(u,v) and Cp(u,v) are written as IFp(u,v) and ICp(u,v) respectively, so that IFp(u,v) = |Fp(u,v)|^2 and ICp(u,v) = |Cp(u,v)|^2, then it can be shown that ReFp(u,v) = [ICp(u,v) - IFp(u,v) - Ap^2]/2Ap. Similarly, ReFm(u,v) = [ICm(u,v) - IFm(u,v) - Am^2]/2Am, where IFm(u,v) = |Fm(u,v)|^2 and ICm(u,v) = |Cm(u,v)|^2.
Therefore, the cosine transform of f(x,y) can be determined from the intensities IFp(u,v), ICp(u,v) and Ap and IFm(u,v), ICm(u,v) and Am. It should be noted that whereas a delta function has been added as a reference field for f(x,y) and f(x,-y) in the above calculations, similar results can be obtained for other reference functions r(x,y). Figs. 5A-5D illustrate a method, in accordance with an embodiment of the present invention, by which the functions IFp(u,v), ICp(u,v) and Ap and IFm(u,v), ICm(u,v) and Am are evaluated using an optical processor 100 to generate a cosine transform of a function f(x,y). Optical processor 100 is similar to optical processors 50 and 70 and comprises a Fourier lens 22, a photosensor array 52 at an output plane 26, which is located at a focal plane of lens 22, and a spatial light modulator 30.
Referring to Fig. 5A, assume that function f(x,y) is represented by an image 40 formed by spatial light modulator 30. Optical processor 100 generates the Fourier transform F(u,v) of f(x,y) and acquires values for IFp(u,v). In Fig. 5B, a point light source 102 generates a delta-function reference image Apδ(x,y), which is added to f(x,y) to form an image cp(x,y) = f(x,y) + Apδ(x,y). Processor 100 Fourier transforms cp(x,y) and acquires ICp(u,v). The point light source may be provided using any method known in the art. In some embodiments of the present invention the point light source is provided by methods and apparatus similar to those described in the discussion of Figs. 4A and 4B.
In Fig. 5C, spatial light modulator 30 forms an image f(x,-y) and processor 100 acquires IFm(u,v). In Fig. 5D, a delta-function reference Amδ(x,y) is added to f(x,-y) and ICm(u,v) is acquired. A suitable processor (not shown) receives the acquired data and uses it to determine
ReFp(u,v) and ReFm(u,v) from which the cosine transform of f(x,y) may be determined as shown above.
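The four acquisitions and the subsequent recovery can be simulated end to end. The following sketch uses a discrete Fourier transform in place of the optical transform and a single bright sample in place of the delta-function references; the names, block size and reference amplitudes are illustrative only:

```python
import numpy as np

def cosine_transform_via_four_measurements(f, Ap=10.0, Am=10.0):
    """Simulate the four acquisitions of Figs. 5A-5D for a real, non-negative
    block f(x,y) and recover its cosine transform as defined above.

    A discrete FFT stands in for the optical Fourier transform, and a single
    bright sample at the origin stands in for the delta-function references.
    """
    fm = np.flip(f, axis=1)                           # stand-in for f(x,-y)
    delta = np.zeros_like(f); delta[0, 0] = 1.0

    IFp = np.abs(np.fft.fft2(f)) ** 2                 # Fig. 5A
    ICp = np.abs(np.fft.fft2(f + Ap * delta)) ** 2    # Fig. 5B
    IFm = np.abs(np.fft.fft2(fm)) ** 2                # Fig. 5C
    ICm = np.abs(np.fft.fft2(fm + Am * delta)) ** 2   # Fig. 5D

    ReFp = (ICp - IFp - Ap ** 2) / (2 * Ap)
    ReFm = (ICm - IFm - Am ** 2) / (2 * Am)
    return 0.5 * (ReFp + ReFm)

# Check the recovery against a direct evaluation of the Re{F.T.} terms:
f = np.random.rand(8, 8)
ct = cosine_transform_via_four_measurements(f)
direct = 0.5 * (np.fft.fft2(f).real + np.fft.fft2(np.flip(f, axis=1)).real)
assert np.allclose(ct, direct)
```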
The present application is related to the following four PCT applications filed on the same date as the instant application in the IL receiving office, by applicant JTC2000 Development (Delaware), Inc.: attorney docket 141/01582, which especially describes matching of discrete and continuous optical elements; attorney docket 141/01541, which especially describes reflective and incoherent optical processor designs; attorney docket 141/01580, which especially describes various architectures for non-imaging or diffractive based optical processing; and attorney docket 141/01542, which especially describes a method of processing by separating a data set into bit-planes and/or using feedback. The disclosures of all of these applications are incorporated herein by reference.
In the description and claims of the present application, each of the verbs "comprise", "include" and "have", and conjugates thereof, is used to indicate that the object or objects of the verb are not necessarily a complete listing of the members, components, elements or parts of the subject or subjects of the verb.
The present invention has been described using detailed descriptions of embodiments thereof that are provided by way of example and are not intended to limit the scope of the invention. The described embodiments comprise different features, not all of which are required in all embodiments of the invention. Some embodiments of the present invention utilize only some of the features or possible combinations of the features. Variations of embodiments of the present invention that are described and embodiments of the present invention comprising different combinations of features noted in the described embodiments will occur to persons of the art. The scope of the invention is limited only by the following claims.

Claims

1. A method of optical data processing, comprising: providing a first data set to be optically transformed using a transform; combining a reference data set with said first data set to generate a combined data set; optically transforming said combined data set into a transformed combined data set; and extracting a second data set that represents a transform of said first data set, from an amplitude portion of said transformed combined data set, using said reference data set to extract a phase of at least one element of said second data set.
2. A method according to claim 1, wherein said transformed combined data set is detected using a power detector.
3. A method according to claim 1, wherein said transformed combined data set is encoded using incoherent light.
4. A method according to any of claims 1-3, wherein said transformed combined data set is a discrete data set.
5. A method according to any of claims 1-4, wherein said first data set comprises a one-dimensional data set.
6. A method according to any of claims 1-4, wherein said first data set comprises a two-dimensional data set.
7. A method according to claim 6, wherein said first data set comprises an image.
8. A method according to any of claims 1-7, wherein said first data set comprises at least one positive value.
9. A method according to any of claims 1-8, wherein said first data set comprises at least one negative value.
10. A method according to any of claims 1-9, wherein said first data set comprises at least one complex value.
11. A method according to any of claims 1-10, wherein extracting comprises extracting using electronic processing.
12. A method according to any of claims 1-11, wherein combining a reference data set comprises adding at least one additional value to an existing element of said first data set.
13. A method according to any of claims 1-12, wherein combining a reference data set comprises replacing at least one existing element of said first data set with an element from a second data set.
14. A method according to claim 13, comprising compensating for an effect of said replaced value after said extraction.
15. A method according to claim 14, wherein said compensating comprises compensating using electronic processing.
16. A method according to any of claims 1-15, wherein combining a reference data set comprises adding at least one additional value alongside existing elements of said first data set.
17. A method according to claim 16, wherein said at least one additional value is arranged at a corner of a matrix layout of said first data set.
18. A method according to any of claims 1-17, comprising selecting said reference image to create a desired offset in said transformed combined data set.
19. A method according to claim 18, wherein said selecting takes into account system imperfections.
20. A method according to claim 18 or claim 19, wherein said offset is substantially uniform.
21. A method according to claim 18 or claim 19, wherein said offset is substantially non-uniform.
22. A method according to claim 18, wherein said reference data is at least one delta-function.
23. A method according to claim 22, wherein said reference data comprises a plurality of delta-functions.
24. A method according to claim 22 or claim 23, wherein said at least one delta function has an amplitude substantially greater than that of any of the data elements of said first data set.
25. A method according to claim 22 or claim 23, wherein said at least one delta function has an amplitude substantially greater than that of any of the data elements of said first data set that have a certain phase.
26. A method according to claim 22 or claim 23, wherein said at least one delta function has an amplitude substantially greater than an amplitude of a component of any of the data elements of said first data set that fit in a certain phase range.
27. A method according to claim 22 or claim 23, wherein said at least one delta function has an amplitude not greater than that of any of the data elements of said first data set.
28. A method according to any of claims 24-27, wherein said amplitudes are measured as amplitudes of transform elements.
29. A method according to any of claims 1-28, wherein combining comprises combining electronically and generating a combined modulated light beam.
30. A method according to any of claims 1-28, wherein combining comprises combining optically.
31. A method according to claim 30, wherein combining comprises creating said reference image optically.
32. A method according to claim 31, wherein said reference image is created using a refractive optical element.
33. A method according to claim 31, wherein said reference image is created using a dedicated light source.
34. A method according to any of claims 1-33, wherein said transform is a Fourier-derived transform.
35. A method according to any of claims 1-34, wherein said transform is a DCT transform.
36. A method according to any of claims 1-35, wherein extracting a phase comprises extracting only a sign.